We are hearing from customers of all sizes that more data-rich workloads are moving to the cloud. Customers are collecting more data than ever before and want it centralized and normalized before they run analytics. Data lakes, big-data modeling and simulation, and AI and machine learning are all becoming common workloads. These applications require the scale of object storage combined with the manageability of file storage and the performance of block storage.
We will continue to evolve with your requirements and deliver products with enterprise-ready performance and scale that support data-driven applications, enable business insight, remain simple to manage, and protect your data against loss and disaster.
We made it easier to build continental-scale applications and centralize data by increasing the number of Cloud Storage dual-regions and adding the turbo replication feature, which is now available in nine regions across three continents. Dual-regions let you use a single bucket that spans a continent, with an RTO of zero and an optional RPO of 15 minutes. This simplifies application design: you get high availability and one set of APIs, regardless of where your data is stored.
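As a rough sketch, a dual-region bucket with turbo replication can be created from the gcloud CLI; the bucket name and region pair below are placeholders, and the exact flags should be confirmed against the current Cloud Storage documentation:

```shell
# Create a configurable dual-region bucket spanning two US regions,
# with turbo replication (ASYNC_TURBO) for a 15-minute RPO target.
gcloud storage buckets create gs://my-app-bucket \
    --location=US \
    --placement=us-central1,us-east1 \
    --rpo=ASYNC_TURBO
```

Applications then read and write `gs://my-app-bucket` through the usual Cloud Storage APIs; replication between the two regions happens behind the single bucket namespace.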
How storage in the cloud is evolving to meet changing needs
We announced many storage innovations today at the Spotlight on Storage digital customer event. Here are some examples that show our commitment to you.
Advancing our enterprise-readiness, we announced Google Cloud Hyperdisk, the next generation of Persistent Disk, bringing you the ability to easily and dynamically tune the performance of your block storage to your workload. Hyperdisk allows you to provision IOPS and throughput for individual applications, and can adapt to changing performance requirements over time.
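As an illustrative sketch of that provisioning model, a Hyperdisk volume with explicitly provisioned IOPS might be created as follows; the disk name, zone, and values are placeholders, and the available disk types and flags should be verified against the Compute Engine documentation:

```shell
# Create a Hyperdisk volume with performance provisioned
# independently of capacity.
gcloud compute disks create my-data-disk \
    --zone=us-central1-a \
    --type=hyperdisk-extreme \
    --size=2TB \
    --provisioned-iops=100000
```

Because performance is decoupled from capacity, the provisioned values can later be adjusted (e.g. via `gcloud compute disks update`) as an application's requirements change, without recreating the disk.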
For Google Kubernetes Engine (GKE), we also launched Filestore Enterprise multishare. Administrators can create Filestore instances and carve up storage so that it can be shared across thousands of GKE clusters.
The service also provides non-disruptive storage upgrades in the background while GKE is running, and carries a 99.99% SLA for regional storage availability. Combined with Backup for GKE, enterprises can modernize their systems by bringing stateful workloads to GKE.
We continue to improve our storage to better support data-driven applications based on your feedback. We have a new Cloud Storage feature, Autoclass, which automatically moves objects to colder or warmer storage classes based on last access time and policy. This is an automated, policy-based way to optimize Cloud Storage costs, something many customers have previously built for themselves.
“It would not only cost us valuable engineering resources to create cost optimization ourselves, but it could also expose us to costly mistakes that could result in retrieval fees for data that has been prematurely archived. Autoclass is a tool that helps us reduce storage costs and achieve price predictability in an easy and automated manner.” –Ian Mathews (co-founder, Redivis)
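As a minimal sketch, Autoclass is enabled per bucket; the bucket name below is a placeholder, and the flag should be confirmed against the current gcloud reference:

```shell
# Create a new bucket with Autoclass enabled, so objects are moved
# between storage classes automatically based on access patterns.
gcloud storage buckets create gs://my-analytics-bucket \
    --enable-autoclass
```

Autoclass can also be enabled on an existing bucket with `gcloud storage buckets update`, after which no further lifecycle tuning is required for class transitions.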
We are focused on providing you with more business insight from your storage, making it easier for you to optimize and manage your stored data. The new Storage Insights feature gives you actionable insight into the objects in Cloud Storage.
You can easily answer questions such as “How many objects do I have?”, no matter how many objects you manage. Paired with products like BigQuery, companies can build their own dashboards that provide insights into their stored data. The possibilities are endless.
To help you protect your most important applications and data, we also announced Google Cloud Backup and DR. This service provides a complete data-protection solution for critical applications running on Google Cloud VMware Engine and Compute Engine, as well as databases such as SAP HANA.
It allows you to centrally manage disaster recovery and data-protection policies from the Google Cloud console, and can fully protect applications and databases with just a few clicks.
There are many storage options, but here’s what makes us different
Google Cloud is built on the same foundation Google uses for products like Photos, YouTube, and Gmail. This approach, refined over 20 years, has allowed us to provide high-performance, exabyte-scale services to digital-first companies and enterprises. This storage infrastructure is built on Colossus, a global cluster-level file system that stores and manages data and provides the availability, performance, and durability behind Google Cloud storage services like Cloud Storage, Persistent Disk, and Hyperdisk.
Our state-of-the-art dedicated Google Cloud backbone network has almost 3x the throughput of AWS and Azure¹ and 173 network edge locations. This is why our infrastructure is fundamentally different: a global network paired with disaggregated compute and storage, built on Colossus, gives your applications speed and resilience.