Continuing its focused push to make its cloud database services ready for enterprise and production use, Google is making several announcements today around its cloud platform. The most important of them all is the general availability of all of its database storage products.
Alongside that come new performance and security improvements for Google Compute Engine as well.
On the databases front, all of Google's cloud database storage services are now generally available: Cloud SQL, Cloud Bigtable and Cloud Datastore. With that, Google is also announcing improved performance, security and platform support for databases.
Starting with Cloud SQL: Google's fully managed database service, which offers easy-to-use MySQL instances, has completed a successful beta and is now generally available. Since the beta, the search giant has added a number of enterprise features, such as support for MySQL 5.7, point-in-time recovery (PITR), automatic storage resizing and the ability to set up failover replicas with a single click.
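For developers, a generally available Cloud SQL instance behaves like any other MySQL 5.7 server. As a rough illustration, here is a minimal sketch of connecting to one from Python with PyMySQL; the IP address, credentials and database name are placeholders, and the instance is assumed to have the client's network on its authorized list.

```python
# Minimal sketch: connect to a Cloud SQL (MySQL 5.7) instance over its public IP.
# All connection details below are hypothetical.
import pymysql

connection = pymysql.connect(
    host="203.0.113.10",      # hypothetical Cloud SQL instance IP
    user="app_user",
    password="app_password",
    database="inventory",
)

try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT VERSION()")
        print(cursor.fetchone())  # e.g. ('5.7.14-google-log',)
finally:
    connection.close()
```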
Also out of beta is Cloud Bigtable, Google's scalable, fully managed NoSQL database for large-scale analytics workloads. The wide-column database service with Apache HBase client compatibility is now generally available. Since the beta, many of Google Cloud's customers, such as Spotify, Energyworx and FIS (formerly SunGard), have built scalable applications on top of Cloud Bigtable for workloads such as monitoring, financial and geospatial data analysis.
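For a sense of what that wide-column model looks like in practice, here is a minimal sketch using the google-cloud-bigtable Python client to write and read a single row; the project, instance, table and column family names are hypothetical, and the table with a "stats" column family is assumed to already exist.

```python
# Minimal sketch: write and read one Bigtable row (row key + column family + qualifier).
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
instance = client.instance("monitoring-instance")   # hypothetical instance ID
table = instance.table("metrics")                   # hypothetical table ID

# Write a single cell.
row = table.row(b"sensor-42#2016-08-16T12:00")
row.set_cell("stats", b"cpu_load", b"0.73")
row.commit()

# Read the row back.
result = table.read_row(b"sensor-42#2016-08-16T12:00")
print(result.cells["stats"][b"cpu_load"][0].value)   # b'0.73'
```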
Cloud Datastore, which serves 15 trillion requests a month, is also generally available. For those unfamiliar with the service, it is Google's fully managed NoSQL document database. Its v1 API for applications outside of Google App Engine has reached general availability.
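To illustrate what the v1 API enables outside of App Engine, here is a minimal sketch using the google-cloud-datastore Python client to store, fetch and query an entity; the project ID, kind and property names are made up for the example.

```python
# Minimal sketch: store, fetch and query a Datastore entity from any environment,
# not just App Engine. Names below are hypothetical.
from google.cloud import datastore

client = datastore.Client(project="my-project")

# Create and store an entity.
key = client.key("Task", "write-report")
task = datastore.Entity(key=key)
task.update({"description": "Write the Q3 report", "done": False})
client.put(task)

# Fetch it back and run a simple query over the kind.
print(client.get(key)["description"])
query = client.query(kind="Task")
query.add_filter("done", "=", False)
for entity in query.fetch(limit=10):
    print(entity.key.name, entity["done"])
```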
Most importantly, all of these services are now backed by Service Level Agreements (SLAs).
Performance and Security Improvements
As all of Google's cloud database services reach general availability, the company has also announced a slew of performance and security improvements across its services.
Starting with Microsoft SQL Server integration: if you are looking to run Microsoft SQL Server on Google Cloud, the company will soon offer an environment for it, featuring images with built-in licenses (in beta) as well as the ability to bring your existing licenses. There are few details on this yet, and more announcements are expected later.
This particular move is critical for Google, which is trying to capture a market where it currently cannot match Microsoft's dominance. Giving new enterprise clients the option to bring their existing applications and workloads to its cloud will go a long way in helping Google establish itself as a serious player in the segment.
The company is also raising maximum read and write IOPS for SSD-backed Persistent Disk volumes from 15,000 to 25,000 at no additional cost, serving the needs of the most demanding databases. This is clearly another move by Google to lure in customers on price, and history shows that Google has won many of its customers by offering price points significantly lower than its competitors'.
Nearline gets faster
Another significant announcement from Google is lower latency for its Nearline storage service for "cold" data. The service, which competes directly with Amazon's Glacier, previously had 3 to 5 seconds of latency per object; that figure is now coming down.
Google adds,
We’ve been continuously improving Nearline performance, and now it enables access times and throughput similar to Standard class objects. These faster access times and throughput give customers the ability to leverage big data tools such as Google BigQuery to run federated queries across your stored data.
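As a rough illustration of such a federated query, the sketch below uses the google-cloud-bigquery Python client to query CSV objects sitting in a Cloud Storage bucket directly, without loading them into BigQuery first; the bucket path, table alias and project ID are hypothetical.

```python
# Minimal sketch: a BigQuery federated query over objects stored in Cloud Storage.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Describe the Cloud Storage data as an external (federated) table.
external_config = bigquery.ExternalConfig("CSV")
external_config.source_uris = ["gs://my-archive-bucket/logs/2016/*.csv"]  # hypothetical bucket
external_config.autodetect = True

job_config = bigquery.QueryJobConfig(table_definitions={"archived_logs": external_config})
query = "SELECT COUNT(*) AS row_count FROM archived_logs"

for row in client.query(query, job_config=job_config).result():
    print(row.row_count)
```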