Three and a half years ago, a delivery mistake at Shelby, where Google runs a data centre, revealed to the world the “Pluto Switch”, a mysterious network switch custom-built by Google for moving digital data across its massive computing centres. Google is now opening up that technology to external developers.
Tech geeks all over the world have long speculated about the data centre technology Google uses to connect its servers, because for several years, rather than buying traditional equipment from the likes of Cisco, Ericsson, Dell, and HP, Google has designed specialized networking gear for the engine room of its rapidly expanding online empire.
However, after keeping its technology under wraps for all these years, Google has now revealed, for the first time, the details of five generations of its in-house network technology, and has also opened up this powerful and transformative infrastructure for use by external developers through Google Cloud Platform. The announcement was made at the recent Open Networking Summit 2015 and officially posted on the Google Cloud Platform blog by Amin Vahdat, Google Fellow and Technical Lead for Networking at Google.
According to the blog, from the first-generation Firehose network to the current Jupiter network, Google has increased the capacity of a single data centre network more than 100x.
The blog post read,
Our current generation — Jupiter fabrics — can deliver more than 1 Petabit/sec of total bisection bandwidth. To put this in perspective, such capacity would be enough for 100,000 servers to exchange information at 10Gb/s each, enough to read the entire scanned contents of the Library of Congress in less than 1/10th of a second.
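The arithmetic behind that claim is easy to verify: 100,000 servers exchanging data at 10 Gb/s each works out to exactly 1 Petabit per second. A quick back-of-the-envelope check:

```python
# Sanity check of the quoted Jupiter bisection-bandwidth figure:
# 100,000 servers, each sending at 10 Gb/s.
servers = 100_000
per_server_gbps = 10

total_gbps = servers * per_server_gbps   # 1,000,000 Gb/s
total_pbps = total_gbps / 1_000_000      # gigabits -> petabits

print(total_pbps)  # 1.0 Pb/s, matching the "more than 1 Petabit/sec" claim
```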
Vahdat further notes,
our network control stack has more in common with Google’s distributed computing architectures than traditional router-centric Internet protocols.
Vahdat describes three principles behind the design of the Google data centre network:
- A Clos topology, a network configuration that arranges a network of small switches into a much larger logical switch.
- Centralized software control, to manage thousands of switches in the data centre effectively.
- Custom protocols rather than standard Internet protocols, with Google building its own software and hardware using silicon from merchant vendors.
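The first principle, the Clos topology, is what lets small commodity switches act as one giant switch. A standard three-tier “fat-tree” variant of a Clos fabric, built entirely from identical k-port switches, illustrates the scaling (the radix values here are illustrative, not Google's actual ones):

```python
# Sketch: size of a three-tier fat-tree Clos fabric built from identical
# k-port switches (textbook fat-tree arithmetic, not Google's exact design).

def fat_tree_capacity(k: int) -> dict:
    """For an even switch radix k, return the component counts of the fabric."""
    assert k % 2 == 0, "fat-tree radix must be even"
    return {
        "pods": k,
        "core_switches": (k // 2) ** 2,
        "aggregation_switches": k * (k // 2),
        "edge_switches": k * (k // 2),
        "hosts": (k ** 3) // 4,  # full bisection bandwidth to every host
    }

# With 48-port merchant-silicon switches, one fabric reaches:
print(fat_tree_capacity(48)["hosts"])  # 27648 hosts, all from small switches
```

Doubling the switch radix grows the host count eightfold, which is why building a bigger logical switch out of many small ones scales so much further than buying one monolithic cluster switch.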
According to Vahdat, Google began designing its own gear under a project called “Firehose” in 2004; before that, it built its networks like everyone else, buying enormous “cluster switches” from companies like Cisco. But those switches cost the company “hundreds of thousands to millions” of dollars, and each could accommodate only so many of Google’s computers.
So Google moved towards a model in which it could mould both the hardware and the software to suit its needs, using commodity networking chips to build devices that could run whatever software it wanted. Buying chips from companies such as Broadcom, Google built its own “top-of-rack” switches.
And using the same chips, it pieced together larger cluster switches that could serve as a network backbone. It built specialized “controller” software to run all this hardware, and built its own routing protocol, dubbed Firepath, for efficiently moving data across the network. Vahdat says,
We couldn’t buy the hardware we needed to build a network of the size and speed we needed to build. It just didn’t exist.
This approach, termed “Software Defined Networking”, reflects a significant change sweeping the world of computer servers and data storage: instead of buying enormously large and complex machines, companies now buy lots of little machines, string them together into a larger whole, and run software that lets them act as one. “You scale out rather than scale up,” Vahdat says, echoing what has become standard lingo in the world of computing.
Facebook has designed a similar breed of networking hardware and software, and other notable names such as Amazon, AT&T, and Microsoft are moving along the same lines. “We’re not talking about it,” says Scott Mair, senior vice president of technology planning and engineering at AT&T. “We’re doing it.”
More recently, Google announced plans to expand its data centres in Asia, especially in Singapore and Taiwan. That expansion takes Google’s total investment in Singapore to $500 million; adding the $600 million Taiwan data centre, Google’s total investment in Asia amounts to over $1 billion.