Google has released a new version of its TensorFlow machine learning system, aimed at improving machine learning workflows and reducing the time it takes to train models.
If you haven't heard of TensorFlow before, it is an open-source software library for numerical computation using data flow graphs. The system was developed by researchers on the Google Brain team to conduct machine learning and deep neural network research.
With version 0.8, TensorFlow can run the training process for a machine learning model in parallel across hundreds of machines. Why does that matter? Here is what Google says on the topic:
In order to continually improve our models, it's crucial that the training process be as fast as possible. One way to do this is to run TensorFlow across hundreds of machines, which shortens the training process for some models from weeks to hours, and allows us to experiment with models of increasing size and sophistication.
And this is exactly what version 0.8 delivers. If training takes too long, you simply add more machines to the process and voila!
Distributed training has been one of the most requested features, and Google has finally brought it to its users. In version 0.8 the company has also added Python support to make it easier to create new libraries.
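The idea behind this kind of distributed, data-parallel training can be sketched in plain Python, independently of TensorFlow's actual API: each worker computes gradients on its own shard of the training data, and the averaged result drives a single parameter update. The function and variable names below are illustrative, not TensorFlow's.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy 1-D linear model y = w * x; the loss is mean squared error.
def shard_gradient(w, shard):
    """Gradient of MSE w.r.t. w, computed on one worker's data shard."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def distributed_step(w, shards, lr=0.01):
    """One synchronous data-parallel update: each 'machine' computes its
    shard gradient in parallel, then the averaged gradient updates w."""
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        grads = list(pool.map(lambda s: shard_gradient(w, s), shards))
    return w - lr * sum(grads) / len(grads)

# Data generated from y = 3x, split across 4 simulated machines.
data = [(float(x), 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):
    w = distributed_step(w, shards)
print(round(w, 3))  # converges toward the true weight 3.0
```

Real distributed TensorFlow adds the hard parts this sketch skips, such as shipping gradients between machines over the network and keeping parameter servers in sync.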
Defining handy new compositions of operators is as easy as writing a Python function and costs you nothing in performance.
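The sketch below illustrates what "composing operators in a Python function" means, using plain-Python stand-ins for primitive operators; in TensorFlow itself these would be built-in ops such as matrix multiplication and ReLU, and all names here are illustrative.

```python
# Plain-Python stand-ins for primitive ops (illustrative only).
def matvec(m, v):
    """Matrix-vector product."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def add(u, v):
    """Elementwise vector addition."""
    return [a + b for a, b in zip(u, v)]

def relu(v):
    """Elementwise rectified linear unit."""
    return [max(0.0, a) for a in v]

# A new composite "operator" is just an ordinary Python function
# built out of the existing primitives.
def dense_relu_layer(x, weights, bias):
    return relu(add(matvec(weights, x), bias))

print(dense_relu_layer([1.0, 2.0],
                       [[1.0, -1.0], [0.5, 0.5]],
                       [0.5, 0.0]))  # → [0.0, 1.5]
```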
The distributed TensorFlow runtime is powered by the high-performance gRPC library, and you can scale the number of machines to boost performance. For example, distributing training across 100 GPUs sped up Inception training by a factor of 56.
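Those figures imply a parallel scaling efficiency that is easy to check: a 56x speedup on 100 GPUs is 56% of ideal linear scaling, since a perfectly parallel job on 100 devices would run 100x faster.

```python
# Scaling efficiency = measured speedup / ideal (linear) speedup,
# where ideal speedup on N devices is N.
def scaling_efficiency(speedup, num_devices):
    return speedup / num_devices

print(scaling_efficiency(56, 100))  # → 0.56
```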
The company is now working on further improving the performance of its systems. Meanwhile, you can learn more about the project on GitHub. To experience TensorFlow and machine learning firsthand, visit Google's browser-based simulator, right here.