Caffe2

On the first day of its F8 developer conference, Facebook announced that it is providing the open-source developer community with the first production-ready version of Caffe2. This deep learning framework, a piece of artificial intelligence (AI) technology, aims to bring the ability to create and run AI models to hand-held mobile devices.

The release of Caffe2 follows in the footsteps of the original Caffe project launched by researchers at the University of California, Berkeley. The framework has been optimized to be lightweight and portable enough to integrate artificial intelligence features into smartphones and tablets. In Facebook's words, the primary benefit of the platform is:

You can bring your creations to scale using the power of GPUs in the cloud or to the masses on mobile with Caffe2’s cross-platform libraries.

Caffe2 isn’t an AI program itself but a tool for programming AI into phones or low-power computers like the Raspberry Pi. You only need to write a few lines of code to create a learning model for the intelligence system and bundle it with your mobile app on Android, iOS, or other connected devices. Caffe2 lets apps recognize images, video, text, and speech, making them substantially more aware: they can learn about the user and draw conclusions from the data they collect each day.
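
To give a sense of what "a few lines of code" looks like, here is a minimal sketch of running inference with Caffe2's Python API, along the lines of the loading-pretrained-models tutorial at caffe2.ai. The protobuf file names and the 227x227 input shape are assumptions based on the SqueezeNet entry in the model zoo; substitute the files and shape for whatever model you actually deploy.

```python
import numpy as np
from caffe2.python import workspace

# Load a pretrained model's weights (init_net) and network definition (predict_net).
# These file names are assumptions taken from the SqueezeNet model zoo entry.
with open("init_net.pb", "rb") as f:
    init_net = f.read()
with open("predict_net.pb", "rb") as f:
    predict_net = f.read()

# Build a predictor from the two protobufs.
p = workspace.Predictor(init_net, predict_net)

# A placeholder 227x227 RGB image in NCHW layout; a real app would feed a
# preprocessed camera frame or photo here.
img = np.random.rand(1, 3, 227, 227).astype(np.float32)

# Run the network and report the class with the highest score.
results = p.run([img])
scores = np.asarray(results[0]).flatten()
print("predicted class:", int(scores.argmax()))
```

The same exported protobuf pair is what you would bundle with a mobile app and execute through Caffe2's on-device runtime.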

We’re committed to providing the community with high-performance machine learning tools so that everyone can create intelligent apps and services. Caffe2 is deployed at Facebook to help developers and researchers train large machine learning models and deliver AI-powered experiences in our mobile apps.

Facebook believes that while powerful GPUs and large-scale cloud servers matter for developers, you don’t always need them to integrate AI solutions into your apps. With that in mind, the developer teams at Facebook saw the need for a robust, flexible, and portable deep learning framework that can compute over all kinds of information at massive scale, without excessive memory or power requirements. Thus, Caffe2 came into existence.

Now, developers will have access to many of the same tools [as Facebook], allowing them to run large-scale distributed training scenarios and build machine learning applications for mobile.

With the official release of the first Caffe2 version, Facebook is providing developers with tutorials and examples that demonstrate how you can train networks at massive scale, leveraging multiple GPUs in one machine or many machines with one or more GPUs each; a single-machine training sketch is shown below. In keeping with Facebook's focus on community, the Caffe2 model zoo aims to bring together developers and researchers who share their work, making it easy for others to understand how it can be applied.
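
Before scaling out to many GPUs, the basic Caffe2 workflow is to define a network, let the framework generate the backward pass, and run the training net in a loop. The sketch below is modeled on the Caffe2 Python tutorials and trains a tiny fully connected classifier on random data on a single device; the toy dimensions, blob names, and manual SGD update are illustrative assumptions, not Facebook's reference code.

```python
import numpy as np
from caffe2.python import brew, model_helper, workspace

# Toy data: 64 samples with 100 features each, labels in [0, 10).
data = np.random.rand(64, 100).astype(np.float32)
label = (np.random.rand(64) * 10).astype(np.int32)
workspace.FeedBlob("data", data)
workspace.FeedBlob("label", label)

# Define a one-layer classifier with a cross-entropy loss.
m = model_helper.ModelHelper(name="toy_classifier")
fc = brew.fc(m, "data", "fc", dim_in=100, dim_out=10)
softmax = brew.softmax(m, fc, "softmax")
xent = m.net.LabelCrossEntropy([softmax, "label"], "xent")
loss = m.net.AveragedLoss(xent, "loss")

# Ask Caffe2 to generate the gradient operators, then attach a plain SGD
# update for every parameter (the pattern used in the MNIST tutorial).
m.AddGradientOperators([loss])
ITER = brew.iter(m, "iter")
LR = m.net.LearningRate(ITER, "LR", base_lr=-0.1, policy="fixed")
ONE = m.param_init_net.ConstantFill([], "ONE", shape=[1], value=1.0)
for param in m.params:
    grad = m.param_to_grad[param]
    m.net.WeightedSum([param, ONE, grad, LR], param)

# Initialize parameters, build the net, and run a few training iterations.
workspace.RunNetOnce(m.param_init_net)
workspace.CreateNet(m.net)
for _ in range(100):
    workspace.RunNet(m.net)
print("final loss:", workspace.FetchBlob("loss"))
```

The same model-building functions can then be handed to Caffe2's data-parallel utilities to spread training across the multiple GPUs and machines the tutorials describe.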

Facebook has also worked with several hardware partners, such as NVIDIA, Qualcomm, Intel, Amazon, and Microsoft, to further optimize Caffe2 for mobile and cloud platforms. This has enabled them to go beyond the previously released Caffe2Go, a mobile CPU- and GPU-optimized version of Caffe2, and build and deploy more complex models.

If you’re psyched about integrating deep learning models into your mobile apps, check out the source code for Caffe2 on GitHub. Its usage and training are also explained thoroughly in the documentation and tutorials at caffe2.ai.
