Google’s API catalog gets longer every day. The company offers developer tools for everything from basic app development to complex pattern recognition. Today it announced the public beta launch of its Cloud Natural Language API, a set of tools that joins other pre-trained machine-learning APIs like the Cloud Speech API, the Vision API and the Translate API.
Google says the Cloud Natural Language API gives developers access to three Google-powered engines: sentiment analysis, entity recognition and syntax analysis. The service, now in open beta, is based on the company’s natural language understanding research. It will initially support three languages: English, Spanish and Japanese, and will help developers reveal the structure and meaning of text in those languages.
As previously stated, the new API supports three types of analysis. Sentiment analysis helps you understand the overall sentiment of a block of text; entity recognition identifies the most relevant entities in a block of text and labels them with types such as person, organization, location, event, product and media; and syntax analysis identifies parts of speech and creates dependency parse trees for each sentence to reveal the structure and meaning of the text.
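To make the three analyses concrete, here is a minimal sketch of the request bodies a developer might send to the API’s REST endpoints. The endpoint names and field names (`document`, `type`, `content`, `encodingType`) are assumptions based on Google’s public REST documentation, not something confirmed in the announcement itself:

```python
import json

# Hypothetical base URL; at launch the beta used a versioned path,
# so treat this as illustrative rather than exact.
API_BASE = "https://language.googleapis.com/v1/documents"

def build_request(text, encoding="UTF8"):
    """Build the JSON body shared by the three analysis endpoints."""
    return {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": encoding,
    }

def endpoint(analysis):
    """analysis is one of: analyzeSentiment, analyzeEntities, analyzeSyntax."""
    return "{}:{}".format(API_BASE, analysis)

body = build_request("Google announced a new API today.")
print(endpoint("analyzeSentiment"))
print(json.dumps(body, indent=2))
```

In practice the same document payload is POSTed to whichever of the three endpoints you need, and the response contains the sentiment score, the entity list, or the parse tree respectively.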
Entity recognition isn’t anything new; it has been in use for almost a decade, and many leading services have offered it for a long time. The same can be said about sentiment analysis. Syntax analysis, however, is a different story and isn’t as widely available. Usage pricing depends on which of the three services you use and how many records you plan to analyze.
Google says the new API is optimized to meet the scale and performance needs of developers and enterprises in a broad range of industries.
Cloud Speech API also turns beta
Google also announced that its Cloud Speech API is entering open beta today. The service supports more than 80 languages and powers products such as Google Search and Google Now. More than 5,000 companies signed up for the Speech API alpha.
Among the companies in that alpha: HyperConnect, a video chat app with over 50 million downloads in more than 200 countries, uses a combination of Google’s Cloud Speech and Translate APIs to automatically transcribe and translate conversations between people who speak different languages; and VoiceBase, a leader in speech analytics as a service, uses the Speech API to let developers surface insights and predict outcomes from call recordings.
The beta version introduces two new features: word hints and asynchronous calling. Word hints let developers supply phrases the recognizer should expect, improving accuracy for domain-specific vocabulary, while asynchronous calling lets longer audio be processed without holding a request open.
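A rough sketch of how word hints might appear in a recognition request body follows. The field names (`speechContexts`, `phrases`, `languageCode`) are assumptions drawn from later public Speech API documentation, not the beta announcement:

```python
def build_speech_request(audio_uri, hint_phrases=None):
    """Build a hypothetical recognize request body.

    hint_phrases illustrates the 'word hints' feature: a list of
    phrases the recognizer should favor when transcribing.
    """
    config = {"languageCode": "en-US"}
    if hint_phrases:
        # Word hints: bias recognition toward these phrases.
        config["speechContexts"] = [{"phrases": list(hint_phrases)}]
    return {"config": config, "audio": {"uri": audio_uri}}

# Asynchronous calling would POST the same body to a long-running
# endpoint and poll an operation for the result, rather than waiting
# on a single synchronous response.
req = build_speech_request("gs://my-bucket/call.wav", ["Cloud Speech API"])
print(req)
```

The design point is that word hints ride along in the request config, so the same audio can be recognized with different vocabularies per call.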