The line between human and machine is beginning to blur as machines gain artificial intelligence capabilities built on Watson's popular API set.
IBM has just announced the beta release of three new APIs that could help revolutionize the way we interact with machines: Tone Analyzer, Emotion Analysis, and Visual Recognition.
When developers implement these APIs, machines can be trained to hear changes in a person's voice, analyze a person's emotional state, and even recognize objects presented in a picture or through a real-time image capture device.
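As an illustration, an emotion-analysis API of this kind typically returns a confidence score per emotion, and the application picks the strongest signal. The JSON shape below is a simplified assumption for the sketch, not Watson's documented schema:

```python
# Sketch: pick the dominant emotion from an emotion-analysis response.
# The response layout (a dict mapping emotion -> confidence score) is an
# assumed, simplified shape, not Watson's documented format.

def dominant_emotion(scores):
    """Return the (emotion, score) pair with the highest confidence."""
    return max(scores.items(), key=lambda pair: pair[1])

# A hypothetical response for one utterance:
sample_scores = {
    "joy": 0.12,
    "sadness": 0.05,
    "anger": 0.71,  # highest confidence wins
    "fear": 0.08,
    "disgust": 0.04,
}

print(dominant_emotion(sample_scores))  # -> ('anger', 0.71)
```

An app might use the winning label to route a support call or adjust a chatbot's reply.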
IBM is also re-releasing its Text to Speech (TTS) API under a new name: Expressive TTS. IBM says that with these APIs, organizations can create devices that think, perceive, and empathize alongside humans.
IBM has made its software development kits much more streamlined and accessible than in the past. IBM says that SDKs for Node, Java, Python, Unity, and iOS Swift are now available. Developers can access the Watson APIs through the Watson Developer Cloud on Bluemix.
Each of the new APIs has its own distinct implementation. Application developers now have a basic set of tools for writing apps that can interact naturally with a human. The rebranding of TTS as Expressive TTS looks promising, given that Watson can gauge a speaker's emotional state while listening to their speech.
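Expressive TTS lets a caller hint at the desired speaking style through SSML-style markup. A minimal sketch follows, assuming an `express-as` element of the kind IBM documented for this feature; tag and style names should be verified against the current Watson Text to Speech docs:

```python
# Sketch: wrap text in SSML-style markup requesting an expressive
# speaking style. The <express-as> element mirrors IBM's extension for
# Expressive TTS, but verify the tag and type names against the current
# Watson Text to Speech documentation before relying on them.

def expressive_ssml(text, style="GoodNews"):
    """Build an SSML snippet asking the synthesizer for a given style."""
    return (
        '<speak version="1.0">'
        f'<express-as type="{style}">{text}</express-as>'
        "</speak>"
    )

print(expressive_ssml("Your application has been approved!"))
```

The resulting string would be sent as the synthesis input in place of plain text, letting the voice sound upbeat, apologetic, or uncertain as the situation demands.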
“We continue to advance the capabilities we offer developers on IBM’s Watson platform to help this community create dynamic AI infused apps and services,” says David Kenny, General Manager of IBM Watson.
“We are also simplifying the platform, making it easier to build, teach and deploy the technology. Together, these efforts will enable Watson to be applied in many more ways to address societal challenges,” Kenny added.
Ready to learn more about how IBM Watson plans on changing the world? Read more about Watson at IBM.com/Watson.