SEATTLE—We've been tracking Microsoft's work to bring its machine learning platform to more developers and more applications over the last several years. What started as a set of narrowly focused, specialized services has grown into a wider range of features that are more capable and more flexible, while also being more approachable to developers who aren't experts in the field of machine learning.
This year is no different. The core family of APIs covers the same ground as it has for a while—language recognition, translation, and image and video recognition—with Microsoft taking steps to make the services more capable and easier to integrate into applications.
The company's focus this year is on two main areas: customization and edge deployment. All the machine learning services operate in broadly the same way, with two distinct phases. The first phase is building a model: a set of test data (for example, text in one language and its translation into another language, or photos of animals along with information about which animal they are) is used to train neural networks to construct the model. The second phase is using the model: new data (say, untranslated text or an image of an unknown animal) is fed into the model and an output is produced according to what the neural nets learned (the translation, or the kind of animal pictured).
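The two phases described above can be sketched in a few lines of code. This is a minimal illustration, not Microsoft's implementation: a toy nearest-neighbour lookup stands in for a trained neural network, and the animal data is hypothetical.

```python
# Phase 1 (training): build a "model" from labelled examples.
# Here the model is just the stored examples; a real service would
# train a neural network on far larger data sets.
def train(samples):
    """samples: list of (feature, label) pairs, e.g. (weight_kg, animal)."""
    return list(samples)

# Phase 2 (inference): feed new data into the model and get an output.
def predict(model, feature):
    """Classify new data by the closest stored training example."""
    return min(model, key=lambda s: abs(s[0] - feature))[1]

# Hypothetical labelled training data: (weight in kg, animal).
model = train([(4.0, "cat"), (30.0, "dog"), (600.0, "horse")])

print(predict(model, 5.5))  # a 5.5 kg animal is nearest the cat example
```

The split matters in practice: training is expensive and done once (or periodically), while inference is cheap and runs per request, which is what makes the edge-deployment scenario discussed below feasible.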
