Teachable Machine offers a playful introduction to machine learning. Google has now extended the browser-based tool: version 2.0 allows trained models to be exported, and in addition to gestures, sounds and full-body poses can now be trained.
Creating machine learning models with the tool requires no coding knowledge. The program recognizes gestures via the webcam; at the push of a button, each gesture is linked to a color and trained through repetition or several recordings. Afterwards, Teachable Machine can recognize these patterns. The first version could only create certain kinds of models, such as hand gestures. This has now been expanded: in a tutorial video, Google shows how the AI distinguishes between a picture with and without a stuffed dog, so the scope has grown. In addition, audio models can now be trained and recognized. And instead of the previous limit of three colors, i.e. three patterns, more categories can be created.
Download and integration
The trained models can then be exported and integrated into websites and apps. According to a Google blog post, Teachable Machine runs entirely in the browser, and no data is passed on. The program is based on the open-source machine learning framework TensorFlow; Google also recently launched TensorFlow Enterprise for large companies as well as TensorBoard, an open-source visualization toolkit.
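As a sketch of what such an integration can look like: the snippet below, assuming a placeholder model URL and a page that already includes the TensorFlow.js and `@teachablemachine/image` scripts, loads an exported image model and classifies an image element. The model URL and the `classify` helper are illustrative, not part of the article.

```javascript
// Sketch: using an exported Teachable Machine image model in the browser.
// MODEL_URL is a placeholder; every export ships a model.json and metadata.json.
const MODEL_URL = "https://teachablemachine.withgoogle.com/models/<your-model-id>/";

async function classify(imageElement) {
  // tmImage is provided by the @teachablemachine/image script tag
  const model = await tmImage.load(
    MODEL_URL + "model.json",
    MODEL_URL + "metadata.json"
  );
  // predict() returns one { className, probability } entry per trained class
  const predictions = await model.predict(imageElement);
  // Return the most probable class
  return predictions.sort((a, b) => b.probability - a.probability)[0];
}
```

Since everything runs client-side via TensorFlow.js, no image data has to be sent to a server, which matches Google's statement that no data is passed on.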
For the further development, Google worked with teachers, artists and students. The target group is schoolchildren, who are meant to overcome their reservations about artificial intelligence through Teachable Machine. The AI learning offer is also aimed at advanced users: university students and Google employees reportedly use the program as well. For professional use, the company offers Cloud AutoML.