MODULE – ML Speech to Command.

Hello world,

This element of the artwork tests speech commands: a machine learning technique for sound classification. Audio from the microphone is fed through the ML model, which analyses the detected subject's voice, estimates the words received, and compares them against the words trained beforehand to find the best match.
I was fiddling with the out-of-the-box model from Google, but I will probably train my own model if I decide to use this technique in my final artwork. Google's model is good enough for basic things, but it was trained on other people's voices and carries their biases, so the accuracy is not as high or as personalised as it could be.
I was exploring the utility of giving commands via words in the context of this project. I imagine I could assign visual elements, activate modes of operation, give commands, change variable values, or trigger other functionality not foreseen at this point.
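The mapping from recognised words to functionality could be sketched roughly as below. This is a minimal illustration, not the actual implementation: the vocabulary ("go", "stop", "up", "down"), the confidence threshold, and the state fields are all assumptions for the sake of the example.

```python
# Hypothetical sketch: route a classifier's prediction to a command.
# The labels and threshold below are assumptions, not a trained vocabulary.

CONFIDENCE_THRESHOLD = 0.75

# Each recognised word is assigned a piece of functionality.
COMMANDS = {
    "go":   lambda state: state.update(mode="running"),
    "stop": lambda state: state.update(mode="idle"),
    "up":   lambda state: state.update(speed=state["speed"] + 1),
    "down": lambda state: state.update(speed=state["speed"] - 1),
}

def handle_prediction(label, confidence, state):
    """Apply the command for `label` if the model is confident enough."""
    if confidence < CONFIDENCE_THRESHOLD:
        return False  # too uncertain: ignore this prediction
    action = COMMANDS.get(label)
    if action is None:
        return False  # word not in the command vocabulary
    action(state)
    return True

state = {"mode": "idle", "speed": 0}
handle_prediction("go", 0.92, state)  # accepted: switches mode to "running"
handle_prediction("up", 0.40, state)  # rejected: confidence below threshold
print(state)  # {'mode': 'running', 'speed': 0}
```

In a browser sketch the `handle_prediction` callback would simply be wired to whatever the sound-classification model emits each time it hears a word.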
These visuals do not resemble the aesthetic direction of the project; they simply illustrate the exercise.

Testing the machine learning model along with the visuals.