Hello world,
This is a brief update on the progress of the project.

This project is about creating a space and a tool for human-machine interaction as a means of creating art.
Since the proposal, I have managed to narrow down* the ML elements from the initial four to two (pose estimation and voice command), both already incorporated into the base code.
The final piece is going to be exhibited as documentation of a past performance, in the form of a multimedia work.
The next steps involve adding more expressive elements such as flocking, building in sound synthesis, deciding on the choreography (if any), and assigning which elements are controlled by the code and which by the artist.
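To give a sense of the flocking element mentioned above, here is a minimal boids-style sketch in plain Python. This is an illustrative sketch, not the project's actual code: the function name, the rule weights, and the tuple layout are all hypothetical, and a real version would likely drive visuals from the pose-estimation data instead of fixed starting points.

```python
import math

# Each boid is a tuple (x, y, vx, vy). One step applies the three
# classic flocking rules: cohesion, separation, and alignment.
# All parameter values here are placeholder choices for illustration.

def flock_step(boids, cohesion=0.01, separation=0.05, alignment=0.1, min_dist=1.0):
    new_boids = []
    n = len(boids)
    for i, (x, y, vx, vy) in enumerate(boids):
        others = [b for j, b in enumerate(boids) if j != i]
        # Cohesion: steer toward the centre of the other boids.
        cx = sum(b[0] for b in others) / (n - 1)
        cy = sum(b[1] for b in others) / (n - 1)
        vx += (cx - x) * cohesion
        vy += (cy - y) * cohesion
        # Separation: move away from boids that are too close.
        for ox, oy, _, _ in others:
            if math.hypot(ox - x, oy - y) < min_dist:
                vx -= (ox - x) * separation
                vy -= (oy - y) * separation
        # Alignment: nudge velocity toward the flock's average heading.
        avx = sum(b[2] for b in others) / (n - 1)
        avy = sum(b[3] for b in others) / (n - 1)
        vx += (avx - vx) * alignment
        vy += (avy - vy) * alignment
        new_boids.append((x + vx, y + vy, vx, vy))
    return new_boids

boids = [(0.0, 0.0, 1.0, 0.0), (10.0, 0.0, 0.0, 1.0), (5.0, 8.0, -1.0, 0.0)]
for _ in range(10):
    boids = flock_step(boids)
```

In the piece, the rule weights themselves could be the parameters handed over to either the code or the artist, which is one way to frame the control-assignment question above.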


* The other two ML techniques could potentially add to the artwork; however, their contribution is not proportional to the complications they introduce. Face detection would make more sense with more than one performing artist, as it could classify each dancer and apply a different aesthetic to each. In this case, object detection does not serve any unique function, so its role can be delegated to the pose estimation technique.


A short video with pose estimation and style transfer.