Oakamatic (2024)

Max/MSP art project

Technique: Max/MSP/Jitter, Adobe Photoshop, Adobe Premiere, Teachable Machine

Note: Team project

For our project, "Oakamatic", we want to connect the audience with nature. The audience follows on-screen prompts to perform specific actions in front of the camera. Each movement is detected and triggers a different flower-themed visual effect; we chose flowers because they show the beauty of nature clearly, making participants feel as if they are interacting with nature and seeing its beauty directly. We hope this project helps the audience appreciate and protect nature by recognizing its beauty. For the sound component, we chose calm, lyrical music to enhance the visual experience. By combining these sensory elements, we aim to create a peaceful, immersive environment that strengthens the connection between humans and nature.

[Screenshots: training the pose model in Teachable Machine]

The images above show the process of using Teachable Machine to train our pose model. We began by creating four categories in Teachable Machine: ‘default,’ ‘arms up,’ ‘arms out,’ and ‘star.’ Each team member then recorded their movements in front of the camera, capturing approximately 150 images per pose. Ensuring that the key body points were clearly visible in each image was crucial for the model to learn and identify the movements accurately. Once trained, the model was incorporated into our JavaScript code, giving the Max/MSP/Jitter patch an additional feature: recognizing our postures in real time.
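The classification step can be sketched as follows. This is a hypothetical illustration, not our exact code: Teachable Machine's pose library resolves each prediction to an array of class names with probabilities, and a small helper picks the winning pose. The class names match our four trained categories; the confidence floor of 0.8 is an assumed value.

```javascript
// Hypothetical sketch of picking the recognized pose from Teachable
// Machine's prediction output, which looks like:
//   [{ className: "default", probability: 0.91 }, ...]
// The four class names below match the categories we trained.
const CLASSES = ["default", "arms up", "arms out", "star"];

// Return the class with the highest probability, or null if none clears
// the confidence floor (an assumed value; tune it for your own model).
function recognizedPose(predictions, minConfidence = 0.8) {
  let best = null;
  for (const p of predictions) {
    if (p.probability >= minConfidence && (!best || p.probability > best.probability)) {
      best = p;
    }
  }
  return best ? best.className : null;
}
```

In the live patch, the winning class name would then be forwarded to Max/MSP for routing.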

Default.gif

Default

armsup.gif

Arms up

armsout.gif

Arms out

star.gif

Star pose

The images above show the visual effect we designed for each pose. The ‘default’ pose is the regular standing position; its screen serves as a guide, instructing the participant on which pose to perform.

The images above show how our project works. The process begins with OpenSoundControl, which receives external input signals. These signals are categorized via OSC-route into four action types: Default (a standing pose), Armsup (arms raised), Armsout (arms extended), and Star (a star-shaped pose). Each action type has its own logic node for processing, and in the diagram each signal is conditioned by a slide object (slide 20. 20.). When the signal exceeds 0.1, the corresponding action category is activated. Logic conditions are handled by if objects, ensuring that an action is triggered only when the input meets its criterion. The state of each action is then sent to the corresponding subpatch.
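The routing-and-threshold logic above can be expressed outside of Max as a small function. This is a sketch under assumptions: the OSC address names mirror the four action types from the patch, and the 0.1 threshold is the one stated above.

```javascript
// Sketch of the OSC-route + threshold logic described above: each routed
// action carries a value, and the action fires only when that value
// exceeds 0.1 (the same criterion the if objects apply in the patch).
const THRESHOLD = 0.1;
const ACTIONS = ["Default", "Armsup", "Armsout", "Star"];

// address: an OSC-style address such as "/Armsup" (names assumed here).
// Returns the action to trigger, or null when nothing should fire.
function routeAction(address, value) {
  const name = address.replace(/^\//, ""); // "/Armsup" -> "Armsup"
  if (!ACTIONS.includes(name)) return null; // unknown address: ignore
  return value > THRESHOLD ? name : null;   // at/below threshold: no trigger
}
```

In the patch itself this role is played by OSC-route fanning out to per-action if objects; the function only mirrors that branching for clarity.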

 

In the signal processing section, the subpatches (p Default, p Water, p Sunflower, and p Star) are responsible for triggering different visual effects. Since the Max/MSP system can handle up to six video signals simultaneously, these subpatches manage and control the specific video content associated with each action.

 

Once the motion recognition module detects a signal, it passes the information to designated video effect modules via receiver endpoints such as r_StartScreen, r_Water, and r_Sunflower. The four-channel video mixer (4MIXR) combines the input signals based on user-defined mixing ratios, generating a composite output signal. This final mixed video signal is displayed to the user through the video player (VIEWR).
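The mixing stage can be illustrated with a tiny numeric example. 4MIXR itself blends whole video frames; the sketch below only shows the arithmetic for a single value, and the normalization choice is an assumption about how the mixing ratios behave, not a description of 4MIXR's internals.

```javascript
// Illustrative sketch of four-channel mixing: four input values combined
// by user-defined ratios into one composite value. Ratios are normalized
// so they act as relative levels (an assumed convention).
function mix4(inputs, ratios) {
  if (inputs.length !== 4 || ratios.length !== 4) {
    throw new Error("expected four channels and four ratios");
  }
  const total = ratios.reduce((a, b) => a + b, 0) || 1; // avoid divide-by-zero
  // Weighted sum: each channel contributes in proportion to its ratio.
  return inputs.reduce((sum, v, i) => sum + v * (ratios[i] / total), 0);
}
```

For example, with all four ratios equal, the composite is the average of the four channels; setting one ratio to 1 and the rest to 0 passes that channel through unchanged.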

© 2024 Ming Kong. All rights reserved.