After a semester of learning Max/MSP, I began this ongoing project: a musical interface where the user takes a picture with a DSLR and it is immediately turned into a unique song based on the colors in the photograph.
The way this works is that a DSLR is connected to the computer running the software via USB cable. Using the “shell” external, Max/MSP is given direct access to the terminal, where the gphoto library is used to trigger the camera to capture an image and save it to the computer immediately. The software finds this image, displays it, and then also displays a pixelated version. The pixelation matters because the program reads RGB data pixel by pixel, from left to right and top to bottom; grouping neighboring pixels into larger blocks clusters similar colors together and keeps each photograph’s song from running too long. Once all pixels are read, the camera takes another picture. By this point, the user will have had the chance to shift the camera to a new angle.
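Outside of Max, the capture-and-scan pipeline can be sketched in Python. The gphoto2 command-line flags are real, but the function names and the tiny stand-in image below are my own assumptions for illustration, not part of the actual patch:

```python
import subprocess

def capture_image(dest="capture.jpg"):
    # Hypothetical wrapper: the gphoto2 CLI can trigger a tethered DSLR
    # and download the frame to disk in a single call.
    subprocess.run(
        ["gphoto2", "--capture-image-and-download", "--filename", dest],
        check=True,
    )
    return dest

def read_pixels(grid):
    # Yield RGB tuples left to right, top to bottom -- the same order
    # the patch steps through the pixelated image.
    for row in grid:
        for rgb in row:
            yield rgb

# A tiny stand-in for a pixelated 2x3 image:
tiny = [
    [(255, 0, 0), (0, 255, 0), (0, 0, 255)],
    [(10, 10, 10), (200, 200, 200), (255, 255, 0)],
]
print(list(read_pixels(tiny))[:2])  # → [(255, 0, 0), (0, 255, 0)]
```

Each yielded color would then drive one step of the synthesizer before the scan advances to the next block.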
Here’s how the code is laid out:
As you can see at the bottom, there is a row of squares that functions almost like a playhead. The square on the left shows the color currently being “played,” and the next square shows the one that will be played next. This lets users anticipate changes in sound and get a better sense of what is going on.
The only thing I haven’t been able to accomplish yet is making this automaton sound good musically. The RGB values are fairly random and produce some pretty dissonant sounds when hooked up to a synthesizer. What I hope to do soon is map the values to a certain key and tonal range, as well as try to mimic some orchestral instrumentation in synth.
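One way that mapping could work is to quantize each color channel onto the degrees of a fixed scale rather than using the raw value directly. This is a minimal sketch of the idea, assuming a C major scale and MIDI note numbers; the function name and the choice of two octaves are my own assumptions:

```python
def rgb_channel_to_note(value, base=60, scale=(0, 2, 4, 5, 7, 9, 11)):
    # Hypothetical mapping: quantize a 0-255 channel value onto two
    # octaves of a major scale, starting at middle C (MIDI note 60).
    degree = value * (2 * len(scale)) // 256   # scale degree, 0..13
    octave, step = divmod(degree, len(scale))
    return base + 12 * octave + scale[step]    # MIDI note number

print(rgb_channel_to_note(0))    # → 60 (middle C)
print(rgb_channel_to_note(255))  # → 83 (B, one octave up)
```

Because every output lands on a scale degree, even random pixel data stays inside one key, which should take most of the dissonance out of the result.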