On sound and movement.

A concept I’d been kicking around for a while is the dynamic between a music performance and the audience’s response. Getting a sense of what a crowd responds to or is into could influence how a set gets altered, at either a micro or macro level. There are a few vectors of hardware and interaction I’d like to get at (I’ve been eyeing community drivers for the Kinect for a long time as a way to look at crowds en masse), but when focusing more directly on wearable media I wanted to dig more deeply into movement as an expressive response to media, and into ways to increase reciprocity with whatever inspired that movement.

Given its ability to act as a USB HID device, I went with Adafruit’s FLORA as my microcontroller. My hope was that this would let me avoid a back end on any computer: I could just program the microcontroller to put out the MIDI I wanted to express based on whatever sensor input. It wasn’t until after I’d received my FLORA and began digging for code that I noticed the complete lack of available information on any such project with that particular microcontroller. I was hard-pressed to find any example code at all for MIDI over HID on the Arduino platform in general, and what I did find seemed largely inapplicable. I’d like to take the time to research and contribute to such a library for the FLORA, but that was outside the scope of this project.

For sensory input, I’ve so far settled on Adafruit’s companion FLORA accelerometer. After some flirtations with setting up Firmata mode with the Node.js library Johnny-Five, my limited low-level experience put interpreting the I2C signals coming from the accelerometer outside the project’s scope as well. I opted instead to use Adafruit’s provided library for the accelerometer and format its output for easy digestion by node-serialport. At this point, the FLORA is simply kicking out accelerometer data along the three axes as a single line, which the Node.js back end reads as a stream. The back end parses that serial text into numeric values, which I can then scale up and interpret.
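To make that pipeline concrete, here is a minimal sketch of the Node.js side, assuming the FLORA prints each reading as a comma-separated x,y,z line and using the current node-serialport API; the port path, baud rate, and line format here are placeholders rather than the project’s actual values:

const { SerialPort } = require('serialport');
const { ReadlineParser } = require('@serialport/parser-readline');

// Placeholder port path and baud rate; adjust to match the FLORA's serial connection.
const port = new SerialPort({ path: '/dev/tty.usbmodem1411', baudRate: 9600 });
const parser = port.pipe(new ReadlineParser({ delimiter: '\n' }));

parser.on('data', (line) => {
  // Assumes each line looks like "0.12,-9.81,0.34"
  const [x, y, z] = line.trim().split(',').map(Number);
  if ([x, y, z].some(Number.isNaN)) return; // skip malformed lines
  handleReading(x, y, z);
});

function handleReading(x, y, z) {
  // Scaling and interpretation happen here before anything is pushed out as MIDI.
  console.log({ x, y, z });
}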

Currently the output vector for this is the coremidi module, which lets me push interpreted accelerometer data out as MIDI in OS X. I am still exploring the best ways to scale and manipulate the data, and then how to have it affect music. Tying the accelerometer output to something like the mod wheel makes sense for altering the sound without having to worry about particular notes or timing (none of the Node.js MIDI libraries I could find had clock functionality built in, so quantizing would have to happen inside whatever music software is on the other end). The practical hard part of creating a motion input has been completed, but how to manipulate that input, and in what ways to output it, is not something I’ve entirely settled on. I will be making the source code for the project as it exists now public in the near future.
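As a sketch of what that mod-wheel mapping could look like, the snippet below uses the midi npm package (node-midi) rather than the coremidi module mentioned above, since it can expose a virtual CoreMIDI port on OS X; the scaling range is an assumption, and the port name is a made-up example:

const midi = require('midi');

// Open a virtual output that appears as a CoreMIDI source in OS X music software.
const output = new midi.Output();
output.openVirtualPort('FLORA Accelerometer');

// Scale a raw axis reading (assumed to fall roughly within -10..10 m/s^2) into MIDI's 0..127 range.
function toMidiValue(axisReading) {
  const clamped = Math.max(-10, Math.min(10, axisReading));
  return Math.round(((clamped + 10) / 20) * 127);
}

// Send the scaled value as a control change on channel 1, controller 1 (the mod wheel).
function sendModWheel(axisReading) {
  output.sendMessage([0xB0, 1, toMidiValue(axisReading)]);
}

In practice, the handleReading function from the serial sketch above would call sendModWheel with whichever axis (or combination of axes) ends up feeling most expressive.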

I’ve purchased and will be adhering a SparkFun BlueSMiRF Silver Bluetooth adapter (Adafruit has insisted a sewable module of their own is coming) to push the serial data wirelessly to a computer, both to get rid of a physical cable and to open up the possibility of having more than one of these devices in play at once. It would be incredibly fun to get a dancefloor full of people wired up with these sensors and to crunch data about their physical response to music that would then influence how the music gets expressed. Unfortunately, this seems very cost-ineffective at this time. Using a full-size FLORA, accelerometer, Bluetooth module and coin battery-based power supply puts material costs well above what seems practical or viable at any sort of scale. The cost also kept me from embedding or folding the hardware into any more elaborate or particular clothing for fear of making it too specific; a wristband was as simple an item of clothing as I could imagine.

As it stands, this either serves as a neat proof of concept for the dynamic I was seeking to explore, or it’s something that I, as a producer of music, can wear while recording or performing to gain another facet of control and expression that responds to my physical body. I’m still excited about this idea, but it ends up feeling less novel to me. The goal for this concept going forward is figuring out how to play with the movement or expressions of a crowd without needing a 1:1 relationship between microcontrollers (or sensors) and individuals. Pushing in a more wearable direction would take cheaper parts, and maybe wouldn’t even need a dancefloor context. Perhaps, much like many “quantified self” devices, it would be possible to record the rhythm of one’s daily movements and use those to algorithmically compose a soundtrack for any given day.

