
New and improved! Now with Lightsaber mode!

Progress is a cruel mistress.

I have come to discover some things lately. The first is the psychological dilemma of creating a sound-producing interface/prosthesis. If you can produce an endless array of hardware, software, and sonic permutations, I am finding that this becomes the new "songwriting," except that the songs are programming and design. Sitting (!?) and recording a song, the aural type, is of less and less interest to me currently. Maybe one day I will need to document these ideas in carbonite, but right now the song is in the variation of the project.

This is just a little demo of my test-bed controller, v1.1. I discovered that if I 3D print the controller in clear plastic, it traps the light and makes the units glow. It is not necessary for the entire system to glow to get the effect, just key pieces around the mouthpiece and one part of the hand unit, so I am incorporating these design modifications into the units that will go out to contributors once the software is up to a decent alpha stage. And on that topic, for those who have contributed and are wondering: the perks are dependent on completion of the Beatjazz software that I promised would be given to everyone. I feel that the few controllers I released to contributors earlier this year went out too soon and don't have the feel of a full, cohesive system (although they will receive the upgrades).

On the topic of the software, I made a brief video just to show you the direction the project is going. In this video, I have incorporated what I call "lightsaber" sounds: a buzzing sound that reacts to movement on the y and z axes of the accelerometers to give a characteristic "shooshing" sound that is very sci-fi. It also serves as a means of developing a sense of what is happening in the 3D space around you. This sound will evolve to become much more "haptic," by which I mean able to convey parameter feedback in a natural fashion.
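For the technically curious, here is a minimal sketch of the mapping idea in Python (the actual system lives in a Pure Data patch; the names, smoothing constant, and cutoff range here are illustrative assumptions): the faster the y/z axes move, the louder and brighter the buzzing noise gets, which is what gives the "shoosh" its swell and decay.

```python
import math

class LightsaberHum:
    """Illustrative sketch: map accelerometer motion to a 'shoosh' sound.

    Faster movement on the y/z axes -> louder, brighter buzzing noise.
    Parameter values are placeholders, not the real patch's settings.
    """

    def __init__(self, smoothing=0.9):
        self.prev_y = 0.0
        self.prev_z = 0.0
        self.level = 0.0              # smoothed motion level, 0..1
        self.smoothing = smoothing

    def update(self, accel_y, accel_z):
        # How fast did the controller move since the last reading?
        speed = math.hypot(accel_y - self.prev_y, accel_z - self.prev_z)
        self.prev_y, self.prev_z = accel_y, accel_z

        # Smooth the motion level so the sound swells and decays naturally
        # instead of stuttering with every sensor reading.
        self.level = self.smoothing * self.level + (1 - self.smoothing) * min(speed, 1.0)

        amplitude = self.level                  # noise gain, 0..1
        cutoff_hz = 200 + 4000 * self.level     # brighter when moving faster
        return amplitude, cutoff_hz
```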

The last two weeks of hell have produced unintended fruits. I had been working to incorporate Ubuntu Studio as my Linux test bed. One thing led to another, and the next thing I knew, I had a corrupted hard drive and had lost 80% of all the data I produced this year. That didn't bother me as much as I thought it would, since I had been reducing my dependence on many different software apps for a few weeks prior. After another week of trying to make Linux do what I wanted on a much smaller drive, I decided to dual-boot Win7 and Linux on a new drive, with an 8GB partition for Linux so it can be easily shared on a USB stick. I had heard about solid-state drives (a hard drive made of memory chips rather than a spinning disk) and found one on sale, so I bought it. Long story short: all the issues I had had with Pure Data on Win 7 were gone. It is stable, fast, and sounds amazing right now, so development has picked up considerably.

Specifically, I have a very basic vocoder, a breath-controlled synth, and a basic drum machine. Each of these is edited live in performance using my gestural editing system. Basically, I use the top-right joystick to put the system into one of two modes currently: instrument edit mode and performance mode. When in instrument (element) edit mode, all the keys and joysticks are dedicated to using the accelerometers to change parameters based on key/joystick combinations. I have gone through four different editing metaphors, which is the reason I am only just now showing it, but I now have one that "feels" right. Each key and joystick position contributes a unique bit to a 12-bit number, so any combination adds up to a unique value. This means that if I press keys 1, 2, and 7, the number produced can come from no other key combination, and that number can be assigned to a specific set of parameters. It also means that just resting my fingers on the keys does nothing unless the keys being pressed have been assigned to a set of parameters.
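Here is a minimal Python sketch of that chording scheme (the key numbering and the parameter table are hypothetical examples; the real logic lives in the Pure Data patch): each key owns one bit of a 12-bit word, so any held combination sums to an ID that no other combination can produce, and unmapped combinations are simply ignored.

```python
# Sketch of the chording scheme: each key/joystick position owns one bit
# of a 12-bit word, so every combination sums to a unique ID.

def chord_id(pressed_keys):
    """Return the unique ID for a set of held keys (numbered 1..12)."""
    chord = 0
    for key in pressed_keys:
        chord |= 1 << (key - 1)   # each key contributes a unique power of two
    return chord

# Hypothetical assignment table: only mapped chords change anything, so
# resting fingers on an unmapped combination does nothing.
PARAMETER_SETS = {
    chord_id([1, 2, 7]): "drum machine: filter cutoff + resonance",
    chord_id([3, 4]):    "synth: breath-to-vibrato depth",
}

held = [1, 2, 7]
print(chord_id(held))   # 67 = 1 + 2 + 64; no other combination sums to this
print(PARAMETER_SETS.get(chord_id(held), "no assignment -> do nothing"))
```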

The hardest part now is the mouthpiece abstraction. I am attempting to program a level of stability into the lip sensor without reducing its dynamic potential. I am working to create some basic physical models that will allow me to investigate the creation of space and volume using filter arrays, so it will be an ongoing topic, but I have enough toys now to take it out in public to play.
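By way of illustration, here is one common approach to that stability problem sketched in Python (a dead band plus light exponential smoothing; the parameter values are placeholders, not what the final patch will necessarily use): small jitter gets swallowed, while deliberate lip movement passes through at nearly full range.

```python
class LipSensorSmoother:
    """Sketch of one way to stabilize a noisy sensor without flattening
    its dynamics: ignore tiny changes, smooth larger ones only lightly."""

    def __init__(self, dead_band=0.01, smoothing=0.7):
        self.dead_band = dead_band    # jitter below this is ignored
        self.smoothing = smoothing    # 0 = no smoothing, 1 = frozen
        self.value = 0.0

    def update(self, raw):
        if abs(raw - self.value) < self.dead_band:
            return self.value         # hold steady through sensor noise
        # Light smoothing preserves fast, deliberate lip gestures.
        self.value = self.smoothing * self.value + (1 - self.smoothing) * raw
        return self.value
```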

Much of this will be premiered at an event here in Berlin at the end of August called Campus Party, where I will be performing and also giving a "cyborg" workshop on 3D design and wearable technology for musical performance. I am pretty excited about that! We shall see if the progress fairy hangs around a bit longer. In the meantime, at least I have a swooshy, glowing glowstick musical interface to keep me occupied...