Stage flight test 1, public demonstration 2-BJSS-alpha-v1
so, i finally got to put the new system on a stage and test it. stage iteration is a different beast from busking or home. not only do you have people watching you on purpose, you have a much more powerful sound system that, with untamed software, can blow up and kill people. ok, maybe not kill them, but kill their ears and kill the speakers, which results in people wanting to kill you, and killing just becomes a primary theme. so i wanted to make sure that even if it sucked, it would suck within a very controlled dynamic range.
the beatjazz system consists of (eventually) 8 sound-producing "elements": things like drums, bass, vocoder, etc. but since i am designing a new kind of system, i am also including categories like resampling, complex, simple, and beatbox. why? because i can. each of these elements has its own looping buffer which (soon) can be manipulated in interesting ways. the sound variations i am capable of creating are still "coarse", so i am throwing things in, in the order that they intrigue me.
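to give an idea of what i mean by a looping buffer, here is the concept sketched in python (the real element is a pd patch; the class name and the numbers here are just mine, for illustration):

    import numpy as np

    class LoopBuffer:
        """circular buffer: record a pass, then read it back in a loop."""
        def __init__(self, seconds=4.0, sr=44100):
            self.buf = np.zeros(int(seconds * sr), dtype=np.float32)
            self.pos = 0            # shared read/write position
            self.recording = True   # flip to False to just loop playback

        def process(self, sample=0.0):
            if self.recording:
                self.buf[self.pos] = sample
            out = self.buf[self.pos]
            self.pos = (self.pos + 1) % len(self.buf)   # wraparound = the "loop"
            return out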
my most basic synth at the moment is a sine wave bass synth. it is simply a sine oscillator whose frequency is controlled by the note value it receives. the volume is currently controlled by breath pressure. that's all. any frequency over 30hz gets played and heard. it is super simple and just right for most instances, but i think that adding it to other sounds, rather than keeping it as a standalone bass, is in its future.
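in python terms (the actual synth is a pd patch; the numbers below are mine), the whole thing boils down to a few lines:

    import numpy as np

    SR = 44100

    def sine_bass(note_hz, breath, dur=0.5):
        """frequency from the note value, volume straight from breath pressure."""
        if note_hz <= 30.0:                 # only frequencies over 30hz get heard
            return np.zeros(int(SR * dur), dtype=np.float32)
        t = np.arange(int(SR * dur)) / SR
        return (breath * np.sin(2 * np.pi * note_hz * t)).astype(np.float32)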
my first synth was a simple additive synth that i plan to frankenstein into something big and mean and brooding. it is made up of a sinewave oscillator being modulated by a couple of other oscillators. in the video, this is the sound that changes drastically when i move my hand; that is the gestural control interface at work. i am discovering how adding and multiplying the signals together creates different tonalities, and i am experimenting with gestural trajectories that i am modelling on woodwind playing. i can explain it better later, after i have some new examples.
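the add-vs-multiply thing is easier to show than tell. a rough python sketch of what i am hearing (the frequencies are arbitrary):

    import numpy as np

    SR = 44100
    t = np.arange(SR) / SR                    # one second

    carrier = np.sin(2 * np.pi * 220 * t)     # the main sine
    slow    = np.sin(2 * np.pi * 3.3 * t)     # sub-audio modulator
    fast    = np.sin(2 * np.pi * 110 * t)     # audio-rate modulator

    added      = (carrier + fast) * 0.5           # adding: two clean partials
    multiplied = carrier * fast                   # multiplying: sidebands, ring-mod-ish
    wobbled    = carrier * (0.5 + 0.5 * slow)     # slow multiply: tremolo wobble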
this synth also introduces harmonics by multiplying the frequency value by the intensity value of the breath input. this starts drifting towards heavier math than i have allowed myself to play with up to this point, so i am toying with the variables to get an idea of how each piece interacts with the others. right now i am using breath pressure to take this sound thru its range, but soon this will become the beginning of a mouthpiece abstraction using lip pressure and breath pressure rather than breath and note value.
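my current reading of that interaction, sketched in python (the exact mapping of breath to the partial's frequency is just one guess at how to wire it):

    import numpy as np

    SR = 44100

    def breath_harmonics(note_hz, breath, dur=1.0):
        """breath (0..1) scales a second partial's frequency up the series."""
        t = np.arange(int(SR * dur)) / SR
        fundamental = np.sin(2 * np.pi * note_hz * t)
        partial = np.sin(2 * np.pi * note_hz * (1 + 3 * breath) * t)
        return ((fundamental + breath * partial) * 0.5).astype(np.float32)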
the drum synth is from a pd exercise by andy farnell (the pd guru who wrote "Designing Sound", which is currently my bible). it is a collection of simple enveloped noise generators going thru a range of filters that give each drum its character. i have added a "gesture grid" to each sound so that i might investigate ways to manipulate each drum within the context of whatever piece i am navigating.
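the recipe, reduced to python for illustration (farnell builds these as pd patches; the filter settings below are mine):

    import numpy as np
    from scipy.signal import butter, lfilter

    SR = 44100

    def noise_drum(center_hz, decay=0.15, width=0.5):
        """enveloped noise thru a bandpass; the filter gives the drum its character."""
        n = int(SR * decay * 4)
        noise = np.random.uniform(-1, 1, n)
        env = np.exp(-np.arange(n) / (SR * decay))          # fast exponential decay
        lo, hi = center_hz * (1 - width / 2), center_hz * (1 + width / 2)
        b, a = butter(2, [lo / (SR / 2), hi / (SR / 2)], btype="band")
        return lfilter(b, a, noise * env).astype(np.float32)

    kick  = noise_drum(60,  decay=0.25)
    snare = noise_drum(900, decay=0.12, width=1.2)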
the last one i am using currently is the vocoder. i started with a design that simply multiplies my voice signal with a synth signal. it wasn't very controllable, but it sounded great in a "Cybotron" kinda way. i then grabbed the vocoder example that comes with pd-extended and used it. i see that it has arrays of bandpass filters that "contain" a frequency range based on the values in the abstraction arguments. this one is the most exciting for me because i can use what i am learning by dissecting these filter arrays to design virtual spaces and cavities, like vocal tracts, horns, or instrument "bodies", that can't exist in the real world.
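the structure of that bandpass-array approach, as i understand it, translated to a python sketch (band count and edge frequencies are placeholders, not the values from the pd-extended example):

    import numpy as np
    from scipy.signal import butter, lfilter

    SR = 44100

    def vocode(voice, synth, bands=8, lo=100.0, hi=8000.0):
        """per band: measure the voice's energy, use it to gate the synth."""
        edges = np.geomspace(lo, hi, bands + 1)     # log-spaced band edges
        out = np.zeros_like(synth)
        for f0, f1 in zip(edges[:-1], edges[1:]):
            b, a = butter(2, [f0 / (SR / 2), f1 / (SR / 2)], btype="band")
            v_band = lfilter(b, a, voice)
            s_band = lfilter(b, a, synth)
            be, ae = butter(1, 50 / (SR / 2))       # crude envelope follower:
            env = lfilter(be, ae, np.abs(v_band))   # rectify then lowpass
            out += s_band * env
        return out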
Notes from the first stage test:
- daylight visibility is better than expected.
- some latching functions take more effort now, so maybe some of the hardware is loosening a bit.
- having "per drum" fx racks is a pain in the ass. may regress to a drumkit fx rack with per-drum controls.
- i must create a default state and a preferred state for each element and use this as the basis of my preset system.
- audio and vibratory feedback helped a LOT. must take time in the future to design the haptics to be less utilitarian, with more personality.
- i melted the earhanger pieces to turn them slightly inward and cured the "elf-ears" problem (yaaaay!!)
- replacing the hard drive with the solid state drive is AMAZING (i think i already said that before, but it is worth repeating :p)
so, long story short, IT WORKS!! i am currently working to fix a few glaring interface issues, then i will jump right into:
- replacing the "placeholder" fx, whose sole purpose was to make sure the routing was going to work, with better sounding, more gesturally integrated fx
- designing a tracking eq (a wide bandpass filter) that will automatically tame the frequency range of each element based on what i play (see the sketch after this list)
- integrating the lip pressure into the sound concepts. it will require a bit of a rethink, as i am finding that trying to emulate an existing lip modulation metaphor isn't as enticing as i had hoped. will think out of the box on that problem.
- add polyphony so i can begin playing chords (everything is monophonic right now), which have been missed greatly
- create a couple of synths that will follow a gestural edit trajectory, which has yet to be designed but which i feel can be achieved thru a few continuous days of tinkering and "Playgramming" ;-)
- sort out element default and preferred states
- add the resampling element. this will be a bit of an engineering project to do it the way i envision it, but the results "should" be breathtaking.
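about that tracking eq idea from the list above: the concept is just a bandpass whose center rides the note i am playing. a first python sketch (the two-octave width is a starting guess, not a settled design):

    import numpy as np
    from scipy.signal import butter, lfilter

    SR = 44100

    def tracking_eq(signal, note_hz, octaves=2.0):
        """wide bandpass centered on the played note, so each element
        only claims its own slice of the spectrum."""
        lo = note_hz / (2 ** (octaves / 2))
        hi = min(note_hz * (2 ** (octaves / 2)), SR / 2 * 0.95)
        b, a = butter(2, [lo / (SR / 2), hi / (SR / 2)], btype="band")
        return lfilter(b, a, signal).astype(np.float32)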
the next few posts will focus on investigating each element. hope this is enough to chew on for now. l8r.