

Latest News


Time to wrap it up…

OK, in the vein of one of my favorite Dave Chappelle skits, I think it's time to wrap it up...

Next week (time and date to be announced) I will be doing a...presentation/lecture/performance/thing at the UC Berkeley Center for New Music and Audio Technologies (CNMAT). Then, on August 1st, there will be a proper concert in the same venue, at which time I will wrap this joint up and call this phase of the project "ready to release", meaning that all the contributors will get their perks and, soon thereafter, all associated files will be uploaded for free download for anyone interested in investigating this project on their own.

It is with trepidation that I make this announcement, considering how many "mission accomplished" moments I've had in the last year, but as I prepare to leave the Bay Area, I don't want to keep lingering on this project endlessly. Baby is ready to fly...um...or will be...


The Nomadic Diary of Onyx Ashanti, Episode #5: “Boogie Up…Boogie Down in Oaktown”

It hasn't rained once in the two months I've been here. I hear that there is a drought. I had forgotten what endless summer is like. Well, hmmm...it's more like endless "spring" mornings, "summer" afternoons, and "autumn" evenings in the Bay Area. This region has a strange microclimate that is unique, but you get used to it. I used to be used to it...yet I find myself consistently surprised by the complete lack of rain since arriving here in May.

There are many things that are unique about this region, for me. Some of them made me come here to live almost 20 years ago...some made me stay or come back for much of that time, and some remind me of why I keep leaving. One of those things is busking. When I moved here, the Bay Area was alive with buskers of all sorts. As a busker myself, I thought it was paradise. Things like turf wars (not real ones...mostly just slightly irritating ones involving loud sound systems) and cops made me leave on numerous occasions over the last couple of decades, but the busking scene here is very relaxed when you're not trying to hit the hot tourist spots. If you hang out enough, you will be treated to some of the most unique (I seem to use that word a lot when describing this area) buskers in the world...until the politics make them pack it up for somewhere else. It's not all good or all bad...it just "is". And it "is" in a region that is sunny every day currently, so there is not much to complain about.

A pattern has emerged over the years: I live somewhere else, then come back to the Bay Area to regroup artistically as a street busker before taking my re-honed skills back out into the world. Seven years ago, almost to the day, I had a choice between going to Berlin in the summer of 2007 or coming back to the Bay Area. I chose Oakland because I had an idea for a new kind of live looping improvisation and I wanted to be fluid with it before I went overseas, so I came back here to play on the street and refine beatjazz before I finally moved, exactly one year later.

And so it is again...back in the Bay Area with new ideas that need to be assembled in the peace that is playing on random street corners in a city where my friends treat me as if they just saw me yesterday; comfortable like a glove. A tiny little rig designed to be compact and unobtrusive as I iterate my process: a laptop, a little speaker and battery, a wireless router, and the exo-voice. That is all. Light, sound, and physical motion...and boogie...

Boogie

A couple of years ago, I had a dream about playing robots. Rather than the robots making sound themselves (yet), they conveyed, physically, the data that I produce when I interface with the exo-voice. I designed and built one. Its job was to respond to accelerometer position with its two servos: side-to-side motion was the x-axis, and attached to that was another servo that rotated up and down on the y-axis. I named it "boogie", because even the slightest twitch from the accelerometers would make it "dance", and thus began the imbuing of this mechtron with descriptions reserved for "life". Here is boogie v1.0 in all its glory.
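For the curious, here is a minimal sketch of that v1.0 mapping, one servo per accelerometer axis. The 0..1023 sensor range and the set_servo_angle callback are assumptions for illustration, not the actual exo-voice code.

# A minimal sketch of the v1.0 boogie mapping: one servo per accelerometer
# axis. The sensor range and set_servo_angle() callback are assumptions.

def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a clamped sensor reading into a servo angle range."""
    value = max(in_min, min(in_max, value))
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

def update_boogie(accel_x, accel_y, set_servo_angle):
    """Drive boogie's two servos from one hand's accelerometer.

    accel_x, accel_y: raw tilt readings (assumed 0..1023 here).
    set_servo_angle(channel, degrees): assumed servo-driver callback.
    """
    set_servo_angle(0, scale(accel_x, 0, 1023, 0, 180))  # side to side (x-axis)
    set_servo_angle(1, scale(accel_y, 0, 1023, 0, 180))  # up and down (y-axis)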

As you can see, boogie is very cute but also very low-resolution for the purposes of expressing gestural data. I even built two: one for the left hand and one for the right. It got the job done, but it was, for all intents and purposes, a baby. It was a move in a direction but not quite what was needed.

Eventually it occurred to me that maybe two low-res bots were not what was needed. Maybe I needed a singular, higher-resolution bot; a singularity of parameter expression: accelerometer data from both hands, a color synth, and LED light parameter expression. The first port of call was investigating those six-axis car-building monsters like these...


I took the servos from the two previous boogies and added two more, for a total of six servos, and built one based on what I had researched online. It was cool to look at, but it had no real reason to exist. It didn't do anything that was crucial or necessary; cardinal sin numero uno: everything must have a function. No superfluous expressions. So in rethinking the purpose of such a bot, it occurred to me to put the two existing bots together as one. Rather than a six-axis bot, I would have a four-axis bot: one servo for each accelerometer axis in use. In hindsight, this was easier to imagine than it was at the time.

The problem was determining how to hinge them together. The first thing I tried was x-y, like the original bot, with another x-y attached to it. This was amazingly shitty. It flopped around all spastically. There was no grace or way to discern function. But after a bit of playing around with axial connections, it occurred to me to connect them x to y, like the original, controlled by the right-hand accelerometer, then to the second y, with the second x connected as "yaw", like a gyroscope; or rather, it looked like a head that could look side to side. The two y's together work beautifully. It feels very organic. Its movements mimic biological kinematics. So how does this relate to functions of any sort?

In my previous post I described quadvector gestural synthesis. Briefly, each accelerometer axis controls a whole synth of its own, four in total. The hand units are designed to make the hand want to go naturally to the center of the gestural range: the halfway point between the minimum and maximum of each axis. The hand should be in a sort of "pointer" position, as if you are pointing at something. This takes practice and attentiveness. It is very easy for the hand to drift into more comfortable stationary positions that are slightly "tilted", so it occurred to me that I could calibrate boogie to point straight up when all accelerometer axes are at gestural center. This allows me to create synth and parameter controls that are now calibrated. And they all lived happily ever after...but wait...
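As a sketch of that calibration idea (the ranges here are illustrative, not the real sensor values): each axis is normalized around its gestural center, and the servo reads 90 degrees, straight up, when the hand is centered.

# Sketch of the calibration described above: when an axis sits at gestural
# center (the midpoint of its min/max range), its servo should read 90
# degrees, i.e. boogie points straight up. Ranges are illustrative.

def axis_to_servo(value, axis_min=0, axis_max=1023):
    """Map an accelerometer axis so its gestural center lands on 90 degrees."""
    center = (axis_min + axis_max) / 2.0
    half_range = (axis_max - axis_min) / 2.0
    offset = (value - center) / half_range        # -1.0..+1.0, 0 at center
    offset = max(-1.0, min(1.0, offset))
    return 90.0 + offset * 90.0                   # 0..180 degrees, 90 at center

# With the hand at rest in the "pointer" position, all four axes should
# return 90.0 and the bot stands straight up.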

Robotics for all

On a whim last week, after I had finally gotten around to designing some legs for boogie that could deal with its powerful swinging motion, I was showing the bot to my friend Jay, who had not seen it except online. I gave him the hand units. He did what everyone does: put his hands into some strange contortion that has no relation to controllable gestural anything. But then I simply turned on boogie, explained briefly which axis controls which servo, and told him to make boogie stand straight up. Within 30 seconds of puppeting the bot with the hand units, he had boogie standing straight up, and his hands were in perfect gestural center. Boogie had, in 30 seconds, done what words could not do in minutes, or hours.

Intrigued, I replicated this experiment with three or four other friends, and the results were the same: using a robot as a gestural feedback system yielded singularly positive results. In addition, after achieving gestural center, they began to puppet boogie in very organic ways. Once they had discovered this calibration concept, the rest was easy for them. I now have a number of gestural-center, robot-calibrated sound elements that relate directly to the motion of boogie. This surprising discovery leads me to think that, possibly, everyone who gets into gestural synthesis will benefit from having a parametric gestural feedback robot. I will know more this week as I bring the light feedback system into the fold.

In other news…

The mask design is being tweaked constantly and is now being put on other people's noggins, which is yielding much useful data (spoiler alert: I have a very large head), and after putting the hand units on a dozen people's hands, I have a few "universal" part variations that I will be testing this week (with associated video and blog documentation). I've got a number of prototype models for this but did most of the design experimentation on what could be called a "test prosthesis". There is a brief shot of it in episode 5. It will form the basis of the next episode, but I want to see if anyone notices it ;)

The primary goal with the sound synthesis lately has been to regain a sense of fragility by making the functions more dynamic: things like making the breath sensitivity pick up the slightest breath, and having that slight breath sound "slight" rather than sounding like it is "triggering" something. The sound has tended toward functional necessity for about a year now. If you listen to some of my music from a few years ago, there was an ability to investigate a certain emotional vulnerability because of the layers and layers of well-programmed software synthesizers (VSTi's) I used back then. My designs for this system were for it to be much more dynamic than the previous systems, and if judged on pure function, it is; but judged on the ability to convey widely contrasting emotion, it had been stuck in prototype mode for two years, until just recently.
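One way to read that "slight breath sounds slight" goal in code: map breath pressure to gain with a continuous curve instead of a trigger threshold. This is a sketch under assumed values (a normalized sensor, an illustrative noise floor and exponent), not the actual exo-voice breath code.

# Sketch: breath pressure -> gain as a continuous curve rather than a trigger.
# The normalized sensor range, noise floor, and exponent are assumptions.

def breath_to_gain(pressure, noise_floor=0.02, curve=2.5):
    """Map a 0.0..1.0 breath reading to a 0.0..1.0 gain.

    Exponents greater than 1 keep soft breaths sounding soft instead of
    jumping straight to full volume the moment the sensor registers air.
    """
    if pressure <= noise_floor:
        return 0.0
    normalized = (pressure - noise_floor) / (1.0 - noise_floor)
    return normalized ** curve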

Being able to go out and play to no one in particular, any time I feel like it, has allowed me to investigate "the now", meaning I go out and "play", fully, what the system is capable of right now, rather than perpetually "programming" toward the future attractor. Now the programming sessions are more focused because they are informed by a regular playing schedule; a schedule that isn't determined by who is paying me, but by what I want to investigate on that day...for as long as I need/want to investigate...or until the battery dies, whichever comes first.

The current evolution in the sound is coming out of investigating gestural interaction with the filters and the harmonic relationships formerly referred to as chords. I say formerly because once I learned how to produce them gesturally, they ceased to be chords and became more of a harmony array. Or, put another way: the reason a clarinet and a saxophone sound so different is the harmonic relationships created by the material they are made of and by their size. What was imagined as chords became harmonic series. This has made me rethink the filter array, to create a sort of modulating "cavity" for the sound, like how a saxophone body is the cavity that defines its sound.
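As a toy illustration of that harmony-array idea (the shapes and numbers here are mine, not the actual exo-voice filter): generate partials from one fundamental and let a "cavity" curve decide how strongly each partial speaks, the way a horn body favors some harmonics over others.

# Toy sketch of a "harmony array": a harmonic series whose partial levels are
# shaped by a resonance curve standing in for the instrument's "cavity".
# All values are illustrative assumptions.

import math

def harmony_array(fundamental_hz, num_partials=8, cavity_center=3.0, cavity_width=2.0):
    """Return (frequency_hz, amplitude) pairs for a shaped harmonic series."""
    partials = []
    for n in range(1, num_partials + 1):
        freq = fundamental_hz * n
        # a bell curve over partial number stands in for the body resonance
        amp = math.exp(-((n - cavity_center) ** 2) / (2.0 * cavity_width ** 2))
        partials.append((freq, amp))
    return partials

# harmony_array(220.0) emphasizes partials near the 3rd harmonic; sweeping
# cavity_center gesturally would change the "body" of the sound.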

Here is what this stage sounds like, and even looks like.

You don't have to grasp any of what I just wrote to see and hear (with headphones) what it means. There is a coherence to the sound that kind of surprised me, especially in the filter sweeps, where the most interesting phase patterns happen. Below are a few of the audio files I recorded last week; they also exhibit interesting sonic qualities. I predict that tiny code changes, from now on, will produce predictable unpredictability. The system is finally reaching a certain functional stability that will make it easier to distribute as something that is learnable and playable. All the pieces are in place. Now it is just a matter of making what I understand understandable for those who choose to investigate beatjazz in the (near) future.


Quadvector Gestural Fractal Synthesis: Current Implementation, Part 1

In an attempt to not unload streams of incomprehensible information all at once every two to three months, I will try to unload more bite-sized streams of incomprehensible information at a slightly more regular pace...you're welcome. As we near the event horizon of the singularity referred to as "releasing the project to the public", little things become big things, as if their mass increases by virtue of time compression.

One such thing is the synthesis engine concept, currently referred to as a quadvector gestural synthesizer. This is one of a few "fractal" synth concepts that I have been working on for the last two years. The idea with this one is a single synthesizer, tentacled with branching gestural parameter interfacing, that can, through various combinatory means such as key sequencing, hand position, and mode switching, mutate into any sound I can imagine. This is accomplished using the multimodal gestural capabilities of the exo-voice prosthesis, whose interaction matrix is fractal. By "fractal", I mean systems that evolve to higher states of "resolution" over time, from simple origin states. This applies to everything from design to programming to performative interaction.

In the design of the hand units, following pure function, I needed "keys" that were spaced to be comfortable for my fingers while also being attached to my hands unobtrusively, which was accomplished in the first, cardboard version. Over time, the design iterated toward things like better ergonomics, better light distribution, ease of printability, and so on; the design and function evolved while still being based on the idea of the first version. Since all aspects of the exo-voice share this design concept, the entire design evolves fractally: as new states are investigated, the resolution of each state increases toward a state of "perfection", at which point it forms the basis of a new state of investigation.

Each hand unit has one 3-axis accelerometer. The x-axis measures side-to-side tilt, like turning a doorknob. The y-axis measures tilt forward and backward, like the paint-the-fence motion from The Karate Kid (hope that is not too old a reference for some of you). The easiest way to picture it mentally is to think about the range of motion your hand would have if your wrist were stuck in a hole in a thin glass wall: you could turn your wrist, and you could move your hand, from the wrist, up and down, as well as combinations of those two motions. This forms the basis of the gestural kinematics the system is built on. The x-range is one parameter and the y-range is a second one. The z-axis is unused right now. It's the functional equivalent of using a touchpad in 3D space, with x and y acting as two semi-independent control vectors.
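In code, one hand's contribution might look like the sketch below: two tilt readings in, two normalized control values out, z ignored. The gravity-referenced -1..+1 input range is an assumption for illustration, not the actual sensor scaling.

# Sketch of one hand unit as two control vectors: x and y tilt in, two
# normalized 0.0..1.0 parameters out, z unused.

def hand_vectors(ax, ay):
    """ax: doorknob-style roll; ay: wrist up/down, the paint-the-fence motion."""
    def normalize(a):
        a = max(-1.0, min(1.0, a))
        return (a + 1.0) / 2.0        # assumed -1g..+1g -> 0.0..1.0
    return normalize(ax), normalize(ay)   # (x vector, y vector)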

There are a number of ways to use these vectors (the quadvector case is sketched in code after the list):

  • Individually (per hand): x is one "knob" and y is a separate "knob".
  • Together (per hand): the combined x/y position is one control point, similar to how a computer mouse works.
  • Together (both hands): two x/y control grids.
  • Quadvector: x/y-left and x/y-right are combined into one 4-dimensional control structure. Think of this as video game mode, or similar to flying a quadcopter; the left hand is, in this metaphor, forward, backward, left, right, and the right hand is up, down, turn left, turn right. No one control is enough; all vectors must be interacted with simultaneously to control the position of the quadcopter in 3D space.
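Here is that quadvector mode as a minimal sketch; the field names and the quadcopter-style comments are illustrative, not the exo-voice's internal naming.

# Sketch of quadvector mode: both hands' normalized x/y tilts read together
# as one 4-dimensional control point. Names are illustrative assumptions.

from typing import NamedTuple

class QuadVector(NamedTuple):
    left_x: float    # quadcopter metaphor: left/right
    left_y: float    # forward/backward
    right_x: float   # turn left/turn right
    right_y: float   # up/down

def read_quadvector(left_hand, right_hand):
    """Combine two (x, y) hand-vector pairs into one control structure."""
    lx, ly = left_hand
    rx, ry = right_hand
    return QuadVector(lx, ly, rx, ry)

# e.g. read_quadvector((0.5, 0.5), (0.5, 0.5)) is both hands at gestural center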


The exo-voice system works by routing accelerometer data to parameters, which are determined by which "keys" are pressed when in edit mode. (A key in this system is a function initiated by pressing a sensor with the finger, like how wind instruments work, but here it is more symbolic, since the same function could be achieved in any number of ways. Since I was trained as a sax player, my mind gravitates toward that modality.) So in "play" mode, the keys are interacted with as saxophone fingerings, on a basic level. When the edit mode button is toggled, the entire system switches to edit mode and all the keys become function latches; i.e., if I press key 1, the accelerometer controls parameter 1 and parameter 2. If I press key 2, the accelerometer controls parameter 3 and parameter 4, and so on. There are 8 keys, so that is 16 base-level parameters that can be controlled.
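A sketch of that latch behavior; the parameter names and the table below are placeholders, not the real mapping.

# Sketch of edit-mode latching: the held key selects which parameter pair
# the hand's x/y vectors write into. Names are placeholder assumptions.

EDIT_LATCHES = {
    1: ("param1", "param2"),
    2: ("param3", "param4"),
    3: ("param5", "param6"),
    4: ("param7", "param8"),
    # ...8 keys in total -> 16 base-level parameters
}

def edit_mode_update(held_key, x_vector, y_vector, params):
    """Route the x/y control vectors into whichever pair the key latches."""
    if held_key in EDIT_LATCHES:
        p_x, p_y = EDIT_LATCHES[held_key]
        params[p_x] = x_vector
        params[p_y] = y_vector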

In previous iterated versions of the software system, each key could instantiate a separate synthesizer when pressed hard (HP1-8), each with an edit matrix specific to that synth, which meant there were 16 parameters to control for each of the 8 synths. Now, though, there is one synthesizer with 4 oscillators, each assigned to its own accelerometer axis. The HP modes are for "timbral routing", which means the synth is routed through a filter that makes it sound timbrally unique, in ways such as being plucked or being blown. Some parameters are already set, such as pan position, gain, delay, and pitch offset, each taking one key, so that leaves 4 keys (8 parameters) to paint with, which is not a lot. This is where the fractal concept begins to pay off.
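Structurally, that engine might be sketched like this; what each axis actually controls here (oscillator level), and the route names, are my assumptions for illustration.

# Sketch of the engine shape described above: one synth, four oscillators,
# one accelerometer axis steering each, plus an HP-selected timbral route.
# Axis-to-oscillator assignments and route names are assumptions.

class QuadVectorSynth:
    AXES = ("left_x", "left_y", "right_x", "right_y")

    def __init__(self):
        self.osc_levels = {axis: 0.0 for axis in self.AXES}
        self.timbral_route = "blown"   # an HP mode picks e.g. "plucked"/"blown"

    def update(self, quadvector):
        """quadvector: object with left_x/left_y/right_x/right_y attributes."""
        for axis in self.AXES:
            self.osc_levels[axis] = getattr(quadvector, axis)

    def set_timbral_route(self, route):
        """HP key presses route the synth through a characterizing filter."""
        self.timbral_route = route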

Each key sends either a 1 or a 0 when pressed. In my system, each key is assigned a power of two: key 1 = 1, key 2 = 2, key 3 = 4, key 4 = 8, key 5 = 16, key 6 = 32, key 7 = 64, and key 8 = 128. This is an 8-bit number sequence. One useful aspect of this is that each combination of key presses creates a completely unique number that you cannot achieve with any other combination of key presses. So key 1, key 2, and key 4 (1, 2, and 8) sum to 11, and there is no other combination of key presses that will generate 11. In addition to 4 keys per hand, there are two button arrays: 3 buttons on top, controlled by the index finger, and 4 buttons on the bottom, controlled by the thumb. The upper buttons are global, so they cannot be mode-switched (change function in edit mode), but the bottom button array on the left hand is used to control octave position in play mode.

In edit mode, all keys become parameter latches, so octaves aren't necessary; but in edit mode, if you press key 1 while at the same time pressing lower button 1 (LB1), you get the combined number 257 and can control 2 new parameters not achievable with either button alone. This is how I was able to create relatable sax fingerings. In play mode, I use the octave button presses to add or subtract 12 or 24 from the MIDI note number that is generated from the key presses, but in edit mode, I simply gave each of the 4 octave buttons its own value (256, 512, 1024, 2048), which now means that when I press key 1, I have the original parameter position as well as 4 new ones, totaling 10 parameters per key press and 80 parameters overall. This is stage 2 fractal resolution.
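This encoding is just a bit field, which makes it easy to sketch; the key_code name and the tables are mine, but the arithmetic is exactly what the paragraphs above describe.

# The key encoding as a bit field: every key or button contributes a distinct
# power of two, so each combination sums to a unique code.

KEY_VALUES = {f"key{n}": 1 << (n - 1) for n in range(1, 9)}      # 1, 2, 4 ... 128
OCTAVE_VALUES = {f"LB{n}": 256 << (n - 1) for n in range(1, 5)}  # 256 ... 2048

def key_code(pressed):
    """Sum the values of all pressed keys/buttons into one unique number."""
    table = {**KEY_VALUES, **OCTAVE_VALUES}
    return sum(table[name] for name in pressed)

assert key_code(["key1", "key2", "key4"]) == 11   # 1 + 2 + 8; no other combo
assert key_code(["key1", "LB1"]) == 257           # key 1 plus lower button 1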

So, as you can imagine, there is ample opportunity for this to become a class A headfuck. It is not enough to simply throw parameters in just because one can. Functions have vectors as well; otherwise they become unwieldy. The question that must be asked is: what function naturally emerges as "next"? It is at this point that I use a mental construct called a "perceptual matrix". This is the mental projection of the flow of parameters within the system. I discovered it by accident after I completed the first working version and realized that the act of building it had created a mental model in my mind. This freed me from having to reference any visualizations. Now I design the functions to make sense within this perceptual matrix. So far, I have not found a limit at which I can no longer interact with its growing complexity. In fact, by following a fractal design concept, the perceptual matrix guides the design evolution, since the mind is the origin fractal of the matrix itself. As the name exo-voice implies, it is a fractal prosthesis for the mind, both in creation and interaction. The job of the visualizations is to convey this information to observers and collaborators, as well as to provide a means of interacting with the data after the fact.

I'm having to investigate these issues because fractal synthesis makes sense now. Two years ago, it was two words spoken one after the other. Now it is a protocol. Every parameter vector can be mapped and iterated through a continuously evolving key sequencing modality (every key sequence can have 5 positions of 10 parameters, which could end up being over 200 parameter vectors per hand and 600 between both hands, without adding any new hardware). But what goes where becomes the grasp-worthy straw.

My percussive concept is begging for evolutionary adaptation, but it's not as easy as just throwing something in there, because it is one synth with parameter vectors as tentacles, or synapses: one connection has to lead fluidly to the next connection, and the next. My mind is pulled in a dozen directions.

I needed to share this modality with whoever is interested in knowing it, because a lot of what I will be expressing in the coming months will be based on it, and without this information there is no way to know what the hell I am talking about. In addition, I expect that within the next year these parameter trajectories will evolve to more resemble biological expressions like nervous systems, which are fractal as well.

