



Sonomorphology and Transdimensional Perception Machines powered by Intent


(Play this while you read, preferably with headphones. this post was created with the intention of being read and heard simultaneously.)

Since arriving in Detroit, I have been crawling down into the rabbit hole, trying to comprehend things.  this place has a palpable energy.  it is no coincidence that so much crazy shit has happened here throughout time.  My host has given me space to be me, and my drastically modified diet of mostly spirulina and rice has cleared fog that i didn't know was there.  i wasn't able to put it into words until today.  i feel like i need to document this moment while it is still fresh in my mind.

for the last few months, i have been refining the exo-voice prosthesis in all relevant dimensions: biomechanical design, sonological design and sonomorphological construction (more on these last two in a moment).  i've learned that each properly realized stage of development usually becomes a singularity, which then becomes the new base level, the new normal upon which every other stage is built.

the most recent singularity has been the realization of a hand unit form that is biomechanically neutral while enabling as much gestural expression as possible. once the hands were free to express themselves fully, a new singularity expressed itself almost immediately: having fixed the thumb positioning, palm angle and index finger height, a very new cohesion in the sonomorphology has been attained.

what is music?  when music is not "constructed" properly, you can perceive it, even if you are not trained in creating it. once i finally had a hardware construct that was working, i turned my attention fully back to sonomorphology (sonic form) and sonologic (the "why" of the sonic form).  the idea of good sound is less aesthetic and more architectural now. so i have been playing with some basic interactions of nature: hot/cold, wet/dry.  these produce the commonly known earth, air, fire and water. i had been looking for a conceptual underpinning for the sonic idea of the exo-voice system. it should be able to gesturally construct any sound form conceivable from properly expressed representations of these modalities.


recently, someone i care about informed me that they were going to the hospital for a procedure.  i hate hospitals.  the word itself induces anxiety in me, so to dissipate said anxiety and project a strong constructive energy to that person, i created a "sonomorph" for them.  what this means is that i projected as much constructive intent as i could into the sonic fractal.  whether it meant anything or not is not the point.  the sonomorph was not constructed to entertain.  it was the sonic representation of an intent...a sort of sonic fractal prayer, if you will.  upon listening to it later, i sensed a morphology that was distinct and different from previous forms, but i couldn't quite determine what it was…until i made a recent sojourn into hyperspace.

into hyperspace

upon entering the state, i meditated on the sonomorph by lying down and listening to it with headphones, in the dark.  the first listen was strongly visual.  the second listen, 30 minutes later, had a curious effect: the visuals were the same but more realized. by the third listen, something occurred in my perception:

the realization that the sonomorph was creating a repeatable volumetric space within my mindspace. 

the "song" was a place. 

but moreover, inside that place were what could only be described as machines.  not machines like on this plane of perception.  these machines had tentacles and mounting mechanisms that held them in place, and each tentacle was multicolored, each color being a sort of circuit pattern, like scales on a snake but varying in geometry and in bright, almost tacky colors.  the reason i knew the sonomorph was creating it was that as soon as the piece ended, everything in the space dissipated into a more randomized field of colors and squigglies, but as soon as the piece was restarted, the structure reasserted itself within the disorder, bringing back the previous structural morphology. some of the form can be seen in this phase scope representation of the piece. (will post a video later today) it's not representative of what is seen in hyperspace, but it does give some idea of the order present within the piece. (note the toroidal shapes at the beginning of the piece.)
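for the curious, a phase scope is simple enough to sketch: it plots the left channel against the right, so phase relationships between the two channels become visible geometry. this toy (a hedged illustration using a synthetic stereo tone, not my actual visualizer) shows how a constant 90-degree offset between equal-frequency channels traces a circle, which is the family of shapes the toroids come from:

```python
import math

def phase_scope_points(freq_l, freq_r, phase, n=8, rate=8):
    """(left, right) sample pairs for two sine channels; plotted as
    x/y these trace the lissajous-style figures a phase scope shows."""
    return [(math.sin(2 * math.pi * freq_l * i / rate),
             math.sin(2 * math.pi * freq_r * i / rate + phase))
            for i in range(n)]

# equal frequencies with a 90-degree phase offset trace a circle:
# left = sin(t), right = sin(t + pi/2) = cos(t)
pts = phase_scope_points(1, 1, math.pi / 2)
```

every (x, y) pair here lands on the unit circle; detuning one channel or modulating the phase smears that circle into the more complex knotted figures.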

the conclusion i came to was that this newer sonomorph, modulated by focused intent, IS a machine. a machine that resides in mindspace.  i also listened to the piece "duality" that i posted a few days before creating this piece for comparison.

duality was assembled from a dozen or so recordings, done at different times over a two-week period, rather than being a single piece created with intent.  within hyperspace, it was "broken".  it did not create a space, nor did it produce machines in that space.

this blew my mind a bit.

it suggested that it was not merely the style of sound or how it is played but rather "why" it is played. 

within this matrix, the previously mentioned elements (air, earth, water, fire) produce the form; water is easily expressed as the low-complexity, gurgling filters i've been using. fire is represented by distortion, clipping, harmony or energetic conceptualization.  fire "radiates" energy in any form.  fire can be applied to "air", which is represented by the more flowing melodic elements which tend to make up the "machines": those elements that have a function and purpose that emerge out of, and distinct from, the generalized background of the form. its opposite is "earth", or solid, which finds its most obvious expression as hits and thuds.  in addition, air and earth are "inputs" while hot and cold are modulators! so combinations of these elements promise some interesting resulting expressions.
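one way to picture the input/modulator split is as a small parameter table: air and earth are source material, hot and cold are transformations applied to them. the names and values below are purely illustrative assumptions, not the actual exo-voice patch:

```python
# hypothetical sketch: mapping the four elemental modalities onto
# synthesis parameters. all names and ranges here are made up for
# illustration; the real mapping lives in the exo-voice itself.

# air and earth are "inputs" (source material)
INPUTS = {
    "air":   {"source": "melodic_flow", "sustain": 1.0},   # flowing melodic elements
    "earth": {"source": "impulse",      "sustain": 0.05},  # hits and thuds
}

# hot and cold (fire and water) are modulators applied to an input
MODULATORS = {
    "hot":  {"distortion": 0.8, "brightness": 1.5},  # fire: radiated energy
    "cold": {"distortion": 0.0, "brightness": 0.5},  # water: low-complexity gurgle
}

def combine(input_name, modulator_name):
    """merge one input with one modulator into a single parameter set."""
    params = dict(INPUTS[input_name])
    params.update(MODULATORS[modulator_name])
    return params

# fire applied to air -> an energetic melodic element
energetic_air = combine("air", "hot")
```

the point of the structure is the combinatorics: two inputs times two modulators already gives four distinct families of sound form, before any gestural modulation is layered on top.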

what is also interesting is that although the structure dissipates at the end of the sonomorph, like a cloud of smoke, the "perception" of the structure does not. the intent machines continue to be perceptible and functional after leaving hyperspace.

what if it is possible to create "self-programmable" sonomorphs that build programmable mind machines within mindspaces, which could be used to structure our own focus on what we determine our intent to be? music as transdimensional architecture.  like a string of song hooks with syntactic structure. the exo-voice becomes a recursive "intent amplifier".

so back in "this" space

what the exo-voice is here on this plane, the sonomorph is within mindspace: an exo-technological interface for intent.  the exploration, design and evolution of sonic fractals now gravitate toward "architectural mechanics", hence the shift in terminology from sonic fractal to sonomorph.  all elements on "this" side should create a "multi"dimensional expression that consciously enables "trans"dimensional sonomorphology.

the expression of light, sound and design on this plane of reality can imply higher dimensional forms that create sonological machines of intent within the mind construct, which then feeds back into changes of light, sound and design, meaning…that with each focused design upgrade, the resolution of the sonomorph increases, which then influences the next design change. rinse…repeat…

sonomorphology is a bridge between this plane of reality and hyperspace. 




Notes from hyperspace

ahhhh….what a gorgeous month!  I came back to mississippi in need of some trees and squirrels and long spans of quiet contemplation, and it has been that and more.  I am beginning to see and feel the oscillations of my process.  it's no longer a surprise when I begin work on one thing but end up finishing something else.  I get it now.  all the talking to myself and the evolving sketches and the dream-time design iteration…things feel consolidated.

For some reason, I always feel like apologizing for the dearth of posts I make now.  but trust me…it's for the best.  I think about sharing my process in the moment but, especially as the winter creeps up, I now know better.  those that know me personally know that I am not short on words or ideas and can talk endlessly about them (especially in the winter), which was a reason I loved berlin.  it's a city for such people.  but when I am swimming in idea mutations and variations on themes that have nothing to do with anything I am working on, it gets to be too much even for my more talkative friends.  BUT, it is necessary…I can't stop the process…I wouldn't even try.  it's too amazing.  swirls of ideas, history, cosmic forms, algorithms, all mutating amongst each other as if they are the same.  no limitation on possibility.  universes expanding and collapsing a thousand times a day.  but the problem is that I might need to invent a universe that will spawn a concept that doesn't exist here, then, as Sun Ra would say, "Transmolecularize" it into something that could exist here on this plane, but even that might be transitional, so in the end it was just a thought experiment for the sake of creating a new sound or form.  if I shared all or even some of them, it would become a confusion of sketches never meant for expression outside of my own mind-modelling construct.  so now I wait until the process cools a bit and then share a concise overview.

Mississippi Hacking

case in point: upon arriving back in mississippi, I was all ready to investigate 3 things specifically:

  • conductive ABS plastic filament
  • the atmega328 microprocessor, effectively turning the system into arduinos rather than purchasing ready-made arduinos
  • the ESP8266 wifi transceiver



I ordered and received a roll of conductive ABS a few days before returning to the south.  while at Noisebridge Hackerspace in san francisco, I was "conducting" electrical conductivity tests to determine how best to incorporate this new material.  as a carbon-plastic composite, this material conducts electricity weakly, but well enough to create simple touch and temperature sensors.  this was my focus.  some of the more unique properties of the material did set me back a couple of times, like the fact that you can't let it cool in the nozzle, because the carbon will settle and clog on the next use, blowing up the hotend, but thus is experimentation.  I began to glean valuable data.  enough data to recognize that I could, with a bit more experimentation, use this material for touch sensors.  this would make the hand units so sexy!  the design is currently restricted by the size and shape of the sensors, switches and other components necessary to get basic functionality.  by printing most if not all of those components, the scope for design zooms off at warp speed!
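the basic characterization math is just a voltage divider: put the printed pad in series with a known resistor, read the midpoint with an ADC, and solve for the pad's resistance. a minimal sketch, with made-up example values (the real pads will vary wildly with geometry and print settings):

```python
def sensor_resistance(vcc, v_out, r_known):
    """estimate an unknown resistance from a voltage divider reading.
    the printed conductive-ABS pad is the top resistor, r_known the
    bottom one:  v_out = vcc * r_known / (r_sensor + r_known)
    solving for r_sensor gives the expression below."""
    return r_known * (vcc - v_out) / v_out

# e.g. a 5V supply, a 10k reference resistor, and 1V at the ADC
# imply the pad is sitting around 40k ohms
pad_ohms = sensor_resistance(5.0, 1.0, 10_000)
```

a touch changes the effective resistance of the pad, so watching `pad_ohms` move over time is already a crude touch sensor before any clever filtering.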

so it was that I calibrated my reprap a bit more, just to ensure that the size tolerances were precise.  printing sensors into the design would be a multistep process until I adapt my reprap for multimaterial printing: I would have to print the nylon part, then the sensor/circuit parts separately.  I had my sketches and my coffee and a time horizon of 3 solid uninterrupted weeks, and on the first post-calibration test print…the reprap stopped printing and nothing I did would get it to start back up…it's like someone mushing a booger into a slice of pizza you have been standing in line and salivating for, for hours, except there was no one to punch in the face.  so after the usual "why do I do this to myself?/what's wrong with my life?" episode, I ordered a new, different board, but more on that in a bit.

so at this point I have a week…a WHOLE fucking week to kill, but lucky me, there is never a dull moment with this system, so I reached for the electronic components I had just ordered and received: the atmega328 microprocessor and the esp8266 wifi transceiver.  the arduino uno is based on the atmega328 and, for those that don't know, the arduino is not only a product in itself but an open source project as well.  the majority of the work inside the arduino is done by the atmega chip.  add a couple of resistors and an 8 or 16MHz crystal for timing, and you can build in full arduino functionality for about $5.  from there you can customize it for the exact functionality you want and save money as well as design space, since the chip is about the size of a 6-year-old's pinky finger.

I was able to get basic code onto it to do the well known blinky light test, but ran into issues getting standard firmata onto it.  this is the basis of my system's IO, so I shifted gears to the ESP8266, since I really wanted to be printing right now and didn't have the desire to fiddle with error messages.  the ESP8266 is the superstar of 2014 maker-dom.  32-bit processor, upgradable firmware, 20db broadcast strength (my current transceivers are 12db max) and did I mention FIVE DOLLARS EACH!!  by contrast, my rn-xv's are $35 each.  there are demos online of people getting 4km range out of these things!  and…AND each one can act as an access point!!  what this means is that I will be getting rid of the wireless router as soon as I get this thing to do my bidding.  the entire network can then be worn on-body and cost less than one of the transceivers I use today, with increased reception!  my experiments got me up to the point where the exo-voice connected to one of them as an access point.  I flashed the firmware and studied the datasheets, but was distracted by the arrival of my new reprap brains in the mail.
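for reference, getting the module into access point mode only takes a few lines over serial, assuming the stock AT command firmware (the ssid and password below are placeholders, not my actual setup):

```
AT+CWMODE=2                            -> wifi mode: 2 = access point
AT+CWSAP="exo-voice","password",5,3    -> ssid, password, channel, encryption (3 = WPA2-PSK)
AT+CIFSR                               -> report the module's own IP address
```

after that, anything that speaks TCP or UDP over wifi can join the module's network directly, with no router in between, which is exactly the on-body network topology I'm after.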


my reprap was acquired from eMaker, the company of adrian bowyer, the guy who invented the category.  the board I received with the printer kit back in late 2011 was an arduino variant designed specifically for 3d printing, called the "melzi ardentissimo". it was designed to be an all-in-one, easy to replicate 3d printer board.  just wire in your motors and hotend, and print.  and it has been a pretty nice board, but with one major flaw; it's a single board, or what I like to call a single point of failure.  if one thing dies, the board could be toast.  I say "could be" because numerous things have died over time.  the bed, the hotend, the extruder.  luckily I could reroute around blown circuits to other, unused pins, but with everything on a single specialized board, it meant that eventually you'd have to buy another whole board, which I had already done a year earlier, and they are not the cheapest option.

so with this failure I decided to move over to what seems to be the closest thing to a standard platform that repraps have: the RAMPS board.  this is a shield that plugs onto an arduino mega.  I like this board a lot so far.  it is very modular and each part is very inexpensive.  chinese-made arduino megas are around $15-20 and the RAMPS shield, with its small replaceable stepper driver boards, costs $17.  I paid $34 total for everything, compared to $90 for the melzi.  luckily Marlin, the same firmware I used for the melzi, works on this board, so I loaded the firmware and did surprisingly minimal calibration.  probably because I did all the firmware calibration before the previous board blew up, but I digress, because I'm back in the game.

this board allows me new features and freedoms as well. there is an extra empty slot for another extruder, which I will need to integrate printed circuitry into my next designs (yaaaaay!!).  I can use my heated bed again as well.  the power to it had fried a long time ago and I haven't really needed it, but it's nice to know I can use the heated bed if I really need it.  and finally, I can power it with anything from 12v to 35v.  I decided to continue using my existing 19v power brick, but if necessary, I could use almost any desktop computer power supply.  this flexibility is greatly appreciated.  and just to be sure, I added a new hotend heater cartridge and thermistor, and have a backup just in case, because I don't want to deal with any more unintended downtime for the foreseeable future.

the “cicada” mask

while in memphis, Eric Swartz and I came upon a cicada, a large, rare, strange looking insect that only emerges from wherever the hell they come from every 7 years and then, only for 7 days before they die.  most of their life is spent in a sort of, as eric put it, "telepathic dream state" and, to be honest, 7 years is a hell of a long time to be in any state, so we figured that theirs is a multidimensional existence.  telepathic dream state for most of their lives, then blasting onto this plane in an explosion of flying, buzzing reproductive brilliance, then onto the next plane 7 days later…..


everyday I go outside and sit in the grass and go places.  I just let my mind journey wherever it chooses, until it gets too hot or the bugs realize that i'm stationary, at which point I sketch things.  one such thing was a new kind of mask.  in the last couple of years, more and more of my assumptions of what "should" be are dissolving.  I realize that most of those forms are from my previous programming by way of mass media.  it was the reason for my helmet fixation all those years ago.  I felt that a cool head covering…a functional head covering…must be a helmet of some sort because…just because.  I never really thought much about it 'til someone asked online, upon seeing my choice of headgear at the time, "why are you wearing a helmet? are you planning on getting hit in the head?"  of course said commenter was trolling me, but he/she had a point; why was I repurposing military gear for my expression?  primarily, I think, because I didn't have a self-realized construct that could do the job any better.  "the job".  what job?  functionally, the helmet I had allowed me to consolidate my various electronic components in a singular apparatus: mic, headphones, goggles (because I like goggles), all in one form.


on this particular day, sitting in the grass, I resketched a concept I had investigated a number of different ways over the last year: to revisit the form-fitted mohawk "arms" of my previous design but make them more integral to the design, beyond just hanging goggles on the end of them.  the mohawk has acted as an anchor in most of my latter-day designs.  it keeps my designs from shifting around on my head and has an aesthetic I love.  but the previous designs always returned to being held to the head by straps on the sides of my head.  the previous mask design broke me of this habit.

I found the previous mask to be functionally very good but aesthetically hideous.  it was a perpetual battle in my mind between function and aesthetic.  I kept getting pulled in one direction or the other.  while in california, it gave me the chance to investigate my software more, but the box-on-my-face thing offended my sense of symmetry and flow.  I obviously didn't freak out about it, but it was a continual thing I would revisit when I dreamed.

from the dreams come sketches, and on this day I imagined a form that left the sides of my face, head and neck open and placed everything down the middle of my head.  because why not?


what is interesting about the modelling process now is that the CAD work and printing work are the last 10% of the process.  the preceding 90% consists of eyes-closed visioning of, first, the world the design is most natural in, and then the design that expresses that place.  in that world there was no need for militaristic overtones or inelegant function for function's sake. in that space, insect-inspired design is very natural and mid-face mechanics are the norm.


as a side note, the two small feet in the very back sit at a perfect angle, directly on the mastoid bone at the back-base of the skull.

in that world, the purpose was not "performance".  it was self-o-nautics; investigating the self-"I"-verse.  the pads along the ridge of the head were primarily for electrical stimulation.  function expresses itself in the dimension of aesthetic as well as computer interfacing.  we, this mask and I, went on many many adventures in many worlds, where I got to see how the design would respond to differing situations; I could see and feel when it would slip or bite or be too bulky, and every so often I would emerge from this place to add or deduct a line or a curve to the sketch, until it became an interconnected collection of pieces that could be realized with my tiny little single-material reprap.

I must say, I am ALWAYS amazed when the designs materialize from this process looking and acting the way I designed them in mind-space.  when the sketch first appears on paper, I see it and think, “hmmmm…that is going to be way outside of my capabilities” but it never is.  it just takes FOREVER to imagine properly.  the design work is done with eyes closed, sometimes for months.  one tweak that I am especially happy with is the addition of 3 mm to the tip of the mouthpiece which allows me to bite the mouthpiece to hold it in place while I play.  this steadies the mask and the pitch control simultaneously.


the next phase is to integrate the atmega/esp8266 combo into it and refine that.  once that works there, it will be much easier to integrate into the hand units.  for now, I have mounted the arduino box from the previous design into the chin, while shortening the mouthpiece to make it flush with the front of the mask.

but I won't be changing anything else until after my presentation at the detroit entheogen conference on saturday.  I am doing a presentation on multidimensional expression and hyperdimensional artifacts.  do I even have to paint a picture of how crazy that presentation is going to be with this mask?  makes me want to cry, i'm so excited.  after which, I plan to investigate detroit and explore the birthplace of techno.  I have assembled a few new nomadic tools into my gear bag.  more on that in the next post.


Last flight of the cicada and the evolution of kinetic polyrhythm

the above episode is actually #8.  i plan to go back and build episode #7 in the next couple of days.  I just wanted to get this one out while it was fresh on my mind.  it felt opportune.  I was in memphis for a "thing" (concert/talk/q&a) at Forge Hackerspace, which I will get to in a minute, but after said thing, my host Eric and I walked around in an area near the river and came upon this cicada on the sidewalk, and proceeded to carry it and talk to it until it decided it had had enough of us talking to/about it.  it just so happened that I had my camera and my lenses with me on this little excursion and I was able to get this strange (high resolution, I might add) footage of this strange creature.  Eric knew so much interesting information about these creatures that I just had to get it on film.

I didn’t realize until later that this encounter meshed perfectly with serendipitous sonic interactions that happened during the “thing”.  for the last few months I was in self-imposed exile in california, busking down by lake merritt in an attempt to find something…a sound…an idea in the form of a function or gesture or…I don’t know what to call it but I knew that I needed to play-program-play-program to find it and then in mid august I found this…

interestingly, before I went out on this day, I had spent a few hours watching ravi shankar videos on youtube.  the overlapping polyrhythms had been very interesting to me.  so when I hit the pitch, this layered, filter-modulated polyrhythmic kinesis seemed to be just hanging out waiting for me, as if it had been waiting just under the surface for its time to emerge from its previous state, just like the cicada, which stays underground for 7 years before emerging in vast brilliance.
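the mechanics of that overlap are easy to sketch: two pulse streams with coprime subdivisions of the same cycle only line up once per cycle, and the interleaved onsets in between are where the rolling churn comes from. a minimal illustration (a toy, not the actual exo-voice code):

```python
# toy sketch of overlapping polyrhythm: pulse streams with coprime
# subdivisions of one cycle, merged into a single onset pattern.
from fractions import Fraction

def pulse_times(subdivisions):
    """onset times of evenly spaced pulses within one unit cycle."""
    return [Fraction(i, subdivisions) for i in range(subdivisions)]

def polyrhythm(*layers):
    """merge several pulse layers into one sorted, de-duplicated
    onset list; shared onsets (like the cycle start) collapse."""
    return sorted({t for sub in layers for t in pulse_times(sub)})

# a 3-against-4 polyrhythm: the layers coincide only at the downbeat
pattern = polyrhythm(3, 4)
# pattern -> [0, 1/4, 1/3, 1/2, 2/3, 3/4]
```

feed each layer through its own filter modulation and the uneven gaps between adjacent onsets are exactly the stuttering, insect-like texture described below.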

one effect of this polyrhythmic filter interaction is a pronounced insect-like quality to the rhythm…to the sound.  so much so that it makes me wonder in two directions:

  1. are combined insect songs a singular fractal sound propagation system?
  2. should I begin to model in the direction of how insects propagate their song?

I’m mostly just thinking out loud on those tips, but the new interaction makes the SPOR (single point of origin) concept of fractal synthesis make more sense.  every aspect of sound is easily iterated now, so, you know…insect vectors?  why the fuck not?  I imagined insect vectors a few years ago but had no reference for how they would manifest…until now.

The “thing”

first off, I walked in to see this, made out of metal


this is the stage!  the artist, Elisha Gold, a local metal artist, designed it based on pictures of my tattoos!  I always love coming to memphis because it is always this level of energy!  later in the evening we "jammed" together; me on my exo-voice and he on a metal grinder!!  the awesomeness never stops!

before the show, Eric Swartz of eventone media and i did a bitcoin transaction onstage to show how easy it could be.  altogether, my experiences of late with hackerspaces are making me want to really invest some time and energy into investigating possible synergies.  it would almost seem obvious that the first sonic fractal matrices would evolve most naturally from such environments.

for this post I really just wanted to post the video from the "thing" and give it a bit of context before I begin digging into an unreal amount of footage from the last two months in california, and into a new conceptual episode style.

Last flight of the cicada by Onyx Ashanti on Mixcloud


Time to wrap it up…

ok, in the vein of one of my favorite dave chappelle skits, i think it's time to wrap it up...

next week (time and date to be announced) i will be doing a...presentation/lecture/performance/thing at the UC Berkeley center for new music and audio technologies (CNMAT), then on August 1st will be a proper concert in the same venue, at which time i will wrap this joint up and call this phase of the project "ready to release", meaning that all the contributors will get their perks and, soon thereafter, all associated files will be uploaded for free download for anyone interested in investigating this project on their own.

it is with trepidation that i make this announcement, considering how many "mission accomplished" moments i've had in the last year, but as i prepare to leave the bay area, i don't want to keep lingering on this project endlessly. baby is ready…or will be…


The Nomadic diary of Onyx Ashanti-Episode #5 “Boogie up…Boogie down in Oaktown”

It hasn’t rained once in the 2 months i've been here.  I hear that there is a drought.  I had forgotten what endless summer is like.  well, hmmm…it's more like endless “spring” mornings, “summer” afternoons, and “autumn” evenings in the Bay area.  this region has a very strange microclimate that is unique, but you get used to it.  I used to be…but I find myself consistently surprised by the complete lack of rain since arriving here in may.

there are many things that are unique about this region, for me.  some of which made me come here to live almost 20 years ago…some that made me stay or come back for much of that time, and some that remind me of why I keep leaving.  one of those things here is busking.  when I moved here, the bay area was alive with buskers of all sorts.  as a busker myself, I thought it was paradise.  things like turf wars (not real ones…mostly just slightly irritating ones involving loud sound systems) and cops made me leave on numerous occasions over the last couple of decades, but the busking scene here is very relaxed when you're not trying to hit the hot tourist spots.  if you hang out enough, you will be treated to some of the most unique (I seem to use that word a lot when describing this area) buskers in the world…until the politics make them pack it up for somewhere else.  it's not all good or all bad…it just “is”.  and it “is” in a region that is sunny everyday currently, so there is not much to complain about.

a pattern has emerged over the years.  I live somewhere else then come back to the bay area to regroup, artistically as a street busker, before taking my re-honed skills back out into the world.  7 years ago almost to the day, I had a choice of going to Berlin in the summer of 2007 or going back to the bay area.  I chose to come back to Oakland because I had an idea for a new kind of live looping improvisation and I wanted to be fluid with it before I went overseas, so I came back here to play on the street and refine beatjazz before I finally moved exactly one year later.

and so it is again…back in the bay area with new ideas that need to be assembled in the peace that is playing on random street corners in a city where my friends treat me as if they just saw me yesterday; comfortable like a glove.  a tiny little rig designed to be compact and unobtrusive as I iterate my process.  a laptop, little speaker and battery, a wireless router and the exo-voice.  that is all.  light, sound and physical motion…..and boogie…


a couple of years ago, I had a dream about playing robots.  rather than the robots making sound themselves (yet), they conveyed, physically, the data that I was producing when I interfaced with the exo-voice.  I designed and built one. its job was to respond to accelerometer position with its two servos; side to side motion was the x-axis, and attached to it was another servo that would rotate up and down on the y-axis. I named it “boogie”, because even the slightest twitch from the accelerometers would make it “dance”, and thus was the beginning of imbuing this mechtron with descriptions reserved for “life”.  here is boogie v1.0 in all its glory.
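the core of a bot like boogie is just a mapping: scale a raw accelerometer axis reading onto a servo's angular range. a minimal sketch of that mapping, with illustrative ranges (the actual calibration lives inside the exo-voice patch, not here):

```python
def accel_to_servo(raw, raw_min=-1.0, raw_max=1.0,
                   angle_min=0.0, angle_max=180.0):
    """linearly map one accelerometer axis onto one servo's range,
    clamping first so a hard shake can't command an impossible angle."""
    raw = max(raw_min, min(raw_max, raw))
    span = (raw - raw_min) / (raw_max - raw_min)
    return angle_min + span * (angle_max - angle_min)

# a level hand (0.0) points the servo to mid-travel, 90 degrees;
# tilting all the way to either extreme hits 0 or 180
upright = accel_to_servo(0.0)
```

with one of these per axis (x for side to side, y for up and down), the slightest hand twitch becomes visible motion, which is exactly why even noise makes the bot "dance".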

As you can see, boogie is very cute but also very low resolution for the purposes of expressing gestural data.  I even built 2; one for the left hand and one for the right.  it got the job done but it was, for all intents and purposes, a baby.  it was a move in a direction but not quite what was needed.

eventually it occurred to me that maybe two low-res bots were not what was needed.  maybe I needed a singular, higher resolution bot; a singularity of parameter expression: accelerometer data from both hands, a color synth and led light parameter expression.  the first port of call was investigating those 6-axis car-building monsters like these…


I took the servos from the two previous boogies and added two more to make a total of 6 servos, and built one based on what I had researched online.  it was cool to look at, but it had no real reason to exist.  it didn't do anything that was crucial or necessary; cardinal sin numero uno: everything must have a function. no superfluous expressions.  so in rethinking the purpose of such a bot, it occurred to me to put the two existing bots together as one bot.  so rather than a 6-axis bot, I would have a 4-axis bot: one servo for each used accelerometer axis.  in hindsight, this was easier to imagine now than it was then.

the problem was determining how to hinge them together.  the first thing I tried was x-y, like the original bot, with another x-y attached to it.  this was amazingly shitty.  it flopped around all spastically.  there was no grace or way to discern function.  but after a bit of playing around with axial connections, it occurred to me to connect the original x-y pair, controlled by the right hand accelerometer, to the second y, with its x acting, technically, as "yaw", like a gyroscope, or rather, like a head that could look side to side.  the two y's together work beautifully.  it feels very organic.  its movements mimic biological kinematics.  so how does this relate to functions of any sort?

in my previous post I described quadvector gestural synthesis.  briefly, each accelerometer axis controls a whole synth of its own: 4 in total.  the hand units are designed to make the hand want to go naturally to the center of the gestural range, the halfway point between the minimum and the maximum axial range.  the hand should be in a sort of "pointer" position, as if you are pointing at something.  this takes practice and attentiveness.  it's very easy for the hand to drift into more comfortable stationary positions that are slightly "tilted", so it occurred to me that I could calibrate boogie to point straight up when all accelerometer axes are at gestural center.  this allows me to create synth and parameter controls that are now calibrated. and they all lived happily ever after…but wait…
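the gestural-center idea reduces to very little code: take the midpoint of each axis's calibrated range, and measure how far the current reading sits from it. when every deviation is zero, the hands are centered and boogie stands upright. a hedged sketch (the axis names and ranges are assumptions for illustration):

```python
def gestural_center(ranges):
    """midpoint of each axis's (min, max) calibrated range."""
    return {axis: (lo + hi) / 2 for axis, (lo, hi) in ranges.items()}

def deviation(reading, center):
    """signed offset of each axis from gestural center; all zeros
    means the hands are perfectly centered and the bot points up."""
    return {axis: reading[axis] - center[axis] for axis in center}

# four axes: right-hand x/y and left-hand x/y, normalized ranges
ranges = {"rx": (-1, 1), "ry": (-1, 1), "lx": (-1, 1), "ly": (-1, 1)}
center = gestural_center(ranges)

# a perfectly centered reading deviates by zero on every axis
offsets = deviation({"rx": 0.0, "ry": 0.0, "lx": 0.0, "ly": 0.0}, center)
```

driving one servo per deviation is what turns the robot into a feedback display: the player isn't asked to feel an abstract midpoint, just to make the bot stand up straight.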

robotics for all

on a whim last week, after I had finally gotten around to designing some legs for boogie that could deal with its powerful swinging motion, I was showing my friend jay the bot, which he had not seen except online.  I gave him the hand units.  he did what everyone does: put his hands into some strange contortion that has no relation to controllable gestural anything.  but then I merely turned on boogie, explained briefly which axis controls which servo, and told him to make boogie stand straight up.  within 30 seconds of puppeting the bot with the hand units, he had boogie standing straight up, and his hands were in perfect gestural center.  boogie had done in 30 seconds what words could not do in minutes, or hours. 

intrigued, I replicated this experiment with 3-4 other friends and the results were the same: using a robot as a gestural feedback system yielded singularly positive results.  in addition, after achieving gestural center, they began to puppet boogie in very organic ways.  once they had discovered this calibration concept, the rest was easy for them.  I now have a number of gestural-center, robot-calibrated sound elements that relate directly to the motion of boogie.  this surprising discovery leads me to think that possibly everyone who gets into gestural synthesis will benefit from having a parametric gestural feedback robot.  I will know more this week as I bring the light feedback system into the fold.

in other news…

the mask design is being tweaked constantly and is now being put on other peoples noggins, which is yielding much useful data (spoiler alert: I have a very large head), and after putting the hand units on a dozen peoples hands, I have a few “universal” part variations that I will be testing this week (with associated video and blog documentation).  ive got a number of prototype models for this but did most of the design experimentation on what could be called a “test prosthesis”.  there is a brief shot of it in episode 5.  it will form the basis of the next episode, but I want to see if anyone notices it ;) 

the primary goal with the sound synthesis lately has been to regain a sense of fragility by making the functions more dynamic.  things like making the breath sensitivity pick up the slightest breath, and making that slight breath sound “slight” rather than sound like it is “triggering” something.  the sound has tended toward functional necessity for about a year now.  if you listen to some of my music from a few years ago, there was an ability to investigate a certain emotional vulnerability because of the layers and layers of well programmed software synthesizers (VSTi’s) I used back then. my designs for this system were for it to be much more dynamic than the previous systems, and if judged on pure function, it is; but judged on the ability to convey wide, contrasting emotion, it had been stuck in prototype mode for 2 years, until just recently.

being able to go out and play to no one in particular, any time I feel like it, has allowed me to investigate “the now”.  meaning, I go out and “play”, fully, what the system is capable of right now, rather than perpetually “program” toward the future attractor.  now the programming sessions are more focused because they are informed by a regular playing schedule.  a schedule that isnt determined by who is paying me, but by what I want to investigate on that day…for as long as I need/want to investigate…or until the battery dies, whichever comes first.

the current evolution in the sound is coming out of the investigation of gestural interaction with the filters and the harmonic relationships, formerly referred to as chords.  I say formerly because once I learned how to produce them gesturally, they ceased to be chords and became more of a harmony array.  or, put another way: the reason a clarinet and a saxophone sound so different is the harmonic relationships created by the material they are made of, and their size.  what I had imagined as chords became harmonic series.  this has made me rethink the filter array, to create a sort of modulating “cavity” for the sound, like how a saxophone body is the cavity that defines its sound.

here is what this stage sounds like, and even looks like. 

you don’t have to grasp any of what I just wrote to see and hear (with headphones) what it means.  there is a coherence to the sound that kind of surprised me, especially the filter sweeps, where the most interesting phase patterns happen.  below are a few of the audio files I recorded last week; they also exhibit interesting sonic qualities.  I predict that tiny code changes from now on will produce predictable unpredictability.  the system is finally reaching a certain functional stability that will make it easier to distribute as something that is learnable and playable.  all the pieces are in place.  now it is just a matter of making what I understand understandable for those who choose to investigate beatjazz in the (near) future.


Quadvector Gestural Fractal synthesis-current implementation: part 1

in an attempt to not unload streams of incomprehensible information all at once, every 2-3 months, I will try to unload more bite-sized streams of incomprehensible information at a slightly more regular pace…you’re welcome. as we near the event horizon of the singularity referred to as “releasing the project to the public”, little things become big things, as if their mass increases by virtue of time compression.

one such thing is the synthesis engine concept, currently referred to as a quad-vector gestural synthesizer.  this is one of a few “fractal” synth concepts that I have been working on for the last 2 years. the idea with this one is of a single synthesizer, tentacled with branching gestural parameter interfacing, that can, through various combinatory means such as key sequencing, hand position and mode switching, mutate into any sound I can imagine.  this is accomplished using the multimodal gestural capabilities of the exo-voice prosthesis, whose interaction matrix is fractal. by “fractal”, i mean systems that evolve to higher states of “resolution” over time, from simple origin states.  this applies to everything from design to programming to performative interaction.

in the design of the hand units, following pure function, I needed “keys” that were spaced to be comfortable for my fingers while also being attached to my hands unobtrusively, which was accomplished in the first, cardboard, version.  over time, the design iterated toward things like better ergonomics, better light distribution, ease of printability, and so on.  the design and function evolved while still being based on the idea of the first version.  since all aspects of the exo-voice share this design concept, the entire design evolves fractally; as new states are investigated, the resolution of each state increases toward a state of “perfection”, at which point it forms the basis of a new state of investigation.

each hand unit has one 3-axis accelerometer.  the x axis measures tilt side to side, like turning a door knob.  the y axis measures tilt forward to backward, like the “paint the fence” motion from the karate kid (hope that is not too old of a reference for some of you).  the easiest way to picture it mentally is to think about the range of motion your hand would have if your wrist was stuck in a hole in a thin glass wall; you could turn your wrist, and you could move your hand, from the wrist, up and down, as well as combinations of those two motions.  this forms the basis of the gestural kinematics the system is built on.  the x range is one parameter and the y range is a second one.  the z axis is unused right now.  it’s the functional equivalent of using a touch pad in 3d space, with x and y acting as two semi-independent control vectors.
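
the glass-wall model above can be sketched in a few lines.  a minimal python sketch, assuming a plus/minus 90 degree usable tilt range (an invented figure), not the real firmware:

```python
# each hand's x and y tilt is normalized into a 0..1 control vector.
# TILT_RANGE is an assumption for illustration, not a measured spec.

TILT_RANGE = 90.0  # assumed usable tilt, in degrees, each way from level

def tilt_to_vector(degrees):
    """clamp a tilt angle and scale it to a 0..1 control vector."""
    clamped = max(-TILT_RANGE, min(TILT_RANGE, degrees))
    return (clamped + TILT_RANGE) / (2 * TILT_RANGE)

x = tilt_to_vector(0.0)    # hand level: 0.5, the gestural center
y = tilt_to_vector(45.0)   # tilted forward: 0.75
```

the point of the normalization is that every axis, whatever its raw range, presents itself as the same kind of “knob”, with 0.5 always being gestural center.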

there are a number of ways to use these vectors. 

  • Individually (per hand)-x is one “knob” and y is a separate “knob”
  • Together(per hand)-the combined x/y position is considered to be one control point, similar to how a computer mouse works.
  • Together (both hands)-2 x/y control grids
  • Quadvector-x/y-left and x/y-right are combined into a 4-dimensional control structure.  think of this as video game mode, or similar to how flying a quadcopter works; the left hand is, in this metaphor, forward, backward, left, right, and the right hand is up, down, turn left, turn right.  no one control is enough.  all vectors must be interacted with simultaneously to control the position of the quadcopter in 3d space.
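
as an illustration of the quadvector mode, here is a hypothetical python sketch; the oscillator names and axis assignments are invented for the example, not taken from the actual patch:

```python
# "quadvector" mode: the two x/y pairs are treated as one 4-dimensional
# control structure, like the two sticks of a quadcopter transmitter.

def quadvector(left_x, left_y, right_x, right_y):
    """bundle both hands' axes (each 0..1) into one 4-vector."""
    return {
        "osc1": left_x,   # e.g. forward/backward
        "osc2": left_y,   # e.g. left/right
        "osc3": right_x,  # e.g. turn left/right
        "osc4": right_y,  # e.g. up/down
    }

# both hands at gestural center: every control sits at 0.5
state = quadvector(0.5, 0.5, 0.5, 0.5)
```

no single value is meaningful alone; like the quadcopter, it is the combined state of all four that defines the sound at any moment.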


the exo-voice system works by routing accelerometer data to parameters, which are determined by which “keys” are pressed when in edit mode. (a key in this system is a function initiated by pressing a sensor with the finger, like how wind instruments work, but here it is more symbolic, since the same function could be achieved in any number of ways.  since I was trained as a sax player, my mind gravitates toward that modality.) so in “play” mode, the keys are interacted with as saxophone fingerings on a basic level.  when the edit mode button is toggled, the entire system switches to edit mode and all the keys become function latches, ie, if I press key 1, the accelerometer controls parameter 1 and parameter 2.  if I press key 2, the accelerometer controls parameter 3 and parameter 4, and so on.  there are 8 keys, so that is 16 base-level parameters that can be controlled.
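
the play/edit latching described above could be modeled like this.  a toy python sketch; the parameter numbering follows the text, but everything else is illustrative (the real routing lives in the pure data patch):

```python
# in edit mode, each key latches the accelerometer onto a pair of
# parameters: key n controls parameters (2n-1, 2n), so 8 keys cover
# 16 base-level parameters.

LATCH_TABLE = {key: (2 * key - 1, 2 * key) for key in range(1, 9)}

def route(mode, key, accel_x, accel_y):
    """in edit mode, send the x/y axes to the key's parameter pair."""
    if mode != "edit":
        return None  # in play mode keys act as note fingerings instead
    p1, p2 = LATCH_TABLE[key]
    return {p1: accel_x, p2: accel_y}

print(route("edit", 2, 0.3, 0.7))  # -> {3: 0.3, 4: 0.7}
```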

in previous iterated versions of the software system, each key could instantiate a separate synthesizer when pressed hard (HP1-8), each with an edit matrix that was specific to that synth, which meant there were 16 parameters to control for each of the 8 synths.  now though, there is one synthesizer with 4 oscillators, each assigned to its own accelerometer axis. the HP modes are for “timbral routing”, which means that the synth is routed through a filter which makes it sound timbrally unique, in ways such as being plucked or being blown. some parameters are already set, such as pan position, gain, delay and pitch offset, each taking one key, so that leaves 4 keys (8 parameters) to paint with, which is not a lot.  this is where the fractal concept begins to pay off.

each key sends either a 1 or a 0 when pressed. in my system each key is assigned a power of 2, ie., key 1=1, key 2=2, key 3=4, key 4=8, key 5=16, key 6=32, key 7=64, and key 8=128. this is an 8-bit number sequence. one useful aspect of this is that each combination of key presses creates a completely unique number that you cannot achieve by any other combination of key presses. so, key 1, key 2 and key 4 (1, 2 and 8) equal 11. there is no other combination of key presses that will generate 11.

in addition to the 4 keys per hand, there are two button arrays: 3 buttons on top, controlled by the index finger, and 4 buttons on the bottom, controlled by the thumb.  the upper buttons are global, so they can not be mode-switched (change function in edit mode), but the bottom button array on the left hand is used to control octave position in play mode.  in edit mode all keys become parameter latches, so octaves arent necessary; so in edit mode, if you press key 1 while at the same time pressing lower button 1 (LB1), you get the number 257 and can control 2 new parameters not achievable with either button alone.   this is how I was able to create relatable sax fingerings. in play mode, I use the octave button presses to add or subtract 12 or 24 from the midi note number that is generated from the key presses, but in edit mode, I simply gave each of the 4 octave buttons its own number (256, 512, 1024, 2048), which now means that when I press key 1, I have the original parameter position as well as 4 new ones, totaling 10 parameters per key press and 80 parameters overall.  this is stage 2 fractal resolution.
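
the key encoding is plain binary place-value, which is easy to verify in a few lines of python (the key names here are shorthand for the example, not actual identifiers from the system):

```python
# each key is a power of two, so any chord of key presses sums to a
# unique number.  the lower-button values (256, 512, 1024, 2048)
# extend the same scheme past the first 8 bits.

KEY_VALUES = {f"key{i}": 1 << (i - 1) for i in range(1, 9)}   # 1..128
KEY_VALUES.update({f"LB{i}": 256 << (i - 1) for i in range(1, 5)})

def chord_id(*pressed):
    """sum the values of the pressed keys into one unique id."""
    return sum(KEY_VALUES[name] for name in pressed)

print(chord_id("key1", "key2", "key4"))  # -> 11  (1 + 2 + 8)
print(chord_id("key1", "LB1"))           # -> 257 (1 + 256)
```

uniqueness falls directly out of binary representation: every sum of distinct powers of two corresponds to exactly one set of pressed keys.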

so, as you can imagine, there is ample opportunity for this to become a class A headfuck.  it is not enough to simply throw parameters in just because one can.  functions have vectors as well; otherwise they become unwieldy.  the question that must be asked is “what function naturally emerges as ‘next’?”  it is at this point that I use a mental construct called a “perceptual matrix”.  this is the mental projection of the flow of parameters within the system.  I discovered it by accident after I completed the first working version and realized that the act of building it had created a mental model in my mind.  this freed me from having to reference any visualizations.  now I design the functions to make sense within this perceptual matrix. so far, I have not found a limit at which I can no longer interact with its growing complexity.  in fact, by following a fractal design concept, the perceptual matrix guides the design evolution, since the mind is the origin fractal of the matrix itself.  as the name exo-voice implies, it is a fractal prosthesis for the mind, both in creation and interaction.  the job of the visualizations is to convey this information to observers and collaborators, as well as providing a means of interacting with the data after the fact.

im having to investigate these issues because fractal synthesis makes sense now. 2 years ago, it was two words spoken one after the other.  now it is a protocol. every parameter vector can be mapped now and iterated through a continuously evolving key sequencing modality (every key sequence can have 5 positions of 10 parameters, which could end up being over 200 parameter vectors per hand and 600 between both hands, without adding any new hardware).  but what goes where becomes the grasp-worthy straw.

my percussive concept is begging for evolutionary adaptation, but its not as easy as just throwing something in there, because, since it is one synth with parameter vectors as a sort of tentacles, or synapses, one connection has to lead to the next connection, and the next, fluidly.  my mind is pulled in a dozen directions.

I needed to share this modality with whoever is interested in knowing it, because a lot of what I will be expressing in the coming months will be based on this, and without this information, there is no way to know what the hell I am talking about.  in addition, I expect that within the next year, these parameter trajectories will evolve to more resemble biological expressions like nervous systems, which are fractal as well. 


The nomadic diary of onyx ashanti-Episode#3 and Episode#4-exploring the digital memory banks and exo-voice development updates.

As I mentioned in my previous post, I was gifted a BAD ASS Blackmagic pocket cinema camera to document my travels.  so in addition to all of the other stuff I am learning, I am also working on developing more of an “eye” when it comes to visual expression.  This is not the first time I have pushed into the video/film realm.  I edit all of my own videos.  but I do it more from the point of view of a technician rather than as a form of expression unto itself, mainly because I never had a nice enough camera to warrant the expenditure of cognitive resources.  but I have been reaching into the void of visual expression lately, to see what I find.

over the last 2 months I have accumulated 1.5 terabytes of footage of a range of subject matter.  I had been contemplating doing another “range of time” video, ie. footage from a place and time, singularly, like berlin or amsterdam or atlanta.  but then it occurred to me that what I have is a digitized visual and audio memory bank.  as such, it would be interesting to interact with it as the brain interacts with memory.  so instead of, say, a video project concerning my time in Athens at the Slingshot Festival, or from my time in mississippi, I could think more about the totality of experiences from holland to where I am currently, and single out particular conceptual constructs, like, as this project presents, all of the representations of motion over the last 2 months.  times in cars, trains, planes and buses, just watching the world go by.

this makes the memory bank much more random-access and interesting as a store of experiences.  ive got LOADS of footage: festivals, conferences, maker faire, singing robots, thunder storms, strange insects…it is a more interesting way to access the digitally stored memory of my nomadic travels and makes me think very differently about what I capture on film and as audio.

Episode#3 is entitled “memory of motion” and is a montage of moments in motion between the time I landed in atlanta to go to Athens (GA) for the slingshot fest, then from atlanta to nashville to mississippi to memphis, san jose, san francisco and finally to oakland, where I am currently working on a few things that will be the subject of future nomadic diary episodes. Episode#4 is me playing in Berkeley, just above the BART station.  I was very lucky to have my friend Tony Patrick there to impart his camera skills on the occasion.  The purpose of the soundtrack is to give an aural reference to the state of synth development at the time of editing, for posterity.  in these videos, the synth functions have stabilized but are more simplistic than in previous posts.  this is because I took out many premade objects and replaced them with my own versions, but more on this shortly.

Busking in Oakland

now that I am fully re-submerged back into busking, the iteration of systems is going waaaaaaaaaaay faster.  busking has always helped to accelerate idea iteration because you can practice more, in more relevant situations, and get paid for it.  you become an entrepreneurial singularity; you need only think of an idea and try it in stages every day that you go out. 

as I have been pushing myself to get the contributor/open source version out into the hands of people, the fast design-build-test-revise-integrate cycle that busking affords has been invaluable. when busking, glaring fuckups, or just “not-so-nice”-ups, become unbearable because you are interacting with them every day for anywhere from 1-5 hours a day.  a crappy sound is bearable if one only has to rock it for a 15-40 minute show on a nice stage with a nice sound system.  but after a week of 5-hour days, the issue is brought into laser-like focus and you are able to home in precisely on errant gremlins.

luckily, since I was literally “from” here before I moved to Berlin, some of my old busking gear was still stored in my friend’s basement.  I had only to acquire a new battery.  my berlin battery was horrendously large, 25kg (approx. 55lb), so when I went shopping, size was a major consideration.  I found a nice, small sealed battery, borrowed the small Roland monitor that I also used to use back then (thanks Jaswho? !) and hit the road.


busking has supported me since I first moved away from home as an adult in the early 90s.  I knew a total of 3 songs when I started. busking allows you to develop.  it is like a relationship with someone you might not see very often, but when you do, its familiar and comfortable. 

the last few months that I was in Berlin, I was beginning to feel “soft”.  most seasoned buskers can rock it at a moment’s notice, anywhere…ANYWHERE.  I was starting to feel like I was losing that edge.  I had spent so much (completely awe-inspiringly necessary) time inside my own head for the last couple of years that I felt I was losing sight of the ability to just rock out. but I KNEW I was losing that edge last july, when I was fortunate to find out that DubFX and Rico Loop were going to be rocking out for free in Mauer Park. they are both seasoned buskers and I was awed by these cats!  they rocked it non-stop, in the hot sun, for 4-5 hours, with footpedals and beatboxing!!  it was amazing!  and neither of these cats has to busk at all anymore.  they are both famous with full touring schedules, but they wanted to give something back, and they really did!  that show stuck with me to this day.  I thought “why am I not doing my thing, somewhere…anywhere!?”  (the answer was because I was in the middle of revising the code and the hardware design to be more easily printable, but still…)

so now that I am “exploring the now”, partly from the street, the boldness is coming back…that thing that a busker has that says “yeah sure, play right here, right now!”, but it is mixing with this scientific thing that has emerged in the last 3 years, and that is where things get interesting.

one system that is benefiting from this the most is sound design.  although the novelty factor can work on passersby, sub-evolved sound and/or its parameter interfacing drive me insane in a very short span of time, usually 3-4 days.  if I feel trapped by my parameter interface, I get bored.  if the sound doesn’t evolve toward a future attractor where it conveys its character with efficiency and beauty, I get irritable.  playing daily compounds these feelings.  so having a system that I designed from the ground up to be in a perpetual state of iteration makes for exciting investigation.  shitty sound used to just be shitty sound.  now, shitty sound is a daily-updated list of “to-do’s”.  now that stability is reasonably common and expected, I can add the last synthesis piece…

the last synthesis piece

in March, I began using the new fractal synth ive been rambling about for almost 2 years.  by “fractal” I mean that it is truly fractal;

  • the system logic is an iterative branching protocol consisting of latching key sequences.  this means that pressing one set of keys takes the system into a new mode; from this mode, a different set of keys changes aspects of the current mode, from which yet another mode can be instantiated.  and since programming pure data is part of the expression, this branching system can continue to the limits of my computing power or cognitive abilities.
  • I imagine “functions” rather than specific sound outcomes, so I begin with the absolute simplest idea of the sound function, then revise it toward higher states of expression as I learn more and expect more from the sound.
    (this is the reason for the sound quality regression evident in my latest recordings; everything has been put into its simplest possible state so it can be iterated more easily toward higher states, rather than using too many objects that I didn’t build just so it would sound “good”.)
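
the branching, latching logic described in the first point can be pictured as a tree of modes.  a python sketch with an invented mode layout, just to show the shape; the mode names and chord numbers are placeholders:

```python
# each key chord (encoded as a number) moves the system into a new
# mode, and the same keys mean something different inside each mode.
# the tree here is made up to illustrate the branching structure.

MODE_TREE = {
    "play": {11: "edit", 257: "timbre"},
    "edit": {3: "filter-params", 5: "envelope-params"},
    "timbre": {1: "drum", 2: "pluck"},
}

def step(mode, chord):
    """follow one branch of the mode tree; stay put on unknown chords."""
    return MODE_TREE.get(mode, {}).get(chord, mode)

mode = step("play", 11)   # -> "edit"
mode = step(mode, 5)      # -> "envelope-params"
```

because each mode carries its own chord table, the same physical keys can be reused at every depth of the tree, which is what lets the branching continue indefinitely.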

the fractal synth the system is designed around can become other types of sounds, based on how it is routed.  for instance, I can press one button and the sound is routed through a drum filter, so now, it is a drum.  but because it is gestural, the “drum” has a ripping/morphing kind of quality that is part function and part discovery-through-interaction.

pic 1: the fractal audio module object
pic 2: quad-vector synth with a few of the gestural control modules to the right
pic 3: the quad-vector synth closeup showing the routing
pic 4: the x and y vectors from one hand
pic 5: quad-oscillator selector patch
pic 6: drum filter

the last synthesis piece is the other timbral modalities ive been working on.  what this means is that when I press the “drum” button + another button, the sound now goes through a “string” filter, and/or a “pluck” filter, and/or a “wind turbulence” filter for wind instrument modes.  this is a bit of a holy grail for me now that I am comfortable with the concept of a synthesizer that I must effectively reprogram on the fly. in this sense, the term “quad-vector gestural synthesis”, which means that each x and y accelerometer vector, per hand, has its own complete synth, is the attractor that is pulling all development toward it. if the synthesis can’t be done with gestures, then it is not gestural synthesis.
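
the timbral routing could be modeled as a lookup from button combos to filter chains.  a hedged python sketch; the filter functions are stand-ins for the real pure data filter patches, and the combo names are invented:

```python
# one synth output is passed through whichever timbre filter the
# current button combination selects.  strings stand in for signals.

TIMBRE_FILTERS = {
    ("drum",):           lambda s: f"drum({s})",
    ("drum", "string"):  lambda s: f"string({s})",
    ("drum", "pluck"):   lambda s: f"pluck({s})",
    ("drum", "wind"):    lambda s: f"wind({s})",
}

def render(buttons, synth_out):
    """route the raw synth signal through the selected timbre filter."""
    chain = TIMBRE_FILTERS.get(tuple(sorted(buttons)), lambda s: s)
    return chain(synth_out)

print(render(["drum"], "osc"))           # -> drum(osc)
print(render(["drum", "pluck"], "osc"))  # -> pluck(osc)
```

the design choice here mirrors the text: there is only one synth, and “drum” or “pluck” is not a separate instrument but a routing decision applied to it.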

once this is working, it will be time to transcribe all the code and prepare it for distribution.  transcription will make it modular and understandable.  this will act to standardize the color synth parameters for the lights and the visualizations as well.

The modular mask

the mask has reached design stability now.  the final issues were;

  • snap together construction for ease of assembly and modification
  • designing a flip down mouthpiece so it could be moved away from the mouth without having to take the mask off.
  • modularity

in April I ordered a printer filament called t-glase from Taulman.  this is PET, the same plastic used to make water bottles.  its properties slot somewhere between ABS plastic and Nylon; its more rigid than nylon but more flexible than ABS, and it doesn’t warp while printing.  but, most importantly, it refracts light beautifully, so I decided that it would be a great starting place for the mask design finalization stage.

the key design innovation for this was the simple idea of “small interlocking pieces”.  I took a couple of weeks away from mask design to work on a side project: 3d printed orthotics.  this allowed me to investigate how one might create a design meant to be snapped together by hand and attached to a dynamically moving, shifting body, like the foot (I will show this as soon as it is functional).  it came down to the relationship of its small interlocking pieces.  so I took this new insight back over to the mask and started from scratch, guided by function only, and this was the result.

the design only touches the head at 7 points and is very easy to modify. the mouthpiece now shifts up, out and down, out of the way, and the circuit area isnt mashed against the jaw.  when not in use, it is easily partially disassembled.  the material is flexible and robust enough to withstand repeated assembly/disassembly. and although the contributor version will not have the goggles, which are form-fit to my head size, it was a matter of a 5-minute design tweak to incorporate them.

the design is very, very playable and comfortable for long stretches of time, even in direct sunlight, which was a weak point in previous designs.  having so much plastic against the skin is not comfortable under hot lights or sun, but this design solves that issue. currently, the face-mounted electronic components look boxy, but since they are now lego-like in upgradeability, the final versions will be much more streamlined.


beta testing the hardware on people

I have begun printing parts for a beta build of the exo-voice.  this is a full exo-voice, printed with design revisions, for the purpose of putting on other peoples heads and hands to get a feel for what needs to be changed so it can be used by the greatest number of people initially, or easily changed by the pilot to suit their own personal needs or tastes.

being able to take the prosthesis out into the real world, playing on the street, gives me much more confidence in its design.  its been bouncing around in an un-padded bike bag for weeks now and has been extremely stable, ie., low incidence of broken parts and faulty wiring.  as I print what will probably eventually be posted online/shipped to contributors, fit, finish and material properties will be defined, and there will be much footage of people of various sizes, ages, races and sexes, to see how well the design translates, so it will be as “print-and-use” as possible.  I am planning to make this aspect of the project more social by taking some aspects of the fulfillment stage around to local hackerspaces and doing it there, rather than in an apartment by myself.  more info on that as I wrap up this stage.  time to go play a bit. 




Episode#2-Double Dutch Bus(ride)-The nomadic diary of Onyx Ashanti-Exploring the exponential now

During this whole phase of the trip out of Berlin and into the US, I had very little hard drive space for new footage.  I was very “precious” with the time I had to record with the new camera and as such, I don’t feel that I captured everything I wanted to.  besides that, I am now seeing that I need to create some camera mounts for my bike so I can get the truly cool stuff that happens while I am riding around.  I have plans in that regard and will report on them when I have a model to show/share.

Sitting in the bus on my way to Holland, I felt that stomach-dance of anticipation that I had missed for sooo long.  When being nomadic, there is a point at the beginning or even at some point during the adventure where you say to yourself, something to the effect of, “ well…here’s another fine mess you’ve gotten us into!”, preemptively. there was a bit of that.  but not much, surprisingly.  My inner dialog was very focused on “getting to the point” with “point” being significant steps on this iterative trajectory.  in other words, focusing on what's next, leading to what's next, next, and so on…in this case, Holland was home to two significant entities that I feel are crucial to the “nexts”…”nextsus?”(…hmmm, nextsus didn’t make any sense but I wanted to try it on anyway. )


in the summer of last year, I was at Betahaus in Berlin, having a coffee and doing a bit of geek watching, when I was approached by a man named Keez Duyves.  he seemed to have a similar “vibrating-while-not-moving” energy to my own, and we began chatting about a variety of topics relating to technology and berlin and our place in this “thing”.  he then proceeded to show me some of the work he does in virtual reality.  holy shit, it was amazing!  rather than approaching the concept from a game-engine, “walk around and shoot things” perspective, he was creating what could only be described as “sculpture”, there.  and his work had many, many dimensions, some involving motion-capturing the input of entire theaters full of people, performance art overlays, total immersive constructs that made me begin to see the oculus rift as a necessary way to interface with data (if Facebook doesn’t fuck it up, but I digress on that point for now…).  we just happened to be playing at the same event in Wedding some time after this meeting, and I got the chance to experience it myself and was completely blown away, so when he graciously offered to house me, “if you ever decided to come to Amsterdam”, I made note of said offer, in INK, rather than my usual metaphorical pencil scribblings.

around the same time, and for most of last year, I was conceptualizing ways of separating the expression of sound from the expression of data, so I could experiment with how they were to be reassembled.  I find it criminal that venues and police should have the right to dictate whether or not music can be performed and/or shared, based on outdated propagation modalities, ie. speakers transmitting sound through the air.  I’ve felt that this was inefficient for quite some time.  I investigated various ways of transmitting sound over the years, going so far as to build up a 15-headphone listening station on the street in San Francisco back in the 90’s.  it worked…kinda.  it weighed less than a speaker system, but it was too soon for such a concept at that time, and once the novelty wore off, I went back to speakers. 

now though, everyone carries some sort of network-connected sonic prosthesis in the form of media players and smart devices.  so I set out to build a REALTIME media streaming system.  I capitalize real-time because I soon discovered that real-time in the media server world is measured in seconds, not milliseconds.  the lag time was too great, and the overhead for running the servers I was discovering offended my minimalist sensibilities.  I wanted the process to run in the background of the system I was already using, NOT on a separate, heavy, hot server.  if that were the case, I could simply stick with speakers.

it just so happened that out of all the servers I investigated, there was one that was low on cpu and memory overhead, blazingly fast, and, most importantly, open source, so I could share it once I got it to work (sonic fractal matrix 0.1-alpha? :D )  its called Mistserver and it is made by DDVTECH, who just happened to also be located in Holland!! 

so, the crazy VR guy is in Holland, the super-media-server guys are in Holland, and being that Amsterdam was the first place I ever set foot in Europe, and that the first person I ever met in Europe, my friend Phillip, also lived in Holland, it seemed that all roads lead to Holland!!

the first day back on the road.


I brought the boom box with me, just in case I wanted to busk with it.  I took the bus from Berlin to Amsterdam around 7:30pm, so I could get some sleep and be fresh for my meet up with the DDVTECH guys the next morning.  I had already had one amazingly productive online session with one of the head guys, Jaron Viëtor, a couple of months earlier.  he helped me get it basically working, but I was using windows at the time and I was migrating the system over to debian Linux, so they offered to help me set up an optimized system for that.


the ride to Holland was meditative.  I didn’t distract myself with music or excessive note taking.  I just sat and explored my memories of my time in berlin while creating new “future memories” of the things that needed to be done in the coming days and weeks.  sleep came easily, as did awakening early the next morning, to retrieve my stuff from under the bus, assemble the trailer and bike and regroup inside the bahnhof (train station).  it was at this moment that I realized fully that I was no longer in berlin and that everything I needed in life was on that trailer.  no anxiety.  only the desire to peel away the day, efficiently, elegantly, and interestingly.

it still bugs me that I didn’t have enough space on the camera's SD card to take video of the trip from this station to the station in Leiden, where the DDVTECH offices were, nor did I get footage of biking to their offices.  it was a sunny morning and I always love Dutch bike paths, so that sucks, but shit happens.

Google maps led me directly to their offices and one of the guys was there to meet me at the obscenely early hour I decided to ride over there, which I thought was really nice of him.  this set the tone for the rest of the day.  and I do mean the ENTIRE rest of the day!  they showed me some of the interesting ways they were evolving the code in MistServer, based on its extreme efficiency and modularity, and even went so far as to incorporate unreleased code into the customized version they compiled for me.  through much trial and error, we were able to get MistServer to broadcast in a stable fashion, with 400ms latency!  it must be noted that this is to devices with no prior app download or optimizations.  the client smart device merely has to log into my local wireless network, by scanning a qr code, then log into the stream by scanning a second qr code!  voilà!  a dimensional bridge between my sonic fractal matrix and the client sonic prosthesis (smart phone).  this is what we are doing in the video and as you can hear, it is a surreal sound that I look forward to investigating and iterating.
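the two-scan flow above can be sketched as a pair of qr payloads.  this is a hedged illustration: the ssid, password and url below are made-up examples, the WIFI: payload syntax is the common de-facto format that phones understand, and the stream url path is illustrative, not MistServer's actual url scheme.

```python
# Sketch of the "two scans and you're in" flow: one QR joins the local
# wireless network, a second QR opens the stream in the phone's browser.

def wifi_qr_payload(ssid, password, auth="WPA"):
    """Payload that, when scanned, asks the phone to join the local network.
    This is the de-facto WIFI: format popularized by barcode apps."""
    return f"WIFI:T:{auth};S:{ssid};P:{password};;"

def stream_qr_payload(host, port=8080, stream="exovoice"):
    """Payload pointing the phone's browser at the stream page.
    Host, port and path here are hypothetical, not MistServer's real scheme."""
    return f"http://{host}:{port}/{stream}.html"

if __name__ == "__main__":
    print(wifi_qr_payload("sonic-fractal-matrix", "letmein"))
    print(stream_qr_payload("192.168.1.2"))
```

feeding each string to any qr generator produces the two codes; no app install is needed on the client side, which is the whole point of the flow.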

later that night, they dropped me off at the airport, where I caught the train back into Amsterdam, before Jaron and his brother zoomed off to the band rehearsal they almost missed, helping me achieve a dream that I've had for over a year.  I can't wait to show it off next month in California!


I got back to Amsterdam kinda late but luckily

a. I had my bike

b. I know my way around Amsterdam and

c. I had only to take a ferry from central station to the area that put me within 10 minutes bike ride from pipslab.

the only way to describe PIPSLAB is that it is the place that you dream, as a kid, that you will have as an adult.  this was obviously the workspace of spatialized minds; tools, toys and tidbits, all within thought and physical reach, but with enough space for the mind to be able to imagine wider, deeper thoughts.  a perfect workshop…and I was going to be living in it for the next 4 days!  needless to say, I was feeling very confident that my need to leave berlin for a nomadic existence was a good call so far.

pipslab is a collective of artists.  they are highly professional in many fields of interest, which seem to gravitate around the artistic investigation of virtual realities using motion capture, but are not limited to this modality.  I was exposed to theater, to pre and post production work, to concept development…there was a lot of energy in the place, even when I was alone there.  their process mirrored some of my own in that once I arrived, I was folded into the construct almost instantly and since I came with the mindset of such an interaction, we just ramped right into doing shit.

most interesting were our jam sessions.  being that keez had designed his system himself, in openFrameworks, and I had designed mine entirely in pure data, we were able to mesh our systems at multiple levels of abstraction.  I could play sound but also send note, tempo and gestural data, in multiple formats, over multiple types of network connection.  he could translate that data into forms of expression I needed video to show, because I don’t have the language to describe what is possible with his constructs.  I think the data IS the language because words don’t do it justice and could act as an impediment to where this type of thing can go.  I think if we had had to discuss it all first, it would have taken weeks, but since I knew my programming environment and he knew his, and those could speak computer-speak, the low resolution of spoken words was only minimally necessary except to gawk at the results.  needless to say, there was very little in the way of sleep, for the days I was there.  and since then, my mind is consumed with expressing a part of itself in virtual space.
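the usual "computer-speak" glue between environments like pure data and openFrameworks is OSC over UDP.  here is a minimal, hedged sketch of hand-packing an OSC message in python; the address "/tempo" and the port number are made-up examples, and in practice pd externals do this packing for you.

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """OSC strings are null-terminated, then padded to a 4-byte boundary."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats) -> bytes:
    """Pack an OSC message with float32 arguments (big-endian, per the spec)."""
    msg = osc_pad(address.encode())                       # address pattern
    msg += osc_pad(("," + "f" * len(floats)).encode())    # type tag string
    for f in floats:
        msg += struct.pack(">f", f)                       # each argument
    return msg

# blast a tempo value at a hypothetical listener on the local machine
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/tempo", 120.0), ("127.0.0.1", 9000))
```

because both ends only need to agree on addresses and types, two completely different environments can "mesh at multiple levels of abstraction" without ever sharing code.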

the next day we biked over to the Lumasol studio.  there, two guys, Remco and Boris, had a crazy 3d capture rig for creating high resolution 3d models of people and things…of COURSE such a thing was a bike ride away…where else would it be?!  this is the same kind of stuff that was used to create the fight scenes in the original Matrix movie.  60 cameras go off in perfect synchronicity.  you can see the results of our session in the above video.

for the rest of the time, I got to hang out with old friends, like Phillip, who graciously modeled the mask so I could take notes on the adaptability of the design to different sized heads, and musician/artist/DJ’s rawdee lewing and Nobunaga Panic who came down and jammed with me/us in the lab, bringing some amazing vibes and sounds along to bless the space. 


I also, finally, got a chance to drop by the STEIM institute.  I've had so many people tell me that I had to go by there, that I decided to bike over, with my stuff of course, and pay the place a visit.  it just so happened that there were only two students there that day.  unfortunately, I ran out of space, AGAIN, on the SD card of the cinema camera, before I could get the proper amount of footage this place deserves.  but I found the basement to be the most interesting of all.  so many stored projects in sound propagation.  I could feel the morphosonic field of generations of artists, expressing themselves in this space.  I had hoped there would be a possibility to see what many have described as an ancestor to my exo-voice, Michel Waisvisz's “The Hands”, somewhere in the facility, since he was head of STEIM until his death in 2008, but unfortunately they weren't on location.  maybe another trip for another time.


to say that the intensity and beauty of the 5 days I spent in Holland “recharged” me would insinuate that I wasn’t already charged up before arriving.  instead, I will say that my time in Holland “reaffirmed” the idea that the “now” I am seeking to investigate definitely doesn’t reside in a single place.  I know now that not only was it the right decision, but in a way, it was the only decision.  there was no way to get this experience without being part of it.  without going to it and partially creating it en route.  like in video games where the treasure is hanging there in mid-air, waiting for someone to grab it, and no one had to be defeated to get it!  it is with said treasure in hand that I head back to the US to reconnect with my roots and grow some new branches…


How to design a wireless 3d printed digital musical interface (quick and dirty tutorial)

A friend asked me, this morning, if I could give her some pointers on how to start designing a 3d printed digital musical interface.  realizing that the answer would be more than a few paragraphs, I decided to write a blog post on the topic, for posterity.  then as I started mind mapping an outline, I began to realize how little of the process is digital or computer-y in any way.  and when I say “the” process, I really mean “my” process.  As I am a self-trained designer, I developed processes which work amazingly for me but which, I have been told by engineer friends, are strange but OK, since the results work well.  I will attempt to keep this post short enough not to need to take a break but long enough to be thorough.


realizations from my own exploration into form and function

When the y2k bug failed to send the earth hurtling off into space, all attention focused on another date in the future: the end of the Mayan calendar, Dec 21, 2012, when the end of time was supposed to happen.  as I type this in April of 2014, I can safely say that the movie-esque version of this conceptualization of “end of time” didn’t happen, but, examined another way, the world is very different than it was a few years ago.  maybe it wasn’t down to the day or even year of said prediction, but it must be noted that we live in an age of exponential change unlike anything we could have predicted, and the rate of change is increasing, as well as the rate of the rate of change.  thus is exponentiality.  but what does this have to do with creating a 3d printed digital musical interface?  EVERYTHING.  we are swimming in a sea of perpetually changing and mutating technologies; this article would have been completely different 5 years ago and would have been science fiction 10 years ago.  the techniques in this article that are so commonplace now are thoroughly based in the now, not the past.  of course it all builds on the past, but we are on the other side of a tipping point of possibility and must approach our expression exploration from that point of view.

The key technologies that we are building on here are the reprap and the arduino platform.  I could exchange the term “reprap” with “3d printer” and keep it general, but I choose not to.  I think the reprap platform is more important than 3d printing because it is 3d printing with the open source ethos built into its DNA.  every reprap can build other repraps.  if it cannot, it is merely a 3d printing appliance, which falls in line with consumer products and market forces.  the reprap lives in digital space and, when called out of that space as a physical machine, it allows the creation of form, from mind, in 2-3 steps, max!  as a comparison: before repraps, which kickstarted the whole 3d printer craze, manufacturing, plastic or otherwise, took a workshop with specialized tools, which were usually expensive and/or dangerous, and years of training, which was usually expensive in time, money or both.  with the open source reprap project, one could download the plans and build one for less than $1000.  a reasonably dedicated amateur could create something, realistically, within weeks or months, not years.  and once you’ve built one reprap, you can build another, usually better, one easily.  not so with commercial 3d printers, so for me, reprap is the way to go.

many repraps' electronic systems are based on an open source platform called arduino.  this was intended to be a way for children to learn about electronics but it soon satisfied a need that didn’t exist before it provided a solution: an easy to investigate electronics platform.  all of a sudden, sufficiently interested individuals could buy a $20 circuit board and create or adapt things that were of importance to them.  or maybe just something they thought was cool.  in either instance, the arduino allowed a low barrier of entry and increasingly sophisticated use cases based on its open source genome; there is an endless amount of information on how to adapt it to do anything…ANYTHING.  flying robots, weather tracking, small satellites, autonomous vehicles and, for our purposes, wireless digital musical interfaces.

the less significant but still very noteworthy technologies that we are spoiled with here in the future are plentiful cheap sensors, discoverable in internet based shops, deliverable anywhere in the world, and equally accessible CAD design environments and programming languages that talk to these systems and allow you to talk to your new lifeforms and, in some cases, teach them to talk back with increasing syntactical ability.  the one I will be gushing over is called Pure Data and specifically, pd-extended.  this is a “data flow” language that you program by calling functions called “objects”, which are onscreen boxes with the name of the function in them, joined together by onscreen wires/cables.  I love pd because at its heart, it is really just a math program with sound outputs.  it is open source, free, can communicate visually as well as over a network, and can talk to anything that can be connected to a computer, like the arduino mentioned above.
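to make "a math program with sound outputs" concrete, here is a hedged python sketch of what two chained pd objects actually compute.  the function names osc_tilde and star_tilde are my own stand-ins for pd's [osc~] and [*~] objects, not a real API; in pd, the "function composition" below is what a patch cable between the two boxes does.

```python
import math

def osc_tilde(freq=440.0, sr=44100, n=64):
    """Crude stand-in for pd's [osc~] object: n samples of a sine at freq Hz."""
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def star_tilde(samples, gain):
    """Stand-in for [*~]: multiply every sample by a gain."""
    return [s * gain for s in samples]

# [osc~ 440] -> [*~ 0.5] -> output, expressed as function composition
out = star_tilde(osc_tilde(440.0), 0.5)
```

the point is that every box in a pd patch is just math over streams of numbers; the "sound" part is only the final digital-to-analog step.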

all of the above mentioned tools are great singularly but for our purposes we need to make them work together.  a musical interface must allow for the investigation of Self, through sound.  that is my definition.  if you want to play “an instrument” then there are many to choose from.  but the desire to create a musical interface from scratch, at such a low level, is really the investigation and expression of Self; how you move and think about what expression is for YOU.  so going into this exercise with this goal in mind will speed up your design process and result in something that evolves and grows as you do, possibly, for the rest of your life.  it is not the creation of a thing, rather…

 it is the physical manifestation of the beginning of an ongoing, evolving process trajectory.

let’s begin


the most important tool to begin with is a trinity of mind, paper notebook and pencil.  your mind is your most important tool.  it is your primary design software.  if you cannot build it in your mind first-imagining how it works, imagining playing it, how it feels, what it sounds like-then everything after that is a crap shoot.  to do this part, my suggestion is to take large swaths of time to simply sit somewhere and let the mind wrap itself around the idea of what you want to create.  at first it may be hard, maybe, but after a while, it becomes much easier to imagine your designs and iterations in multiple dimensions (cad design, code, sound, etc).  (there is more I could put here about this stage, but it could fill a book.  if there is interest in such a thing, let me know and I can look into it).

after doing this for a while (sometimes hours or days, sometimes weeks, months or years, depending on the idea), the pencil pretty much guides itself to the paper without you having to force yourself into long, drawn-out sessions where you attempt to coax yourself through some self-help/mystical bullshit.  the key thing to remember is that…
 your drawings are NOT art.
they are notes for YOUR mind.
a squiggly line might be a cable…or it might be your way to remember to pick up some eggs on the way home…this is YOUR space outside of your mind to begin to give your idea form.


I find that sometimes a drawing isn’t enough so I like to keep modeling clay around. I can, from my drawings, make a basic 3d form from clay then use my calipers to measure it before I take it into my CAD environment for 3d modeling. clay is your friend.




now that you have a basic idea of the form that you are shooting for, you will need to shift gears.  it is at this point I like to shift to electronic concerns.  once the form is developed, don’t move straight to CAD modeling, because you now need to wrap that form around a basic function set, and those functions are expressed using electronic parts, which have a defined shape and sometimes orientation.  if you try to design and print the form now, without knowing what is going to go in/on/around it, you will inevitably have to start over to do this step anyway, and will have wasted time and materials in the process.

with a basic drawn form you can now investigate the sensors, boards, lights and batteries required to make your system work. so first is to determine what sensors will allow you to do the thing you want to do. developing strong Google search-fu and proper online forum etiquette is crucial.

organize your thoughts at this time.  I cannot live without mind mapping software.  I currently use xmind, but there are dozens of open source, free mindmapping programs for all platforms.  what it does is allow you to get down to the precise questions you must ask yourself or someone on a forum.  the gods of forumdom have little patience for overly broad questions or overly general ones that have been asked ad nauseam by “noobs” who haven’t gone to the trouble of reading the FAQ or searching the archives.  do those things first and make note that you have done so before you ask your question.  said question should be to the point and detailed enough that someone who knows can, if they are so inclined, check your research and give you an answer.  remember, most of the people on these forums are like you, but maybe a bit more advanced.  they have jobs and hobbies too, so don’t be an asshole.

create a mind map and drill down into what you are trying to accomplish (another topic for the book, but suffice it to say, there is lots of info on mindmapping properly, online).  in the center of the mindmap is the subject, but as you branch sub topics into more detailed subtopics, you end up with a constellation of very specific questions around the outside edge of your mind map.  it is these specific questions you should ask, if necessary, after you have researched all that you can on your own and there is no other recourse.  once you post the question, if you find the answer yourself, it is proper etiquette to post said answer in the thread and mark it SOLVED for those that will inevitably search for it in the future.  you become a part of the community in this way, instead of an anonymous leech.  this is part of the value proposition built into the idea of open source.  respect it.

so at a certain point, your sensors are working with your arduino.  note: I use a firmware for arduino called firmata, which turns it into a digital input/output board that only sends and receives to the computer.  my concept of digital interfacing is the interface as dumb sensor io, connected to the computer wirelessly.  in this way, all the heavy processing is done on the computer, which I find to be more powerful than a purely arduino based system, but that is pure preference, and not a rule of any sort.  this is a good time to connect the arduino to the computer using your wireless transceiver.  my choice for the last couple of years has been the arduino fio/roving rn-xv wifi transceiver combo because it works, it's upgradable, the arduino is mated to the wireless transceiver using a socket, which also allows for things like xbee or an Ethernet “shoe” for cable connection, and if one of these parts dies, I don’t have to replace one $50-100 board.  each part costs less than $35.  it is also quite small and has a built in battery charger.  there are other boards out there that seem to be very cool, but I have been pretty satisfied with this one so I haven’t investigated those yet.  (I wrote a tutorial for using this setup here)
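on the computer side, the "heavy processing" often starts with nothing fancier than rescaling raw sensor values.  a minimal sketch, assuming 10-bit analog reads (0-1023) like an arduino's; the function name, the "breath sensor" and the cutoff range are illustrative examples, not part of firmata.

```python
def scale(raw, in_lo=0, in_hi=1023, out_lo=0.0, out_hi=1.0):
    """Clamp a raw 10-bit sensor reading and map it to a control range."""
    raw = max(in_lo, min(in_hi, raw))
    return out_lo + (raw - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# e.g. a hypothetical breath sensor driving a filter cutoff of 200-8000 Hz
cutoff = scale(512, out_lo=200.0, out_hi=8000.0)
print(round(cutoff, 1))  # 4103.8
```

keeping this mapping in software on the computer, rather than in arduino firmware, is what makes the interface "dumb sensor io": ranges and curves can be retuned without reflashing anything.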

one benefit of this system is that I coaxed it to use the udp protocol bidirectionally, which means that instead of dealing with tcp/ip running over serial, which introduces latency by way of constant error checking, udp is “connectionless”.  this means that the computer blasts data to the arduino and the arduino blasts its sensor data to the computer and neither cares if there are errors.  the result is blazingly fast data throughput, which is very necessary for playing music in realtime.  if your design is a single-transceiver system, then you can ad-hoc connect it to your laptop and call it a day.  if there is more than one, then you will need to get a wireless router, dedicated to being the interface for your transceivers to connect to your computer, NOT the internet.  in fact, if you want the lowest latency performance possible, you will need to ensure that NOTHING else connects to that router's ip address, or set qos rules to ensure you have a guaranteed minimum bandwidth.
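a minimal stdlib sketch of the "fire and forget" udp pattern described above, run over loopback so it is self-contained.  the frame layout (five big-endian uint16 sensor values) is a made-up example, not the exo-voice's actual wire format.

```python
import socket
import struct

def pack_frame(values):
    """Pack a frame of sensor readings as big-endian uint16s."""
    return struct.pack(f">{len(values)}H", *values)

# receiver: stand-in for the computer-side patch listening for sensor frames
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # OS picks a free port
addr = recv.getsockname()

# sender: stand-in for the wireless transceiver blasting its readings
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(pack_frame([512, 0, 1023, 77, 300]), addr)  # fire and forget

data, _ = recv.recvfrom(1024)
print(struct.unpack(">5H", data))
```

note that sendto returns immediately with no handshake and no retransmission; a lost frame is simply replaced by the next one a few milliseconds later, which is exactly the tradeoff that makes udp suitable for realtime control data.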



handunit v2.2 / handunit v2.4 cross-junction

We will now assume that your electronic systems function on a basic level and you know the size of all the components that will go into your interface.  at this point, go back to your drawings and/or clay models and revise the design to account for the actual parts that will go into the interface.  now you must begin to think about printability in your design.  it's all good to want the design to be all swirling shapes, but the reality of the reprap is that it prints from the bottom up, which means that if it is designed in such a way that the bottom doesn’t touch the print surface or there are weird overhangs, i.e. printing on a surface that isn’t there, the design fails.  I design in interlocking flat pieces.  flat pieces stick well to the bed.  if a part doesn't stick to the bed, then I make two pieces I can mate together by hand, after the fact.  in fact my whole design process is based around the limitations of the reprap print concept: designs that will stay attached all the way through the print process and fulfill their function.  I call it “dominant flat side design”: each sub part has to have a dominant flat side.  think of this from the beginning of the process and you won't end up with wasteful prints.

using the digital calipers mentioned above, measure each electronic component and/or grouping of components and model fitted templates where those components can mount.  if you have enough 3d printer filament, print out the template to see if the components fit snugly (you can also use the data sheet for said component, but I like to measure things myself).  then it's merely a case of assembling these parts together in your cad environment, into a form that is as closely related to your drawings as possible.
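turning caliper measurements into a fitted template usually means adding a little clearance to every dimension, because printed holes come out slightly undersized.  a hedged sketch: the 0.3 mm clearance is a common starting value to tune per printer, and the board dimensions below are approximate, not official.

```python
def fitted_slot(measured_mm, clearance=0.3):
    """Add print clearance to each caliper-measured dimension (mm).
    0.3 mm is a typical FDM starting value; tune it for your printer."""
    return tuple(round(d + clearance, 2) for d in measured_mm)

# e.g. a small arduino-class board measured at roughly 27.9 x 66.0 x 4.0 mm
print(fitted_slot((27.9, 66.0, 4.0)))  # (28.2, 66.3, 4.3)
```

print the resulting template alone first; a few grams of filament spent on a test slot saves a multi-hour reprint of the full part.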


with my Mayan calendar example firmly in mind, think of the idea of the 3d printer as enabling form to come from mind to this plane of existence, in 2-3 steps. the 3d printer is a dimensional portal through which physical manifestations of ideas come through, but the limitation of this “early stage” portal is the size of the print surface and the materials used. so although form is coming from another dimension, LITERALLY, the portal, in my case, is 140mm (long) x 140mm (wide) x 120mm (high), so although I could, say, manifest a great many things, they will either have to be that size or smaller, or come through in pieces to be assembled on this side of the vortex. so design accordingly.
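the "portal size" constraint above is easy to encode: check a part's bounding box against the build volume in every axis-aligned orientation, and split the part into pieces if nothing fits.  the part dimensions below are made-up examples; the build volume is the one stated in the text.

```python
from itertools import permutations

BUILD = (140, 140, 120)  # the print volume mentioned above, in mm

def fits(part_mm):
    """True if the part's bounding box fits the build volume
    in any axis-aligned orientation."""
    return any(all(p <= b for p, b in zip(orient, BUILD))
               for orient in permutations(part_mm))

print(fits((130, 100, 118)))  # True
print(fits((150, 60, 60)))    # False: 150 mm exceeds every axis
```

anything that fails this check has to "come through in pieces", which loops back to the dominant-flat-side rule: the cut planes you choose become the flat sides that sit on the bed.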


from here…

As I stated above, this is the super quick run through of my process and at this point, you should have something that basically works, but it will never be perfect in the movie sense of the word.  I see “perfect” as a significant point on a process trajectory.  if it works at all, then it's perfect.  yaaaaay, you win!  but as soon as it works you will instantly see things that can and should be evolved.  welcome to the never-ending process trajectory.  it can always get better.  it is part of you.  it evolves as you evolve… as technology evolves.  you are all intertwined now.  and the most interesting part is that because you built it, you now have “insight”; with commercial “products” you know as much as they want to reveal about their process and “intellectual property”.  with your interface, you know EVERYTHING and can iterate in any direction you please.  if something screws up, you can fix it or change it or delete it or start over from scratch.  it is yours…it is you.  you will begin to think “why isn’t more of my life like this” and your life will evolve along the exponential paths that made your interface possible.  welcome to the future.






Exploring the exponential now - Episode #1: Leaving Berlin (The Nomadic Diary of Onyx Ashanti)


For those that don’t know, I have packed myself and project development of the exo-voice and associated projects into my trusty duffle bag, attached by trailer to my folding bike, and decided to leave Berlin for the time being.  It must be noted at this point that my primary focus is still on developing and shipping crowdfunding campaign perks to my contributors, but I don’t need to be in berlin to do so.  truthfully, it is quite a bit cheaper to purchase parts and ship from here in Mississippi than from Germany, but that is not the reason for leaving.  it is simply an assurance that “perk-iteration” is still my primary focus; making the people who supported me happy is still the overriding goal.  The reason I left is to explore a “now” that is so profoundly interesting that I can not be in one place to do it.

Exploring the exponential now.

Over the last couple of years, my mind has been a vast terrain of unexplored adventure.  I never dreamed of what awaited me once I turned off TV, movies, radio, CD’s, etc.  although I have become obsessed with sun ra, Terence McKenna, John Coltrane, Polynoid, online research papers, ancient Egypt, pre-colonial Africa, and Japanese Tech culture, I've done so at my own pace, using the internet, then disconnecting and letting all of these disparate topics interact freely in my mind.  the interactions between them and the hundreds of other topics has been thrilling in completely unpredictable ways.  not “finding” congruencies, but just sitting quietly long enough for them to drift together effortlessly has created a pattern of its own and this pattern needs stimulation that can no longer be had primarily through a computer screen information interface.  tactility is now necessary.  personal experience.  fields within fields of influence.  the attraction of a future which wants to be experienced physically with mind and body, here in the present.  the now.

But this “now” is different from the previous “now” I remember from the last “nomadic era”, of which there have been 4;

The first one in the late 90s, living in a van in LA,

The second living in a squat in London in the early 2k’s,

#3 was living on the A-train in NYC in 2006, and was the first one I blogged and

#4 was summer of 2010, living in a tent, camping, and busking around Europe and Ibiza, also blogged

In all of those nomadic phases, there was less certainty of what I was doing, where I was going and why I was going.  the “now” was just a place that my body and mind happened to inhabit.  and all of these were pre exo-voice, pre-open source, pre-investigation-of-self.  I was a street performer who was looking for good pitches to play.

Since 2010, “now” has gained several new dimensions based on my expanded perception of it and on the explosive growth of human connectivity and technological expression.  “now” is no longer a tiny point wedged in between a future and a past, but an exponentially growing life-form.  it consumed the past long ago, parameterizing it and making it increasingly easy to parse, but it now also stalks the future like a seasoned predator. 

We have the tools and the minds to create any future we can conceive, right now.  the now has become an increasingly vast multiverse of possibility and expression.  so vast, that one must approach it as artist and anthropologist.  any expression can be constructed as quickly as one's mind can use and/or create tools to construct it, but then one must shift perspective and investigate said construct with a respect previously reserved for ancient artifacts, whose age and detail beg for serious inspection by trained “professionals”.  the new now…our now, must be played like music while also studied like any other science.  its boundaries grow exponentially and we must look out from its event horizon, to gain even a slight glimpse of the futureverse.

So, looking at the “now” as a terrain rather than as a point-or maybe a really expansive point, with us as particles on or in it-then the impetus is to map it.  to explore it and see where it leads, and maybe find other explorers and trade notes.  “now” exists in berlin as well but not only in berlin.  the new now is a vast reality topology abstraction that can be experienced in part from any point, but must be traversed physically to try to grasp its scope and depth.  nothing in the virtual sphere replaces personal experience.


Another reason for this exploratory journey, is the birth of a new metaphor for the exo-voice software.  As I have mentioned ad nauseum for the last couple of years, I am obsessed with fractals in all forms.  from mathematical constructs to trees to nervous systems…they have changed my interaction with myself and with the world.  so the guiding principle for the evolution of the synthesis system of the exo-voice has been to make it “fractal”…to give it an evolving “fractal logic”.  to my mind, at a particular point, this was just an abstraction like “funky”  or “swinging”, but then a few months ago it came to mind to create a simplified version of the system for practice.  something that would be a little bit of everything in the system, condensed into a single synth/looper…POW!!!!!(insert matrix-y sound here)ZOOOREEROOOOMMMMMPOP!!!!  THAT WAS IT!!  THAT WAS ALWAYS IT!  make a single point from which to iterate the sound! The iteration is multidimensional in that it is an interaction protocol (how it is played) and a design protocol, which then feeds back into the interaction protocol.  this means that changing a very few things at a time can result in massive sonic personality changes.  basically, instead of changing an instrument metaphor, it is that of changing a “seed” metaphor.  tweaking the sonic genome, produces new morphosonic structures (“morph” is Greek for form).

the way this relates to leaving berlin is that berlin has a very pervasive “morphosonic field”.  a morphosonic field is a term describing the “sound” for which a place is known.  I derived it from the concept of a morphogenetic field put forth by Rupert Sheldrake in his book, “a new science of life”, in which he describes his theory that nature is made up of fields of influence and that each species has fields of memory that guide their collective habits.  Google and YouTube are your friends in researching this further.  from that theoretical point of view, a morphosonic field is the sound habit of a place.  berlin has sound habits, just like London has sound habits, or Hawaii has sound habits.  so now if one interjects an iterative sonic construct into an existing morphosonic field, you get very interesting interactions between the two; either the iterative sonic construct changes itself in the face of the existing field's dominant concrescence, the existing morphosonic field incorporates the new novelty of the sonic construct, OR both, simultaneously.  one or all of these processes has been happening in every city I have ever lived in but, now, with a fractal sonic iteration system, it is much more profound.  since each place gives its sonic genomic material freely, a fractal sonic construct, like the exo-voice's internal logic, can, theoretically so far, be interacted with in a manner that incorporates this genetic material in novel ways, and if it gets “stumped”, its own iterative, programmable genome can be modified to do so.

so now…substitute Detroit or Rio or Tokyo or Goa or the Congo for berlin, and my desire to touch these fields makes much more sense:

It is the traversal of multiple morphosonic fields, all “collectivized” (I just pulled that one out of my ass, but it felt like the right word) within the expanding membrane of the “now”.  rather than attempting to predict a sonic end result by way of trying to sound “good” or even “musical”, I feel that developing a personal, experience-based interaction model of swimming through the “now”, with the exo-voice acting as the snorkel in this metaphor, will result in outcomes that no amount of precognition will produce.  the sound will interact with any morphosonic field it encounters and mutate accordingly.

this sonic fractal is from the first interaction with this new metaphor.


the mask iteration is coming along nicely now.  the minimized mount points, i.e., having as little material touching the skin as possible, have been working very nicely, except that ABS is no longer considered a suitable material for the high-stress mask design.  around the mouth, the stresses are too much for ABS; it snaps eventually.  but nylon, in just about any form, fits the bill nicely.  getting the angles right has been a bit of a pain in the ass.  the teeth must sit on the tip at a certain angle so that when one blows, the air only goes into the air hole; otherwise, there is too much adjustment of the embouchure before and during playing.  now I feel like I can abstract the positioning into the design as a rule rather than a theory.  this is why the design process is taking so long; it must be played enough to develop solid design rules.

the hand units have been mostly stable for months now and are 98% done as a releasable construct, with the holdouts being adjustable hand mounting, a more robust circuit design and adjustable thumb positioning.

the software took a quantum leap in efficacy a few weeks ago when I FINALLY shifted ALL development of the system over to a USB-flash-drive-based Debian audio distro called Tango Studio.  the difference between this one and the many others I had tried was that the real-time kernel was enabled by default, and after a couple of tutorials, I was able to turn on persistence.  this means:

  1. I can distribute the exact code I use, in the exact environment I created it in, and the end-patternist has only to insert the USB stick into their computer and boot it up.  no config (for reasonably modern machines) required
  2. the distro can be loaded with all my notes and pics
  3. real-time enabled out of the box means it will run well on 90% of the computers it's put into (with some caveats, because someone will still try and run it on their commodore 64 or some shit)
  4. said patternist can iterate their distro their way, save their changes to the stick itself, back it up much more easily and transport it to any other computer.
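for anyone wanting to try the same setup, here is a rough sketch of how a Debian-live-style stick with persistence is typically built.  this is NOT the Tango Studio documentation, just the generic Debian Live recipe; the ISO filename, device name and partition number below are placeholders you must replace for your own machine, and `dd` will destroy whatever is on the target device:

```shell
# hedged sketch, assuming a debian-live based image (tango studio is one).
# check the target device with `lsblk` first -- dd overwrites it completely.

ISO=tangostudio.iso        # placeholder filename
DEV=/dev/sdX               # placeholder device -- replace X!

# 1. write the live image to the stick
sudo dd if="$ISO" of="$DEV" bs=4M status=progress conv=fsync

# 2. add an ext4 partition labeled "persistence" in the remaining free space
sudo parted "$DEV" mkpart primary ext4 4GiB 100%
sudo mkfs.ext4 -L persistence "${DEV}3"    # partition number may differ

# 3. tell debian-live to overlay the whole filesystem onto that partition
sudo mount "${DEV}3" /mnt
echo "/ union" | sudo tee /mnt/persistence.conf
sudo umount /mnt

# 4. reboot onto the stick and choose the boot entry with persistence enabled
```

the `persistence.conf` file containing `/ union` is what makes the live system write changes back to the stick instead of losing them at shutdown, which is what lets each patternist iterate their distro their own way.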



Episode #1 was just the first video of a new series of “nomadic diary” film expressions.  I haven't spoken in this (visual) language in a while, so I am relearning what I knew while incorporating new techniques, ideas and motives.  my videographer gave me a Blackmagic Pocket Cinema Camera to document my travels with.  as you can see from the video above, the quality is stunning (thanks Tom and Darren!).  I will have to really dig into the capabilities of this camera over time, but definitely expect to see more artistic visual expression, starting with this video and extending through episodes #2-4, which document the 2 weeks it took to finally make it back to Mississippi, where I currently am.  there is loads of crazy footage from PIPSLAB and DDVTECH in Amsterdam, the Slingshot Festival in Athens, GA and the National Society of Black Engineers conference at Opryland in Nashville, Tennessee (which, previously unbeknownst to me, is a massive climate-controlled biosphere).  in addition, detailing the final stages of getting the exo-voice and associated perks out to contributors will be much more interesting as I investigate the visual story hidden within the exercise.

I cannot, or will not, say what happens after that.  but I feel confident that it will be interesting and, now, well documented multidimensionally (blog, film, audio, 3d printed artifact, etc.).  the new now awaits…


