
Time to wrap it up…

OK, in the vein of one of my favorite Dave Chappelle skits, I think it's time to wrap it up...

next week (time and date to be announced) I will be doing a...presentation/lecture/performance/thing at the UC Berkeley Center for New Music and Audio Technologies (CNMAT). then, on August 1st, there will be a proper concert in the same venue, at which time I will wrap this joint up and call this phase of the project "ready to release", meaning that all the contributors will get their perks and, soon thereafter, all associated files will be uploaded for free download by anyone interested in investigating this project on their own.

it is with trepidation that I make this announcement, considering how many "mission accomplished" moments I've had in the last year, but as I prepare to leave the Bay Area, I don't want to keep lingering on this project endlessly. baby is ready to fly...um...or will be...


The Nomadic Diary of Onyx Ashanti - Episode #5: "Boogie up…Boogie down in Oaktown"

It hasn't rained once in the 2 months I've been here.  I hear that there is a drought.  I had forgotten what endless summer is like.  well, hmmm…it's more like endless "spring" mornings, "summer" afternoons, and "autumn" evenings in the Bay Area.  this region has a strange microclimate that is unique, but you get used to it.  I used to be used to it…but I find myself consistently surprised by the complete lack of rain since arriving here in May.

there are many things that are unique about this region, for me.  some of which made me come here to live almost 20 years ago…some that made me stay or come back for much of that time, and some that remind me of why I keep leaving.  one of those things here is busking.  when I moved here, the bay area was alive with buskers of all sorts.  as a busker myself, I thought it was paradise.  things like turf wars (not real ones…mostly just slightly irritating ones involving loud sound systems) and cops made me leave on numerous occasions over the last couple of decades, but the busking scene here is very relaxed when you're not trying to hit the hot tourist spots.  if you hang out enough, you will be treated to some of the most unique (I seem to use that word a lot when describing this area) buskers in the world…until the politics make them pack it up for somewhere else.  it's not all good or all bad…it just "is".  and it "is" in a region that is sunny every day currently, so there is not much to complain about.

a pattern has emerged over the years.  I live somewhere else then come back to the bay area to regroup, artistically as a street busker, before taking my re-honed skills back out into the world.  7 years ago almost to the day, I had a choice of going to Berlin in the summer of 2007 or going back to the bay area.  I chose to come back to Oakland because I had an idea for a new kind of live looping improvisation and I wanted to be fluid with it before I went overseas, so I came back here to play on the street and refine beatjazz before I finally moved exactly one year later.

and so it is again…back in the bay area with new ideas that need to be assembled in the peace that is playing on random street corners in a city where my friends treat me as if they just saw me yesterday; comfortable like a glove.  a tiny little rig designed to be compact and unobtrusive as I iterate my process.  a laptop, little speaker and battery, a wireless router and the exo-voice.  that is all.  light, sound and physical motion…and boogie…

Boogie

a couple of years ago, I had a dream about playing robots.  rather than the robots making sound themselves (yet), they conveyed, physically, the data that I was producing when I interfaced with the exo-voice.  I designed and built one. its job was to respond to accelerometer position with its two servos: side-to-side motion was the x-axis, and attached to that servo was another that rotated up and down on the y-axis. I named it "boogie", because even the slightest twitch from the accelerometers would make it "dance", and thus began the imbuing of this mechtron with descriptions reserved for "life".  here is boogie v1.0 in all its glory.
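for the code-curious, the core of that first boogie fits in a few lines.  below is a minimal Arduino-style sketch of the concept, not boogie's actual firmware; the pins and analog ranges are assumptions for illustration:

```cpp
// boogie v1.0, sketched: two servos puppeting accelerometer tilt.
// pins and the 260..760 analog range are illustrative assumptions.
#include <Servo.h>

Servo panServo;   // x-axis: side-to-side twist
Servo tiltServo;  // y-axis servo, mounted on the pan servo

void setup() {
  panServo.attach(9);
  tiltServo.attach(10);
}

void loop() {
  int x = analogRead(A0);  // raw accelerometer x reading
  int y = analogRead(A1);  // raw accelerometer y reading
  // map the usable tilt range onto servo angles; even a tiny
  // twitch of the hand shifts the reading, so the bot "dances"
  panServo.write(map(x, 260, 760, 0, 180));
  tiltServo.write(map(y, 260, 760, 0, 180));
  delay(15);  // give the servos a moment to track
}
```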

As you can see, boogie is very cute but also very low resolution for the purposes of expressing gestural data.  I even built 2; one for the left hand and one for the right.  it got the job done but it was, for all intents and purposes, a baby.  it was a move in a direction but not quite what was needed.

eventually it occurred to me that maybe two low-res bots were not what was needed.  maybe I needed a singular, higher-resolution bot; a singularity of parameter expression: accelerometer data from both hands, plus color synth and LED light parameter expression.  the first port of call was investigating those 6-axis car-building monsters like these…


I took the servos from the two previous boogies and added two more to make a total of 6 servos, and built one based on what I had researched online.  it was cool to look at but it had no real reason to exist.  it didn't do anything that was crucial or necessary; cardinal sin numero uno: everything must have a function. no superfluous expressions.  so in rethinking the purpose of such a bot, it occurred to me to put the two existing bots together as one bot.  so rather than a 6-axis bot, I would have a 4-axis bot: one servo for each used accelerometer axis.  in hindsight, this is easier to imagine now than it was then.

the problem was determining how to hinge them together.  the first thing I tried was x-y, like the original bot, with another x-y attached to it.  this was amazingly shitty.  it flopped around all spastically.  there was no grace or way to discern function.  but after a bit of playing around with axial connections, it occurred to me to keep the first x-y pair, controlled by the right-hand accelerometer, and attach the second pair y-first, with its x connected, technically, as "yaw", like a gyroscope; or rather, it looked like a head that could look side to side.  the two y's together work beautifully.  it feels very organic.  its movements mimic biological kinematics.  so how does this relate to functions of any sort?

in my previous post I described quadvector gestural synthesis.  briefly, each accelerometer axis controls a whole synth of its own, 4 in total.  the hand units are designed to make the hand want to go naturally to the center of the gestural range: the halfway point between the minimum and the maximum axial range.  the hand should be in a sort of "pointer" position, as if you are pointing at something.  this takes practice and attentiveness.  it's very easy for the hand to drift into more comfortable stationary positions that are slightly "tilted", so it occurred to me that I could calibrate boogie to point straight up when all accelerometer axes are at gestural center.  this allows me to create synth and parameter controls that are now calibrated. and they all lived happily ever after…but wait…
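that calibration idea is simple enough to sketch in code.  the snippet below, again just an illustrative Arduino-style sketch with assumed pins and scaling, samples the resting hand position at startup and treats it as "straight up":

```cpp
// gestural-center calibration, sketched: sample the resting reading
// at startup and treat it as 90 degrees ("pointing straight up").
// pin numbers and the /4 scaling are illustrative assumptions.
#include <Servo.h>

Servo axisServo;
int centerX = 512;  // overwritten by the calibration below

void setup() {
  axisServo.attach(9);
  long sum = 0;
  for (int i = 0; i < 64; i++) {  // average a moment of stillness
    sum += analogRead(A0);
    delay(5);
  }
  centerX = sum / 64;             // this hand position is now "center"
}

void loop() {
  int offset = analogRead(A0) - centerX;           // signed tilt away from center
  int angle = constrain(90 + offset / 4, 0, 180);  // 90 = straight up
  axisServo.write(angle);
  delay(15);
}
```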

robotics for all

on a whim last week, after I had finally gotten around to designing some legs for boogie that could deal with its powerful swinging motion, I was showing my friend Jay the bot, which he had not seen except online.  I gave him the hand units.  he did what everyone does: put his hands into some strange contortion that has no relation to controllable gestural anything.  but then I merely turned on boogie, explained briefly which axis controls which servo, and told him to make boogie stand straight up.  within 30 seconds of puppeting the bot with the hand units, he had boogie standing straight up, and his hands were in perfect gestural center.  boogie had, in 30 seconds, done what words could not do in minutes, or hours.

intrigued, I replicated this experiment with 3-4 other friends and the results were the same: using a robot as a gestural feedback system yielded singularly positive results.  in addition, after achieving gestural center, they began to puppet boogie in very organic ways.  once they had discovered this calibration concept, the rest was easy for them.  I now have a number of gestural-center, robot-calibrated sound elements that relate directly to the motion of boogie.  this surprising discovery leads me to think that, possibly, everyone who gets into gestural synthesis will benefit from having a parametric gestural feedback robot.  I will know more this week as I bring the light feedback system into the fold.

in other news…

the mask design is being tweaked constantly and is now being put on other people's noggins, which is gleaning much useful data (spoiler alert: I have a very large head), and after putting the hand units on a dozen people's hands, I have a few "universal" part variations that I will be testing this week (with associated video and blog documentation).  I've got a number of prototype models for this but did most of the design experimentation on what could be called a "test prosthesis".  there is a brief shot of it in episode 5.  it will form the basis of the next episode, but I want to see if anyone notices it ;)

the primary goal with the sound synthesis lately has been to regain a sense of fragility by making the functions more dynamic.  things like making the breath sensitivity pick up the slightest breath, and for that slight breath to sound "slight" rather than to sound like it is "triggering" something.  the sound has tended toward functional necessity for about a year now.  if you hear some of my music from a few years ago, there was an ability to investigate a certain emotional vulnerability because of the layers and layers of well-programmed software synthesizers (VSTi's) I used back then. my designs for this system were for it to be much more dynamic than the previous systems, and if judged on pure function, it is; but judged on the ability to convey wide, contrasting emotion, it had been stuck in prototype mode for 2 years, until just recently.

being able to go out and play to no one in particular, any time I feel like, has allowed me to investigate "the now", meaning I go out and "play", fully, what the system is capable of right now, rather than perpetually "program" toward the future attractor.  now the programming sessions are more focused because they are informed by a regular playing schedule.  a schedule that isn't determined by who is paying me, but by what I want to investigate on that day…for as long as I need/want to investigate…or until the battery dies, whichever comes first.

the current evolution in the sound is coming out of the investigation of gestural interaction with the filters and the harmonic relationships, formerly referred to as chords.  I say formerly because once I learned how to produce them gesturally, they ceased to be chords and became more of a harmony array…or, put another way: the reason a clarinet and a saxophone sound so different is the harmonic relationships created by the material they are made of, and their size.  what was imagined as chords became harmonic series.  this has made me rethink the filter array, to create a sort of modulating "cavity" for the sound, like how a saxophone body is the cavity that defines its sound.

here is what this stage sounds like, and even looks like. 

you don't have to grasp any of what I just wrote to see and hear (with headphones) what it means.  there is a coherence to the sound that kind of surprised me, especially the filter sweeps, where the most interesting phase patterns happen.  below are a few of the audio files I recorded last week; they also exhibit interesting sonic qualities.  I predict that tiny code changes from now on will produce predictable unpredictability.  the system is finally reaching a certain functional stability that will make it easier to distribute as something that is learnable and playable.  all the pieces are in place.  now it is just a matter of making what I understand understandable for those who choose to investigate beatjazz in the (near) future.


Quadvector Gestural Fractal Synthesis - current implementation: part 1

in an attempt to not unload streams of incomprehensible information all at once, every 2-3 months, I will try to unload more bite-sized streams of incomprehensible information at a slightly more regular pace…you're welcome. as we near the event horizon of the singularity referred to as "releasing the project to the public", little things become big things, as if their mass increases by virtue of time compression.

one such thing is the synthesis engine concept, currently referred to as a quad-vector gestural synthesizer.  this is one of a few "fractal" synth concepts that I have been working on for the last 2 years. the idea with this one is of a single synthesizer, tentacled with branching gestural parameter interfacing, that can, through various combinatory means such as key sequencing, hand position and mode switching, mutate into any sound I can imagine.  this is accomplished using the multimodal gestural capabilities of the exo-voice prosthesis, whose interaction matrix is fractal. by "fractal", I mean systems that evolve to higher states of "resolution" over time, from simple origin states.  this applies to everything from design to programming to performative interaction.

in the design of the hand units, following pure function, I needed "keys" that were spaced to be comfortable for my fingers, while also being attached to my hands unobtrusively, which was accomplished in the first, cardboard, version.  over time, the design iterated toward things like better ergonomics, better light distribution, ease of printability, and so on, so the design and function evolved while still being based on the idea of the first version.  since all aspects of the exo-voice share this design concept, the entire design evolves fractally; as new states are investigated, the resolution of each state increases toward a state of "perfection", at which point it forms the basis of a new state of investigation.

each hand unit has one 3-axis accelerometer.  the x-axis measures tilt side to side, like turning a door knob.  the y-axis measures tilt forward to backward, like the paint motion from Karate Kid (hope that is not too old of a reference for some of you).  the easiest way to picture it mentally is to think about the range of motion your hand would have if your wrist was stuck in a hole in a thin glass wall; you could turn your wrist, and you could move your hand, from the wrist, up and down, as well as combinations of those two motions.  this forms the basis of the gestural kinematics the system is built on.  the x range is one parameter and the y range is a second one.  the z is unused right now.  it's the functional equivalent of using a touch pad in 3d space, with x and y acting as two semi-independent control vectors.
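in code terms, those two tilts are the standard roll/pitch you can pull out of any 3-axis accelerometer.  here is a small sketch using one common convention; the axis naming depends on how the sensor is mounted, so treat the component-to-gesture mapping as an assumption:

```cpp
// roll/pitch ("door knob" / "paint the fence") from a 3-axis
// accelerometer, assuming readings already scaled to g units.
#include <math.h>

const float RAD2DEG = 57.2958f;

struct Tilt { float roll; float pitch; };

Tilt tiltFromAccel(float ax, float ay, float az) {
  Tilt t;
  t.roll  = atan2f(ay, az) * RAD2DEG;                        // door-knob twist
  t.pitch = atan2f(-ax, sqrtf(ay * ay + az * az)) * RAD2DEG; // forward/back wave
  return t;  // the third axis is deliberately ignored, as in the exo-voice
}
```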

there are a number of ways to use these vectors. 

  • Individually (per hand): x is one "knob" and y is a separate "knob"
  • Together (per hand): the combined x/y position is considered to be one control point, similar to how a computer mouse works.
  • Together (both hands): 2 x/y control grids
  • Quadvector: x/y-left and x/y-right are combined into a 4-dimensional control structure (sketched in code below).  think of this as video game mode, or similar to how flying a quadcopter works; the left hand is, in this metaphor, forward, backward, left, right, and the right hand is up, down, turn left, turn right.  no one control is enough.  all vectors must be interacted with simultaneously to control the position of the quadcopter in 3d space.
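here is what that quadvector mode looks like as a data structure; a minimal sketch, with the names and 0..1 scaling assumed for illustration rather than taken from the actual exo-voice code:

```cpp
// the four control vectors gathered into one structure.
struct QuadVector {
  float leftX, leftY;    // left-hand tilt, normalized 0..1
  float rightX, rightY;  // right-hand tilt, normalized 0..1
};

// like the quadcopter metaphor: no single axis means much alone; the
// sound's "position" is only defined by all four vectors at once.
void applyToOscillators(const QuadVector& q, float oscParam[4]) {
  oscParam[0] = q.leftX;
  oscParam[1] = q.leftY;
  oscParam[2] = q.rightX;
  oscParam[3] = q.rightY;
}
```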

 

the exo-voice system works by routing accelerometer data to parameters, which are determined by which "keys" are pressed when in edit mode. (a key in this system is a function initiated by pressing a sensor with the finger, like how wind instruments work, but here it is more symbolic since the same function could be achieved in any number of ways.  since I was trained as a sax player, my mind gravitates toward that modality.) so in "play" mode, the keys are interacted with as saxophone fingerings on a basic level.  when the edit mode button is toggled, the entire system switches to edit mode and all the keys become function latches, i.e., if I press key 1, the accelerometer controls parameter 1 and parameter 2.  if I press key 2, the accelerometer controls parameter 3 and parameter 4, and so on.  there are 8 keys, so that is 16 base-level parameters that can be controlled.

in previous iterated versions of the software system, each key could instantiate a separate synthesizer when pressed hard (HP1-8), each with an edit matrix that was specific to each synth, which meant there were 16 parameters to control for each of the 8 synths.  now, though, there is one synthesizer with 4 oscillators, each assigned to its own accelerometer axis. the HP modes are for "timbral routing", which means that the synth is routed through a filter which makes it sound timbrally unique in ways such as being plucked or being blown. some parameters are already set, such as pan position, gain, delay and pitch offset, each taking one key, so that leaves 4 keys (8 parameters) to paint with, which is not a lot.  this is where the fractal concept begins to pay off.

each key sends either a 1 or a 0 when pressed. in my system each key is assigned a successive power of two, i.e., key 1 = 1, key 2 = 2, key 3 = 4, key 4 = 8, key 5 = 16, key 6 = 32, key 7 = 64, and key 8 = 128. this is an 8-bit number sequence. one useful aspect of this is that each combination of key presses creates a completely unique number that you cannot achieve by any other combination of key presses. so key 1, key 2 and key 4 (1, 2 and 8) equal 11. there is no other combination of key presses that will generate 11.  in addition to 4 keys per hand, there are two button arrays: 3 buttons on top, controlled by the index finger, and 4 buttons on the bottom, controlled by the thumb.  the upper buttons are global, so they can not be mode-switched (change function in edit mode), but the bottom button array on the left hand is used to control octave position in play mode.  in edit mode all keys become parameter latches, so octaves aren't necessary; but in edit mode, if you press key 1 while at the same time pressing lower button 1 (LB1), you get the number 257 and can control 2 new parameters not achievable with either button alone.   this is how I was able to create relatable sax fingerings. in play mode, I use the octave button presses to add or subtract 12 or 24 from the midi note number that is generated from the key presses, but in edit mode, I simply gave each of the 4 octave buttons their own number (256, 512, 1024, 2048), which now means that when I press key 1, I have the original parameter position as well as 4 new ones, totaling 10 parameters per key press and 80 parameters overall.  this is stage 2 fractal resolution.
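for those who think better in code, the encoding boils down to a bitmask.  this is just a sketch of the scheme described above; the array layout and function name are illustrative:

```cpp
// key-combination encoding: each key contributes a power of two, so
// every chord of presses yields a number no other chord can produce.
int keyCode(const bool keys[8], const bool lowerButtons[4]) {
  int code = 0;
  for (int i = 0; i < 8; i++)
    if (keys[i]) code += (1 << i);           // key 1 = 1, key 2 = 2, ... key 8 = 128
  for (int i = 0; i < 4; i++)
    if (lowerButtons[i]) code += (256 << i); // LB1..LB4 = 256, 512, 1024, 2048
  return code;
}
// keys 1, 2 and 4 held: 1 + 2 + 8 = 11; nothing else produces 11.
// key 1 + lower button 1: 1 + 256 = 257, a fresh parameter latch in edit mode.
```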

so, as you can imagine, there is ample opportunity for this to become a class A headfuck.  it is not enough to simply throw parameters in just because one can.  functions have vectors as well, otherwise they become unwieldy.  the question that must be asked is "what function naturally emerges as 'next'?"  it is at this point that I use a mental construct called a "perceptual matrix".  this is the mental projection of the flow of parameters within the system.  I discovered it by accident after I completed the first working version and realized that the act of building it had created a mental model in my mind.  this freed me from having to reference any visualizations.  now I design the functions to make sense within this perceptual matrix. so far, I have not found a limit at which I can no longer interact with its growing complexity.  in fact, by following a fractal design concept, the perceptual matrix guides the design evolution, since the mind is the origin fractal of the matrix itself.  as the name exo-voice implies, it is a fractal prosthesis for the mind, both in creation and interaction.  the job of the visualizations is to convey this information to observers and collaborators, as well as providing a means of interacting with the data after the fact.

I'm having to investigate these issues because fractal synthesis makes sense now. 2 years ago, it was two words spoken one after the other.  now it is a protocol. every parameter vector can be mapped now and iterated through a continuously evolving key sequencing modality (every key sequence can have 5 positions of 10 parameters, which could end up being over 200 parameter vectors per hand and 600 between both hands, without adding any new hardware).  but what goes where becomes the grasp-worthy straw.

my percussive concept is begging for evolutionary adaptation, but it's not as easy as just throwing something in there because, since it is one synth with parameter vectors as sort of tentacles, or synapses, one connection has to lead to the next connection and the next, fluidly.  my mind is pulled in a dozen directions.

I needed to share this modality with whoever is interested in knowing it, because a lot of what I will be expressing in the coming months will be based on this, and without this information there is no way to know what the hell I am talking about.  in addition, I expect that within the next year, these parameter trajectories will evolve to more resemble biological expressions like nervous systems, which are fractal as well.


The Nomadic Diary of Onyx Ashanti - Episode #3 and Episode #4: exploring the digital memory banks and exo-voice development updates

As I mentioned in my previous post, I was gifted a BAD ASS Blackmagic Pocket Cinema Camera to document my travels.  so in addition to all of the other stuff I am learning, I am also working on developing more of an "eye" when it comes to visual expression. this is not the first time I have pushed into the video/film realm.  I edit all of my own videos, but I do it more from the point of view of a technician rather than as a form of expression unto itself, mainly because I never had a nice enough camera to warrant the expenditure of cognitive resources.  but I have been reaching into the void of visual expression lately, to see what I find.

over the last 2 months I have accumulated 1.5 terabytes of footage of a range of subject matter.  I had been contemplating doing another "range of time" video, i.e., footage from a place and time, singularly, like Berlin or Amsterdam or Atlanta.  but then it occurred to me that what I have is a digitized visual and audio memory bank.  as such, it would be interesting to interact with it as the brain interacts with memory.  so instead of, say, a video project concerning my time in Athens at the Slingshot Festival, or from my time in Mississippi, I could think more about the totality of experiences from Holland to where I am currently, and single out particular conceptual constructs, like, as this project presents, all of the representations of motion over the last 2 months.  times in cars, trains, planes and buses, just watching the world go by.

this makes the memory bank much more random-access and interesting as a store of experiences.  I've got LOADS of footage: festivals, conferences, Maker Faire, singing robots, thunderstorms, strange insects…it is a more interesting way to access the digitally stored memory of my nomadic travels, and makes me think very differently about what I capture on film and as audio.

Episode #3 is entitled "memory of motion" and is a montage of moments in motion between the time I landed in Atlanta to go to Athens (GA) for the Slingshot fest, then from Atlanta to Nashville to Mississippi to Memphis, San Jose, San Francisco and finally to Oakland, where I am currently working on a few things that will be the subject of future nomadic diary episodes. Episode #4 is me playing in Berkeley, just above the BART station.  I was very lucky to have my friend Tony Patrick there to impart his camera skills on the occasion.  the purpose of the soundtrack is to give an aural reference to the state of synth development at the time of editing, for posterity.  in these videos, the synth functions have stabilized but are more simplistic than in previous posts.  this is because I took out many premade objects and replaced them with my own versions, but more on this shortly.

Busking in Oakland

now that I am fully re-submerged back into busking, the iteration of systems is going waaaaaaaaaaay faster.  busking has always helped to accelerate idea iteration because you can practice more, in more relevant situations, and get paid for it.  you become an entrepreneurial singularity; you need only think of an idea and try it in stages every day that you go out.

as I have been pushing myself to get the contributor/open source version out into the hands of people, the fast design-build-test-revise-integrate cycle that busking affords has been invaluable. when busking, glaring fuckups, or just "not-so-nice"-ups, become unbearable because you are interacting with them every day for anywhere from 1-5 hours a day.  a crappy sound is bearable if one only has to rock it for a 15-40 minute show on a nice stage with a nice sound system.  but after a week of 5-hour days, the issue is brought into laser-like focus and you are able to home in precisely on errant gremlins.

luckily, since I was literally "from" here before I moved to Berlin, some of my old busking gear was still stored in my friend's basement.  I had only to acquire a new battery.  my berlin battery was horrendously large, 25 kg (about 55 lb), so when I went shopping, size was a major consideration.  I found a nice, small sealed battery, borrowed the small Roland monitor that I also used to use back then (thanks, Jaswho?!) and hit the road.

Why?

busking has supported me since I first moved away from home as an adult in the early '90s.  I knew a total of 3 songs when I started. busking allows you to develop.  it is like a relationship with someone you might not see very often, but when you do, it's familiar and comfortable.

the last few months that I was in Berlin, I was beginning to feel "soft".  most seasoned buskers can rock it at a moment's notice, anywhere…ANYWHERE.  I was starting to feel like I was losing that edge.  I had spent so much (completely awe-inspiringly necessary) time inside my own head for the last couple of years that I felt I was losing sight of the ability to just rock out. but I KNEW I was losing that edge last July when I was fortunate to find out that DubFX and Rico Loop were going to be rocking out for free in Mauer Park. they are both seasoned buskers and I was awed by these cats!  they rocked it non-stop, in the hot sun, for 4-5 hours, with footpedals and beatboxing!!  it was amazing!  and neither of these cats has to busk at all anymore.  they are both famous with full touring schedules, but they wanted to give something back, and they really did!  that show stuck with me to this day.  I thought "why am I not doing my thing, somewhere…anywhere!?"  (the answer was because I was in the middle of revising the code and the hardware design to be more easily printable, but still…)

so now that I am "exploring the now" (partly from the street), the boldness is coming back…that thing that a busker has that says "yeah sure, play right here, right now!", but it is mixing with this scientific thing that has emerged in the last 3 years, and that is where things get interesting.

one system that is benefiting from this the most is sound design.  although the novelty factor can work on passersby, sub-evolved sound and/or its parameter interfacing drive me insane in a very short span of time, usually 3-4 days.  if I feel trapped by my parameter interface, I get bored.  if the sound doesn't evolve toward a future attractor where it conveys its character with efficiency and beauty, I get irritable.  playing daily compounds these feelings.  so having a system that I designed from the ground up to be in a perpetual state of iteration makes for exciting investigation.  shitty sound used to just be shitty sound.  now, shitty sound is a daily-updated list of "to-do's".  now that stability is reasonably common and expected, I can add the last synthesis piece…

the last synthesis piece

in March, I began using the new fractal synth I've been rambling about for almost 2 years.  by "fractal" I mean that it is truly fractal;

  • the system logic is an iterative branching protocol consisting of latching key sequences.  this means that pressing one set of keys takes the system into a new mode, and from this mode a different set of keys changes aspects of the current mode, from which yet another mode can be instantiated.  and since programming Pure Data is part of the expression, this branching system can continue to the limits of my computing power or cognitive abilities.
  • I imagine "functions" rather than specific sound outcomes, so I begin with the absolute simplest idea of the sound function, then revise it toward higher states of expression as I learn more and expect more from the sound.
    (this is the reason for the sound quality regression evident in my latest recordings; everything has been put into its simplest possible state so it can be iterated more easily toward higher states, rather than using too many objects that I didn't build just so as to sound "good".)

the fractal synth the system is designed around can become other types of sounds, based on how it is routed.  for instance, I can press one button and the sound is routed through a drum filter, so now it is a drum.  but because it is gestural, the "drum" has a ripping/morphing kind of quality that is part function and part discovery-through-interaction.

pic 1: the fractal audio module object
pic 2: quad-vector synth with a few of the gestural control modules to the right
pic 3: the quad-vector synth closeup showing the routing
pic 4: the x and y vectors from one hand
pic 5: quad-oscillator selector patch
pic 6: drum filter

the last synthesis piece is the other timbral modalities I've been working on.  what this means is that when I press the "drum" button + another button, the sound now goes through a "string" filter, and/or a "pluck" filter, and/or a "wind turbulence" filter for wind instrument modes.  this is a bit of a holy grail for me, now that I am comfortable with the concept of a synthesizer that I must effectively reprogram on the fly, continuously. in this sense, the term "quad-vector gestural synthesis", which means that each x and y accelerometer vector, per hand, has its own complete synth, is the attractor that is pulling all development toward it. if the synthesis can't be done with gestures, then it is not gestural synthesis.
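the routing itself is conceptually just a switch.  the real filters live in the Pure Data patches; the toy sketch below only illustrates the idea of one synth stream pushed through a different "body" per mode, with trivial stand-in filters that are not the actual DSP:

```cpp
// timbral routing, sketched: same source sample, different "body".
#include <cmath>

enum Timbre { DRUM, STRING, PLUCK, WIND };

float drumBody(float s)   { return s * std::fabs(s); }    // crude waveshaper stand-in
float stringBody(float s) { return std::tanh(2.0f * s); } // soft-saturation stand-in
float pluckBody(float s)  { return 0.8f * s; }            // damped stand-in
float windBody(float s)   { return 0.5f * s; }            // breathy, quiet stand-in

// mode comes from which HP button combination is currently held
float route(float sample, Timbre mode) {
  switch (mode) {
    case DRUM:   return drumBody(sample);
    case STRING: return stringBody(sample);
    case PLUCK:  return pluckBody(sample);
    case WIND:   return windBody(sample);
  }
  return sample;
}
```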

once this is working, it will be time to transcribe all the code and prepare it for distribution.  transcription will make it modular and understandable.  this will act to standardize the color synth parameters for the lights and the visualizations as well.

The modular mask

the mask has reached design stability now.  the final issues were:

  • snap together construction for ease of assembly and modification
  • designing a flip down mouthpiece so it could be moved away from the mouth without having to take the mask off.
  • modularity

in April I ordered a printer filament called tglase from Taulman.  this is PET, the same plastic used to make water bottles.  its properties slot somewhere between ABS plastic and nylon: it's more rigid than nylon but more flexible than ABS, and it doesn't warp while printing.  but, most importantly, it refracts light beautifully, so I decided that it would be a great starting place for the mask design finalization stage.

the key design innovation for this was the simple idea of "small interlocking pieces".  I took a couple of weeks away from mask design to work on a side project: 3d printed orthotics.  this allowed me to investigate how one might create a design meant to be snapped together by hand and attached to a dynamically moving, shifting body, like the foot (I will show this as soon as it is functional).  it came down to the relationship of its interlocking small pieces.  so I took this new insight back over to the mask and started from scratch, guided by function only, and this was the result.

the design only touches the head at 7 points and is very easy to modify. the mouthpiece now shifts up, out and down out of the way, and the circuit area isn't mashed against the jaw.  when not in use, it is easily partially disassembled.  the material is flexible and robust enough to withstand repeated assembly/disassembly. and although the contributor version will not have the goggles, which are form-fit to my head size, it was a matter of a 5-minute design tweak to incorporate them.

the design is very, very playable and comfortable for long stretches of time, even in direct sunlight, which was a weak point in previous designs.  having so much plastic against the skin is not comfortable under hot lights or sun, but this design solves that issue. currently, the face-mounted electronic components look boxy, but since they are now lego-like in upgradeability, the final versions will be much more streamlined.

 

beta testing the hardware on people

I have begun printing parts for a beta build of the exo-voice.  this is a full exo-voice, printed with design revisions, for the purpose of putting on other people's heads and hands to get a feel for what needs to be changed so it can be used by the greatest number of people initially, or easily changed by the pilot to suit their own personal needs or tastes.

being able to take the prosthesis out into the real world, playing on the street, gives me much more confidence in its design.  it's been bouncing around in an unpadded bike bag for weeks now and has been extremely stable, i.e., low incidence of broken parts and faulty wiring.  as I print what will probably be eventually posted online/shipped to contributors, fit, finish and material properties will be defined, and there will be much footage of people of various sizes, ages, races and sexes, to see how well the design translates, so it will be as "print-and-use" as possible.  I am planning to make this aspect of the project more social by taking some aspects of the fulfillment stage around to local hackerspaces and doing it there, rather than in an apartment by myself.  more info on that as I wrap up this stage.  time to go play a bit.

 



Episode #2 - Double Dutch Bus(ride) - The Nomadic Diary of Onyx Ashanti - Exploring the exponential now

During this whole phase of the trip out of Berlin and into the US, I had very little hard drive space for new footage.  I was very “precious” with the time I had to record with the new camera and as such, I don’t feel that I captured everything I wanted to.  besides that, I am now seeing that I need to create some camera mounts for my bike so I can get the truly cool stuff that happens while I am riding around.  I have plans in that regard and will report on them when I have a model to show/share.

Sitting in the bus on my way to Holland, I felt that stomach-dance of anticipation that I had missed for sooo long.  When being nomadic, there is a point at the beginning, or even at some point during the adventure, where you say to yourself something to the effect of "well…here's another fine mess you've gotten us into!", preemptively. there was a bit of that.  but not much, surprisingly.  My inner dialog was very focused on "getting to the point", with "point" being significant steps on this iterative trajectory.  in other words, focusing on what's next, leading to what's next, next, and so on…in this case, Holland was home to two significant entities that I feel are crucial to the "nexts"…"nextsus?" (hmmm, nextsus didn't make any sense but I wanted to try it on anyway.)

 

in the summer of last year, I was at Betahaus in Berlin, having a coffee and doing a bit of geek-watching, when I was approached by a man named Keez Duyves.  he seemed to have a similar "vibrating-while-not-moving" energy to my own, and we began chatting about a variety of topics relating to technology and berlin and our place in this "thing".  he then proceeded to show me some of the work he does in virtual reality.  holy shit, it was amazing!  rather than approaching the concept from a game-engine, "walk around and shoot things" perspective, he was creating what could only be described as "sculpture" there.  and his work had many, many dimensions, some involving motion-capturing the input of entire theaters full of people, performance art overlays, total immersive constructs that made me begin to see the Oculus Rift as a necessary way to interface with data (if Facebook doesn't fuck it up, but I digress on that point for now…).  we just happened to be playing at the same event in Wedding some time after this meeting, and I got the chance to experience it myself and was completely blown away; so when he graciously offered to house me "if you ever decided to come to Amsterdam", I made note of said offer in INK, rather than my usual metaphorical pencil scribblings.

around the same time, and for most of last year, I was conceptualizing ways of separating the expression of sound from the expression of data, so I could experiment with how they were to be reassembled.  I find it criminal that venues and police should have the right to dictate whether or not music can be performed and/or shared, based on outdated propagation modalities, i.e., speakers transmitting sound through the air.  I've felt that this was inefficient for quite some time.  I investigated various ways of transmitting sound over the years, going so far as to build up a 15-headphone listening station on the street in San Francisco back in the '90s.  it worked…kinda.  it weighed less than a speaker system, but it was too soon for such a concept at that time, and once the novelty wore off, I went back to speakers.

now, though, everyone carries some sort of network-connected sonic prosthesis in the form of media players and smart devices.  so I set out to build a REALTIME media streaming system.  I capitalize realtime because I soon discovered that real-time in the media server world is measured in "seconds", not milliseconds.  the lag time was too great, and the overhead of running the servers I was discovering offended my minimalist sensibilities.  I wanted the process to run in the background of the system I was already using, NOT on a separate, heavy, hot server.  if that was the case, I could simply stick with speakers.

it just so happened that out of all the servers I investigated, there was one that was low on cpu overhead and memory, blazingly fast, and, most importantly, open source, so I could share it once I got it to work (sonic fractal matrix 0.1-alpha? :D )  it's called Mistserver and it is made by DDVTECH, who just happened to also be located in Holland!!

so, the crazy VR guy is in Holland, the super-media-server guys are in Holland, and given that Amsterdam was the first place I set foot in Europe, and the first person I ever met in Europe, my friend Phillip, also lived in Holland, it seemed that all roads lead to Holland!!

the first day back on the road.


I brought the boom box with me, just in case I wanted to busk with it.  I took the bus from Berlin to Amsterdam around 7:30pm, so I could get some sleep and be fresh for my meet-up with the DDVTECH guys the next morning.  I had already had one amazingly productive online session with one of the head guys, Jaron Viëtor, a couple of months earlier.  he helped me get it basically working, but I was using Windows at the time and was migrating the system over to Debian Linux, so they offered to help me set up an optimized system for that.

 

the ride to Holland was meditative.  I didn't distract myself with music or excessive note-taking.  I just sat and explored my memories of my time in Berlin, while creating new "future memories" of the things that needed to be done in the coming days and weeks. sleep came easily, as did awakening early the next morning to retrieve my stuff from under the bus, assemble the trailer and bike, and regroup inside the bahnhof (train station).  it was at this moment that I realized fully that I was no longer in Berlin and that everything I needed in life was on that trailer.  no anxiety.  only the desire to peel away the day, efficiently, elegantly, and interestingly.

it still bugs me that I didn't have enough space on the camera's SD card to take video of the trip from this station to the station in Leiden, where the DDVTECH offices were, nor did I get footage of biking to their offices.  it was a sunny morning and I always love Dutch bike paths, so that sucks, but shit happens.

Google maps led me directly to their offices, and one of the guys was there to meet me at the obscenely early hour I decided to ride over there, which I thought was really nice of him.  this set the tone for the rest of the day.  and I do mean the ENTIRE rest of the day!  they showed me some of the interesting ways they were evolving the code in mistserver, based on its extreme efficiency and modularity, and even went so far as to incorporate unreleased code into the customized version they compiled for me.  through much trial and error, we were able to get mistserver to broadcast in a stable fashion, with 400ms latency!  it must be noted that this is to devices with no prior app download or optimizations.  the client smart device merely has to log into my local wireless network by scanning a QR code, then log into the stream by scanning a second QR code!  voila!  a dimensional bridge between my sonic fractal matrix and the client sonic prosthesis (smart phone).  this is what we are doing in the video, and as you can hear, it is a surreal sound that I look forward to investigating and iterating.

later that night, they dropped me off at the airport, where I caught the train back into Amsterdam, before Jaron and his brother zoomed off to the band rehearsal they almost missed, having helped me achieve a dream that I've had for over a year.  I can't wait to show it off next month in California!

PIPSLAB

I got back to Amsterdam kinda late, but luckily:

a. I had my bike

b. I know my way around Amsterdam and

c. I had only to take a ferry from central station to the area that put me within 10 minutes bike ride from pipslab.

the only way to describe PIPSLAB is that it is the place that you dream, as a kid, that you will have as an adult.  this was obviously the workspace of spatialized minds; tools, toys and tidbits, all within thought and physical reach, but with enough space for the mind to be able to imagine wider, deeper thoughts.  a perfect workshop…and I was going to be living in it for the next 4 days!  needless to say, I was feeling very confident that my decision to leave Berlin for a nomadic existence was a good call so far.

pipslab is a collective of artists.  they are highly professional in many fields of interest, which seem to gravitate around the artistic investigation of virtual realities using motion capture, but are not limited to that modality.  I was exposed to theater, to pre- and post-production work, to concept development…there was a lot of energy in the place, even when I was alone there.  their process mirrored some of my own, in that once I arrived, I was folded into the construct almost instantly, and since I came with the mindset of such an interaction, we just ramped right into doing shit.

most interesting were our jam sessions.  being that Keez had designed his system himself, in openFrameworks, and I had designed mine entirely in Pure Data, we were able to mesh our systems at multiple levels of abstraction.  I could play sound but also send note, tempo and gestural data, in multiple formats, over multiple types of network connection.  he could translate that data into forms of expression I need video to show, because I don't have the language to describe what is possible with his constructs.  I think the data IS the language, because words don't do it justice and could act as an impediment to where this type of thing can go.  I think if we had had to discuss it all first, it would have taken weeks; but since I knew my programming environment and he knew his, and those could speak computer-speak, the low resolution of spoken words was only minimally necessary, except to gawk at the results.  needless to say, there was very little in the way of sleep for the days I was there. and since then, my mind has been consumed with expressing a part of itself in virtual space.

the next day we biked over to the Lumasol studio.  there, two guys, Remco and Boris, had a crazy 3d capture rig for creating high resolution 3d models of people and things…of COURSE such a thing was a bike ride away…where else would it be?!  this is the same kind of stuff that was used to create the fight scenes in the original Matrix movie.  60 cameras go off in perfect synchronicity.  you can see the results of our session in the above video.

for the rest of the time, I got to hang out with old friends, like Phillip, who graciously modeled the mask so I could take notes on the adaptability of the design to different-sized heads, and musician/artist/DJs rawdee lewing and Nobunaga Panic, who came down and jammed with me/us in the lab, bringing some amazing vibes and sounds along to bless the space.

STEIM

I also, finally, got a chance to drop by the STEIM institute.  I've had so many people tell me that I had to go by there that I decided to bike over, with my stuff of course, and pay the place a visit.  it just so happened that there were only two students there that day. unfortunately, I ran out of space, AGAIN, on the SD card of the cinema camera before I could get the proper amount of footage this place deserves.  but I found the basement to be the most interesting of all.  so many stored projects in sound propagation.  I could feel the morphosonic field of generations of artists expressing themselves in this space.  I had hoped there would be a possibility to see what many have described as an ancestor to my exo-voice, Michel Waisvisz's "The Hands", somewhere in the facility, since he was head of STEIM until his death in 2008, but unfortunately they weren't on location.  maybe another trip for another time.

onward

to say that the intensity and beauty of the 5 days I spent in Holland "recharged" me would insinuate that I wasn't already charged up before arriving.  instead, I will say that my time in Holland "reaffirmed" the idea that the "now" I am seeking to investigate definitely doesn't reside in a single place.  I know now that not only was it the right decision but, in a way, it was the only decision.  there was no way to get this experience without being part of it.  without going to it and partially creating it en route.  like in video games where the treasure is hanging there in midair, waiting for someone to grab it, and no one had to be defeated to get it!  it is with said treasure in hand that I head back to the US to reconnect with my roots and grow some new branches…


How to design a wireless 3d printed digital musical interface (quick and dirty tutorial)


A friend asked me, this morning, if I could give her some pointers on how to start designing a 3d printed digital musical interface. realizing that the answer would be more than a few paragraphs, I decided to write a blog post on the topic, for posterity. then, as I started mind mapping an outline, I began to realize how little of the process is digital or computer-y in any way. and when I say "the" process, I really mean "my" process. as I am a self-trained designer, I developed processes which work amazingly for me but which, I have been told by engineer friends, are strange but OK, since the results work well. I will attempt to keep this post short enough to not need a break but long enough to be thorough.

 

realizations from my own exploration into form and function


When the y2k bug failed to send the earth hurtling off into space, all attention focused on another date in the future: the end of the Mayan calendar, Dec 21, 2012, when the end of time was supposed to happen. as I type this in April of 2014, I can safely say that the movie-esque version of this conceptualization of "end of time" didn't happen; but, examined another way, the world is very different than it was a few years ago. maybe it wasn't down to the day or even year of said prediction, but it must be noted that we live in an age of exponential change unlike anything we could have predicted, and the rate of change is increasing, as well as the rate of the rate of change. thus is exponentiality. but what does this have to do with creating a 3d printed digital musical interface? EVERYTHING. we are swimming in a sea of perpetually changing and mutating technologies; this article would have been completely different 5 years ago and would have been science fiction 10 years ago. some of the techniques in this article that are so commonplace now are thoroughly based in the now, not the past. of course it all builds on the past, but we are on the other side of a tipping point of possibility and must approach our expression exploration from that point of view.


The key technologies that we are building on here are the reprap and the arduino platform. I could exchange the term "reprap" with "3d printer" and keep it general, but I choose not to. I think the reprap platform is more important than 3d printing because it is 3d printing with the open source ethos built into its DNA. every reprap can build other repraps. if it cannot, it is merely a 3d printing appliance, which falls in line with consumer products and market forces. the reprap lives in digital space, and when called out of that space as a physical machine, it allows the creation of form, from mind, in 2-3 steps, max! as a comparison: before repraps, which kickstarted the whole 3d printer craze, manufacturing, plastic or otherwise, took a workshop with specialized tools, which were usually expensive and/or dangerous, and years of training, which was usually expensive in either time, money or both. with the open source reprap project, one could download the plans and build one for less than $1000. a reasonably dedicated amateur could create something, realistically, within weeks or months, not years. and once you've built one reprap, you can build another, usually better, one easily. not so with commercial 3d printers, so for me, reprap is the way to go.


many repraps' electronic systems are based on an open source platform called arduino. this was intended to be a way for children to learn about electronics, but it soon satisfied a need that didn't exist before it provided a solution: an easy-to-investigate electronics platform. all of a sudden, sufficiently interested individuals could buy a $20 circuit board and create or adapt things that were of importance to them, or maybe just something they thought was cool. in either instance, the arduino allowed a low barrier of entry and increasingly sophisticated use cases, based on its open source genome; there is an endless amount of information on how to adapt it to do anything…ANYTHING. flying robots, weather tracking, small satellites, autonomous vehicles and, for our purposes, wireless digital musical interfaces.


the less significant but still very noteworthy technologies that we are spoiled with here in the future are plentiful cheap sensors, discoverable in internet-based shops and deliverable anywhere in the world, and equally accessible CAD design environments and programming languages that talk to these systems, allow you to talk to your new lifeforms and, in some cases, teach them to talk back with increasing syntactical ability. the one I will be gushing over is called Pure Data www.puredata.info and specifically, pd-extended. this is a "data flow" language that you program by calling functions called "objects", which are onscreen boxes with the name of the function in them, joined together by onscreen wires/cables. I love pd because at its heart, it is really just a math program with sound outputs. it is open source, free, can communicate visually as well as over a network, and can talk to anything that can be connected to a computer, like the arduino mentioned above.


all of the above-mentioned tools are great singularly, but for our purposes we need to make them work together. a musical interface must allow for the investigation of Self, through sound. that is my definition. if you want to play "an instrument" then there are many to choose from. but the desire to create a musical interface from scratch, at such a low level, is really the investigation and expression of Self; how you move and think about what expression is for YOU. so going into this exercise with this goal in mind will speed up your design process and result in something that evolves and grows as you do, possibly for the rest of your life. it is not the creation of a thing, rather…

 it is the physical manifestation of the beginning of an ongoing, evolving process trajectory.

let’s begin


the most important tool to begin with is a trinity of mind, paper notebook and pencil. your mind is your most important tool. it is your primary design software. if you cannot build it in your mind first (imagining how it works, imagining playing it, how it feels, what it sounds like), then everything after that is a crapshoot. to do this part, my suggestion is to take large swaths of time to simply sit somewhere and let the mind wrap itself around the idea of what you want to create. at first it may be hard, maybe, but after a while, it becomes much easier to imagine your designs and iterations in multiple dimensions (cad design, code, sound, etc). (there is more I could put here about this stage, but it could fill a book. if there is interest in such a thing, let me know and I can look into it).

after doing this for a while (sometimes hours or days, sometimes weeks, months or years, depending on the idea), the pencil pretty much guides itself to the paper, without you having to force yourself into long, drawn-out sessions where you attempt to coax yourself through some self-help/mystical bullshit. the key thing to remember is that…
 your drawings are NOT art.
they are notes for YOUR mind.
a squiggly line might be a cable…or it might be your way to remember to pick up some eggs on the way home…this is YOUR space outside of your mind to begin to give your idea form.

 

I find that sometimes a drawing isn’t enough so I like to keep modeling clay around. I can, from my drawings, make a basic 3d form from clay then use my calipers to measure it before I take it into my CAD environment for 3d modeling. clay is your friend.

 

electronics



now that you have a basic idea of the form that you are shooting for, you will need to shift gears. it is at this point I like to shift to electronic concerns. once the form is developed, don't move straight to CAD modeling, because you now need to wrap that form around a basic function set, and those functions are expressed using electronic parts, which have a defined shape and sometimes orientation. if you try to design and print the form now, without knowing what is going to go in/on/around it, you will inevitably have to start over to do this step anyway, and will have wasted time and materials in the process.

with a basic drawn form you can now investigate the sensors, boards, lights and batteries required to make your system work. so the first step is to determine which sensors will allow you to do the thing you want to do. developing strong Google search-fu and proper online forum etiquette is crucial.


organize your thoughts at this time. I cannot live without mind mapping software. I currently use xmind, but there are dozens of open source, free mindmapping programs for all platforms. what it does is allow you to get down to the precise questions you must ask yourself or someone on a forum. the gods of forumdom have little patience for overly broad questions, or overly general ones that have been asked ad nauseam by "noobs" who haven't gone to the trouble of reading the FAQ or searching the archives. do those things first, and make note that you have done so before you ask your question. said question should be to the point and detailed enough that someone who knows can, if they are so inclined, check your research and give you an answer. remember, most of the people on these forums are like you, but maybe a bit more advanced. they have jobs and hobbies too, so don't be an asshole.


create a mind map and drill down into what you are trying to accomplish (another topic for the book, but suffice it to say, there is lots of info online about mindmapping properly). in the center of the mindmap is the subject, but as you branch subtopics into more detailed subtopics, you end up with a constellation of very specific questions around the outside edge of your mind map. it is these specific questions you should ask, if necessary, after you have researched all that you can on your own and there is no other recourse. if you post the question and then find the answer yourself, it is proper etiquette to post said answer in the thread and mark it SOLVED for those who will inevitably search for it in the future. you become a part of the community in this way, instead of an anonymous leech. this is part of the value proposition built into the idea of open source. respect it.


so at a certain point, your sensors are working with your arduino. note: I use a firmware for arduino called firmata, which turns it into a dumb digital input/output board that only sends and receives data to and from the computer. my concept of digital interfacing is the interface as dumb sensor io, connected to the computer wirelessly. in this way, all the heavy processing is done on the computer, which I find to be more powerful than a purely arduino based system, but that is pure preference, and not a rule of any sort. this is a good time to connect the arduino to the computer using your wireless transceiver. my choice for the last couple of years has been the arduino fio/roving rn-xv wifi transceiver combo, because it works, it’s upgradable, and the arduino is mated to the wireless transceiver using a socket, which also allows for things like xbee or an Ethernet “shoe” for cable connection. if one of these parts dies, I don’t have to replace one $50-100 board; each part costs less than $35. it is also quite small and has a built-in battery charger. there are other boards out there that seem to be very cool, but I have been pretty satisfied with this one so I haven’t investigated those yet. (I wrote a tutorial for using this setup here)
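to make the “dumb sensor io” idea concrete, here is a minimal sketch of the computer side, assuming an arduino running the standard firmata firmware and the pyFirmata library on the laptop. the serial port name and the choice of analog pin 0 are hypothetical placeholders (my own rig talks over wifi instead, but the firmata idea is the same):

    # minimal firmata sketch: the arduino is just a sensor io board,
    # all logic lives on the computer. port and pin are placeholders.
    from pyfirmata import Arduino, util
    import time

    board = Arduino('/dev/ttyUSB0')      # hypothetical serial port

    it = util.Iterator(board)            # background thread that keeps
    it.start()                           # analog readings up to date

    breath = board.get_pin('a:0:i')      # analog pin 0 as an input

    while True:
        value = breath.read()            # 0.0-1.0, or None before the first report
        if value is not None:
            print(value)
        time.sleep(0.01)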


one benefit of this system is that I coaxed it into using the udp protocol bidirectionally. instead of tcp/ip running over serial, which introduces latency by way of constant error checking, udp is “connectionless”. this means that the computer blasts data to the arduino and the arduino blasts its sensor data to the computer, and neither cares if there are errors. the result is blazingly fast data throughput, which is very necessary for playing music in realtime. if your design is a single-transceiver system, then you can ad-hoc connect it to your laptop and call it a day. if there is more than one, then you will need to get a wireless router dedicated to being the interface between your transceivers and your computer, NOT the internet. in fact, if you want the lowest latency performance possible, you will need to ensure that NOTHING else connects to that router, or set qos rules to ensure you have a guaranteed minimum bandwidth.
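the “blasting” is as simple as it sounds. here is a toy python sketch of the computer side, with made-up addresses and a made-up message format, just to show that there is no handshake and no error checking anywhere:

    # connectionless udp exchange: datagrams fired in both directions,
    # no handshake, no retries. addresses are hypothetical.
    import socket

    ARDUINO_ADDR = ('192.168.1.50', 9000)   # the wifi transceiver
    LISTEN_ADDR = ('0.0.0.0', 9001)         # where the computer listens

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)

    # blast a control message at the interface (e.g. set an led color)
    sock.sendto(b'led 255 0 0', ARDUINO_ADDR)

    # receive sensor packets as they arrive; lost packets are simply gone
    while True:
        data, addr = sock.recvfrom(1024)
        print(addr, data)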

 

wheeew…


(images: hand unit v2.2 and the v2.4 cross-junction)

We will now assume that your electronic systems function on a basic level and you know the size of all the components that will go into your interface. at this point, go back to your drawings and/or clay models and revise the design to account for the actual parts that will go into the interface. now you must begin to think about printability in your design. it’s all good to want the design to be all swirling shapes, but the reality of the reprap is that it prints from the bottom up, which means that if the design is such that the bottom doesn’t touch the print surface, or there are weird overhangs, ie printing on a surface that isn’t there, the design fails. I design in interlocking flat pieces. flat pieces stick well to the bed. if a part doesn’t stick to the bed, then I make two pieces I can mate together by hand, after the fact. in fact my whole design process is based around the limitations of the reprap print concept: designs that will stay attached all the way through the print process and fulfill their function. I call it “dominant flat side design”: each sub-part has to have a dominant flat side. think of this from the beginning of the process and you won’t end up with wasteful prints.


using the digital calipers mentioned above, measure each electronic component and/or grouping of components and model fitted templates where those components can mount. if you have enough 3d printer filament, print out the template to see if the components fit snugly (you can also use the data sheet for said component, but I like to measure things myself). then it’s merely a case of assembling these parts together in your cad environment, into a form that is as closely related to your drawings as possible.




with my Mayan calendar example firmly in mind, think of the 3d printer as enabling form to come from mind to this plane of existence in 2-3 steps. the 3d printer is a dimensional portal through which physical manifestations of ideas come through, but the limitation of this “early stage” portal is the size of the print surface and the materials used. so although form is coming from another dimension, LITERALLY, the portal, in my case, is 140mm (long) x 140mm (wide) x 120mm (high). I could, say, manifest a great many things, but they will either have to be that size or smaller, or come through in pieces to be assembled on this side of the vortex. so design accordingly.
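“design accordingly” can even be automated. here is a toy python check, using my printer’s build volume, that tells you whether a part’s bounding box fits the portal in some axis-aligned orientation, or has to come through in pieces (the example part dimensions are made up):

    # does a part's bounding box fit the print volume in any
    # axis-aligned orientation? all dimensions in mm.
    from itertools import permutations

    PRINT_VOLUME = (140, 140, 120)   # my reprap: length x width x height

    def fits(part_dims):
        return any(all(p <= v for p, v in zip(perm, PRINT_VOLUME))
                   for perm in permutations(part_dims))

    print(fits((130, 90, 15)))   # True  -- comes through in one piece
    print(fits((200, 90, 15)))   # False -- must be split and assembled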

 

from here…


As I stated above, this is the super quick run-through of my process, and at this point you should have something that basically works, but it will never be perfect in the movie sense of the word. I see “perfect” as a significant point on a process trajectory. if it works at all, then it’s perfect. yaaaaay, you win! but as soon as it works you will instantly see things that can and should be evolved. welcome to the never ending process trajectory. it can always get better. it is part of you. it evolves as you evolve… as technology evolves. you are all intertwined now. and the most interesting part is that because you built it, you now have “insight”; with commercial “products” you know as much as they want to reveal about their process and “intellectual property”. with your interface, you know EVERYTHING and can iterate in any direction you please. if something screws up, you can fix it or change it or delete it or start over from scratch. it is yours…it is you. you will begin to think “why isn’t more of my life like this” and your life will evolve along the exponential paths that made your interface possible. welcome to the future.

 

 



Exploring the exponential now - Episode #1: Leaving Berlin (The Nomadic Diary of Onyx Ashanti)

 

For those that don’t know, I have packed myself, and project development of the exo-voice and associated projects, into my trusty duffle bag, attached by trailer to my folding bike, and decided to leave Berlin for the time being. It must be noted at this point that my primary focus is still on developing and shipping crowdfunding campaign perks to my contributors, but I don’t need to be in berlin to do so.  truthfully, it is quite a bit cheaper to purchase parts and ship from here in Mississippi than from Germany, but that is not the reason for leaving.  it is simply an assurance that “perk-iteration” is still my primary focus; making the people who supported me happy is still the overriding goal :) .  The reason I left is to explore a “now” that is so profoundly interesting that I can not be in one place to do it.

Exploring the exponential now.

Over the last couple of years, my mind has been a vast terrain of unexplored adventure.  I never dreamed of what awaited me once I turned off TV, movies, radio, CDs, etc.  although I have become obsessed with sun ra, Terence McKenna, John Coltrane, Polynoid, online research papers, ancient Egypt, pre-colonial Africa, and Japanese tech culture, I’ve done so at my own pace, using the internet, then disconnecting and letting all of these disparate topics interact freely in my mind.  the interactions between them and the hundreds of other topics have been thrilling in completely unpredictable ways.  not “finding” congruencies, but just sitting quietly long enough for them to drift together effortlessly has created a pattern of its own, and this pattern needs stimulation that can no longer be had primarily through a computer screen information interface.  tactility is now necessary.  personal experience.  fields within fields of influence.  the attraction of a future which wants to be experienced physically with mind and body, here in the present.  the now.

But this “now” is different from the previous “now” I remember from the last “nomadic era”, of which there have been 4:

The first, in the late 90s, living in a van in LA,

The second, living in a squat in London in the early 2000s,

#3 was living on the A-train in NYC in 2006, and was the first one I blogged (http://nomadicjunglist.blogspot.com/), and

#4 was the summer of 2010, living in a tent, camping, and busking around Europe and Ibiza, also blogged (http://onyxashanti.wordpress.com/page/3/).

In all of those nomadic phases, there was less certainty of what I was doing, where I was going and why.  the “now” was just a place that my body and mind happened to inhabit.  and all of these were pre-exo-voice, pre-open source, pre-investigation-of-self.  I was a street performer looking for good pitches to play.

Since 2010, “now” has gained several new dimensions based on my expanded perception of it and on the explosive growth of human connectivity and technological expression.  “now” is no longer a tiny point wedged in between a future and a past, but an exponentially growing life-form.  it consumed the past long ago, parameterizing it and making it increasingly easy to parse, but it now also stalks the future like a seasoned predator. 

We have the tools and the minds to create any future we can conceive, right now.  the now has become an increasingly vast multiverse of possibility and expression.  so vast that one must approach it as artist and anthropologist.  any expression can be constructed as quickly as one’s mind can use and/or create tools to construct it, but then one must shift perspective and investigate said construct with a respect previously reserved for ancient artifacts, whose age and detail beg for serious inspection by trained “professionals”.  the new now…our now, must be played like music while also being studied like any other science.  its boundaries grow exponentially and we must look out from its event horizon to gain even a slight glimpse of the futureverse.

So, looking at the “now” as a terrain rather than as a point (or maybe a really expansive point, with us as particles on or in it), the impetus is to map it.  to explore it and see where it leads, and maybe find other explorers and trade notes.  “now” exists in berlin as well, but not only in berlin.  the new now is a vast reality topology abstraction that can be experienced in part from any point, but must be traversed physically to try to grasp its scope and depth.  nothing in the virtual sphere replaces personal experience.

so…

Another reason for this exploratory journey is the birth of a new metaphor for the exo-voice software.  As I have mentioned ad nauseam for the last couple of years, I am obsessed with fractals in all forms.  from mathematical constructs to trees to nervous systems…they have changed my interaction with myself and with the world.  so the guiding principle for the evolution of the synthesis system of the exo-voice has been to make it “fractal”…to give it an evolving “fractal logic”.  to my mind, at a particular point, this was just an abstraction like “funky” or “swinging”, but then a few months ago it came to mind to create a simplified version of the system for practice.  something that would be a little bit of everything in the system, condensed into a single synth/looper…POW!!!!! (insert matrix-y sound here) ZOOOREEROOOOMMMMMPOP!!!!  THAT WAS IT!!  THAT WAS ALWAYS IT!  make a single point from which to iterate the sound! the iteration is multidimensional in that it is an interaction protocol (how it is played) and a design protocol, which then feeds back into the interaction protocol.  this means that changing a very few things at a time can result in massive sonic personality changes.  basically, instead of an instrument metaphor, it is a “seed” metaphor.  tweaking the sonic genome produces new morphosonic structures (“morph” is Greek for form).

the way this relates to leaving berlin is that berlin has a very pervasive “morphosonic field”.  a morphosonic field is my term for the “sound” for which a place is known.  I derived it from the concept of a morphogenetic field put forth by Rupert Sheldrake in his book, “a new science of life”, in which he describes his theory that nature is made up of fields of influence and that each species has fields of memory that guide their collective habits.  Google and YouTube are your friends in researching this further.  from that theoretical point of view, a morphosonic field is the sound habit of a place.  berlin has sound habits, just like London has sound habits, or Hawaii has sound habits.  so if one interjects an iterative sonic construct into an existing morphosonic field, you get very interesting interactions between the two; either the iterative sonic construct changes itself in the face of the existing field’s dominant concrescence, or the existing morphosonic field incorporates the new novelty of the sonic construct, OR both, simultaneously.  one or all of these processes has been happening in every city I have ever lived in, but now, with a fractal sonic iteration system, it is much more profound.  since each place gives its sonic genomic material freely, a fractal sonic construct, like the exo-voice’s internal logic, can, theoretically so far, be interacted with in a manner that incorporates this genetic material in novel ways, and if it gets “stumped”, its own iterative, programmable genome can be modified to do so.

so now…substitute Detroit or Rio or Tokyo or Goa or the Congo for berlin, and my desire to touch these fields makes much more sense:

It is the traversal of multiple morphosonic fields, all “collectivized” (I just pulled that one out of my ass, but it felt like the right word) within the expanding membrane of the “now”.  rather than attempting to predict a sonic end result by way of trying to sound “good” or even “musical”, I feel that developing a personal, experience-based interaction model of swimming through the “now”, with the exo-voice acting as the snorkel in this metaphor, will result in outcomes that no amount of precognition will produce.  the sound will interact with any morphosonic field it encounters and mutate accordingly.

this sonic fractal is the first interaction with this new metaphor.

Hardware/Software

the mask iteration is coming along nicely now.  the minimized mount points, i.e., having as little material touching the skin as possible, have been working very nicely, except that ABS is no longer considered a suitable material for the high stress mask design.  around the mouth, the stresses are too much for ABS.  it snaps eventually.  but nylon, in just about any form, fits the bill nicely.  getting the angles right has been a bit of a pain in the ass.  the teeth must sit on the tip at a certain angle so that when one blows, the air only goes into the air hole; otherwise, there is too much adjustment of the embouchure before and during playing.  now I feel like I can abstract the positioning into the design as a rule rather than a theory.  this is why the design process is taking so long; it must be played enough to develop solid design rules.

the hand units have been mostly stable for months now and are 98% done as a releasable construct, with the holdouts being adjustable hand mounting, more robust circuit design and adjustable thumb positioning.

the software took a quantum leap in efficacy a few weeks ago when I FINALLY shifted ALL development of the system over to a USB flash drive based debian audio distro called Tango Studio.  the difference between this one and many others I had tried was that the real time kernel was enabled by default, and after a couple of tutorials I was able to turn on persistence.  this means:

  1. I can distribute the exact code I use, in the exact environment I created it in, and the end-patternist has only to insert the USB stick into their computer and boot it up.  no config (for reasonably modern machines) required
  2. the distro can be loaded with all my notes and pics
  3. real-time enabled out of the box means it will run well on 90% of the computers it’s put into (with some caveats, because someone will still try and run it on their commodore 64 or some shit)
  4. said patternist can iterate their distro their way, save it to the stick itself, back it up much more easily, and transport it to any other computer.

 

next…

Episode #1 was just the first video of a new series of “nomadic diary” film expressions.  I haven’t spoken in this (visual) language in a while, so I am relearning what I knew while incorporating new techniques, ideas and motives.  my videographer gave me a Blackmagic Pocket Cinema Camera to document my travels with.  as you can see from the video above, the quality is stunning (thanks Tom and Darren!).  I will have to really dig into the capabilities of this camera over time, but definitely expect to see more artistic visual expression, starting with this video and extending through episodes #2-4, which document the 2 weeks it took to finally make it back to Mississippi, where I currently am.  loads of crazy footage from PIPSLAB and DDVTECH in Amsterdam, the slingshot festival in Athens, GA and the national society of black engineers conference at Opryland in Nashville, Tennessee (which, previously unbeknownst to me, is a massive climate controlled biosphere).  in addition, detailing the final stages of getting the exo-voice and associated perks out to contributors will be much more interesting as I investigate the visual story hidden within the exercise.

I can not or will not say what happens after that.  but I feel confident that it will be interesting and now, well documented multidimensionally (blog, film, audio, 3d printed artifact, etc.).  the new now awaits…

 


Force sensing resistors officially suck for lip sensing

for the last 18 months or so, I have been attempting to use FSRs (force sensing resistors), the same pressure sensitive sensors I use for finger pressure sensing (the “keys”), as an inexpensive, easy to integrate lip pressure sensor.  until just recently, I believed that the error was in my design.  I thought that it was possibly the condensation of spit on the sensor which caused them to fail so often, usually within 2-3 weeks of continuous use.

so I spent the better part of a couple of months redesigning the mouthpiece to negate this possibility.  the latest one placed the FSR in a sealed space with a printed rubber gasket that also acted as a sort of “reed”.  pressure applied to the top center of this reed-gasket would transfer through to the fsr.  the area underneath was, by design, completely free of any condensation.  only the transference of pressure from lip to plunger to reed-gasket made it through, and this solution worked beautifully…for a time…

in the last few days I started noticing that the sensitivity to lip pressure had begun to lessen more and more.  I believed this to be something loosening in my construction, but decided to investigate it today and noticed that the FSR looked…flat.  yes, it is already flat, but there is a certain look they have when they have been pressed too hard for too long, and upon connecting a fresh FSR to the circuit, my fears were confirmed; spit isn’t the culprit this time, but rather the sustained continuous pressure necessary for proper lip sensing.

what this tells me is that pressing an fsr continuously and strongly, as is necessary for lip sensing, will cause its eventual failure.  the ones for the fingers last much longer primarily, I now see, because I do not press them continuously, and when I do, it is at much lighter force than with the lip, which must be pressed to a middle range to center the pitch.  in this way, when I need to pitch down, I must only relax my lip pressure a bit, and if I want to pitch up, I increase it.

so, this presents an opportunity because, well, I’m me and I see opportunities in situations like this.  now, after fully exploring the FSR for lip sensing, and watching it fail, I will use a ratiometric hall effect sensor, which is a tiny, cheap sensor that senses the intensity of magnetic fields.  when a magnet is moved close to the sensor, it changes the amount of voltage that is allowed through the circuit.  my previous wx5 wind controller used such a sensor as well, and it was wildly precise.  and they are actually easier to find than FSRs and way cheaper: €1-2, as opposed to €5-10 for EACH fsr.  the primary benefits (with a sketch of the lip-to-pitch mapping after this list) are

  • longer useful life
  • much more precise
  • I can still use the rubber reed-gasket design
  • hall effect sensors are much smaller (1/4 the size of a fingernail, avg.) so the mouthpiece will be much much smaller.
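for illustration, here is a hedged python sketch of the middle-range-centers-the-pitch behavior described above, mapped from a normalized sensor reading.  the center point, dead zone and bend range numbers are invented placeholders to be tuned, not values from my patch:

    # map a normalized lip sensor reading (0.0-1.0) to pitch bend:
    # mid-range pressure holds the pitch, relaxing bends down,
    # squeezing bends up. all constants are hypothetical.
    CENTER = 0.5       # lip pressure that centers the pitch
    DEAD_ZONE = 0.05   # small region around center where pitch stays put
    BEND_RANGE = 2.0   # semitones at full squeeze or full relax

    def lip_to_bend(reading):
        offset = reading - CENTER
        if abs(offset) < DEAD_ZONE:
            return 0.0
        sign = 1.0 if offset > 0 else -1.0
        scaled = (abs(offset) - DEAD_ZONE) / (CENTER - DEAD_ZONE)
        return sign * min(scaled, 1.0) * BEND_RANGE

    print(lip_to_bend(0.5))   # 0.0    -- centered, pitch steady
    print(lip_to_bend(0.2))   # ~-1.11 -- relaxed lip, pitch down
    print(lip_to_bend(0.9))   # ~+1.56 -- harder squeeze, pitch up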

 

I will use another FSR this week for some playing I want to do this weekend, to play through some new hardware and software updates, but I hope to have the hall effect sensor integrated within the next few days.


Time to recalibrate… (part 3 of 3)

I have to apologize...I can’t write the “big post”.  it was supposed to be (in my mind) some big “empire strikes back”, holy-shit-WTF-did-I-just-read!? level of crazy shit!!  but…

Not that i "cant", i just wont.  why?  it's too much...too much information with not enough examples of WTF I’m on about.  the original idea was to drop it all as a sort of manifesto of sorts; an outline that describes a bigger picture that combined dozens of threads of research and thought.  But upon rereading the 20 or so drafts that have been created in the last month, the ideas were way too disjointed and abstract.  they didnt lead anywhere or get to the point that was intended.  My theory of what happened is that when you spend so much time in your own mindspace, it feels like the things you see and experience there, are as obvious to others as they are to you, as if the people you are sharing with, are in your virtual world with you, rather than the “virtual representations of them” that are there.

luckily, I have friends that let me project these concepts at them.  for weeks, I let down the “maybe they will think I’m crazy” shield and just gave them the whole thing, exactly how I was experiencing it, and all the associated research I had done, in what I will term the “communal reality” that we exist in together and agree we’re both a part of.  well, you can probably surmise by the title of this post how most of these sessions went.  my friends love me, and are all honest people, otherwise we wouldn’t be friends, so the look of “studied neutrality” that became the dominant response from 99% of them gave me pause.  and during said pause I would write what I was thinking, read it the following day, and have to commend them on their reserve for not simply walking away.

see, the problem, as I have come to see it, is that, unlike every other aspect of the project(s) so far, there was only the suggestion of an example to come, rather than a demonstration of an example in hand.

crazy shit isn’t crazy shit if you can build it, play it, eat it, hold it, etc.

much of the basis of the big post (TBP) was theoretical and conceptual.  note at this juncture that I didn’t say that the subject matter was bullshit...that’s a whole other thing.  “crazy” is not the same as “bullshit”, but if there is no way to substantiate it, it is difficult to demonstrate any difference between them.

sooo...

  • Currently, development of the hardware and software is multiple processes that each fight for attention:

        imagination iteration,
        CAD design,
        protocol logic,
        sound design, THEN
        playing it, discovering what needs attention, then
        restarting the whole process from the beginning.

I love it!!  I get to learn any and everything I want!  it’s so amazing, I am sometimes overwhelmed emotionally at my fortune to be alive in this day and age, but it is inefficient.  it’s hard to shift gears to do each of the things that needs to be done, so a few months ago I began looking into the idea of making the development of the exo-voice, and all of its peripheral vectors, into a portable development system that I could develop from “within”.  the idea was/is that if I use the exo-voice as an input device, replacing the keyboard and mouse that I use now, I could kill multiple birds with one stone: developing the muscle memory necessary for virtuosity while doing things as simple as checking email or as focused as programming the underlying software logic, by incorporating all aspects of the exo-voice, namely breath as input, hand position and function latching.  the basic keyboard/mouse logic hasn’t been hard to implement, and as a thought exercise it has really helped speed things up in regards to getting the campaign contributors a system they can wear comfortably and actually use.  this last point is a big one.  I do not want to send out any more interfaces that don’t get played.  when this reaches my contributors, they will have a prosthesis that should inspire them to want to use it as much as I love using it.
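as a rough illustration of that keyboard/mouse logic, here is a speculative python sketch using the pynput library to synthesize OS-level input events from incoming sensor frames (say, the tuples arriving over udp as described in the build post).  the breath threshold, the tilt scaling and the toy fingering-to-letter table are all invented for illustration, not my actual mapping:

    # exo-voice as keyboard/mouse: hand tilt nudges the pointer,
    # breath plus a latched finger key types a character.
    # all mapping rules here are hypothetical placeholders.
    from pynput.keyboard import Controller as Keyboard
    from pynput.mouse import Controller as Mouse

    keyboard = Keyboard()
    mouse = Mouse()

    BREATH_THRESHOLD = 0.3   # blow harder than this to trigger a keypress

    def handle_frame(breath, tilt_x, tilt_y, key_mask):
        # hand tilt works like a joystick for the pointer
        mouse.move(int(tilt_x * 10), int(tilt_y * 10))
        # enough breath plus a latched key fires a character
        if breath > BREATH_THRESHOLD and key_mask:
            char = 'abcdefgh'[key_mask % 8]   # toy fingering table
            keyboard.press(char)
            keyboard.release(char)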

the "normal"-normal?


(note: much of this overview is to assure my contributors that I’m not dead, since I’ve spent so much time over the last few months on proper-izing things.)

The hand units are increasingly wonderful now!  before last week, the palm and handbrace that hold the unit securely on the hand had been angled toward the wrong plane (smacks forehead).  the previous versions of the hand units placed the hand at a 90 degree angle to the keypad where the fingers rested.


the hand was, until last week, still at a 90 degree angle, although the keys had all rotated approximately 30 degrees owing to the new, more ergonomic design.  so not only did it hurt my hands to play for long amounts of time, but no one else could even put their hands in it, and those that could couldn’t touch the keys (note: since this is a completely new type of interface, I don’t know about these issues until I have had time to play it for a bit and discover them).

So last week it occurred to me to angle all surfaces that touch the palm and the hand in the same direction as the accelerometer planes, which is set at what I call the zero point: the middle of the x and y axes. (notice the angled edge in the first photo and the cut-away angled section of the handbrace in the second)


and voila!  not only is it beautiful to play, but every person that has put their hand in one of them can hold it properly now, small and large hands alike. yaaaaay!!  next will be to make the joytoggle arrays more adjustable, but that’s easy.

the mask gained adjustability last month.


 

it also fits many head sizes.

the mouthpiece works beautifully now.  it took a while, but it snaps together in 3 pieces, with no tools.


there is the main oral interface, which uses elastic tension to lock into the front teeth.  the breath hole is in the middle of an oval shaped piece that minimizes air leakage and directs the air straight into the two pressure sensors at the end of the shortest plumbing I could design.  it is the fastest, most precise articulation I believe is possible; 3mm wide, approximately 60mm long, completely air tight and printable on any reprap.


the lip sensing is equally precise, but with additional customizability. it uses the same FSRs used for the finger controls on the hand units.  until this design, they would need replacement every 2-3 months because of spit corrosion.


together, the lip and breath mechanics work in ways that abstract the normal lip/breath interaction of wind controllers and acoustic reeded instruments into something much more programmable.  I’ve yet to investigate this fully.

the main issue with it now is to “universalize” the mouth interface area and design it in a way that doesn’t allow it to stick to the face so much.


it’s lightyears better than previous designs, but by concentrating on a single line from the chin to the back of the head, then draping the design onto that (that’s the best way I can describe it), it will be more comfortable, more adaptable for many head sizes and easy to print with nylon, which is now the preferred material for the system since it can be easily sourced at any hardware store in the form of weedwhacker line.

Time to play some music!

There was a whole lot of crazy shit here, but thankfully “cut and paste” can simply be “cut”.  for the time being, let’s stick with this stuff.  make some dope controllers and some cool perks and some funky sounds, and rock some parties.  there is plenty of time for all the weird shit.  it ain’t going away, and I can ease it out a bit at a time, but it’s kinda doing my head in.  I need to get out and play.  oh and…

Cybo....!?!

you may have noticed that I have avoided the term “cyborg”, preferring the phrase “portable development platform”.  as stated in the first post of this series, cyborg, as a word, already has a life...a culture.  that post asked “what if the word cyborg had been invented today?”, but since that post, I realized more fully that the word cyborg WAS NOT invented today.  it was created a long time ago for a different idea of what the interaction between a human and technology could be.  it is for this reason I will not use that term, although if someone uses it to describe me, I won’t be offended.

the burden is on this idea, to birth a more humanistic, symbiotic conception of the union between a sentient intelligence and the technology it creates to expand its ability to interact with time, space, matter and sound. 

for now, I want to focus on little details like wrapping up last year’s crowdfunding campaign, and applying 6 months of sound design notes to the synthesis system that was neglected under the previous linear development concept.  oh...and rocking out with the fresh-ass minimized busking system.  it’s tiny, it’s powerful, and it’s begging to crash some (more) parties!!

so with this post, I want to get back to regular blogging and music and all that.  more to come, but as my neighbor says, “Slowly, slowly…”


3 year anniversary of the original Beatjazz controller campaign

today is the 3 year anniversary of the campaign to build the “TRON” Beatjazz controller, which makes tomorrow’s post that much more relevant:

http://beatjazz.blogspot.com/2011/01/introducing-quest-to-build-tron.html
