
Archive for June 2014


Quadvector Gestural Fractal Synthesis: current implementation, part 1

in an attempt to not unload streams of incomprehensible information all at once, every 2-3 months, I will try to unload more bite-sized streams of incomprehensible information at a slightly more regular pace…you're welcome. as we near the event horizon of the singularity referred to as "releasing the project to the public", little things become big things, as if their mass increases by virtue of time compression.

one such thing is the synthesis engine concept, currently referred to as a quad-vector gestural synthesizer.  this is one of a few "fractal" synth concepts that I have been working on for the last 2 years.  the idea with this one is of a single synthesizer, tentacled with branching gestural parameter interfacing, that can, through various combinatory means such as key sequencing, hand position and mode switching, mutate into any sound I can imagine.  this is accomplished using the multimodal gestural capabilities of the exo-voice prosthesis, whose interaction matrix is fractal.  by "fractal", I mean systems that evolve to higher states of "resolution" over time, from simple origin states.  this applies to everything from design to programming to performative interaction.

in the design of the hand units, following pure function, I needed "keys" that were spaced to be comfortable for my fingers, while also being attached to my hands unobtrusively, which was accomplished in the first, cardboard, version.  over time, the design iterated toward things like better ergonomics, better light distribution, ease of printability, and so on, so the design and function evolved, while still being based on the idea of the first version.  since all aspects of the exo-voice share this design concept, the entire design evolves fractally; as new states are investigated, the resolution of each state increases toward a state of "perfection", at which point it forms the basis of a new state of investigation.

each hand unit has one 3-axis accelerometer.  the x axis measures tilt side to side, like turning a door knob.  the y axis measures tilt forward to backward, like the paint motion from Karate Kid (hope that is not too old of a reference for some of you).  the easiest way to picture it mentally is to think about the range of motion your hand would have if your wrist was stuck in a hole in a thin glass wall; you could turn your wrist and you could move your hand, from the wrist, up and down, as well as combinations of those two motions.  this forms the basis of the gestural kinematics the system is built on.  the x range is one parameter and the y range is a second one.  the z axis is unused right now.  it's the functional equivalent of using a touchpad in 3d space, with x and y acting as two semi-independent control vectors.
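a minimal sketch of that idea in Python, assuming a normalized tilt range; the function name, ranges, and scaling are illustrative assumptions, not the actual exo-voice calibration:

```python
# minimal sketch (not the actual exo-voice code): normalizing one hand's
# x/y tilt readings into two 0..1 control "knobs".  the tilt range used
# here is an assumption for illustration.

def tilt_to_params(x_tilt, y_tilt, lo=-1.0, hi=1.0):
    """map raw x/y tilt values to two normalized 0..1 parameters."""
    def norm(v):
        v = max(lo, min(hi, v))       # clamp to the usable tilt range
        return (v - lo) / (hi - lo)   # scale to 0..1
    return norm(x_tilt), norm(y_tilt)

# hand rolled fully to one side (x) and tilted halfway forward (y):
tilt_to_params(1.0, 0.5)              # -> (1.0, 0.75)
```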

there are a number of ways to use these vectors. 

  • Individually (per hand) - x is one "knob" and y is a separate "knob"
  • Together (per hand) - the combined x/y position is considered to be one control point, similar to how a computer mouse works.
  • Together (both hands) - 2 x/y control grids
  • Quadvector - x/y-left and x/y-right are combined into a 4-dimensional control structure.  think of this as video game mode, or similar to how flying a quadcopter works; the left hand is, in this metaphor, forward, backward, left, right, and the right hand is up, down, turn left, turn right.  no one control is enough.  all vectors must be interacted with simultaneously to control the position of the quadcopter in 3d space.


the exo-voice system works by routing accelerometer data to parameters which are determined by which "keys" are pressed when in edit mode.  (a key in this system is a function initiated by pressing a sensor with the finger, like how wind instruments work, but here it is more symbolic since the same function could be achieved in any number of ways.  since I was trained as a sax player, my mind gravitates toward that modality.)  so in "play" mode, the keys are interacted with as saxophone fingerings on a basic level.  when the edit mode button is toggled, the entire system switches to edit mode and all the keys become function latches, i.e., if I press key 1, the accelerometer controls parameter 1 and parameter 2.  if I press key 2, the accelerometer controls parameter 3 and parameter 4, and so on.  there are 8 keys, so that is 16 base-level parameters that can be controlled.
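a hedged sketch of that play/edit split, assuming placeholder parameter names (the class and names are illustrative, not the actual exo-voice code):

```python
# sketch of the play/edit mode switch.  parameter names are placeholders;
# the real system maps 8 keys onto 16 base-level parameters.

PARAM_PAIRS = {1: ("param1", "param2"),
               2: ("param3", "param4")}   # ...keys 3-8 cover params 5-16

class HandUnit:
    def __init__(self):
        self.edit_mode = False
        self.latched = None               # parameter pair the accel controls

    def toggle_edit(self):
        self.edit_mode = not self.edit_mode

    def press_key(self, key):
        if self.edit_mode:                # edit mode: keys are function latches
            self.latched = PARAM_PAIRS.get(key)
            return self.latched
        return f"fingering-{key}"         # play mode: part of a sax-style fingering

unit = HandUnit()
unit.toggle_edit()
unit.press_key(1)                         # -> ("param1", "param2")
```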

in previous iterated versions of the software system, each key could instantiate a separate synthesizer when pressed hard (HP1-8), each with an edit matrix that was specific to each synth, which meant there were 16 parameters to control for each of the 8 synths.  now, though, there is one synthesizer with 4 oscillators, each assigned to its own accelerometer axis.  the HP modes are for "timbral routing", which means that the synth is routed through a filter which makes it sound timbrally unique in ways such as being plucked or being blown.  some parameters are already set, such as pan position, gain, delay and pitch offset, each taking one key, so that leaves 4 keys (8 parameters) to paint with, which is not a lot.  this is where the fractal concept begins to pay off.

each key sends either a 1 or a 0 when pressed.  in my system, each key's value doubles, i.e., key 1 = 1, key 2 = 2, key 3 = 4, key 4 = 8, key 5 = 16, key 6 = 32, key 7 = 64, and key 8 = 128.  this is an 8-bit binary encoding.  one useful aspect of this is that each combination of key presses creates a completely unique number that you cannot achieve by any other combination of key presses.  so, key 1, key 2 and key 4 (1, 2 and 8) equal 11.  there is no other combination of key presses that will generate 11.  in addition to 4 keys per hand, there are two button arrays: 3 buttons on top, controlled by the index finger, and 4 buttons on the bottom, controlled by the thumb.  the upper buttons are global, so they can not be mode-switched (change function in edit mode), but the bottom button array on the left hand is used to control octave position in play mode.  in edit mode all keys become parameter latches, so octaves aren't necessary, but in edit mode, if you press key 1 while at the same time pressing lower button 1 (LB1), you get the number 257 and can control 2 new parameters not achievable with either button alone.  this is how I was able to create relatable sax fingerings.  in play mode, I use the octave button presses to add or subtract 12 or 24 from the midi note number that is generated from the key presses, but in edit mode, I simply gave each of the 4 octave buttons its own bit value (256, 512, 1024, 2048), which now means that when I press key 1, I have the original parameter position as well as 4 new ones, totaling 10 parameters per key press and 80 parameters overall.  this is stage 2 fractal resolution.
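this key encoding is plain binary place value, and the arithmetic above can be checked in a few lines of Python (the function and names are illustrative, not the actual exo-voice code):

```python
# the 8 keys are bit values 1..128; the 4 left-hand octave buttons extend
# the code with 256..2048.  any chord of keys sums to a number no other
# chord can produce.  names here are illustrative, not the actual code.

KEY_VALUES = {n: 1 << (n - 1) for n in range(1, 9)}   # key 1..8 -> 1..128
OCTAVE_VALUES = {1: 256, 2: 512, 3: 1024, 4: 2048}    # LB1..LB4

def chord_code(keys, octave_buttons=()):
    """sum the bit values of all held keys and octave buttons."""
    return (sum(KEY_VALUES[k] for k in keys)
            + sum(OCTAVE_VALUES[b] for b in octave_buttons))

chord_code([1, 2, 4])       # 1 + 2 + 8 -> 11, reachable by no other chord
chord_code([1], [1])        # key 1 + LB1 -> 257, a brand new parameter pair
```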

so, as you can imagine, there is ample opportunity for this to become a class A headfuck.  it is not enough to simply throw parameters in just because one can.  functions have vectors as well, otherwise they become unwieldy.  the question that must be asked is "what function naturally emerges as 'next'?"  it is at this point that I use a mental construct called a "perceptual matrix".  this is the mental projection of the flow of parameters within the system.  I discovered it by accident after I completed the first working version and realized that the act of building it had created a mental model in my mind.  this freed me from having to reference any visualizations.  now I design the functions to make sense within this perceptual matrix.  so far, I have not found a limit at which I can no longer interact with its growing complexity.  in fact, by following a fractal design concept, the perceptual matrix guides the design evolution, since the mind is the origin fractal of the matrix itself.  as the name exo-voice implies, it is a fractal prosthesis for the mind, both in creation and interaction.  the job of the visualizations is to convey this information to observers and collaborators, as well as providing a means of interacting with the data after the fact.

I'm having to investigate these issues because fractal synthesis makes sense now.  2 years ago, it was two words spoken one after the other.  now it is a protocol.  every parameter vector can be mapped now and iterated through a continuously evolving key sequencing modality (every key sequence can have 5 positions of 10 parameters, which could end up being over 200 parameter vectors per hand and 600 between both hands, without adding any new hardware).  but what goes where becomes the grasp-worthy straw.

my percussive concept is begging for evolutionary adaptation, but it's not as easy as just throwing something in there because, since it is one synth with parameter vectors as sort of tentacles, or synapses, one connection has to lead to the next connection and the next, fluidly.  my mind is pulled in a dozen directions.

I needed to share this modality with whoever is interested in knowing it because a lot of what I will be expressing in the coming months will be based on this, and without this information, there is no way to know what the hell I am talking about.  in addition, I expect that within the next year, these parameter trajectories will evolve to more resemble biological expressions like nervous systems, which are fractal as well.


The nomadic diary of onyx ashanti - Episode #3 and Episode #4 - exploring the digital memory banks and exo-voice development updates.

As I mentioned in my previous post, I was gifted a BAD ASS Blackmagic pocket cinema camera to document my travels.  so in addition to all of the other stuff I am learning, I am also working on developing more of an "eye" when it comes to visual expression.  This is not the first time I have pushed into the video/film realm.  I edit all of my own videos, but I do it more from the point of view of a technician rather than as a form of expression unto itself, mainly because I never had a nice enough camera to warrant the expenditure of cognitive resources.  but I have been reaching into the void of visual expression lately, to see what I find.

over the last 2 months I have accumulated 1.5 terabytes of footage of a range of subject matter.  I had been contemplating doing another "range of time" video, i.e., footage from a place and time, singularly, like Berlin or Amsterdam or Atlanta.  but then it occurred to me that what I have is a digitized visual and audio memory bank.  as such, it would be interesting to interact with it as the brain interacts with memory.  so instead of, say, a video project concerning my time in Athens at the Slingshot Festival, or from my time in Mississippi, I could think more about the totality of experiences from Holland to where I am currently, and single out particular conceptual constructs like, as this project presents, all of the representations of motion over the last 2 months.  times in cars, trains, planes and buses, just watching the world go by.

this makes the memory bank much more random-access and interesting as a store of experiences.  I've got LOADS of footage: festivals, conferences, Maker Faire, singing robots, thunderstorms, strange insects…it is a more interesting way to access the digitally stored memory of my nomadic travels and makes me think very differently about what I capture on film and as audio.

Episode #3 is entitled "memory of motion" and is a montage of moments in motion between the time I landed in Atlanta to go to Athens (GA) for the Slingshot fest, then from Atlanta to Nashville to Mississippi to Memphis, San Jose, San Francisco and finally to Oakland, where I am currently working on a few things that will be the subject of future nomadic diary episodes.  Episode #4 is me playing in Berkeley, just above the BART station.  I was very lucky to have my friend Tony Patrick there to impart his camera skills on the occasion.  The purpose of the soundtrack is to give an aural reference to the state of synth development at the time of editing, for posterity.  in these videos, the synth functions have stabilized but are more simplistic than in previous posts.  this is because I took out many premade objects and replaced them with my own versions, but more on this shortly.

Busking in Oakland

now that I am fully re-submerged back into busking, the iteration of systems is going waaaaaaaaaaay faster.  busking has always helped to accelerate idea iteration because you can practice more, in more relevant situations, and get paid for it.  you become an entrepreneurial singularity; you need only think of an idea and try it in stages every day that you go out.

as I have been pushing myself to get the contributor/open source version out into the hands of people, the fast design-build-test-revise-integrate cycle that busking affords has been invaluable.  when busking, glaring fuckups, or just "not-so-nice"-ups, become unbearable because you are interacting with them every day for anywhere from 1-5 hours.  a crappy sound is bearable if one only has to rock it for a 15-40 minute show on a nice stage with a nice sound system.  but, after a week of 5-hour days, the issue is brought into laser-like focus and you are able to home in precisely on errant gremlins.

luckily, since I was literally "from" here before I moved to Berlin, some of my old busking gear was still stored in my friend's basement.  I had only to acquire a new battery.  my Berlin battery was horrendously large (25 kg, approx 55 lb), so when I went shopping, size was a major consideration.  I found a nice, small sealed battery, borrowed the small Roland monitor that I also used to use back then (thanks Jaswho?!), and hit the road.


busking has supported me since I first moved away from home as an adult in the early '90s.  I knew a total of 3 songs when I started.  busking allows you to develop.  it is like a relationship with someone you might not see very often, but when you do, it's familiar and comfortable.

the last few months that I was in Berlin, I was beginning to feel "soft".  most seasoned buskers can rock it at a moment's notice, anywhere…ANYWHERE.  I was starting to feel like I was losing that edge.  I had spent so much (completely awe-inspiringly necessary) time inside my own head for the last couple of years that I felt I was losing sight of the ability to just rock out.  but I KNEW I was losing that edge last July when I was fortunate to find out that DubFX and Rico Loop were going to be rocking out for free in Mauer Park.  they are both seasoned buskers and I was awed by these cats!  they rocked it non-stop, in the hot sun, for 4-5 hours, with footpedals and beatboxing!!  it was amazing!  and neither of these cats has to busk at all anymore.  they are both famous with full touring schedules, but they wanted to give something back, and they really did!  that show has stuck with me to this day.  I thought "why am I not doing my thing, somewhere…anywhere!?"  (the answer was because I was in the middle of revising the code and the hardware design to be more easily printable, but still…)

so now that I am "exploring the now", partly from the street, the boldness is coming back…that thing that a busker has that says "yeah sure, play right here, right now!", but it is mixing with this scientific thing that has emerged in the last 3 years, where things get interesting.

one system that is benefiting from this the most is sound design.  although the novelty factor can work on passersby, sub-evolved sound and/or its parameter interfacing drive me insane in a very short span of time, usually 3-4 days.  if I feel trapped by my parameter interface, I get bored.  if the sound doesn't evolve toward a future attractor where it conveys its character with efficiency and beauty, I get irritable.  playing daily compounds these feelings.  so having a system that I designed from the ground up to be in a perpetual state of iteration makes for exciting investigation.  shitty sound used to just be shitty sound.  now, shitty sound is a daily-updated list of "to-do's".  now that stability is reasonably common and expected, I can add the last synthesis piece…

the last synthesis piece

in March, I began using the new fractal synth I've been rambling about for almost 2 years.  by "fractal" I mean that it is truly fractal:

  • the system logic is an iterative branching protocol consisting of latching key sequences.  this means that pressing one set of keys takes the system into a new mode; from this mode, a different set of keys changes aspects of the current mode, from which yet another mode can be instantiated.  and since programming Pure Data is part of the expression, this branching system can continue to the limits of my computing power or cognitive abilities.
  • I imagine "functions" rather than specific sound outcomes, so I begin with the absolute simplest idea of the sound function, then revise it toward higher states of expression, as I learn more and expect more from the sound.
    (this is the reason for the sound quality regression evident in my latest recordings; everything has been put into its simplest possible state so it can be iterated more easily toward higher states, rather than using too many objects that I didn’t build so as to sound “good”. )
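the branching protocol in the first bullet can be pictured as a tree of modes walked by key sequences; a toy sketch, in which the modes and sequences are invented for illustration:

```python
# toy sketch of the iterative branching protocol: each latched key
# sequence descends into a new mode, from which further sequences branch
# again.  the tree contents are invented for illustration.

MODE_TREE = {
    "root":            {(1,): "synth-edit", (2,): "timbral-routing"},
    "synth-edit":      {(1, 2): "oscillator-detail"},
    "timbral-routing": {(3,): "drum-filter"},
}

def descend(path):
    """walk a list of key sequences down the mode tree."""
    mode = "root"
    for seq in path:
        mode = MODE_TREE[mode][tuple(seq)]
    return mode

descend([[1], [1, 2]])      # root -> synth-edit -> oscillator-detail
```

because each mode holds its own table of key sequences, the tree can keep growing without any sequence ever colliding with another mode's sequences.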

the fractal synth the system is designed around can become other types of sounds, based on how it is routed.  for instance, I can press one button and the sound is routed through a drum filter, so now it is a drum.  but because it is gestural, the "drum" has a ripping/morphing kind of quality that is part function and part discovery-through-interaction.
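that routing idea amounts to one source signal passing through a selectable timbral filter; a trivial sketch, in which the filter functions are stand-ins rather than real DSP:

```python
# trivial sketch of one-synth / many-timbres routing: the same source
# signal becomes a "drum", "string", etc. depending on which filter it
# is routed through.  these filters are stand-ins, not real DSP.

FILTERS = {
    "none":   lambda sig: sig,
    "drum":   lambda sig: [s * 0.5 for s in sig],   # stand-in drum filter
    "string": lambda sig: [s * 0.9 for s in sig],   # stand-in string filter
}

def route(signal, timbre="none"):
    """route the synth's output block through the selected timbral filter."""
    return FILTERS[timbre](signal)

route([1.0, -1.0], "drum")    # -> [0.5, -0.5]
```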

pic 1: the fractal audio module object
pic 2: quad-vector synth with a few of the gestural control modules to the right
pic 3: the quad-vector synth closeup showing the routing
pic 4: the x and y vectors from one hand
pic 5: quad-oscillator selector patch
pic 6: drum filter

the last synthesis piece is the other timbral modalities I've been working on.  what this means is that when I press the "drum" button + another button, the sound now goes through a "string" filter, and/or a "pluck" filter, and/or a "wind turbulence" filter for wind instrument modes.  this is a bit of a holy grail for me now that I am comfortable with the concept of a synthesizer that I must effectively reprogram on the fly.  in this sense, the term "quad-vector gestural synthesis" (which means that each x and y accelerometer vector, per hand, has its own complete synth) is the attractor that is pulling all development toward it.  if the synthesis can't be done with gestures, then it is not gestural synthesis.

once this is working, it will be time to transcribe all the code and prepare it for distribution.  transcription will make it modular and understandable.  this will act to standardize the color synth parameters for the lights and the visualizations as well.

The modular mask

the mask has reached design stability now.  the final issues were:

  • snap together construction for ease of assembly and modification
  • designing a flip down mouthpiece so it could be moved away from the mouth without having to take the mask off.
  • modularity

in April I ordered a printer filament called t-glase from Taulman.  this is PET, the same plastic used to make water bottles.  its properties slot somewhere between ABS plastic and Nylon: it's more rigid than nylon but more flexible than ABS, and it doesn't warp while printing.  but, most importantly, it refracts light beautifully, so I decided that it would be a great starting place for the mask design finalization stage.

the key design innovation for this was the simple idea of "small interlocking pieces".  I took a couple of weeks away from mask design to work on a side project: 3d printed orthotics.  this allowed me to investigate how one might create a design meant to be snapped together by hand and attached to a dynamically moving, shifting body, like the foot (I will show this as soon as it is functional).  it came down to the relationship of its interlocking small pieces.  so I took this new insight back over to the mask and started from scratch, guided by function only, and this was the result.

the design only touches the head at 7 points and is very easy to modify.  the mouthpiece now shifts up, out and down out of the way, and the circuit area isn't mashed against the jaw.  when not in use, it is easily partially disassembled.  the material is flexible and robust enough to withstand repeated assembly/disassembly.  and although the contributor version will not have the goggles, which are form fit to my head size, it was a matter of a 5-minute design tweak to incorporate them.

the design is very, very playable and comfortable for long stretches of time, even in direct sunlight, which was a weak point in previous designs.  having so much plastic against the skin is not comfortable under hot lights or sun, but this design solves that issue.  currently, the facemount electronic components look boxy, but since they are now lego-like in upgradeability, the final versions will be much more streamlined.


beta testing the hardware on people

I have begun printing parts for a beta-build of the exo-voice.  this is a full exo-voice, printed with design revisions, for the purpose of putting on other people's heads and hands to get a feel for what needs to be changed so it can be used by the greatest number of people initially, or easily changed by the pilot to suit their own personal needs or tastes.

being able to take the prosthesis out into the real world, playing on the street, gives me much more confidence in its design.  it's been bouncing around in an un-padded bike bag for weeks now and has been extremely stable, i.e., low incidence of broken parts and faulty wiring.  as I print what will probably eventually be posted online/shipped to contributors, fit, finish and material properties will be defined, and there will be much footage of people of various sizes, ages, races and sexes, to see how well the design translates, so it will be as "print-and-use" as possible.  I am planning to make this aspect of the project more social by taking some aspects of the fulfillment stage around to local hackerspaces and doing it there, rather than in an apartment by myself.  more info on that as I wrap up this stage.  time to go play a bit.


