what would the word cyborg mean if it were invented today? (part 2 of a series): Computer interfacing


weeeeellll…I’m not gone preach lawng ta-daaay (if you didn’t grow up in a southern black church, that is code for “this is going to be epically long”)…wow…what a winter.  I am speechless as to the places the mind can go when unencumbered.  My gracious host here in Detroit, Phreddie, opened his home and his friendship to me and I am amazingly grateful.

there are a whole range of strange posts that have been marinating for months now.  so strange, in fact, that I felt the need to let them simmer a bit.  but even with that, I am coming to terms with the prospect that maybe I am becoming unintelligible.  I mean, as the words fall out of my face, they sound very well chosen and well spaced…not too rushed…be sure to pause between thoughts, etc, etc…but there is still that look that reflects back at me many times…you know…the look where the pupils dance for a nanosecond, as if maybe a quick scan of the face, or maybe the aura, could glean a clue as to what the fuck this guy is talking about…”that” look.  yeah, been getting that look A LOT lately.  not all the time, but with some increasing regularity.  I guess it’s just one of those things one has to deal with when they spend more time in their own mind space than in social space.  this winter, I’ve had whole weeks where I didn’t even step outside, fueled by my diet of rice and spirulina, meditating into the computer screen for 14 or so hours a day.

so I’m gonna unpack all of this slowly over a couple of weeks, rather than blop it all out at once, because it’s going to a strange place that deserves a bit of respect on my part.  this post isn’t about the TED conference.  that will be the next post.


for this post I just wanted to show one of the stages of the exo-voice I have been abstracting over the last few months.  a year ago, just before I left Berlin on my quest to explore this exponential “now” that we are embedded in, I wrote a blog post entitled “what would the word cyborg mean if it were invented today?”  I was making the case that the word would have a completely different meaning, because there are technologies we were not programmed to imagine, such as open source 3d printers and software.  but that question has swum around my consciousness for months now.  not simply the “how” but the “why” of such a question.  open source implies that we are not only sharing information but that we are CREATING information.  ourselves.  so then a cyborg today would create itself, most likely with open source tools, but to what end?  I spent a long time imagining the comic book inspired super powers that would fuel such inquiry.  things such as seeing infrared light and super strength and other such juvenile bullshit (no offense).  but as I pondered the question, here in detroit, in the middle of the frequent snow…things (I don’t know whether they were snow storms or just snow fall or what, but there was a lot of snow), I realized, not too jarringly, that the question had become part of my meditative process, which had become part of my design process.

design and meditation are two sides of a multi-sided coin.  I must “go in” to find the right form, and that form must be instantiable using the materials I have access to now.  then there is functionality.  this is a bit more esoteric.  function is like the parameterization of will.  function is like the hook of a good song; it glues together all the other elements of design.  I realized pretty early that function had its own aesthetic, but function also has its own purpose.  it continually makes me wonder who is creating whom?  especially with all the hyperspace visits I’ve had in the last couple of years.  you begin to realize that we are embedded within something that interacts with us in our dreams.  we float in it, but we cannot perceive it like we perceive light or matter.  I’m not ready to give it a verbal name yet, but that doesn’t mean that I can’t still access it.  or it access “me”, whatever that is.

so, for most of the winter, the question of what the exo-voice is and, more precisely, what it will be has been all consuming.  I frequently state that I am moving away from music as entertainment and toward music as a programming language for the investigation of self.  recently this vectored the design out into strange areas and back again, and it brought back some very interesting and useful functions that will not only enable better questions going forward but make the system much more usable for my super patient crowdfunding campaign contributors on this, our two year anniversary of the campaign that just keeps on going :)

one such function is that of computer interface.

I realized a couple of years ago, when I “should” have shipped the system, that it wasn’t what it needed to be, as evidenced by the non-activity of the 4-5 other people who already have that version of the system.  those systems were fully working, identical copies of my own system…in fact, they were much better, because my personal system is in a permanent state of “beta” whereas the ones I shipped were perfectly functional and beautiful.  but beyond the abstract nature of what it does and how it does it, the “beatjazz controller”, as I called it back then, was only just that…a controller.  an interesting one…a pretty one…but merely a strange controller, and I believe this is part of the reason no one is playing it.  it was an art project rather than a prosthesis.

to be a prosthesis, a device has to either create new functionality or restore lost functionality.  the previous hardware/software did neither.  the exo-voice began stepping in that direction in september 2014 with the neuro-mask interface.  although this wasn’t the first mask with transcranial brain stimulation, it was the first one that actually functioned properly, with high controllability and precision.  this functionality was never meant to be shipped to the contributors, though.  it was a digimorphic question about what this system was actually for.  when you begin screwing with your brainwave patterns, you begin to take these questions seriously…that was the point of doing it: to ask the questions that would not be asked otherwise.

from there, the hand units were tweaked for hand positioning and finger placement, with thumb biomechanics being of the highest importance.  with the thumb in a position close to how it would be when typing on a computer keyboard, a whole new spectrum of computer interfacing possibilities ran up to say “HI!”  because the hand positioning now made the thumb “float” within a circular array of switches, it was easy to perceive its position without having to strain or look at one’s hands.  it was now almost foolproof to know where your hands were in the gesture field and to sequence functions predictably.  the interface was now less a musical interface and more an extremely accurate computer interface.

why is this important?

one reason, I believe, for the dearth of interest in gestural interfaces is that they don’t make very good computer interfaces.  once the novelty wears off, they go in a box somewhere.  I myself was, until only just recently, guilty of really only putting the system on just before a gig, or briefly when working out bugs, but rarely just for fun or for a purpose for which it alone was useful.  in those moments, a computer keyboard was just easier.  so how could I expect that anyone else would be any different?  this led to questions such as “what is computing?” or rather, “is computing limited to existing modalities?” and on and on.  I realized that I needed to ask that question WITH the exo-voice, because if you don’t need it, why would you use it?

so I spent a few weeks designing a gestural keyboard mode.  pure data on linux can talk directly to the operating system, so anything I can do with a computer keyboard, I can do with the exo-voice.  in addition, I can do things with the exo-voice that could never be done with a computer keyboard.  currently that list only includes crashing the whole system instantly, but thus is progress.  I can use the system to type into a text or terminal window just like a normal keyboard, and I discovered the reason this needed to exist as soon as it worked the first time.
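the actual implementation lives in a pure data patch, so it can’t be pasted here as text, but the core idea of the mapping can be sketched.  the following is a minimal python model, assuming (as described above and in the next post’s teaser) an 8-bit fingering pattern plus a breath-pressure gate selecting each keystroke; all names here (`FINGERING_MAP`, `BREATH_THRESHOLD`, `decode_gesture`, `type_phrase`) are hypothetical, not the real patch’s internals:

```python
# Illustrative model of a gestural keyboard mapping.
# One 8-bit finger-switch state (one bit per switch) plus a
# breath-pressure gate selects a character to "type". The real
# exo-voice does this inside Pure Data and injects actual key
# events into the OS; this just models the decision logic.

# assumed example layout: a few 8-bit fingering patterns -> letters
FINGERING_MAP = {
    0b00000001: "a",
    0b00000011: "b",
    0b00000111: "c",
    0b00001111: "d",
}

# normalized breath pressure needed to "commit" a keystroke
BREATH_THRESHOLD = 0.2


def decode_gesture(finger_bits, breath):
    """Return the character for this gesture, or None when the
    breath gate is closed or the pattern is unmapped."""
    if breath < BREATH_THRESHOLD:
        return None
    return FINGERING_MAP.get(finger_bits)


def type_phrase(events):
    """Fold a stream of (finger_bits, breath) events into typed text,
    dropping events that don't pass the breath gate."""
    chars = []
    for bits, breath in events:
        ch = decode_gesture(bits, breath)
        if ch is not None:
            chars.append(ch)
    return "".join(chars)
```

for example, `type_phrase([(0b1, 0.5), (0b11, 0.1), (0b111, 0.9)])` returns `"ac"`: the middle event is below the breath threshold, so no key fires, which is how breath pressure enforces “proper syntax”.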

when the contributors get their systems and the open source makers build theirs, they will have an interface that will train them in gestural mechanics without having to know traditional music first.  when windows first came out, it included games such as solitaire, whose main purpose was to expose the user to the then-new usage of the mouse and the graphical user interface.  the gestural keyboard mode (GK) fills a similar role.  it uses every function that is necessary to create music, to type words.  hand positions and breath pressure must be right to get proper syntax, and it will work with any program, not just pure data, while also making music out of what is typed (still working on that part).  some may choose to never use the boehm “sax” mode because they can make music by gesturally typing words.  in addition, all those who wanted to use it for vjing or other creative functions that would have needed the interface and a keyboard now only need the interface, as strings of key commands can be programmed and used gesturally.
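the “strings of key commands” idea is simple enough to sketch too.  a minimal python model, assuming a registry that binds a named gesture to a whole sequence of key commands (the gesture names and command strings below are invented for illustration, not taken from the actual system):

```python
# Illustrative model of gesture-triggered key-command macros:
# one gesture expands into a whole string of key commands, so a
# program like a VJ tool needs only the gestural interface and
# no physical keyboard. All bindings here are made-up examples.

MACROS = {
    "thumb_tap":    ["ctrl+s"],                # e.g. save
    "thumb_circle": ["ctrl+a", "ctrl+c"],      # e.g. select all, copy
    "double_tap":   ["alt+tab", "enter"],      # e.g. switch window, confirm
}


def expand_gestures(gesture_stream):
    """Expand a sequence of gesture names into the flat list of
    key commands the host program would receive, in order.
    Unbound gestures expand to nothing."""
    commands = []
    for gesture in gesture_stream:
        commands.extend(MACROS.get(gesture, []))
    return commands
```

so a performance could bind one thumb motion to an entire setup sequence, and the decision of *what* each gesture means stays in an editable table rather than in the hardware.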

I wanted to use this mode for my ted talk last week, but it was, and is, still wildly unstable…but it works!  it is one portion of a completely revamped software system that is more modular, lower in cpu usage (currently running on a 10 year old dual core laptop with 1gb of ram) and easier to learn and modify.  the upgraded synthesis is based on a new form I’ve been working on called sonic pixels, which enables a new kind of spatialized sound design that integrates intimately with the brain stimulation system and is based on an 8-bit fingering system.  all of this will be in my next blog post, because it goes all heavy into spirals, mental slavery through standardized equal temperament scales, and coltrane’s frustration with the limitations of the saxophone…yeah…trust, the next one is really heavy. holla…