
Thoughts on my trip to Africa and Sonic Fractal Engineering

 


Last month I went to present at a TEDx conference in Luanda, Angola. I had been wanting to begin exploring the continent, so this was a cool opportunity. Leading up to the conference, I was in an uncharted space mentally. Lots of things were starting to make sense: the realization that maybe it is OK to reclassify the term "beatjazz" as a description of my own personal playing style, and to use the term "sonic fractals" to describe the sonic concept as it exists now. Before the last campaign, I had just begun the process of reorganizing my perception of how sound and music relate, and of interesting ways to manipulate that relationship. That re-tuning of perception has consumed every day of the last 2 months, but I think it is now starting to pay dividends in efficiency of effort.

The 3d printed gun

For the last couple of months I have been thinking hard about the genius simplicity of the 3D printed gun. I don't like the idea that a gun could be the thing that makes having a 3D printer relevant, but this design is simple and elegant: print it... assemble it... put a bullet and a nail in it and fire it. It spoke to the idea that what this technology represents is greater than copying existing forms. It is a new form that asks a question and gives an answer simultaneously.

"Why?" "Because I can."

This point is significant going forward because it is increasing exponentially: each idea that gets iterated with the assistance of technology increases its rate of evolution, in my observation. From this point of view, the current open source design of the system is unnecessarily difficult to assemble and, in the case of the hand units, not structurally sound enough for the electronic components. In the first printed design, I wanted to expose the circuit board as part of the aesthetic of function, but that exposes the circuitry to damage if dropped, although it must be noted that no such damage has happened yet.

The former headset was heavy and gave me "elf ears", and the unfortunate pressure sensor placement behind my head meant it took longer to trigger a note with breath pressure. So I have begun redesigning the system to make it more efficient and 100% printable, not including electronic components.

 

 

In May, I iterated a headset redesign that moved in that direction. Incorporating the idea of a 100% printed system, this new headset, now called a "Mask", no longer has the screw/bolt construction of the previous version. Currently the design is held together using elastic bands (which will soon be replaced with printed elastic I have been experimenting with). The "goggles" do two things: they mask the electrodes that they are mounted on, and they give the "mask" a "face". They are less aggressive than the last ones and serve to complete the design, in my opinion. (I have a whole concept of inhabiting a construct…)

The LEDs are internally mounted into clear printed pieces to refract light better. Most notably, I redesigned the mouthpiece: I placed the pressure sensors right next to the mouth, and now the articulation is miles better than on any digital instrument I have ever played. Triple-tonguing is effortless. I am working on a new version that gets rid of the tubes that attach the pressure sensors to the mouthpiece and simply takes the breath to the sensor thru a printed air pathway, placed as close to the mouth as is functionally reasonable. This adaptation will be integrated into the campaign contributor perks.


The Arduino and the battery are now on the top-back of my head, in a configuration that takes the weight of the design off of the ears and redistributes it along the line between these two points. Stability is increased by incorporating my Mohawk into the design: my hair acts to line everything up properly. I use it as a guide for the robotic tDCS arms that clamp onto the front part of my head, creating a very stable 3-way harness. The arms incorporate 4 servo motors that place the electrodes in optimal positions to stimulate the brain regions I desire right now. (The electrodes are detachable so I can design new electrode placement arms as I begin to understand how it works.) This was all I was able to get done before heading to Africa, but it was enough to get me thinking about what I wanted to say and how this new interface asserts itself into the conversation.

For an interesting comparison, here is the video from a year and a half ago describing the idea:

 

Africa

Angola is mostly straight down for 7 hours from Lisbon, Portugal: below the equator but in the same time zone (as Lisbon, not Berlin). The days were muuuuuch shorter than here, and the sun went across the sky in a different direction, which you don't think you would notice, until you do. We were pinned to the plane window trying to see this completely new (to me) topography. Mountains and deserts and all that. The Sahara was pretty impressive: 2 hours of sand with burnt spots in it. Burnt sand from 32,000 feet says something. But it ended abruptly, because we were instructed to close the windows to simulate night time (!?) in the middle of the afternoon! But I digress, because I was going to Africa.

I just assume that most of what I think I know about a place I haven't been is bullshit, so I was very excited to go to this place and see for myself. I didn't know whether the "Ashanti" thing was going to be offensive or not, either. I had kind of appropriated something that could be seen as artistic or cultural, or as not cool, but it was nice that the former was the mindset I encountered exclusively. One of the TEDx guys said to me, upon arrival at the venue they were setting up for the conference, "Welcome back", which set a really nice vibe for the entire visit.

When we arrived at the venue of the conference, the ENAD, they were setting up and Januario Jano, one of the organizers, was having a birthday thing: cake and chilling. I was going to do a sound check, and the new headset, which I had only gotten working 3 days before, started smoking because I had used the wrong wire for the batteries and the servos. A bit of a dramatic entrance, but luckily, nothing was permanently damaged.

The conference itself was really interesting, although mostly in Portuguese. Luckily for me, an Angolan woman visiting from New York sat with me and translated the entire conference! Most of the talks were from a southern hemisphere perspective, and it was very, very interesting to hear stories and ideas generated in this part of the world, for this part of the world, based on the "situation on the ground" of the people living there. For instance, to discover that an Angolan child who doesn't sing is a cause for high concern. That blew my mind! And the woman who talked about it, Sonia Ferreira, was especially inspiring. She had saved 60 children during the war there a little over a decade ago and took care of them into adulthood. Just one of the many great talks!!

 

 

So I decided, "fuck it!", I'm gonna drop the whole thing: fractal logic and self-instantiated pattern engineering to program oneself... all of it. All the weird shit that I have been obsessing over for the last few months, and see if it connects. The talk was 35 minutes, which is the cardinal sin of TED talks. I sat on the stage for half of the talk. I don't know why, but it felt like the right thing to do at the time. I did the history thing, took it into the idea thing, then played, and went into a depolarizing-neurons thing that was unexpected, but I was flowing and it didn't turn into bullshit. (I have the whole thing on video, but I need to see if it's cool with TED or if they are going to use it.) The part describing the electrodes kinda caught them off guard, and I had to explain that the only reason I wasn't going to use them was that the smoke incident the previous day had me afraid I'd fry something useful :P I was told later that my sound and movements were very reminiscent of a local style called Kuduro. I was shown, and that shit is INSANE!

We spent a lot of time discovering Luanda over the following 3 days. Street markets and restaurants and spontaneous dancing on the beach. From the beach there was a massive constellation of freight ships and oil tankers. It felt like everyone in the city was out walking at the same time. Droves of people. And police with machine guns. Yet the vibe was very chilled: people sitting and talking and laughing. This was the picture I left Luanda with, and I hope to go back and build a fuller image soon ;)


 

100% printed exo-voice

After a long layover in Lisbon, I got back to Berlin to integrate some of what I absorbed in Angola. For instance, one local guy told me that it is very hard to get stuff shipped there, for reasons I haven't looked into deeply, and the stuff that is there is expensive. That made me rethink my idea for workshops there. If I want to teach the building of RepRaps, for instance, it's not going to do much good if no one can order the servo motors necessary to build one, let alone the Arduino-based boards. Maybe a RepRap designed with common car parts is a better viral idea construct, because there are LOADS of cars there.

 

By the same token, and going back to the 3D printed gun revelation, a 3D printed exo-voice that depends on an array of tiny, specific, non-printed parts (nuts, bolts, screws, tubes, padding, etc.) is not going to propagate well. The design has to be as "ready to use" as is possible with current technological means. It's not an easy instrument to play, but it should be dead simple to build. Like, 10-year-olds building them for a music class in 20 minutes (not including print time).

 

So version 2 is designed to be printed and snapped together, with an adjustable hand brace. The hand units and parts will have the cool hexagon pattern inside to refract the light in a MUCH more interesting manner than simply glowing. The new headset incorporates the mask concept in that it attaches to the face with minimal bracing on the head, making it lighter, smaller, and easily repairable. I will have the first iteration of the design in the coming days, but the Africa trip, combined with the 3D printed gun, made me amend "how" and "why" to account for the new insight these two things represent.

Harmonic “shapes”

The software has taken a large expressive leap as well. For years, I have been trying to play chords. A sax only plays one note at a time. A wind controller can "sustain" multiple notes played sequentially, but cannot play chord changes (2 or more notes played simultaneously, to infer a note-to-note interval relationship), only pre-programmed intervals. But the exo-voice has an accelerometer on each hand that sends readings based on where that hand is in x/y space (basically full tilt left/right [x-axis] and front/back [y-axis]). So I took the chromatic scale and divided it into 4 groups of 3 notes (1-2-3, 4-5-6, 7-8-9, 10-11-12), and each group gets assigned to an x or y accelerometer axis. By moving my hands to a particular position in space, any known 12-tone-based chord can be created. I am currently transcribing existing chord progressions to hand position sequences, or gestural harmonic "shapes" (the co-dependent hand positioning). More soon, but you can hear the interval relationships in the video below; there is also a small sketch of the mapping right after this paragraph.
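To make the mapping concrete, here is a minimal Python sketch of the idea; the tilt ranges, group boundaries, and MIDI note numbers are illustrative assumptions, not the actual Pure Data implementation:

    # Four accelerometer axes (left/right hand, x/y); each tilt picks one of
    # the 3 chromatic notes in its group, so the hands together form a chord.

    def note_from_tilt(tilt, group):
        """tilt in -1.0..1.0 selects one of the 3 notes in this group."""
        index = min(2, int((tilt + 1.0) / 2.0 * 3))  # map -1..1 to 0, 1 or 2
        return group[index]

    # Chromatic scale split into 4 groups of 3 (MIDI notes from middle C).
    GROUPS = [[60, 61, 62], [63, 64, 65], [66, 67, 68], [69, 70, 71]]

    def chord_from_hands(lx, ly, rx, ry):
        """One axis per group: left x/y and right x/y tilts make a 4-note chord."""
        return [note_from_tilt(t, g) for t, g in zip((lx, ly, rx, ry), GROUPS)]

    # One "shape": left hand tilted hard left and forward, right hand level.
    print(chord_from_hands(-1.0, 1.0, 0.0, 0.0))  # [60, 65, 67, 70]

A chord progression then becomes a sequence of these co-dependent hand positions: each "shape" is just a 4-tuple of tilt values.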

Pseudo-binaural panning using gestures

Most recently, I decided to make the entire signal path stereo, and something interesting happened as a result. Before, the synths were mono and only the transform channel was stereo, and only in the most basic sense: both left and right channels were controlled by each hand, but there was no real panning; stereo positioning was haphazard. Recently, though, I updated each loop buffer to stereo, then created a gestural panner. The left hand can position the left side anywhere between left and right, as well as control its volume; same for the right hand. The surprise was that now the sound can be positioned pseudo-binaurally!! Meaning that instead of a sound sounding like it's coming from the left or the right, it can sound like it's coming from "over there", i.e. a point not between your ears. I say "pseudo" because I don't understand the math behind binaural positioning, other than what I have read, but I have heard many plugins that produce the effect and this is very close. It is very intuitive: simply move the hands together in a direction and the sound goes to that point. Yaaaaay!!!! Binaural positioning and harmonic shapes all in the same week! I got a chance to test both at a local club, Loophole, this last weekend. (A minimal sketch of the panning idea follows.)
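For the curious, here is a tiny Python sketch of what a gestural equal-power panner could look like; the (pan, volume) pairs standing in for the hand readings, and their ranges, are assumptions (the real patch lives in Pure Data):

    import math

    def equal_power_pan(sample, pan):
        """Spread one sample across L/R with roughly constant loudness."""
        theta = (pan + 1.0) / 2.0 * math.pi / 2.0  # -1..1 -> 0..pi/2
        return sample * math.cos(theta), sample * math.sin(theta)

    def gestural_pan(left_sample, right_sample, left_hand, right_hand):
        """Each hand is a (pan, volume) pair controlling its side of the loop."""
        l_pan, l_vol = left_hand
        r_pan, r_vol = right_hand
        ll, lr = equal_power_pan(left_sample * l_vol, l_pan)
        rl, rr = equal_power_pan(right_sample * r_vol, r_pan)
        return ll + rl, lr + rr  # mixed stereo output

    # Both hands swept toward the right: the whole sound drifts "over there".
    print(gestural_pan(1.0, 1.0, (0.8, 1.0), (0.8, 1.0)))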

Coming Soon!

I am aware that I have campaign contributions to fulfill, so I am working on these iterations pretty intensely, but I think that the refinements in code, playability, and ergonomics will be worth the slightly extended wait (and there is another interesting development, but I want to show it rather than describe it). By "wait" I mean days to weeks, not months, as I plan to go and tour these iterations at festivals and busking from July till the end of summer, so everyone will have their stuff before this next adventure commences. I will post a more in-depth demo of the new harmonic logic once I add its graphical representation to the visualization. But, again, I have 2 new related aspects of the above changes that describing with words won't do justice, so I will be posting about them very, very soon ;) l8r.


Exploring the rabbit hole: The Better Question, or Why so long since the last post (part 1).

First, let me say that I apologize for not posting for such a long time. I actually like this form of expression: I can formulate a statement that is either completely separate from the other expression vectors or glues them all together into something vaguely singular. This post will be an attempt to explain, with the requisite overly wordy descriptions that I feel comfortable projecting, exactly why I have not blogged in so long. I will attempt to sum up the totality of the last 2 months in 5 posts. Any less and it would be like reading a small book.

The Crowdfunding campaign

Another apology goes here. I didn't invest my complete self or mind into the campaign as I should have, but more on that in a sec. I felt that there were two big issues with the campaign:

  • 1. It was just way too much, too soon. I gambled on putting a full process trajectory out in the public sphere to see what would happen, and besides the video not saying much (although it was the best looking video I have had), this should have been at least 10 separate campaigns, one after another, rather than one big conceptual behemoth.

  • 2. It stopped being about "me" and became about "you". This one I should have known better. I HATE projects that assume I am going to do things the way the creator wants because they have a picture of an "end user" in mind. It tipped dangerously close to a market focus more than an idea focus. I won't make that mistake again. This is my system for expressing myself, which I will "share" as fully as possible, meaning that, first and foremost:

    - if I can express myself ever more fully with it, then I am happy
    - I will teach myself what I feel will contribute to my vision
    - I will share what I teach myself
    - if no one else plays it, I can at least be happy that I can play it, based on my own criteria.

The campaign was more an expression of the process that has been going on in my head for the last few months. In future campaigns, I will be sure to segregate the concepts into their own modules. And once I ferry all my current contributors to happy land (I've still got a surprise for you, Leo), I am investigating creating a page that will simply be a place to contribute to individual modules and receive perks, without the need or expense of a campaign site like Kickstarter or Indiegogo. Let me know what you think about this idea.

The Better Question

The better question isn't a "question" in that way, but a mode of inquiry. Basically, I am programming myself, openly and tangibly: finding existing patterns, gauging their usefulness, and either updating them or deleting them by keeping them in the front of my perception and referencing the new pattern until the old one is no longer referenced. To do this, I have a basic, updating set of rules, so my program doesn't become an influence vector for detrimental processes:

  • focus on self-guided evolutionary growth
  • do not monetize the "core" expression
  • learn how every aspect of my construct works (computer, RAM, wire, capacitors, servos, etc.) and reverse engineer it for the sake of the first rule
  • share the idea in a way that allows continued growth
  • each expression vector should be able to be plugged into any other vector and influence it in interesting ways
  • *entertain myself*


This is part of an evolving logic system that governs everything now. It's not about "music". It's about expression. Music is "an" expression now, not "the" expression. My current expression vectors are:

  • sound
  • programming (mostly Pure Data, but also learning JavaScript)
  • motion (sometimes dance-like, but mostly parameterized to influence a particular sonic syntax)
  • visualizations
  • CAD (computer-aided design) and 3D printing

  • robotics
  • tDCS (transcranial direct current stimulation: stimulating my brain while expressing through the system, effectively playing myself in a feedback loop)


Each vector feeds the others: if I change a bit of code, I may need to change the design of, say, the hand units to accommodate the new function. This cross-pollination of idea vectors has had the unintended side effect of what I will call "idea fever": an obsessive investigation of the mutation possibilities, exacerbated by the ability to get immediate results. If I can design it on paper, I can probably 3D print it and have it working within 2-3 days, max. Now I rarely leave home, because each vector is like a child that needs constant attention. I am designing a protocol for getting out of the crib (yes, I need a protocol) and "expressing". I won't go so far as to say "performing", because some aspects can't be performed in a classic sense, but they can be expressed. I can't explain more than that because I don't know what it means yet; hence the protocol: a function or set of functions that will lightly govern all of these vectors in a way that is fun and sustainable.

One such protocol is my "ghost protocol". Jan Freheit and Max Krueger have been graciously helping me create a new form of data presentation/performance using web servers. I will describe it more when it works, but it has been an all-encompassing aspect of the idea fever that has kept me from blogging for the last couple of months.

The idea fever is more intense than in the past because each expression vector increases in resolution the more I use it, and when I tire of one as a focus, I can instantly shift to another without feeling like I am being lazy; so far there are 8, with at least 5 more on the horizon. The more things that emerge from the "other" place in one's head into the "see/hear/touch/feel (different from touch, I am discovering) space", the more the idea mutates, and this process is astoundingly interesting. Having ingested various psychedelic substances over the years, this "new normal" state has all the best characteristics of a great psychedelic experience, without the initial compound to get the ball rolling.

The search for answers is uninteresting because it implies a finality of inquiry. The greater quest is now for the better question. For instance, while in an idea fever, I tend not to eat much (it slows me down too much). So the "answer" is to eat, obviously. The better question is: why eat? Can I create a means of sustaining or even improving my health and cognitive functions, based on ingesting/eating something I design using my above expression vectors? Or would this trajectory require me to create another vector just for this purpose? The point now is to investigate the question! I will eat food until then, of course, because I will check this question against my rule set above. Get it?! Questions produce results that may be answers, or just questions that needed the first question as a starting point. The exo-voice was an answer/question that the beatjazz controller inquiry produced. The original question did not result in an exo-voice, only in the function of a wind MIDI controller split into 3 parts.

So this is the rabbit hole I've been in for the last couple of months. Maybe it's more information than you need, but I felt it was necessary to convey that there is an evolving process happening, and in the name of sharing, I present it here for you. This week I am working on contributor perks (thankfully I don't have to invent anything to fulfill them this time), updating all my online stuff, which no longer represents the new normal… the new reality of perception, and updating you on my travels with pics and video of all the shows/presentations and trips. After which I will post the build instructions for the v1, since I haven't actually done that yet. But at least you will have an idea of why my descriptions of these events are so trippy. Holla…


How Beatjazz works

So, I finally got around to creating the first of a number of videos on the beatjazz system. These two videos are a synopsis of all the hardware and software components and a demonstration of "transforming".

 


No, I haven’t lost my mind…I’m designing a new one!

 

 

So here it is. My next project phase. A Sonic Fractal Matrix. TA DAAAAA!!!! A network of wirelessly connected nodes, some of which are robotic, to enable any number of beatjazz-system-enabled (musician/dancer/performer/scientist)s to synchronize and become a… thing. A shimmering sonic pattern. Yaaaaaay!! I've completely lost my mind! But along the way, I've picked up a new one…

I have been alluding to this shift in mind state over the last few months, but there are no words for how much of a shift all of this has represented to me as a person and as an artist. WE LIVE IN THE FUTURE! It's like a fantasy, like the dude that created Pinocchio (no time to Google it, I'm flowing…). It is possible to go from idea, in your head, to "thing" on your desk or body! In one to three steps! I can not iterate that enough (no pun intended)… you think something… then you make it exactly how you see it in your head! #mindblown

I honestly thought I was freaking out a couple of months ago. The possibilities were too numerous. I got heavily into studying fractals. I reduced the number of "possibilities" down to a set of "probabilities", and it was still too much. The saving concept, which I have mentioned numerous times, is the "function aesthetic", meaning that "function", executed properly, has its own aesthetic. So by focusing on function, an aesthetic emerges, and it tells its own, very honest, story. That story, abstract though it may be, calms me and directs my attention toward an ever clearer field of probabilities.

 

You see, you don’t go from horn sounds to “transforming”, in one smooth curve, but in a loosely related surges.  When I began the first campaign, I just wanted to play without having to hold my hands together on a horn, or what I had begun referring to as a “musical neck brace”.  I had no frame of reference for what it is has become just as the next stage is completely murky.  I have no imagination construct to envision the stage that I propose.  I can only design and construct the idea of the sonic fractal matrix concept.  but as with the exo-voice (what I call my beatjazz controller now),  I can see very clearly how to make it work.  designing the logic and hierarchical systems will be reasonably easy (I like logic systems…go figure…).  I will have to learn a lot more about networking over the coming months, but that’s the point;  to learn and create and share.

-----

As I begin my 3rd campaign, I think about what the last 2 have taught me. Amanda Palmer hit the nail on the head in her alignment of crowdfunding with busking. Maybe that was why it felt so comfortable as a concept to me: I share my expression, and if you dig it, you help me keep sharing it by contributing. But to go one step further, I would say that crowdfunding has become "part" of this widening artistic expression, enabled by the internet.

Not long ago, performance was a "thing" with rules and stages, promoting was something "else", and life was another thing that was kept separate. But as I started preparing for this campaign, I began to look at the narrative potential of every single element: how the video looks… how to present these two "Onyx"s, the one that creates and the one that performs… what is the natural progression from the 3D printed exo-voice campaign? All of these things became elements of a new (for me) type of performance that is social media, 3D printing, robots, network system design, film, and crowdfunding: a multi-dimensional performance across time and space.

A crowdfunding campaign feels like it should be gripping somehow. Like between the time it begins and the time it ends, something interesting should emerge from the intensity of presenting a new concept, daily, for 30+ days, and then from trying to shoot past that stated goal as the project moves into "creation phase". When I began the first campaign, I envisioned DIY with lots of wire and plumbing pipe, and in the second one it was carbon fiber molds and synthesis. In both instances, neither turned out the way I intended… they came out better, and the whole idea evolved in wonderfully unexpected ways. I think this time will be even more interesting because the goals have evolved.

-------

In the previous campaigns, I created beatjazz controllers as perks for contributors. They all work, because I played each one of them extensively before delivery. BUT I underestimated the differences in the computer systems they would be connected to. The controller was created for my needs and optimized for my computer. So for me, it's good and getting better, but for the ones that made it into the hands of others, the mileage varied wildly based on their existing hardware. So, going into this phase of the project, I wanted to make the exo-voice playable "out of the box". It occurred to me that the only way to do this was to ship it with its own computer. My experiments with using the Raspberry Pi as that computer have been very encouraging, but I want to incorporate a second computing board just for audio processing. The point being that I want these systems to be playable instantly.

Now, this doesn't mean that the new "pilot" (cause it's like flying thru sound… geddit?!) will be able to play it "out of the box". First off, it's a very logical but unconventional musical input construct, and I have been playing for 30 years and still have to practice daily. Practice is the key. Practicing with another person makes it even easier, so I figured that since I'm building the systems, I can make a simple way for 2 or more beatjazz systems to form a network, with a master tempo coming from the first person on the network. Then two or more people can play together and build a construct made of sound that just hangs in the air. I call it a pattern. I cribbed the whole pattern/pattern master thing from an Octavia Butler book series called the "Patternist" series, about a race of telepaths who are all connected together into a web called a pattern, which is controlled by a Patternmaster. In my implementation, the role of pattern master is that of providing a master tempo to everyone in the pattern, and also the ability to transform the audio of every "node" in a manner similar to this (fast-forward to 3:46):

The best part is that once the pattern master transforms the audio and creates the new tempo, the role is passed on to the next node in the network list; every one is important. The pattern will have a wide array of different interpretations because there is no single "leader"; everyone is the leader.

The network is made up of nodes, and the nodes I am creating now are "baby nodes": primordial nodes (like "boogie") in a network that will evolve very quickly. My first node is robotic because there is something incredibly exciting about creating a bio-mimetic robot avatar: when I move, it moves. By watching it move, I can refine my own movements. It is me. There are no autonomous functions where it does things on its own. This avatar will be a robot expression vector (it also keeps the wireless router as close to the exo-voice as possible). I find this extremely exciting. Where does this lead? I have no idea. I am the function.

  

------

So, think about it… weird new instrument. You buy it and can hack it by design. But then you can also go "sync" with others who have the system (actually, it would only need the wireless router and the node patches in PD; it could theoretically run on a smartphone). Maybe that would mean beautiful music. Maybe horrible noise. Maybe visuals. My vision is one where any space in which one "patternist" creates is a sonic fractal matrix consisting of one node, and any number of patternists can join them and create an evolving pattern of increasing self-organizing complexity.

THIS is the performance now; “creating” this sonic fractal matrix is as much part of the expression as playing it.  The idea is exciting to me, because I know that I can build it, even though I have never built it before.  24/7 goosebumps!  And since I am creating it for my purposes, even if no one gets into the idea, “I” can still play it!  This campaign is structured so that I can fulfill some aspect of the concept no matter how well the campaign does.  it will be another 1/2 generation before this takes hold, so it’s a long-haul iterative project.  In other words, it’s gonna happen no matter what. 

So, come with me on this ride.  I promise that I am not even slightly exaggerating when I say that if you let me, I will blow your mind.  Let’s see where this goes…


How a Sonic Fractal Matrix works

Soooo… here is the logic and the methodology for how beatjazz evolves into a networked sonic construction system.

This is a person… kinda. It's a stickman, but it is a representation of a person, by a person who sucks at illustration.

A person can generate expression in many ways: talking, writing, jumping, walking, surfing, singing, etc.

The original beatjazz "controller" was created to provide a buffer through which expression could be channeled to become something more.

 

The beatjazz controller idea became the origin point for what is now called the "Exo-Voice": a refinement of the beatjazz controller into a fully integrated "sonic prosthesis", complete with its own software.

 

For the sake of continuity we shall present it like this:

 

 

 

[diagram: interface controls]

There are prosthetic attachments for each hand, one for the head (dealing with breath pressure in and out, monitoring, lip pressure, and head position), and one for each foot.

The person, now wearing the full prosthesis, is the "pilot" of a metaphorical sonic-spaceship construct. This construct is a sonic fractal matrix, level:0, the singular level.

 

The matrix is composed of 8 elements relating to the 8 "keys" used to input performance data (i.e., playing). Each element is a different sound. In addition to a sound component, each element also has:

 

 

  • A looping buffer that records played sound from that element and loops it according to the system tempo.
  • An edit mode that allows for sound design while in the midst of playing; this methodology is called Gestural Trajectory Synthesis.
  • The ability to add FX, mute, change volume, and pan the sound. (A rough sketch of this element structure follows the list.)
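As a rough sketch (in Python, with field names that are my own assumptions rather than the actual patch structure), one element could be modeled like this:

    from dataclasses import dataclass, field

    # One element: a sound source plus its loop buffer and mix state.
    @dataclass
    class Element:
        name: str
        loop: list = field(default_factory=list)  # recorded audio, replayed at the system tempo
        muted: bool = False
        volume: float = 1.0
        pan: float = 0.0  # -1 = hard left .. +1 = hard right

    # 8 elements map to the 8 performance keys; the 8th is the transform element.
    elements = [Element(f"element {i + 1}") for i in range(8)]
    transform = elements[7]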

 

The 8th element is called the "transform" element. It twists all the other elements into a singular, trippy, vortex-y sound with no limitations in tempo or pitch. When a loop is made within this element, all other elements sync to its new tempo. It sounds like this:

 

 

 

All together, this system is capable of sound concepts I am only just beginning to explore.

Along the way I accidentally discovered an interesting solution to a few problems: parameter feedback, laptop interaction, and wireless latency.

A couple of months ago I fixed most of my latency issues by switching from XBee wireless to Wi-Fi. The increase in bi-directional data speed allowed me to use the LEDs for more than just element color indication: they were fast enough for parameter feedback! So I took the most important data (mute mode, record mode, panning, and volume) and used the LEDs to provide that information, and it worked beautifully. So much so that I no longer needed to interact with the "laptop" computer at all. But I found that being "near" the wireless router helped keep the latency (the time it takes for the computer to make something happen after you push a button) down. And the laptop takes me out of the "altered state" that wearing this system drops me into.

Around this time, I built a small robot and discovered that robots are amazing for parameter feedback. It was much more engaging and intimate than projections, because it was a physical, moving "thing". I began using it for parameter feedback, and it became my first robot avatar.

It eventually occurred to me that giving the node legs would solve a number of issues, like having to set up computers at a show and having everything on a table or in a booth somewhere, rather than having an avatar right next to you, linked wirelessly. Also, I have finally begun working on a foot interface to complement the hand interface, and I will use the robot legs to provide sensor information from it. So I designed a small, simple system: a wireless router and a couple of inexpensive computer boards that handle synthesis and everything else, in a form factor that feels something like "intimate". This is Node:0.

[diagram: robot avatar v1]

Node:0-Level:0 is a small wirelessly controlled hexapod robot that houses everything necessary to convey expression and  provides parameter feedback concerning the system.  the node:0-Level:1 is a larger robot that uses the level:0 avatar as its brain.  this larger bot has a full speaker system as well as various digital projection and video recording capabilities. 

sonic fractal matrix-level:1 is a network of nodes.

[diagram: sonic fractal matrix network configuration]

As soon as 2 or more pilots (called patternists when networked) join the same network, a hierarchy is established. The first patternist is node:0, the pattern master. The second node is node:1, then node:2, etc.

The pattern master establishes the tempo that everyone is synced to. All patternists are fully audible, all the time. When the pattern master switches to their transform element, they can transform the transform element of every node on the network as much or as little, or for as long, as they want; but when they set the new loop length after their transform "solo", they become node:1, node:1 becomes node:0 and thus pattern master, and the process repeats, cycling through every node in the network.
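A minimal Python sketch of that rotation, assuming the simplest reading (everyone shifts up one slot and the outgoing master rejoins the end of the queue):

    from collections import deque

    class Pattern:
        def __init__(self, node_names, tempo=120.0):
            self.nodes = deque(node_names)  # nodes[0] is the pattern master
            self.tempo = tempo              # everyone syncs to this

        @property
        def pattern_master(self):
            return self.nodes[0]

        def end_transform_solo(self, new_tempo):
            """The master sets the new loop tempo, then hands the role onward."""
            self.tempo = new_tempo
            self.nodes.rotate(-1)  # next node leads; old master rejoins the queue

    pattern = Pattern(["node:0", "node:1", "node:2"])
    pattern.end_transform_solo(new_tempo=126.0)
    print(pattern.pattern_master)  # "node:1" now leads the pattern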

I envision 3 rooms: one for beginners to build things for the first time, and a second room for intermediate pilots to "sync" or "mesh" with each other and trade ideas and even parts and techniques. The main room is where there is at least one patternist who can guide the pattern more fully. These would be considered "pros", each of whom is capable of expressing full patterns when solo. The goal is to create a type of social construct where there is no leader, no band, no DJ; just a network of people who create the sound-space improvisationally. A space where one can learn about it, learn how to do it, and actually do it, all in the same place at the same time. And where the collective will is expressed, because every person becomes pattern master eventually; and if the network is too large for each patternist to become pattern master, it can be broken up into smaller patterns.

The most exciting part about this project is that I can do, or already have done, everything I described above, so at this point I just want to see it happen. The exhilaration of hearing a room full of loosely related grooves being twisted together into a vortex of sound boggles my mind. Let's do this thing!!!


Coming to America and The New Normal

It has been a while since my last post. This wasn't a lapse; I needed to order my thoughts fully to express stuff properly.

The New Normal (?)

this "thing" or state or project or whatever it is, is changing, and it is changing me with it. (“the new normal” is a reference from a kevin kelly article I read) its is a very interesting feeling to feel as if you are being mentally manipulated by your own expression, in realtime.  manipulated may be the wrong term;  more like "programmed by your own will".  this didnt just start.  i "could" say that it began when i started playing around with the idea of live-looping with synthesizers back in 2007. vlcsnap-2013-01-16-16h31m41s195 but that was an origin point, like birth.  this is something markedly different.  it incorporates all of the previous experiences, and i do mean, ALL of them; from insignificant habits as an 8 year old and everything else, all the way up to a generalized point in October of 2012, just after a couple of shows in London where i was profoundly not happy with how i sounded.  the following week, just before a couple of shows in berlin, i changed a couple of key parameter trajectories that resulted in a sound and concept so alien to me, yet so obvious as the next step that it completely wiped my "imagination slate" clean.  And from that point, mutations, which before this fateful week were mostly hypothetical conjecture, became as clear as map points on google maps.

The big change? Simply, gesturally controlled granular time-stretching. The same time-stretching that has been a staple of audio recording and performance software almost since the beginning of software synthesis as a democratic musical concept. It is a technique where the "play head" loops the audio right underneath it while also moving at normal speed, so it sounds like normally playing audio. But if you slow the play head down, the little loop is still playing at normal speed, so it sounds like the audio has slowed down without changing pitch. In addition, the little loop itself can be slowed down and even reversed, so a whole range of manipulations can be had from this simple technique. Mine isn't even that complex. It is based on the one that comes in the help files for Pure Data, with a couple of gestural tweaks that allow me to control these variables with my hand position.
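Here is a toy Python sketch of the technique: a small grain loops at normal speed while its start position crawls through the buffer at a separate, gesture-controlled rate. The grain size and rate are illustrative assumptions, and a real implementation overlaps windowed grains to avoid clicks:

    def timestretch(buffer, grain_size=1024, position_rate=0.5):
        """Yield samples; position_rate < 1 slows playback without changing pitch."""
        position = 0.0
        while 0 <= position and position + grain_size < len(buffer):
            start = int(position)
            for i in range(grain_size):             # the grain itself plays at
                yield buffer[start + i]             # normal speed, keeping pitch
            position += grain_size * position_rate  # the head can crawl or reverse

    samples = [i % 100 for i in range(48000)]       # stand-in for recorded audio
    stretched = list(timestretch(samples, position_rate=0.5))  # roughly 2x longer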

My system works by having 8 "elements", or sound sources. I can use finger pressure to select one of these elements as the focused one, then play it into a looping audio buffer; each element has its own, and they are all time-synchronized with each other. I simply move from element to element, building my arrangement. I can mute and unmute any or all of them any time I want. In September of 2012 I decided to add a "resampling" buffer to element 8 that would take the entire mix of all the other elements as a single audio file and allow me to manipulate it. Sonically, this was very interesting because now I could, for instance, play chords as a two-step process: play a basic chord, then resample it and play new interval relationships with it.

But (and there is always a but) I had a problem I had had since the beginning of this idea of continuous live looping: I could not change the tempo. I could do all of these changes, but if I needed to change the tempo or loop length, I needed to start over from scratch. The monotony of the predictability of the loop length became incredibly taxing on me, creatively. Like performing to a metronome, which I have done in the past and was happy to stop doing.

So after said gig in London, I was frustrated enough with this issue to go into one of the many trance-like programming states that seem to be more and more common. By the end of the third week of October 2012, I had added the aforementioned time-stretch algorithm in hopes of being able to change the tempo. I did not expect the sound that came out of that hypothesis. The sound could not only be stretched, it could be scratched and ripped apart, IN STEREO. Yes, I can scratch the left side with the left hand and the right side with the right hand, at the same time, like some kind of scratch-drummer. THAT was the point: what was this sound? Why was this sound so alien yet so obvious? I had completely baffled myself. So few parameters, yet so much scope for narrative and syntax. It was then, like being in a video game where you can only progress to the next level after you have solved all the puzzles on this one, that the "transform" element showed me the next stage: better, more timbrally complex elements to feed into the transformer. With the transform element, there were no more mistakes. Mistakes become timbrally interesting building blocks that sound almost sweet when digested in this new matrix. And the tempo is completely untethered from the previous musical ideas. The transform process resets the system loop length for all elements, so it can be a brief stutter or long and flowing.

I can no longer predict what I am going to play in a performance. The beginning of a "thought" no longer has any bearing on how it will end or how it will sound after its first transform. This confused me as an aesthetic concept; I didn't know how to contextualize this sound concept musically, so I started looking to mathematics for answers, which was easier than I thought it would be. Maybe by this point I was mentally ready for a more mathematical answer than an aesthetic one. The one that impressed itself on my psyche most readily was fractals: a basic construction algorithm of nature based on self-similarity, where phenomena are expressed, then divide and split and repeat, in simplest terms (I am still studying this, so don't shoot me for my oversimplification). I began to look at the system not as a "looping" system (looping is an incidental aspect of the way the sonic ideas are expressed) but as an expression evolution engine. The expression vectors must evolve along a different type of route after being completely and unpredictably transformed in every way. This is not restricted to sound. In late December I cleared a data bottleneck by ridding my system of the serial communications bus, which was slowing down everything I wanted to do, and replaced it with much higher bandwidth 802.11g, or Wi-Fi. This has enabled highly controllable expression vectors, which now consist of light (color, tempo, intensity, etc.) and robotics, for which Pure Data is extremely well suited. I don't understand why the linkage between Pure Data and robotics isn't spoken about more, but I digress.

So yes, this is as much of a head fuck as it sounds like it is. I have begun designing gestural fractal elements (sounds of varying timbral complexity that will translate a certain way) to feed this process, and also started redesigning the transform matrix for more dramatic types of transformations. And all of this has changed me. I don't watch movies anymore; they distract me from this crazy algorithm that is trying to parse itself in my head. The only things I can really listen to are post-bebop/pre-70s jazz from artists like Sun Ra, Archie Shepp, Ornette Coleman, and Eric Dolphy, none of whom I knew about 2 years ago, and when I would hear their music randomly, I recoiled from it as being too "self-indulgent" (yeah, pots talking shit about kettles… I know). Now I can't get enough of it. And every day, all day, is streams of the strangest series of questions about everything from stop signs to wall paint composition, to the geographical locations of salt mines, and what the first humans did before language or tools, and 1000 other questions. It was driving me a little mad, but now it's more comfortable. It is the new normal.

This was the last 2-3 months, much of which was spent perched on my crappy office chair, rambling to either myself or my neighbor, but I knew that I was going to be in the US for the first time since 2009. Technically, I had been in the US twice in 2011, once for the TED thing and once for the Maker Faire in NYC, but those were both in NYC and I didn't see any family, nor did I really have a defined concept as complete as this "mental algorithm" or "pattern". I was acutely aware, as I am writing this blog post, that maybe they, the people in Savannah, Georgia and Memphis, Tennessee, would think I was completely nuts. Maybe I had sat in my flat a month or two too long, exploring my own world in too much detail. I knew those were possibilities, but the algorithm felt so right... so complete, that I was more curious than apprehensive (airport TSA shenanigans notwithstanding).

Coming to America

So I arrived in Savannah, Georgia to the completely surreal experience of seeing my own picture on the cover of the local weekly zine. Whoa! I've never been cover material, or so I told myself. That, and it was HOT, as in weather. So I took advantage of the opportunity to revel in some much-missed sweetened iced tea and a whole list of unhealthy former addictive food-stuffs, the kind that usually populate the shelves near the counter at gas stations. All that is beside the point, though... I was in Savannah to do a series of presentations and performances for the Telfair Museums' Pulse festival. There was a whole digital arts exhibit with digital video art, video gaming art, and more. It was beautifully put together.

There were two days of full-auditorium presentations for my project. I was happy with the diversity of the people in attendance, especially the age range; there seemed to be just as many older people as young ones. So I just decided to drop it how I see it now, without any consideration that I might sound kooky, and surprisingly to me, everyone got it! I just described the esoterics and how they tied into the mechanics of the system and idea, and everyone, the older folks and the kids as well, got it. I could tell because some of the questions were very specific. One kid, about 8 years old, asked if I had thought about putting the sensors in my body, and I explained, in detail, my apprehensions about sub-dermal sensors but said that I was open to EEG scanning, and he was completely cool with that answer! Cool! Questions about the integration of this modality into the school system, its place in the continuity of jazz from a local older jazz cat (really happy that he saw the connections I was trying to present musically, without my saying anything about them first), communal music creation, and on and on. The Q&A sessions lasted from 30 minutes the first day to about an hour on the second and last day.

After the talks, the range of people that came up, and the range of their questions, were very informed and interesting. I felt like something had connected, and I felt much more confident that I wasn't crazy. That doesn't mean that everyone liked it. There were swaths of people who got up and left as soon as I started playing, which was encouraging, since I don't want this idea to be middle of the road. "Dig it or don't, or build your own!" should be the motto.

I was also given some beautiful artwork by local artist PE Walker, created from reclaimed materials. I was so happy to get these items home without destroying them in my luggage.

I had a week or so with family to chill out and revel in how old we're all getting and how grey we're becoming, and even talk about this "algorithm" (and by talk, I mean ramble endlessly), and it still kinda held strong. Nothing that fell out of my face was glaringly wrong or complete bullshit, because if it were, I would have heard about it right then and there. This was heartening as I prepared for Memphis.

But I should say that Memphis was prepared for me! They went all out! And by "they" I mean Eric Swartz and Sarah Bolton, who went to a lot of trouble getting me there (I met them while they were in Berlin filming a documentary called "Children of the Wall", which you can check out here), and the guys at the Five in One Social Club, who dusted out their wonderful subterranean bunker and filled it with custom stages and screen-printed posters. Yes, screen printed, like on t-shirts but on cardboard, beautifully designed, and they even made me a t-shirt version. And there was an amazing 3D projection mapping co-performance by Christopher Reyes (this and more photos from the night here). Memphis is special because my first time playing there, in 2008, was literally my last stop on the way to New York, to fly to Berlin for the first time, to begin a new life. And that party was heavy! Jeremy Bustillo and I did an all-EWI-based party night and held it down all night long. The crowd kept it going! And then again in 2009 at a space called Odessa, which was so warm and comfortable that I couldn't wait to play there again.

I was so chilled that I sat onstage for a good portion of my talk. I rambled about all of the above mental stuff and interspersed it with bits of playing, and then dropped into a full set. People were sitting on the floor and generally relaxed from the barbeque, cole slaw, and iced tea being served, so I took the opportunity to investigate new ways of transforming. I am having to come up with new terms for this stuff, but I found "stutter rips" and "diving deep" and my new and barely explored favorite so far, "scratch drumming", where I am learning to scratch each side of the audio in relation to the other. I finally stopped when I felt like I had explored all the permutations I could think of that night. I was tired and the crowd seemed satisfied. The rest of the evening was super chilled out. It was a great opportunity to talk interesting, heavy stuff with many representatives of Memphis' artistic community. I even met Larry Heard, house music legend, there!

 

After all of this, I spent a few nights in a Motel 6 near Hartsfield airport to decompress a bit, and even in the solitude of a hotel room, it was all a flurry of drawings and patch programming and design bits. I think this is a permanent state now, which is OK with me. The next stages are clear now, and in the coming week I will be doing another crowdfunding campaign to unveil what "next" means. It will be weird (Trust!) but I think it will be as obvious to you as it is to me now.


Nothing like your own show poster t-shirt!

The guys at Five in One gave me this wonderful t-shirt, and I thought I would show off their amazing design work here. If you live in Memphis, be sure to stop by later!

Notes on switching from XBee to RN-XV for low-latency applications using Pure Data

Happy new year!! It's been a crazy, crazy month! I won't even go into all the mental intensities emerging from digging deeper into the mathematical underpinnings of sound design (which resulted in a couple of weeks of burnout). Although a RepRap breakdown has slowed the pace of printing perks for crowdfunding campaign fulfillment, I have been working on personal mixes for my beatjazz supporter pack contributors and managed to mail off another controller! One more to go!! It still amazes me how much nicer they are than my own, which is constantly being added to and tweaked. This particular contributor received his system with my own personal XBee wireless data system, the one I have been using since the beginning of this journey.

As crazy as it sounds, one of the things I have learned about myself is that if I have a hard "future" thing to do but a comfortable "now" thing laying around, I will go for the "now" thing, especially when there is a deadline or it's a bit too abstract to fiddle with. So what I do is get rid of the "now" thing and make the "future" thing the "now" thing. I have done this for years: when switching from saxophone to wind MIDI controller, from playing cover tunes to playing originals, from hardware to laptop, from wind controller to beatjazz controller and, last summer, from commercial audio software to Pure Data as my central means of expression.

The RN-XVs are marketed as "a drop-in XBee replacement" for the Digi-made, extremely popular XBee Series 1 and Series 1 Pro (the Pros are the same, but with 63mW broadcast output compared to the regular Series 1's 1mW), which excited me. The XBees weren't my first dalliance with wirelessness: I had a Kenton MidiStream wireless MIDI system when they first came out in 2004, and I used my iPhone's accelerometer as a gestural controller for most of 2010, the summer before I began working on the beatjazz controller. But the XBee was my first interaction with actual networking protocols, although over an older, slower, yet more reliable transport: serial.

Serial was created a long time ago, and it is very slow in comparison to today's protocols, but it's stable and good for less bandwidth-hungry data transmission, such as that coming from the low-powered Arduino platform. There are multiple ways of integrating XBees with Arduinos, using snap-on "shields" or, in my case with the Arduino Fio, an XBee "shoe" built directly onto the board. This has worked for almost 2 years, with some caveats worth mentioning:

  • If the computer loses the connection, the software must be restarted, which sucks in a show.
  • Bidirectionally, the data coming from the Arduino is mostly speedy enough for low-latency performance, but data transmitted BACK to the Arduino from the computer, using the aforementioned XBee Pros, was useless for anything time sensitive, so I only used it to change the light colors infrequently.
  • If I am in the vicinity of a "hipster forcefield" (a phenomenon whereby the sheer number of smartphones, with their numerous wireless capabilities, drowns out my XBee's broadcast capacity and the system becomes unresponsive), I end up huddled over my laptop-situated XBee base station, gasping for bandwidth.

The RN-XVs, made by Roving Networks, use Wi-Fi and are designed for serial communication, although they are not limited to it. I purchased them but didn't really use them much until a recent robotics project (which I will be writing about soon). I found them remarkably faster out of the box, but had less success using them with the beatjazz prosthetics: the data was very shaky and garbled, and I had no idea where to even begin fixing it. I put them away, mentally, until just recently, when I became interested in refining the system, which had been feeling a bit labored. So I just said fuck it, mailed the XBees off with the controller, and stranded myself in 802.11 land.

Along the way, I learned a lot of very valuable stuff, but there was one thing in particular that I wanted to share: if you use Firmata, as I do, you can use it (mostly) unedited. You can use the RN-XV to create a wireless Arduino network with no serial port virtualization!

 

How to.

The goals:

  • go pure Wi-Fi instead of serial for wireless communication
  • give each "unit" (headset, left hand, right hand) its own IP address, directed at a small dedicated wireless router (D-Link DIR-300)
  • speed up bidirectional communication significantly enough to make better narrative use of the lights and haptic feedback, and reduce processing overhead

 

The first thing is to go into the [arduino] object in Pd and open it, where you will find this structure.

The inlet is where settings bound for the Arduino are fed into the [pd command processing] abstraction, which then sends them to the [comport] object, Pd's serial interface. The [comport] inlet (at the top of the object) sends the serially formatted data through a serial connection to the Arduino. In my system, the XBees were the virtual (wireless) serial cable.


The outlet at the bottom of the [comport] object carries the data that comes from the Arduino. This data is a Firmata-formatted stream of numbers, e.g.

224 17 0

repeating, as a stream. This stream is sent to the [pd make lists] abstraction and from there to the outlet, where it is routed to digital and analog [route] objects, from which you can assign the values to things in Pd.
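To make that framing concrete, here is a minimal C sketch of how such a stream decodes. This is an illustration, not code from my patch; it assumes the standard Firmata analog-message format of one status byte (0xE0 plus the pin number) followed by the value as two 7-bit bytes:

    #include <stdio.h>

    int main(void) {
        /* triplets like the 224 17 0 stream above */
        unsigned char stream[] = {224, 17, 0, 224, 18, 0};
        for (int i = 0; i + 2 < (int)sizeof(stream); i += 3) {
            if ((stream[i] & 0xF0) == 0xE0) {          /* 0xE0-0xEF = analog message */
                int pin   = stream[i] & 0x0F;
                int value = stream[i + 1] | (stream[i + 2] << 7);  /* LSB + MSB = 14-bit value */
                printf("analog pin %d = %d\n", pin, value);
            }
        }
        return 0;
    }

So "224 17 0" reads as: analog pin 0 is currently reporting a value of 17.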

I needed to get sensor data from the Arduinos as fast as they could send it, process it in Pd, and send control data back to the Arduinos, wirelessly. It also needed to be as blazingly fast as possible. My first idea was to emulate a serial port using one of the many virtual serial port programs on the internet, like Serial/IP, socat, HW-VCP and about 6-7 others. In all cases, the data I was getting was garbled and would stop after a while (more on this later). So, after a long time of beating my head against this wall, I decided to simply get rid of the serial communications layer and try the networking objects instead.

I tried all of the networking objects I could find in Pd, with most people telling me to try UDP, a connectionless protocol that, unlike serial and TCP, doesn't need to "handshake" to allow data to pass back and forth; both ends of the link simply "scream" data at each other. This makes it great for things like Skype and video streaming, but not so much for things that have a low tolerance for errors. After another week of fiddling, I was able to place the [udpsend] object underneath the command-processing abstraction, where [comport] used to be. [udpreceive] receives data from the Arduino and sends it to the rest of the Pd system. [udpsend] accepts data at its inlet and also needs the IP address and port, so it can send the data to the right Arduino.
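To make the connectionless part concrete, here is a rough C sketch of a UDP send, which is essentially the job [udpsend] does for me. The address and port are made-up example values, not my actual settings:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);           /* SOCK_DGRAM = UDP: no handshake */
        struct sockaddr_in dest = {0};
        dest.sin_family = AF_INET;
        dest.sin_port   = htons(2000);                       /* example port */
        inet_pton(AF_INET, "192.168.1.101", &dest.sin_addr); /* example RN-XV address */

        unsigned char msg[] = {224, 17, 0};                  /* a Firmata-style triplet */
        /* sendto() just fires the datagram; no connection, no delivery guarantee */
        sendto(sock, msg, sizeof(msg), 0, (struct sockaddr *)&dest, sizeof(dest));
        close(sock);
        return 0;
    }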

The [udpreceive] object receives its data on a port that I set on the RN-XV itself, negating the need for Arduino libraries or a rewrite of the Firmata firmware. At this point I will jump to the RN-XV setup; it will make sense when I come back to this abstraction.

RN-XV

The RN-XV is a powerful 802.11b/g transceiver marketed as a drop-in XBee replacement, meaning it shares the same small pin spacing that the XBee is based on, but it is a bit more tedious to program. For the purposes of this tutorial, we will stick to the features that let us link it directly to Pd.

First, we need to give each RN-XV its own static IP address and port. This is where the SparkFun USB Explorer, or an equivalent USB-serial-to-XBee board, comes in handy. With it, you can program the module using a terminal like PuTTY or, in my case on Win7, CoolTerm.

In CoolTerm, go to Connection -> Options, make sure the serial port your Explorer is on is selected and the initial baudrate is 9600, then click OK.

In the main window, click the "Connect" button, then in the terminal window type

$$$ (but DO NOT press enter afterward.)

This puts the RN-XV into command mode, and the green light blinks more quickly.
If you type "get everything" (no quotes), you should see all of the settings for the transceiver. These are the ones I found most important:

  • "set ip address" <ip address> – the IP address of the RN-XV.
  • "set ip local port" <port number> – the port that [udpsend] will send data through to the Arduino.
  • "set ip host" <ip address of the computer> – the address of the computer running Pd.
  • "set ip remote" <port number> – the port that [udpreceive] will use to receive sensor data and send it into the system.
  • "set ip protocol" <1> – sets the RN-XV to use the UDP protocol instead of TCP.
  • "set wlan ssid" <ssid> – the name of the wireless network you are connecting to.
  • "set wlan channel" <ch. number> – set your channel number here. I set mine to 0 so it associates with the AP on whatever channel is there.
  • "set wlan join" <1> – joins the access point saved in memory (the one we input above).
  • "set wlan linkmon" <0> – I turned this off.
  • "set wlan rate" <15> – I am playing with it at this rate, which is the fastest, at 54 Mb/sec. 1-2 Mb/sec would be fine (0 or 1). Roving states that lower transmit rates increase the range, so play with this one as you need.
  • "set sys autoconn" <0> – turns off the autoconnect timer.
  • "set uart baudrate" <baudrate> – effectively sets the baudrate of the connection. I have mine set to 115200 baud and will experiment with higher rates, but this seems fast enough for right now. One line in the Arduino Firmata firmware will need to be changed for this to work properly (see below). If you ever need to use CoolTerm with this RN-XV again, it must be at this baudrate in the option settings.
  • "set uart flow" <0>
  • "set uart mode" <1>

And the last, and possibly most important, two settings:

  • "set comm size" <1420> – the "packet" size, i.e. the number of bytes sent over UDP as one block.
  • "set comm time" <5> – the number of milliseconds between flushes of data blasted out to the computer from the RN-XV. Every 5 ms it sends a packet of data that "should be" 1420 bytes. I say "should be" because it is UDP and there is no guarantee of delivery.

After inputting all of this into the terminal window, type "save", which saves it all to the module's own local memory (which is why the Arduino WiFly libraries are unnecessary in this instance), then type "reboot", which puts it back into regular mode, ready to be placed back into the Arduino Fio. A full example session is sketched below.
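For clarity, here is what a complete configuration session might look like in the terminal, using made-up example addresses and ports; your values (and the exact command spellings on your firmware version) may differ:

    $$$
    set ip address 192.168.1.101
    set ip local port 2000
    set ip host 192.168.1.100
    set ip remote 2001
    set ip protocol 1
    set wlan ssid mynetwork
    set wlan channel 0
    set wlan join 1
    set wlan linkmon 0
    set wlan rate 15
    set sys autoconn 0
    set uart baudrate 115200
    set uart flow 0
    set uart mode 1
    set comm size 1420
    set comm time 5
    save
    reboot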

 

Arduino firmata edit

The next thing is to use the Arduino software to change the baudrate of the Firmata firmware from 57600 to 115200 baud. Go to File -> Examples -> Firmata -> StandardFirmata, scroll to the end of the sketch, and then scroll up until you see


  Firmata.begin(57600);

and change it to

Firmata.begin(115200);

then upload it to the Arduino. (You cannot have the RN-XV attached to the Arduino while uploading a sketch; put it back on after uploading.)
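For orientation, this is roughly the skeleton the sketch boils down to after the edit. The real StandardFirmata does much more in setup() and loop(), so treat this as a sketch of the shape, not the full firmware:

    #include <Firmata.h>

    void setup()
    {
      // ...all of StandardFirmata's existing setup code stays as it was...
      Firmata.begin(115200);      // was 57600; must match "set uart baudrate" on the RN-XV
    }

    void loop()
    {
      while (Firmata.available()) {
        Firmata.processInput();   // handles the control data Pd sends back
      }
      // StandardFirmata also samples and reports the analog/digital pins here
    }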

Setting up the new network arduino object


OK, so back to our edited [arduino] object, which I have named [myarduino-udp]. We need to tell [udpsend] where to send its data, so we create a message box, [connect xxx.xxx.xxx.xxx xxxx(, containing the IP address we put into the RN-XV earlier and its port number. When you connect it and click on it, it "should" work and output a 1 at its outlet, signifying that the connection is up. If it does not, make sure that the router is working properly and that everything is connected; if there is a red light blinking on the RN-XV, it is not associated with the router.

Next, in [udpreceive], enter the port number that you entered as the remote port in the RN-XV settings. If this is working, it should show the received IP address and byte count at its second outlet. (I took this from the help files.)
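Conceptually, [udpreceive] is just a UDP listener bound to that port. Here is a rough C illustration of the same idea; the port is an example value standing in for whatever you chose as "set ip remote":

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void) {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(2001);       /* example "set ip remote" port */
        bind(sock, (struct sockaddr *)&addr, sizeof(addr));

        unsigned char buf[1420];                  /* matches "set comm size" */
        for (;;) {
            /* each recvfrom() hands over one whole packet flushed by the RN-XV */
            int n = (int)recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
            printf("received %d bytes\n", n);
        }
    }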


Once it works, the "bytes received" number box should show some activity if you have sensors attached, so you should send some data at this point. For reference, I attached a [print] object to the output of [udpreceive] so I could see what was coming out. It arrives as blocks of numbers, each starting with a 2xx number, then a 0-x number, then another number from 0 to about 17 or so. These are the right numbers in the wrong format; they must come as a stream, like:

200
0
2
224
0
0
230
1
5
etc…

So I used a list object called [list drip], which outputs a list as a stream. This formatted the data properly and everything worked! It was at this point I realized that either [comport] or the serial port had been the bottleneck in my system, because it was now noticeably quicker than with the XBees. So I decided to try to eke out a bit more speed by toying around with [list drip], since it was now the new bottleneck, and that's when I came across this thread, with test abstractions that speed up [list drip] significantly! I ended up using an undocumented version called [list-drip-scheid], which is itself a faster variation of one called [list-drip-quick2]!!! (The thread is a great read as well.)
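For anyone who doesn't speak Pd, the block-to-stream conversion that [list drip] performs amounts to something like this C sketch (illustrative only, not how Pd implements it):

    #include <stdio.h>

    /* a UDP packet arrives as one block of bytes; "dripping" means
       passing those bytes onward one at a time, as a stream */
    void drip(const unsigned char *packet, int len) {
        for (int i = 0; i < len; i++)
            printf("%d\n", packet[i]);  /* stand-in for feeding each byte to the Firmata parser */
    }

    int main(void) {
        unsigned char block[] = {200, 0, 2, 224, 0, 0, 230, 1, 5};  /* a block like the one above */
        drip(block, (int)sizeof(block));
        return 0;
    }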

One last thing I did was to place a [bang] object on the inlet of the [loadbang] section, because the patch doesn't send the settings to the Arduino on the first [loadbang], so I trigger it manually after the patch has loaded. And that should be it! I say "should be" because there is always something that falls over somewhere, but this works for me and has been stable for about a week since I figured it out. My operating latency is about 4 ms bidirectionally, and the system is now fast enough to control the lights with gestures and breath pressure in real time while allowing much more expandability. As well, there are no weird comport, serial port, or FTDI driver issues; everything works on any operating system with minimal fuss. I now need only add IP-based components to my system for my router to recognize and Pd to utilize.

This is my first official tutorial, so let me know what you think, whether it is confusing or too long or whatever. I ain't gonna change it, but I will incorporate the feedback into future posts. l8r.

EDIT: here is a demo of the complete system, working.

Me and my bot “Boogie”, working out in the lab.

the “twins”

I was going to hold off on posting this until I got caught up with my crowdfunding fulfillment and some other stuff, but things are advancing so fast right now that if I don't put this into the conversation, it will just get left behind. Introducing the first iteration of my beatjazz parameter feedback bots. I call them "the twins"; there is one for each hand, and they give axial spatial feedback without the need for a projection. I will be sharing more about them soon, but I REALLY have to wrap up the previous campaign first.

Happy Holidays! The Beatjazz controller is ready for download for makers everywhere!

So, finally, I have begun the process of fulfilling my promise of turning this into an open source project. You can see the files for building your own controller here: http://www.thingiverse.com/thing:37472. It is now in the wild.

I don't really know what this will mean to me, to others, or to the project. It might be ignored completely. It might be jacked, with someone else taking credit for it. It might be used for something I can't imagine, or just one piece might be relevant to more than just me. I don't know... which is exciting. I don't have experience here, so I am interested to see what happens.

What I hope is that people see that this is not an instrument, but an idea: the idea that one can create one's own personal prosthetic computer interface and not have to be content with what the corporations think we want based on their little demographic surveys.

It is a seed. It is not meant to stay in this configuration persistently, but to be modified, changed, and even discarded once a person has found themselves through the build process. I have recently seen 3 different systems inspired by the beatjazz system, and none of them look or operate like mine. That is the point. It is also the point of the construction: I designed it with a million little pieces so that each of those pieces can be iterated toward each person's perfection.

I knew that I had to upload it now, because my own personal system iterates so much that I do not consider it to actually exist as a tangible thing; it only exists as a tangible "state". No part of my system has remained unchanged for more than 6 weeks, and there is no reason for it to. It can change based on my whims, creative urges, geographical location, or the occasion... it is a tangible thought, a corporeal idea. And as such, I knew that if I didn't upload it now, it would iterate away from anything that anyone would understand.

So happy holidays, and happy building. And thank you for helping me build (one of) my dreams.

