0xf8 Phase 2

by Clint on November 16th 2010

Now that the init.system DVD is out, I thought I'd spend some time talking about the next big project for synnack and 0xf8 Studios. We've travelled around the world with custom video for the synnack live performances. VJed in clubs. Created and produced a film-like DVD. What's next?

What's coming

There are multiple projects ultimately influencing the new video direction. Fortunately, they share common components and infrastructure that can be applied across all of them. Here's a summary of what we're working towards:

  1. Videos for v2.5: The v2.5 release is entirely dark ambient. I'd like to create a patch in Jitter that can analyze the audio from v2.5 and, based on a set of attributes (EQ, amplitude, length, etc.), dynamically create a visual (see the rough sketch after this list). I'm not looking for generative abstraction so much as a combination of dynamic video effects over photography that I would do myself. Outside of the upfront time it will take to create the Jitter patch, this should, in theory, allow us to create these videos pretty quickly while still retaining a high degree of customization and connection with the music. They would likely be given away on Vimeo/YouTube, possibly with a short-run, hand-packaged, and signed DVD-R.

  2. New live video capabilities for the synnack sets: Early this year, when we did a few shows in Europe, we started sending MIDI events from my laptop (which runs the audio) to Jennifer's laptop (running the video projection) so that specific video effects could be triggered by the music. We did this mostly with the kick and snare drums. Regardless of what footage was on screen, there was a visual connection to what you heard. We were doing this using the IAC bus in OS X over Wi-Fi. It works fine, but to scale it up to interpreting all of the audio as video effects, it has to be rethought and rebuilt.

  3. Increasing interest in installation work: This is not exactly new for us. Both Jennifer and I have long been uninterested in video projection strictly as a square on the wall, or as a backdrop for something else. Club-style "VJ" work is, in most cases, boring and uninspired to me. I have always felt that since the tools needed to VJ are common, and anyone can throw some stock VJ clips into an app and look just as fancy as the next "VJ", the focus needs to be on the content of the videos themselves, not on how they spin around. This is very anti-VJ if you think about it. Most VJ work is done to give people visual eye candy while they have fun in a club. This is NOT why we do this. Content is king. Back in college, when I majored in painting, I always thought painting was such a limited medium in that no matter how good a painting is, it's a square (or rectangle) on a wall. The way people interact with and experience a painting hasn't, for the most part, changed in centuries. Paintings are things you stare at on the wall. When you consider that VJ-style video is much the same, it's boring shit. My increased focus is to liberate video art from the "square on the wall" and follow the rich history of artists who use video and sound to create environments that can sometimes be dynamic and interactive, and experienced physically, in three dimensions.

    I have an initial idea for a work that will stand on its own in galleries, which I hope to realize in 2011. I'm hoping to create an installation inspired by the events surrounding the week Hurricane Katrina hit New Orleans. The installation could include a Max/MSP/Jitter environment that reads in metadata mapped to the events of that week and triggers different effects and video clips, projected onto 3D objects linked to those events. The audio will be sourced from the upcoming synnack Katrina release. Much, much more on Katrina and other installation ideas to come.
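
To make the first idea a bit more concrete, here's a very rough sketch, in Python rather than a Jitter patch (a patch doesn't paste into a blog post very well), of the kind of analysis and mapping I have in mind. The attribute names and effect parameters (brightness, blur_amount, and so on) are made up purely for illustration; the real thing will live entirely in Max/MSP/Jitter.

    # Rough sketch only: pull a few coarse attributes out of a mono audio
    # buffer and map them to made-up video effect parameters. The real
    # version will be a Jitter patch; this just illustrates the idea.
    import numpy as np

    def analyze(samples, sample_rate):
        """Return a few coarse attributes of an audio buffer."""
        amplitude = float(np.sqrt(np.mean(samples ** 2)))           # RMS level
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
        low = float(spectrum[freqs < 200.0].sum())                  # rough low end
        high = float(spectrum[freqs > 4000.0].sum())                # rough high end
        length = len(samples) / sample_rate                         # seconds
        return {"amplitude": amplitude, "low": low, "high": high, "length": length}

    def map_to_visuals(attrs):
        """Map the audio attributes to invented video effect parameters."""
        total = attrs["low"] + attrs["high"] + 1e-9
        return {
            "brightness": min(1.0, attrs["amplitude"] * 4.0),    # louder -> brighter
            "blur_amount": attrs["low"] / total,                  # bass-heavy -> softer image
            "edge_detect": attrs["high"] / total,                 # lots of highs -> harder edges
            "crossfade_time": max(2.0, attrs["length"] / 60.0),   # longer piece -> slower fades
        }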


How to get there

To enable these 3 things, we've started building the technology infrastructure we need using Max for Live and Max/MSP/Jitter as follows:

  • Converting from MIDI to OSC: The volume of data we need to push between machines would introduce too much latency to continue with MIDI. Now we are using Max for Live and [udpsend] on Laptop1 to send the data as OSC messages to [udpreceive] in Max/MSP/Jitter on Laptop2.

  • Audio analyzers in Max for Live: We needed a series of Max for Live devices that can listen to audio on different tracks on Laptop1 and interpret it into a series of numbers for processing by Jitter on Laptop2. Max for Live turns out to be a perfect tool for this: you get all the power of Max/MSP combined with direct access to the APIs of Ableton Live. This lets me interpret not only the audio into numbers but also characteristics of the DAW environment itself. I have been a BIG FAN of Max for Live since the early beta and used it for some interesting use cases in the v2.5 release. I'm excited to finally apply it to video in some way. Once these are all done I will likely post them to maxforlive.com. Here's what we have prototyped so far:


    • SEND-HIGHEND.amxd: uses the [fffb~] object to split the frequency spectrum and send only the high frequency data as floats to Laptop2

    • SEND-LOWEND.amxd: uses the [fffb~] object to split the frequency spectrum and send only low end data as floats to Laptop2

    • SEND-PLAYLENGTH.amxd: uses the Ableton API to detect when the global transport starts, and counts upwards. This running count is sent as a float to Laptop2 so specific video effects can occur based on how long a song has been playing.

    • SEND-TEMPO.amxd: uses the Ableton API to detect the current BPM and send it to Laptop2

    • SEND-AMPLITUDE.amxd: This device uses [live.meter] to get the current amplitude of whatever track it's on and send it to Laptop2 as a float. There are 3 possible destinations so a single instance of this device can be used on multiple tracks with multiple destinations.

    • SEND-ENV_FOLLOWER.amxd: This device is a hacked up version of the Envelope Follower that comes in the Pluggo library. It sends numerical values as float to Laptop2.

    • SEND-SPECTRUM.amxd: uses a [fffb~] object and a series of [live.meter] objects to send a message containing all frequency band content to Laptop2

    • SEND-MIDIBANG.amxd: This device sends a bang message whenever a specific MIDI note occurs in a sequence.

    • SEND-AUDIOBANG.amxd: Sends a bang message whenever audio amplitude exceeds a user-defined threshold. You can throw it on a kick track for example, and it will bang for each kick.

    • SEND-STOP_START.amxd: uses the Ableton API to detect changes in start and stop of the global transport and send this status to Laptop2

    • RECEIVER-SEND_OSC.amxd: All of the previously mentioned devices use [send] objects to pass their data to this device, which routes it to Laptop2 using [udpsend] and the OSC protocol. It has options to display the current data being sent, or to selectively turn each sender on or off. (There's a rough sketch of this send path after this list.)


  • Receivers on the Jitter machine and a preset matrix: On Laptop2, we accept all of this raw data using [udpreceive], parse the OSC paths, and route it to video effect controls in a matrix that can be controlled via presets. This means we can store different routings for different songs to get entirely different effect possibilities. (A rough sketch of this receive side is also after this list.) As part of the Max patches on Laptop2, we've pretty much decided to rebuild our video effects patch for clarity and better use of encapsulation. This in itself is a massive undertaking, but Jennifer has been kicking ass on it in a short period of time. I am pretty good with Max/MSP, but advanced Jitter stuff breaks my head, so I'm glad Laptop2 is her machine. :-D

  • Custom control interface using TouchOSC and an iPad: I've been messing with TouchAble and TouchOSC on my iPad a bit. For the audio control of my live set I think I will stick with the APC40. Nothing beats tactile when you're jumping around and sweating. There are some really cool uses for TouchOSC on the iPad as a controller, though. X/Y control is particularly fun, and turning virtual knobs turns out to be more fun than I thought. Part of the new setup includes Jennifer wirelessly controlling her Jitter patch from her iPad.
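
To give a rough idea of what the send side boils down to, here's a sketch in Python using the python-osc library as a stand-in for [udpsend]. Everything here (the IP address, the port, and the OSC paths) is an example value, not what we actually use; the real senders are the Max for Live devices listed above.

    # Rough sketch of the send side: what the analyzer devices plus
    # RECEIVER-SEND_OSC amount to, expressed with python-osc instead of
    # [udpsend]. IP, port, and OSC paths are example values only.
    from pythonosc import udp_client

    LAPTOP2_IP = "192.168.1.20"   # example address for the Jitter machine
    OSC_PORT = 9000               # example port

    client = udp_client.SimpleUDPClient(LAPTOP2_IP, OSC_PORT)

    # Each analyzer sends its current value on its own OSC path, roughly like:
    client.send_message("/synnack/amplitude/1", 0.82)  # e.g. SEND-AMPLITUDE, destination 1
    client.send_message("/synnack/lowend", 0.35)       # e.g. SEND-LOWEND
    client.send_message("/synnack/tempo", 126.0)       # e.g. SEND-TEMPO
    client.send_message("/synnack/midibang", 1)        # e.g. SEND-MIDIBANG ("bang")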
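
And the receive side is the mirror image: listen on the same port, parse the OSC paths, and route each incoming value to whatever effect parameter the currently selected preset says it should drive. Again just a sketch; the preset names and effect parameters below are invented for illustration, and the real routing matrix lives in Jitter.

    # Rough sketch of the receive side: parse incoming OSC and route each
    # value to an effect parameter according to the active preset.
    # Preset names and effect parameters are invented examples.
    from pythonosc import dispatcher, osc_server

    OSC_PORT = 9000  # must match the sender's example port

    # Per-song presets: the same incoming data can drive entirely
    # different effects depending on which preset is active.
    PRESETS = {
        "song_a": {"/synnack/amplitude/1": "feedback_amount",
                   "/synnack/lowend": "zoom"},
        "song_b": {"/synnack/amplitude/1": "strobe_rate",
                   "/synnack/lowend": "displacement"},
    }
    current_preset = "song_a"

    def route(address, *args):
        """Hand incoming values to whichever effect the preset maps this path to."""
        target = PRESETS[current_preset].get(address)
        if target is not None:
            print(address, "->", target, "=", args)  # stand-in for driving Jitter

    disp = dispatcher.Dispatcher()
    disp.set_default_handler(route)

    server = osc_server.ThreadingOSCUDPServer(("0.0.0.0", OSC_PORT), disp)
    server.serve_forever()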

Summary

This will be a ton of work, but we'll get way more out of it this time than a single release. It's the foundation for the next few years of audio/visual output and the start of a transition back to "artist" for me from "guy in a band". This doesn't mean there isn't much more synnack music to come, though. Katrina is in the works and I already have ideas for "v3" planned. I think I need more Red Bull in my life.

