Hybrid live set and the struggle with DAW audio latency

I’ve been preparing for a live set, and decided I would be using:

  • Bitwig for atmospheres/ambiences/pads, and some longer vocal samples
  • Modular with voices sequenced by the Oxi One, and an ES-9 for the audio outs
  • Octatrack for live looping/mangling the modular’s riffs and the Bitwig vocals
  • Analog Rytm for the added percussion/hats/non-melodic sounds
  • All sync’d using a Midronome to stabilise the clock jitter

I’ve been practicing mostly with the modular + OT + Bitwig for the past 4 weeks to get to grips with what I can improvise with. Once I had the foundations of what I want to play, I decided it was time to bring the AR into play to add some sparkle, and that’s when the nightmare of audio latency began… :sweat:

Bitwig is also being used as a mixer for the ES-9 inputs, with very little processing (just a few FX the modular lacks) and nothing I can see in terms of VSTs adding lots of latency (around 0.7 ms).

I’ve noticed that the clock is absolutely 100% on time, but the audio latency has been impossible to solve. Even at 32 samples in the DAW, when I record a loop in the OT (coming from the ES-9) it always has a bit of latency, very noticeable when recording just a 4/4 kick pattern and scrolling through the waveform in the OT’s audio editor. It’s enough to throw any 1/4-note or 1/8-note hats way off from the kick pattern (the kick is played from the modular).
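For a sense of scale, the rough arithmetic looks something like this (the converter figures below are assumptions, not measured ES-9 specs):

```python
# Back-of-the-envelope round-trip latency for monitoring through the DAW.
# Converter latencies are illustrative guesses, not ES-9 measurements.

SAMPLE_RATE = 48_000   # Hz (assumed project rate)
BUFFER = 32            # samples, as set in the DAW

buffer_ms = BUFFER / SAMPLE_RATE * 1000   # ~0.67 ms per block
adc_ms, dac_ms = 0.5, 0.5                 # assumed converter latencies
plugin_ms = 0.7                           # the plugin latency mentioned above

# one buffer on the way in, one on the way out, plus converters and plugins
round_trip_ms = 2 * buffer_ms + adc_ms + dac_ms + plugin_ms
print(f"~{round_trip_ms:.1f} ms round trip")  # ~3.0 ms with these numbers
```

Even a few milliseconds is audible as a flam once the OT plays its recording back against percussion running live.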

I’ve been trying to think what can even be done to compensate for it. So far I could only think of routing all the AR sounds through Overbridge into the DAW and outputting them on the ES-9 (or the opposite: routing all the sounds from the ES-9 into the AR via OB), at least to keep all the percussion sounds in sync with the same latency. But then the issue becomes: any phrase I sample on the OT will be a little off time when I put down a trig…

Rookie mistake: I’ve been quite happy with the MIDI sync when recording stuff, but completely forgot about the fucking audio latency when monitoring via a DAW.

Don’t have very high hopes someone will have a solution but if you do… Please enlighten me! :pray:

1 Like

I’m no expert in Bitwig, but latency compensation on the input side of any DAW can, in general, only get you so far if the latency is already baked into the clock’s output. What you actually need is what one might call “negative latency compensation”: getting the external clock and gear running a bit earlier than the rest of the project in Bitwig.

The easiest solution is to not route the tracks back through Bitwig and instead go straight to a hardware mixer. Of course, that’s only going to work if you can sacrifice what Bitwig is doing to those tracks in real time. It will definitely knock your latency down, though.

Moving on from that suggestion: this is an audio signal that’s driving the external clock, right? If so, maybe there is a way in Bitwig to create a negative delay on just that track’s own output? I’ve seen ways to do that in Ableton. If it works, you may need to create a 1-bar pre-roll in the Bitwig project to allow the external clock to begin earlier than the project’s first downbeat.

Next, are you using a plugin for sending the clock signal? Maybe there are ways to do a negative latency adjustment within the plugin itself, or on the track hosting the plugin. I’ve seen features in Ableton for adjusting the latency of just a single plugin on a track.

So those would be my first attempts, finding “negative” latency compensation for the output of the clock track. If you can do that, then in theory you can make the external clock and all the devices that depend on it run a bit early.

The other way to mitigate this involves getting an external clock that can dial in its own negative latency compensation. The DAW track output or the plugin gets no compensation; instead, the external device itself determines how it handles adding latency to the signal, which you can turn into a negative latency (with a 1-bar pre-roll).
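To make the pre-roll idea concrete, here’s a quick sketch (the 12 ms latency figure is hypothetical; you’d measure your own):

```python
# How much earlier does the external clock need to run, in musical terms?
# A minimal sketch; latency_ms would come from measuring your own rig.

def advance_in_bars(latency_ms: float, bpm: float, beats_per_bar: int = 4) -> float:
    bar_ms = 60_000 / bpm * beats_per_bar   # duration of one bar
    return latency_ms / bar_ms

# e.g. a hypothetical 12 ms of round-trip latency at 120 BPM:
print(f"{advance_in_bars(12, 120):.2%} of a bar")  # 0.60%; a 1-bar pre-roll is plenty
```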

This isn’t meant to be an ad, but I make CLOCKstep:MULTI, which has those external latency adjustment features, but it’s not the only one. ERM Multiclock does this, and I believe Innerclock and USAMO might be able to as well. That’s if you can’t find a way using any of the suggestions above first.

7 Likes

Hi, a read here might be worth it :wink:

Some people who have the Midronome are discussing it in this thread; maybe you can PM them :wink:

1 Like

I’ve done a bit of a comprehensive deep dive into the subject lately, and I’m also using Bitwig as my primary DAW. I have some solutions for recording and compensated monitoring with latency-inducing plugins / HW FX while keeping everything on grid in the timeline. This does involve creating additional latency from input to output, so it may not be ideal if you’re tracking live-played instruments, but I’m happy to take a moment to consolidate and share my solutions once I get a short break from my relentless day job… I’ll try to keep a note of this thread and provide an update when I have the chance.

2 Likes

CLOCKstep:MULTI is a very cheap answer, and it works as a converter to sync modular or MIDI devices. The Hapax can do this onboard.

2 Likes

That completely negates why I’m using the DAW in the first place: it’s both the FX chain for the modular voices and the mixer for the ES-9 inputs (I have 5 voices going through it, using faders on a controller). I do use a hardware mixer for the whole setup, but in this case I won’t have space for setting up a mixer on the desk :frowning:

I’m using the Midronome with its U-SYNC plugin, and that setup works flawlessly if I just want to sync the DAW with all the hardware and record stems into a timeline/clips. The issue isn’t so much the clock sync (that works well) but the overall audio latency of using the ES-9 for inputs + outputs: the outs go into hardware clock-sync’d to the DAW, and the final output gets a bit delayed by the round trips. I can adjust the timing to compensate, but then everything gets the delay, not only the boxes (OT and AR), which are playing ahead of the delayed audio. I’m reading the manuals of the gear that could potentially have timing adjustments (Midronome and Oxi One) to see if I missed a feature that would let me add some latency to the clock signal going into the OT and AR.

That would probably be the solution, if I can manage to introduce some latency into the clock signal; hopefully something I already have can do it.

1 Like

Sometimes, in very specific cases, you can make a drawing of your setup and send it to each brand in your setup (tech support), and sometimes one of them replies with useful tips. I would usually try the maker of the sync solution first and explain exactly what my setup is, what my intentions are, and how I will use the whole thing and perform.

It may help… On a forum you would need to find someone with a very similar setup who thinks about performing your way… Sometimes that’s very difficult. Performing is also a personal/subjective vision.

1 Like

Firmware xxx58 introduced negative and positive offsets. My usage is Ableton audio clock to sync the MIDI clock, and to record my synth; it works without any problems. You need a 6.5-to-3.5 audio cable to sync from your audio interface to the sync in of the CLOCKstep.

I wonder why Oxi didn’t include such an option in their sequencer; with a CV/gate in, that should also be possible. Maybe send them a feature request?

3 Likes

Yup, totally agree it’s quite personal :smiley:

For this setup I imagined others could have used a similar configuration; even though the hardware pieces are more specific, the basic gist of it is:

  • A DAW processing sounds coming in from a soundcard (ES-9), with those sounds sync’d via analog clock from a MIDI clock device
  • The DAW outputting sound through the soundcard after processing
  • Hardware box(es), also sync’d from the MIDI clock device, that need to play in time with the DAW sounds (which carry a little latency)

I feel the only solution will be to have a way to delay the clock output going to the hardware boxes sync’d through MIDI. It’s relatively easy to calculate how many milliseconds of delay I see on the waveform recorded in the Octatrack, so compensating for that by adding some delay to the MIDI clock (and not the analog clock going into the modular) would set everything in place.
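For instance, a quick sketch of that conversion (the 420-sample offset is just an example you’d read off the OT’s editor; the Octatrack records at 44.1 kHz):

```python
# Convert an offset read off the recorded waveform (in samples) to milliseconds.
# 420 samples is a made-up example; the Octatrack records at 44.1 kHz.

def offset_ms(samples: int, sample_rate: int = 44_100) -> float:
    return samples / sample_rate * 1000

print(f"{offset_ms(420):.2f} ms")  # ~9.52 ms of MIDI clock delay to dial in
```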

@Sternenlicht’s suggestion of the CLOCKstep:MULTI seems to be exactly what I need (if I can’t find a workaround with the Midronome), but the main issue is: it’s out of stock, and I’m supposed to play on the 23rd of May :scream:

1 Like

I feel you! Damn…

Hahah, yeah, the deadline is making me sweat buckets to solve this in time :cold_sweat:

All the help you folks tried to give me on this topic just gave me an idea: I have a Blokas Midihub, which has a “Sync Delay” pipe. It might be enough for what I need (at least roughly, since the sync delay is based on clock pulses and not precise milliseconds). I’ll give it a try as soon as I can and report back.
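To get an idea of how rough “roughly” is, assuming the standard 24 PPQN MIDI clock (the 25 ms latency is a made-up example):

```python
# The Sync Delay pipe works in whole MIDI clock pulses (24 PPQN),
# so the achievable resolution depends on tempo.

def pulse_ms(bpm: float, ppqn: int = 24) -> float:
    return 60_000 / (bpm * ppqn)   # duration of one clock pulse

def nearest_pulses(latency_ms: float, bpm: float) -> tuple[int, float]:
    p = pulse_ms(bpm)
    n = round(latency_ms / p)
    return n, n * p - latency_ms   # pulses to set, residual error in ms

print(f"one pulse at 130 BPM = {pulse_ms(130):.2f} ms")  # ~19.23 ms
print(nearest_pulses(25.0, 130))  # (1, -5.77...): one pulse, still ~5.8 ms short
```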

Thank you all for jumping in and trying to help, I love this community :heart_eyes_cat:

2 Likes

There is also a driver compensation setting in Ableton; maybe Bitwig has something like this too. I believe it can apply negative offsets as well (normally it offsets audio in and out, measured directly at the audio interface).

1 Like

Hey, I have a similar setup, but without the modular. Did you know that you can use the Analog Rytm as an audio interface?

You could use the AR for drums/percussion and(!) as a Bitwig interface. Route the modular into the AR’s audio ins, pick it up in Bitwig for the Bitwig stuff (FX, ambient, synths, atmos), and route the Bitwig master into two pads (for example BT & LT) on the AR. You can do this in the AR’s audio routing setup.

Then pan one pad (BT) to the left and the other (LT) to the right, remove those pads from the AR master, and take the individual outs of those pads (BT & LT) from the back of the Rytm. Now you have the stereo master out from the Rytm (drums/percussion) and the individual pad outs (the stereo Bitwig out for melody and modular), and both stereo pairs can go into the A/B/C/D inputs of the Octatrack. That can be used for loop-based transitions and mangling effects.

The sync master could come from the OT to the Rytm and then via USB to Bitwig.

In addition, the macro pads on the AR can send MIDI CC, so you can use them to control Bitwig stuff like FX sends or whatever.

That’s basically what I do, and I think it could fit into your setup as well?!?

Best, Kai

2 Likes

Let’s see if I can quickly summarize how I have this set up.

Midronome + U-SYNC is a plugin? So the first step is to have the track with this plugin ideally sending its audio to a HW FX output, with no other track inputs or outputs enabled.
Then do a recording test and, assuming the recording from external hardware does not land perfectly on grid, add a Time Shift device to the track and, if you can, adjust until the incoming audio is on grid.
I use Audio>CV sync so this part of the process may differ.

Then create a group track for your external inputs. Set this to Master out, no inputs, and muted. Using an Instrument Layer, add a HW Instrument device for each of your audio inputs.
In the FX tab of the Instrument Layer you can add a Time Shift device and manually set offsets in relation to other audio sources.
On this group track’s master is where you add another Time Shift device. This is where you create an extra buffer to allow automatic delay compensation to work properly down the line. You would set it to the maximum compensation needed to get your plugins and HW sends properly lined up.
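If it helps to see the arithmetic behind that buffer, here’s a toy version (the path latencies are made-up numbers):

```python
# Toy illustration of the group-master buffer: pad every input path up to the
# slowest one so they all line up. Latency figures are invented for the example.

path_latency_ms = {
    "dry ES-9 input":     3.0,
    "input + plugin FX":  7.5,
    "HW FX send/return": 11.0,
}

buffer_ms = max(path_latency_ms.values())  # the Time Shift value on the group master

for name, lat in path_latency_ms.items():
    # per-chain Time Shift offset relative to the slowest path
    print(f"{name}: shift by {buffer_ms - lat:.1f} ms")
```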

Then add an audio track (not in the group folder) for each input and route that track’s input from Instrument Layer > Instrument Layer chains > HW Instrument device > HW input Main out (or something like that).
These are the tracks you would use for compensated monitoring and further FX processing.

Sorry if this is not very clear, but maybe it will help get you on the right track. It’s a bit tricky; it took me a lot of farting about to get external audio, Overbridge audio, MIDI input, and VSTi > internal and external FX processing to not end up a total trainwreck.

Yup, we are on the same page. Your sync works great from a tempo perspective, but the round-trip latency you experience from Clock Out → External Processing → Audio Return, plus any added Bitwig processing, is the problem for real-time performance.

The real-time part is the crucial piece of info, and it’s why the driver compensation feature in the DAW can’t really help: the alignment adjustments it makes are only present after you hit stop and replay a newly recorded track.

Hopefully, something I’ve said will click in time for your show. I wish I could put it in the context of Bitwig, but my experience is with Ableton. In Ableton, there are many timing features, including one that allows you to manipulate the timing of the output of an individual audio track in relation to the output of all other tracks.

You might try switching from your clock device’s proprietary USB protocol to its audio protocol (if that would even make a difference in how everything gets routed; I’m not well acquainted with any proprietary protocols). If Bitwig can do some track-based, real-time latency adjustments, you might have success. At the very least, you could try adding a line delay plugin to the clock track, which lets you mimic a positive latency adjustment, and perhaps you can then turn that into a negative latency adjustment through how you start the project and all the external devices. Or maybe you could keep the proprietary protocol and still use a delay plugin, if the clock just works as a plugin on the track and is influenced by the delay too.

Good luck!

Are you in the US? I would suggest that you keep up efforts to work with what you have and find an immediate solution, but if you want to have another option waiting, I would be willing to send you my personal CLOCKstep:MULTI as I won’t be needing it right away myself. You can send it back and either exchange it for a new one, or just return it. I would just need you to pay for all the shipping.

(I can’t make that offer if you are not in the US, sorry. That would be cost prohibitive and Customs can always throw a wrench in the works).

2 Likes

Hybrid setups are a recipe for disaster. Sure, if you’re in the studio there are various workarounds to solve latency issues, but on stage you only have 2 options: going 100% DAWless or 100% in the box.

I learned the hard way

1 Like

I think it’s a bit more nuanced than that. The majority of issues seem to occur when you are using the software as your mixer. I can sequence either from Ableton or from my Hapax to both software and hardware destinations, and as long as all elements output to a hardware mixer, you can control and mitigate the latency. I have used a Behringer XR18 for this.

However, if in the same scenario I try using Ableton as the mixer (same XR18, as an audio interface only), then I quickly run into latency issues, as Ableton is not consistent with its latency under variable CPU load. The Hapax can do audio sync, plus per-track/channel sync offsets, and that helps keep the sequencing ultra-tight, but I still end up with variable latency at the output.

3 Likes

These are good points

You reminded me of something else too: negative latency adjustment that occurs outside the box is dependent on project BPM. That’s also why it’s common for clocks that can do negative latency adjustment to implement presets. If you can’t store and recall the exact latency adjustment used for each project, you won’t have a fun time live at all.

I have what might be considered a sort of white paper about this variable, BPM-tied aspect of external latency, and I can post it if anyone wants. It just explains the whys, whats, and wherefores of how negative latency is actually implemented under the covers.
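The gist, in a few lines (24 PPQN MIDI clock assumed; the 10 ms figure is just an example):

```python
# Why out-of-the-box negative latency compensation depends on project BPM:
# a fixed advance in milliseconds is a different slice of musical time at
# every tempo, so each project needs its own stored setting.

LATENCY_MS = 10.0  # example fixed audio-path latency to compensate

for bpm in (90, 120, 150):
    pulse = 60_000 / (bpm * 24)   # one MIDI clock pulse (24 PPQN)
    beat = 60_000 / bpm           # one quarter note
    print(f"{bpm} BPM: {LATENCY_MS / pulse:.2f} pulses early "
          f"({LATENCY_MS / beat:.1%} of a beat)")
```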

1 Like

The “Ableton as a mixer” option is definitely the one we would all like to use in a live set; unfortunately it doesn’t work (due to the problems you mentioned), and the show quickly turns into a nightmare of malfunctions. The other option, using gear and a computer connected to an outboard mixer, also involves bringing (in addition to the mixer) an entire rack of FX, compressors, and various pedals. Basically dismantling the entire studio and moving it on stage, with all the problems of cable connections and the rest. You already know that musicians who play electronic music generally don’t have roadies and sound engineers to do this job.
That’s why, for convenience, an electronic gig should involve little gear and a setup as simple as possible; some smart musicians have flight cases with the gear already wired inside and no computer at all.
Those who don’t want to go crazy bring laptops and controllers instead, to be on the safe side.

1 Like