I’ve been thinking lately about how to use the Digitakt as a “track arranger,” and this is my plan. I haven’t really seen anyone talk about this, so I’m just putting it out there. Yes, it has its drawbacks, but it might be good enough for some people:
DISCLAIMER: I haven’t actually had time to do this yet, so at the moment it is only theory. I think it is pretty solid, but there might be some “gotcha” somewhere that ruins the whole thing!
Resample or record each track as an audio loop: 1 bar, 4 bars, 8 bars, whatever. The 33 seconds of sampling time is enough to do this.
Reduce the pattern scale and/or tempo (both if necessary) to the point where trigs no longer represent individual steps, but entire measures.
Now use each trig to sequence when each audio track will play.
Since the DT doesn’t time-stretch audio when the BPM changes, your audio keeps its original BPM.
The BPM needs to be reduced in half subdivisions of whatever your starting BPM is for things to stay lined up. So make sure to start from a BPM that can be halved cleanly at least a few times, like 120 to 60 to 30 to 15 to 7.5.
From the math I’ve done, if you start with a 120 BPM pattern and that’s what you commit to audio tracks, you can reduce the pattern scale and pattern BPM to the point where you have about 8 minutes of time to work with before the pattern repeats. That’s more than enough time for most songs.
You can “subdivide” it even further if you want each trig to represent MULTIPLE measures, like 1 trig = 4 measures’ worth of time. That moves you from a measure-based view to a multi-measure, phrase-based view. Then you end up with something like 16 or 20 minutes of time, I forget exactly, possibly even more; I stopped doing the math after the 8-minute calculation above. I was only concerned with figuring out whether you could get a full song’s worth of time to work with.
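To make the time math concrete, here’s a rough back-of-the-envelope calculator (just a sketch, not anything Digitakt-specific: the step counts, tempos, and scale values below are assumptions you’d plug in from your own setup, and whether the machine actually lets you combine a given tempo and scale is something to verify on the hardware):

```python
def step_seconds(bpm: float, scale: float = 1.0) -> float:
    """Length of one sequencer step (a 16th note at 1x scale)."""
    return (60.0 / bpm) / 4.0 / scale

def pattern_seconds(steps: int, bpm: float, scale: float = 1.0) -> float:
    """Total time before a pattern of `steps` trigs repeats."""
    return steps * step_seconds(bpm, scale)

# Source audio: one 4/4 bar at 120 BPM lasts 2 seconds.
print(pattern_seconds(16, 120))            # 16 steps = 1 bar = 2.0 s

# "Zoomed out", 1 trig = 1 measure: with 2-second steps
# (e.g. BPM halved 120 -> 30 and scale at 1/4x),
# 64 trigs give you about 2 minutes before the pattern repeats.
print(pattern_seconds(64, 30, scale=1/4))  # 128.0 s

# Phrase-based view, 1 trig = 4 measures: with 8-second steps
# (e.g. BPM at 15 and scale at 1/8x), 64 trigs give you
# roughly 8.5 minutes before the pattern repeats.
print(pattern_seconds(64, 15, scale=1/8) / 60)  # ~8.5 min
```

The exact figures depend on how many steps your pattern has and how low the tempo and scale will actually go, but the formula makes it easy to check whether a given combination gets you a full song’s worth of time.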
You can lock the AMP and FILTER envelopes to do automated fade-ins or fade-outs on a per-measure basis with parameter locks. You can probably use an LFO for the same purpose. EDIT: Actually, I’m not sure about the amp/filter envelopes, since they might not slow down with the BPM/pattern scale, so they might not end up slow enough for long fades. I’ll have to test this. A one-shot LFO should still work, though, I think.
My main concern is how super low tempos will affect delay and LFO times, but I have yet to experiment with that.
Also, you lose “per-step” parameter locks such as panning. If you had panning jumping all around on an individual track in your original pattern, you’re going to lose that when you resample (because the recording is mono). You can still pan the entire audio track, and I assume you can still use an LFO for automated panning. You still have parameter locks, obviously, but they become “macro”-level, per-measure locks rather than per-step locks.
You can still live-perform stuff like filter sweeps and quick reverb/delay spurts. I actually already tested this: even after a long audio file is triggered, you can turn up reverb and delay and it will still “take effect” on whatever part of the audio is playing at that moment.
So yeah, that’s about it. Basically, you’re “zooming out” from a micro-editing perspective: once you’re done with all the micro editing and polishing and want to move on to arranging the tracks into a sequence, resample to audio, “zoom out” by reducing the BPM in subdivisions, the pattern scale, or both, then use the trigs to sequence the full multi-measure audio tracks.
It would probably make sense to copy to a new pattern before changing “modes” like this, so you don’t lose your original “individual steps” sequence.
The only other thing I can think of is this will probably use a lot of drive space, but so be it.
EDIT: Also forgot to mention: you can record different variations of each audio track (or hell, an entirely different kind of instrument), and trigger a different audio clip each new measure. It doesn’t have to be the same thing over and over.