As I was reading this thread I was thinking this is an instance where physically routing the CUE outputs to the audio inputs to do a ‘fake SRC3’ feeding the ‘fake neighbor’ could be useful. In my imagined scenario the fake neighbor would be a fully wet delay channel, so the latency this technique introduces wouldn’t be as inherently problematic. Never actually tried something like this though.
The advantage is the possibility to add more FX, and no need to add +1/384 microtiming for playback…
Works well with internal SRC3 CUE rec…
Yep, I mentioned it here:
hehehehe I actually forgot for a minute when starting to read this thread that the delay can go fully wet, until I realized I make the delay fully wet every single time I set it up for beat repeats.
Main thing I wanted to point out for anyone who goes down this path: in this particular esoteric fake SRC3 CUE > fake neighbor scenario, if it’s not a fully wet delay channel, the latency this introduces may be problematic for your mix depending on what you’re trying to do, and can/will result in phasing issues.
I used that Flex / Neighbour with CUE principle here :
A cowbell-only track with reverb, through a pitched delay feedback into reverb or something like that. Kind of a filtered, panned shimmer reverb.
A 384th note at 120 bpm is about 5.2 milliseconds, so yeah, if your CUE outs and some inputs are free it sounds like a better way to go…
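For anyone checking that figure: it falls out of simple tempo math, assuming the OT’s +1/384 microtiming step is 1/384 of a 4/4 bar (my reading of it; the function name here is just illustrative):

```python
def microtiming_step_ms(bpm: float, steps_per_bar: int = 384) -> float:
    """Length of one microtiming step in milliseconds.

    Assumes a 4/4 bar, i.e. four quarter notes per bar.
    """
    bar_seconds = 4 * 60.0 / bpm            # one bar at the given tempo
    return bar_seconds / steps_per_bar * 1000.0

print(round(microtiming_step_ms(120), 2))   # one 1/384 step at 120 bpm -> 5.21
```

At 120 bpm a bar lasts 2 seconds, and 2 s / 384 ≈ 5.2 ms, matching the number above.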
Just (re)checked, latency seems to be around 136 samples (3 ms) with a cable between the CUE out and the inputs.
Yes, now we’re getting really geeky.
Is that input + output latency then? Would it be halved for a signal just coming into the inputs?
Coming in on flex/buffer?
I still have some questions from here: do neighbor machines increase latency?
I think so.
I don’t know.
I don’t. It’s not just AD/DA latency, it’s internal buffering, and that would remain the same. If it were converter latency, that 3 ms figure would be ridiculously high. Typically converter chip latency is a couple of samples; high-end converters like those used by RME have sub-0.3 ms latency (around 10 samples @ 48 kHz).
Who talked about converter latency?
Don’t know what the buffer length is, but it could be 64 samples, because that is the minimum selection you can make for Trim / Slices.
… and your stated “around 136” samples of combined latency lines up pretty nicely with twice 64 samples.
Personally I’m just interested in user-experienced latency, the actual time it takes, which I guess would be converter + buffer, or something along those lines. Off topic now, but really I’m just curious how long the delay is between when a signal arrives at the inputs and when it comes out the outputs with the various monitoring methods…
So how would you go about designing a machine such as the OT, then? Without input and output buffers? Of course it’s a guess, but it’s based on my technological understanding of how to build stable audio/streaming platforms, not just pure fictional correlation of random things…
Sorry, but I don’t understand the question… which post are you referring to, exactly?
Your latest reply, of course, where you quoted my post in combination with the link about “fictional” correlations.
Is it really fictional that a value of ~134 samples may be caused by two buffers, each 64 samples long?
That was meant to be humorous. It could be related, or not… only the Elektron engineers know the architecture, and I certainly do not.
Some are gone, from what the rumours say; maybe the rest have forgotten since. Maybe nobody knows now. Maybe they don’t understand anymore what they did…
Ok so we need answers! Maybe they don’t understand anymore what they did anyway.
(I’d like to ask them about some totally illogical choices in MD too, like song menu)