Gain aGain

Proper :wink:
Nice wave and it seems just fine… (+8 dBu = 8 squares or pixel dots on the vertical line measurement/metering)

1 Like

Not sure what you mean, but to avoid confusing others: the two systems are unrelatable … analog/digital (we can learn empirical relationships for various input devices, or maybe master the LEDs)

The full-scale-nudging signal on the MKI screen will need to be a hotter one for the MKII; each relates to its own input ADC

I just count the square pixels on the vertical line and there are 12 squares in the row… Both waveforms stopped on the 8th… so…
==> maybe not accurate as level metering though :slight_smile:


And of course zooming can throw off the metering, I don’t know

Yeah, I was very sloppy with terminology throughout, and that whole post was stream of consciousness, so anything that’s confusing or outright wrong is on me for sure, and I’ll do my best to clarify it if I can. When I refer to the level of the test tone, that’s its level inside the DAW (because I have no reliable way of metering externally and the whole system isn’t calibrated in any meaningful way).

Yeah, I’m not really sure about that either, to be honest! Looking at it now I don’t think this part makes any sense - the signal is digitized, the -12 dB pad is applied before it’s saved into the record buffer (I assume; I still don’t really have a good idea of what’s going on with the pad), and then the gain attribute adds those 12 dB back, so for all intents and purposes, in the tests I was doing, the pad didn’t exist. Other than those two photos of the OT audio editor, the pad shouldn’t actually be affecting anything I was testing, so that whole statement about “track level at full = 12 dB below unity” seems completely wrong in retrospect - another mistake I made while I was writing it up. Because the pad and the gain attribute theoretically cancel each other out, the signal level immediately after the ADC (pre-pad) should be the same as the signal level coming out of the flex machine (given a +12 dB gain setting in the buffer’s attributes), so unity gain between the OT’s inputs and outputs really was unity gain, and I confused myself while I was writing it up.

Yep, all of the statements about the test tone level in dBFS are only relative to full scale inside my DAW, because that’s the only accurate point of reference I had. This is all specific to my setup, which is why it would be great to see someone else do the same tests and see what their results are. Without real, calibrated test equipment (and real testing experience, which I don’t have) this is all broad strokes, and it’s definitely worth reiterating that a lot so people don’t take my post for something more definitive than it is!

Thanks, I was blanking on the “u” in “dBu” so I just didn’t state any units at all and figured it would get sorted out. My biggest question in all of this is what it means that I was clipping the inputs heavily even when I was padding the test signal by 6 or 7 dB in the DAW, using an interface that is supposedly calibrated to +4 dBu. I don’t actually have any of the documentation for my interface, though; maybe its outputs are actually hotter than that and I just don’t realize it. I had the inputs set to +4 dBu but the outputs are fixed. I’ll track down a manual and see what it says, although since it’s been heavily modded there’s no telling if any of the original specs still apply. My thinking was that doing the loopback test at the beginning amounted to calibrating the output level enough for the purposes of this stuff, at least. Thinking about it now, maybe I should have been considering the -2 dBFS setting on the Reaper tone generator as 0 dB at the interface output for the purposes of all this, and called the -8 dBFS setting that I had to use to avoid clipping the OT “-6 dB”, but I don’t really know if that makes any more sense.

At any rate, assuming everything was more or less correctly calibrated (which is a big assumption), it seems like the OT inputs were clipping a couple of dB earlier than the specs said they should, but well within the likely margin of error for my actual equipment. I guess if I want to make more sense of this I’d have to drag out my voltmeter (which was once very nice but is also around 30 years old and hasn’t been calibrated since 2011, if the sticker that was on the top of it when I got it is accurate, so who knows how accurate it is - calibrating it costs a few hundred dollars so it’s not going to happen) and take actual voltage measurements at the outputs of my interface for the -2 dBFS and -8 dBFS test tone settings I was using.
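If I do get the voltmeter out, this is a minimal sketch of the arithmetic I’d be checking - the +20 dBu output-at-0-dBFS figure below is purely an assumed calibration for illustration, not my interface’s actual spec:

```python
import math

def dbu_to_vrms(dbu):
    """Convert a dBu level to volts RMS (0 dBu = 0.7746 V RMS)."""
    return 0.7746 * 10 ** (dbu / 20)

# Hypothetical calibration: 0 dBFS in the DAW = +20 dBu at the interface output.
max_out_dbu = 20.0

for tone_dbfs in (-2, -8):
    out_dbu = max_out_dbu + tone_dbfs  # tone level is relative to full scale
    print(f"{tone_dbfs} dBFS tone -> ~{out_dbu:+.1f} dBu -> ~{dbu_to_vrms(out_dbu):.2f} V RMS")
```

If the measured voltages land well away from whatever the manual implies, that would at least tell me how far off my “calibration” assumption is.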

In fact, maybe I should actually do that if I have time this evening.

EDIT: your next post, with the tests using the A4, pretty much covered that actually, but I might still take some measurements if only to find out what 0dbfs in my DAW actually corresponds to at the outputs on my interface. But probably not today.

2 Likes

One other thing I observed but didn’t mention was that the test signal doesn’t appear to be distorting appreciably in the OT’s analog signal path - the test tones we’re using appear completely clean right up to where they start clipping the inputs. Obviously, distortion in the analog section could be a lot more program dependent - I’ve had more than one piece of gear that distorts much more readily at some frequencies than others - but it does mean there probably isn’t much to worry about in terms of driving the analog signal path too hard like there is in some gear. It looks like you’ll be clipping the ADC long before you get any significant analog distortion.

EDIT:

My biggest questions right now are:

Where exactly does the -12 dB pad get applied, and how does that correspond to the audio editor display? Is “full scale” at the default zoom actually -12 dBFS in the OT’s internal gain structure? If not, how is it possible that we’re able to approach or reach full scale in the record buffer if there’s a -12 dB pad being applied? It seems to me that there are three possibilities here:

  1. Full scale inside the record buffer is lower than the level at which the ADC clips, and the audio editor scales the waveform up by default (easily testable by saving a test tone that reaches 0 dBFS in the audio editor display and opening it in a DAW to see if it’s actually reaching 0 dBFS, or if it was padded to -12 dBFS peaks and the editor’s default zoom isn’t actually showing us what we think it’s showing us - see the sketch after this list)
  2. The -12 dB pad isn’t applied destructively to the recorded audio, and is really just a matter of what the gain attribute for a sample means - in this scenario, a gain attribute of +12 would actually be unity and a gain attribute of 0 would actually be -12 dB, but the audio recorded into a buffer wouldn’t get padded at all
  3. There’s no padding happening at all, the LEDs are calibrated to go into the red around 12 dB below the point where the ADC clips, and it’s all a bit of misdirection to keep people from clipping the inputs.
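A minimal sketch of the check described in possibility 1 - save the buffer to the card, pull the WAV onto a computer, and read its actual peak level (the file name is a placeholder, and it assumes 16-bit PCM):

```python
import math
import wave

import numpy as np

# Placeholder path - substitute whatever file you saved from the record buffer.
with wave.open("ot_record_buffer.wav", "rb") as wav:
    assert wav.getsampwidth() == 2, "sketch assumes 16-bit PCM"
    frames = wav.readframes(wav.getnframes())

samples = np.frombuffer(frames, dtype=np.int16)
peak = max(int(np.abs(samples.astype(np.int32)).max()), 1)
peak_dbfs = 20 * math.log10(peak / 32768)

# Peaks near 0 dBFS would suggest no destructive pad; peaks around -12 dBFS
# would suggest the pad really is written into the recorded file.
print(f"Peak: {peak_dbfs:.1f} dBFS")
```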

Where are the points in the OT’s internal signal path where the signal can clip? I was able to clip it internally easily using the AMP level, but how is it structured? Are there points in the path where it processes in floating point and can’t clip, with bottlenecks where it’s converted to fixed point and can clip, or is the entire signal path fixed point? And does that actually make much practical difference? My suspicion here is that the whole thing is fixed point, because if any of it were floating point, why wouldn’t it all be? I’m not sure how we could really test this, though, since the AMP level is the only point where it’s convenient to increase the level cleanly - the next point in the signal path where it could be done is effect 1, but the only effect I can think of that can boost levels much is the distortion in the lo-fi effect, and since that also distorts it might be hard to tell what is the effect and what is unwanted clipping. I guess playing back a sample that peaks at 0 dBFS and then boosting it just a bit with the distortion should (if the whole signal path is fixed point) push it into clipping long before the distortion effect is doing much, so it should be easy to tell the difference, but I won’t be able to actually try that for a few days.

For now I’m going to assume that the whole internal signal path for a given track is 24-bit fixed point, and that whatever your headroom is coming out of a machine, that’s your headroom for the entire track, and you have to gainstage deliberately, like an old DAW from the early 2000s before they all went to a fully floating point signal path (except Pro Tools, which insists on staying a decade behind its competition and was still fixed point up until v11 came out a couple of years ago!).
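To make the fixed-versus-floating-point distinction concrete, here’s a toy simulation - not the OT’s actual signal path, just the general behaviour I’m describing:

```python
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
full_scale_sine = np.sin(2 * np.pi * 440 * t)  # peaks at 1.0, i.e. 0 dBFS

def boost_float(x, gain_db):
    """Floating point: intermediate values can exceed 1.0 without damage."""
    return x * 10 ** (gain_db / 20)

def boost_fixed(x, gain_db):
    """Fixed point: anything past full scale is hard-clipped immediately."""
    return np.clip(x * 10 ** (gain_db / 20), -1.0, 1.0)

# Boost a full-scale signal by +6 dB, then cut it back by -6 dB.
recovered_float = boost_float(boost_float(full_scale_sine, +6), -6)
recovered_fixed = boost_fixed(boost_fixed(full_scale_sine, +6), -6)

print(np.allclose(recovered_float, full_scale_sine))  # True: no harm done
print(np.allclose(recovered_fixed, full_scale_sine))  # False: clipping baked in
```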

My new approach based on this is going to be:

  • Record as hot as possible, aim for orange with peaks in the red and as little in the green as possible.
  • Keep the AMP level at unity or lower
  • Leave track levels at their default 108 and boost the main output level to +12 for unity gain when sampling from the inputs (have to test thru machines and internal sampling and see if they work the same way, though)
3 Likes

I need to read through all this still, great tests by the way, thanks!

One thing I’d like to mention is that an Apogee employee recommended I use the -10 dBV setting on my interface to send to the OT, after I gave them the specs of the OT MK1’s inputs…

I have an Ensemble Thunderbolt; the manual shows the max output level the unit will send out using either the +4 or -10 reference.
At the +4 dBu setting, the unit will output +20 dBu max (when the DAW meters read 0 dBFS)
At the -10 dBV setting, the unit will output +6 dBV max (when the DAW meters read 0 dBFS)

Using this handy dBu, dBV, voltage calculator: http://www.sengpielaudio.com/calculator-db-volt.htm

I can plug in the +6 dBV max from the -10 reference and see that it is 8.2 dBu.
The OT MK1’s inputs are rated at 8 dBu, so this is just about right…
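For anyone who wants to sanity-check that conversion without the web calculator, dBu and dBV differ only by a fixed offset (0 dBu = 0.7746 V RMS, 0 dBV = 1 V RMS, so dBu ≈ dBV + 2.2):

```python
import math

def dbv_to_dbu(dbv):
    """dBu and dBV are both 20*log10(V/ref); only the reference voltage differs."""
    return dbv + 20 * math.log10(1.0 / 0.7746)  # offset ≈ +2.218 dB

print(f"+6 dBV ≈ {dbv_to_dbu(6.0):.1f} dBu")  # ≈ +8.2 dBu
```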

This means that with my Ensemble Thunderbolt at the -10 dBV setting, a signal sent out of it should clip the OT’s inputs just before my DAW meter hits 0… It also means the +4 reference setting is way too hot…

1 Like

Interesting, that would explain why I had to pad the test tone in the DAW first, and it lines up with Avantronica’s experience using the CV output from the A4, too. I can switch the inputs between -10 and +4 reference but not the outputs so it wasn’t really an option for me.

This might explain why some people find it really easy to clip the OT’s inputs and others don’t have trouble with it. I generally assume that any gear aimed at professional users is going to be +4 but it actually makes sense for the OT to run at -10 because it will be used for a wide variety of sources and it’s always better to attenuate a source that’s too hot than to boost a source that’s not hot enough.

According to the manual, my Casio CZ-101 has a maximum output of 1.2 V RMS, which according to that calculator is about 1.6 dBV - well under the 5.8 dBV (1.95 V) that the OT’s inputs are rated for, but I can easily drive the OT’s inputs into audible digital clipping with it in the project I’ve been using lately. I’ll have to double check and see if I’m clipping the inputs or if my gainstaging in the OT can be tweaked to keep it from happening.
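A rough check of that headroom figure, using the same references as above (the CZ-101 and OT numbers are just the specs quoted here, so treat this as a sketch):

```python
import math

def vrms_to_dbv(v):
    """0 dBV = 1 V RMS."""
    return 20 * math.log10(v)

cz101_max_dbv = vrms_to_dbv(1.2)  # ≈ +1.6 dBV per the quoted spec
ot_input_max_dbv = 8.0 - 2.2      # the +8 dBu rating expressed in dBV ≈ +5.8 dBV

print(f"CZ-101 max output: {cz101_max_dbv:.1f} dBV")
print(f"Nominal headroom at the OT input: {ot_input_max_dbv - cz101_max_dbv:.1f} dB")
```

About 4 dB of nominal headroom, which is why clipping it anyway points at my gainstaging rather than the synth’s output level.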

1 Like

" Hey Beginners we promise to clarify this thread at some point with simple explanations and advises ":joy:


For now it’s scientists & analysts in there :stuck_out_tongue:

15 Likes

Haha, I was gonna say yesterday that this will probably have to get more confusing for a bit before we can turn it around and make sense of it all! :wink:

4 Likes

Hell, I’m confused enough by my own posts in this thread as it is.

3 Likes

I was surprised about not using +4, and rapped with Don from Apogee, who was very knowledgeable and explained why the -10 setting was preferable…

One of the key points I picked up is that the +4 and -10 settings are reference settings.

Every audio interface will have a max output at each setting, and it’s this max output that is produced when your meters in the DAW hit 0…

So by consulting manuals and using the converter I posted above, you can find out how many dBu, dBV, or volts an audio interface produces from its outputs when the meters hit 0 in the DAW for each reference setting, and then determine the appropriate setting to use for whatever device you’re feeding, based on its input specs…
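As a rough sketch of that procedure (the Ensemble Thunderbolt figures are the ones quoted from the manual above, and the OT rating is the +8 dBu spec mentioned earlier):

```python
def headroom_db(interface_max_out_dbu, device_max_in_dbu):
    """Positive result: the interface can clip the device before the DAW meter hits 0."""
    return interface_max_out_dbu - device_max_in_dbu

ot_max_in_dbu = 8.0
settings = {
    "+4 dBu reference": 20.0,        # max output in dBu, per the manual
    "-10 dBV reference": 6.0 + 2.2,  # +6 dBV max, converted to dBu
}

for name, max_out in settings.items():
    print(f"{name}: interface 0 dBFS lands {headroom_db(max_out, ot_max_in_dbu):+.1f} dB "
          f"relative to the OT's clip point")
```

With these numbers the -10 setting puts the interface’s full scale right at the OT’s clip point, while the +4 setting overshoots it by about 12 dB.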

1 Like

Just wanted to get to the heart of why this was all of interest to me: it’s being able to rely on the immediacy of using the OT to capture short ideas/moments without resorting to hooking up an interface or field recorder, etc.

Buoyed by the epiphany that previous assumptions are no obstacle, I’ve arranged a quick’n’dirty A’n’B test for willing participants. It’s hidden just now, just to keep it to focused replies … essentially it’s a chance for all those folk banging on about how bad the OT inputs/mixer are to show in a blind test whether they’re hearing this degradation or not, depending on what they listen to (as opposed to what they expect)

To be honest, it was quickly assembled and I mostly just hear my own personal Mika Vainio gig in my ears all day anyway, besides the passing traffic, so I’ve not had a chance to listen properly, but I do know what’s what and whether I’ve been mischievous or not - what do your ears tell you?

I’ll open that thread soon. Anyway, back on topic


1 Like

I’m thinking it could all be fixed point maths inside… which do those DSP chips prefer, anyway? I actually wouldn’t mind a fixed point design, it has a “sound” lol

If anything, this thread has already clarified some key points for me:

  • I will not be using the mixer input gain settings from now on (no point); better to adjust pre-OT
  • Track level faders are attenuators only
  • Okay to redline the inputs a bit

I really appreciate all this, guys! I wish this sort of curiosity was more commonplace tbh (is there something wrong with me? :diddly: )

A post was merged into an existing topic: OT AB test : comparison between source / capture

If anyone’s interested in doing it a bit more formally, this plugin for Foobar2000 works great:

https://www.foobar2000.org/components/view/foo_abx

Back when I was trying to decide between getting a JV1080 and JV2080 I downloaded a bunch of level-matched, high quality, raw waveform comparisons that were posted on a site and purported to show that both machines sounded identical, and was able to tell the difference between them with about 90% accuracy (which was great because I consistently preferred the sound of the 1080, so I saved myself $100). Which isn’t really relevant, except to say that I’ve used it and it works really well for self-administering blind ABX tests.

EDIT: topic got split while I was typing this, oops!

2 Likes

Yeah, I would be surprised if it wasn’t - it wouldn’t make sense to convert between the two, and it wouldn’t clip internally if it were all floating point. There are a whole lot of beloved hardware effects that use 16-, 18-, or 24-bit fixed point DSP, so there’s certainly no reason it wouldn’t sound good, but it does mean that gainstaging is different from the gainstaging in a modern DAW. And since the OT has a more complicated signal path than a lot of digital hardware, there are a lot more opportunities to accidentally clip stuff, especially if you’re used to floating point gainstaging, where you really only have to worry about clipping at the inputs and outputs.

I was also thinking about it having a “sound”, and also that, knowing it’s possible to clip the actual DSP signal path at various points, there’s potential for creative abuse. Off the top of my head, if your input signal or sample is peaking near 0 dBFS already, the AMP volume suddenly becomes a digital clipper, for example - use an LFO modulating AMP volume to rhythmically push it in and out of clipping, and put a compressor in effect 1 set to smash it a bit to keep the output level even and round off the clipping a bit. Haven’t tried it, but it might be a cool trick.
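Purely as a sketch of the idea - an offline simulation, not the OT itself, and the LFO rate and depth are arbitrary:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 110 * t)  # a sample already peaking at 0 dBFS

# Slow LFO sweeping the "AMP volume" between roughly -6 dB and +6 dB.
lfo_db = 6 * np.sin(2 * np.pi * 2 * t)
gained = signal * 10 ** (lfo_db / 20)

# Fixed-point style hard clip at full scale: the signal gets pushed in and
# out of clipping as the LFO rises above 0 dB.
clipped = np.clip(gained, -1.0, 1.0)
print(f"{np.mean(np.abs(gained) > 1.0) * 100:.0f}% of samples were driven past full scale")
```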

1 Like

True. Why don’t you write a practical sum-up?
:stuck_out_tongue_winking_eye:

2 Likes

Because I’m an observer on this one… I done nothing…


5 Likes

So. Did you guys ever reach a conclusion on how to get a signal through the Octatrack without losing volume or quality? A newb guide would have been awesome. Was it posted elsewhere?

If so, where? And if not… what are the settings that work? Where do you boost the signal to keep it clean?

3 Likes

Would also like to see a final conclusion. But I guess ain’t nobody got time for this?