Why not Multi-timbral?

If there is one thing that still confuses me about new synths these days, it’s this: I guess I expect that computer chip technology has expanded the possibilities, now that we are here in the future, where you can FaceTime someone on your watch. I mean, 25 years ago Nord made 4-part synths with at least 4 voices each (maybe I’m wrong about this, I’ve only owned the NM). Now, years later, most digital synths are bi-timbral at best.
I guess in the end, maybe it’s just marketing and the way things are… This is not about Elektron. I know they make 6-part synths, but what’s the bottleneck here? Are we in the future? Does anyone else wonder these things? Have the chips in synths developed? Actually, looking inside my AK, it seems they have, when you can sound-lock analog presets. Is it all bells and whistles? Are the processes too complicated to support 4+ parts? Too expensive? Who knows. When I see a new synth and it’s some monster spaceship-looking thing… and it only has one part, I move on… and go back to my MnM.

16 Likes

Same feeling about why so many modern samplers are monophonic.

6 Likes

The Kodamo EssenceFM is the most appealing synth for this very reason.

4 Likes

Yeah, my first sampler was the MPC2000 (1997). I think it ran on a 200 MHz chip, had 32-voice polyphony, and could be upgraded to 32 MB of RAM. I mean really, why is RAM still measured in megabytes? I see your point. A valid point. I am only curious. Not trying to bash any companies.

3 Likes

I wonder this all the time. Late-1980s digital rack synths (and some analogs) were mostly multitimbral, and several even had a 6-part mono mode for guitarists (Casio VZ-8m/Oberheim Matrix 1000/Roland MKS-50).

Maybe it’s because nowadays buyers expect a knobby interface and a low price, so multitimbrality had to be scrapped. And several recent synths had issues with multitimbrality (EX5/Tetra/Blofeld), so maybe it’s too much of a hassle.

And regarding polyphony, each voice needs to run through its own filter and VCA, plus individual envelopes and LFOs, so it’s understandable that voice counts stay low on analogs.
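
To make the scaling concrete, here’s a minimal sketch of the per-voice resources being described (the names and structure are my own illustration, not any specific synth’s design). In an analog poly, every field below corresponds to a physical circuit that has to be duplicated for each voice:

```c
#define NUM_VOICES 8

/* One complete signal chain. In an analog synth each of these is a
   physical circuit, so the cost is multiplied by the voice count. */
typedef struct {
    float osc_phase[2];   /* two oscillators per voice              */
    float filter_state;   /* a dedicated filter (VCF) per voice     */
    float vca_gain;       /* a dedicated amplifier (VCA) per voice  */
    float env_level;      /* per-voice envelope generator           */
    float lfo_phase;      /* per-voice LFO                          */
} Voice;

/* The whole voice board: NUM_VOICES copies of the chain above. */
static Voice voices[NUM_VOICES];
```

Digital engines only pay that cost in CPU time and RAM rather than in circuit boards, which is exactly why the lack of multitimbrality there is more puzzling.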

People complain about Elektrons being ‘too expensive’ but IMO they’re a bargain.

2 Likes

Really, if you look at the chips the A4s have in them and think about what they can do, basically controlling all the analog circuits, envelopes, LFOs, etc., all changing per step? It’s mind-boggling. But the digital synths, really… come on. One voice? Multi-timbral? Knobs? Nah, they could do it in 1996.
Maybe someone on here has insight into chip tech and its evolution.

4 Likes

I think part of it comes down to modern workflows. The heyday of those massively multitimbral synths and workstations was the era when computer-based digital multitracking was still out of reach for most home setups, and you needed more simultaneous tracks available over MIDI to flesh out your arrangements.

19 Likes

Hardware is relatively cheap, software is expensive, and software is usually where multitimbrality would come from.
Sadly, because manufacturers (and even uninformed customers) would rather have custom chips and OSes than something built on open “standards” like Linux, a lot of time and money has to be invested into just getting “simple” stuff working on the latest hardware.
I believe there will be a move towards standard “microchips” which are basically just running Linux (Raspberry Pi etc.), leaving instrument makers room to actually make instruments and not write drivers for a 32 MB RAM chip so that it works with their custom 1 MHz processor.

5 Likes

This is interesting. 1 MHz? I wonder why they are still stuck at such a low processor speed. Is this a problem with the industrial chip makers?

I don’t know if that’s an accurate processor speed, to be honest, but as many “modern” synths are operating in the range of 32-64 MB of RAM, I don’t think there would be much point in having multicore GHz-speed CPUs etc. It’s probably a low estimate, but I would honestly be surprised if some of these things are operating at much higher clock rates.

1 Like

Because we’re not ready to move to the next stage of evolution yet. People like familiarity. As soon as you change things, people run in the other direction. And you don’t change a winning formula in business.

2 Likes

Evolution is all backwards I guess. Devo.

7 Likes

The multitimbral synth market was primarily oriented towards romplers, and the market for romplers was eaten alive by software synths and software samplers as general-purpose computers developed.

Consequently, hardware synth sales have shrunk, and much of the industry’s remaining attention is on analog or esoteric digital synths, where polyphony is more costly to achieve.

13 Likes

Now that I think about it, Roland has done a lot of multi-timbral stuff with a built-in sequencer. The MC-101 comes to mind, in such a small box. Maybe it is the workflow. Elektron has multiple parts attached to the sequencer.

Yeah, very true.

1 Like

It could be, in part, because for cost-cutting reasons the hardware is typically spec’d to run the optimized firmware within an inch of its life. For instance, I keep reading that certain features can’t be added to the Digitakt in a firmware update because the hardware is already kind of maxed out with what’s there.

So now we get into signal processing, and the idea that several voices generated from one (mono-timbral) or two (bi-timbral) patches can be optimized, in terms of signal flow and logic flow, to run more efficiently in firmware than the same number of voices generated from, say, eight (multi-timbral) active patches.
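
Roughly the shape of that argument in code (a hypothetical sketch of my own, not any manufacturer’s firmware): with one active patch, parameters can be hoisted out of the per-voice loop, while a multi-timbral engine pays an extra lookup for every voice on every pass:

```c
#include <stddef.h>

typedef struct { float level; float detune; } Patch;
typedef struct { float phase; const Patch *patch; } Voice;

/* Mono-timbral: every voice shares one patch, so patch-derived values
   are resolved once per audio block and reused in the tight voice loop. */
void render_mono(Voice *v, size_t nvoices, const Patch *p,
                 float *out, size_t frames)
{
    float level = p->level;               /* hoisted: one lookup per block */
    for (size_t f = 0; f < frames; f++) {
        float mix = 0.0f;
        for (size_t i = 0; i < nvoices; i++) {
            v[i].phase += 0.01f + p->detune * (float)i;
            mix += level * v[i].phase;    /* stand-in for the real DSP */
        }
        out[f] = mix;
    }
}

/* Multi-timbral: each voice may belong to a different patch, so the same
   values have to be re-resolved per voice, inside the inner loop. */
void render_multi(Voice *v, size_t nvoices, float *out, size_t frames)
{
    for (size_t f = 0; f < frames; f++) {
        float mix = 0.0f;
        for (size_t i = 0; i < nvoices; i++) {
            const Patch *p = v[i].patch;  /* extra indirection per voice */
            v[i].phase += 0.01f + p->detune * (float)i;
            mix += p->level * v[i].phase;
        }
        out[f] = mix;
    }
}
```

On a desktop CPU the difference is negligible, but on a firmware budget that’s already “within an inch of its life” it can be the margin between two parts and eight.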

And then, as previously mentioned in the thread, there’s the design that goes into standalone workflows, both hardware and firmware, which suddenly becomes more complicated and expensive too.

But yes, I do love the way many Elektron devices, such as the lovely Digitone, allow me to freely distribute voices in just about any configuration across tracks. It means less additional hardware is needed.
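
The “freely distribute voices” bit is basically a shared voice pool: any track can claim any free voice, so eight voices can land 4+4, 6+1+1, or all on one track depending on what’s playing. A toy sketch of that allocation idea (my own illustration, not Elektron’s actual code):

```c
#include <stdbool.h>

#define POOL_SIZE 8   /* total voices shared by every track */

typedef struct { bool active; int track; int note; } PoolVoice;
static PoolVoice pool[POOL_SIZE];

/* Claim a free voice for whichever track asks; if the pool is full,
   steal slot 0 (a deliberately crude stealing policy for the sketch). */
int voice_alloc(int track, int note)
{
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!pool[i].active) {
            pool[i] = (PoolVoice){ true, track, note };
            return i;
        }
    }
    pool[0] = (PoolVoice){ true, track, note };
    return 0;
}
```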

2 Likes

I feel similarly about my MnM. Although it’s not multi-part poly, it’s still multi-voice digital. If it weren’t for the OT, I could do everything in it.

It’s the marketing and R&D: it’s about identifying a market (whether it exists or not), then screwing down the hardware and development costs to exactly fit the perceived requirements of this perceived group of possible buyers?

i.e. I assume that something like a Korg bi-timbral synth hits a market point where it has some cool/unique/worthy stuff onboard but also integrates well as a controller/input device in a computer-based environment, which, really, is the mainstream of production.

Of course, I don’t really need to say it on this forum, but the interesting zone is the outlier manufacturers (Mutable Instruments), or initiatives that only ever build prototypes but make their plans available (MIDIBox? etc.).

1 Like

Had no idea. Thanx. Googled.

1 Like

Convergence is also a curse. The big resource that “computers” have in a music context is being able to display stuff on a big screen with familiar controls, which makes it easy to edit, manage samples, etc.

For a standalone music device we can keep adding RAM/CPU and display-size/interface resources, until we end up with… a computer? (This is what I think about when people ask for disk streaming on the current MPC line; not entirely fair, but I still think it.)

Pretty sure this will have affected projects we don’t even know about; if they survived, they transformed into Mac/PC software instead.

2 Likes

This is true… and there’s the historical context. The Nord Modular predates many people’s birthdates, so how would they know? Or even care, as long as it has a million blinky buttons and an over-the-top wavetable that can, through AI, make a new pop song. It’s all that and a bag of chips. Culture is so subjective.