Jukka
I found it: it was a research project where they created settings for u-he Diva from a sample. I posted about it here:
Audio data as video data representation - #38 by Jukka
The method is neural-net based: they train the system to recognize spectral images.
There are other versions that do something very similar.
This one comes with source code, but I think you need to train the network yourself:
https://jakespracher.medium.com/generating-musical-synthesizer-patches-with-machine-learning-c52f66dfe751
And here’s a research paper that may well be the original source for the other two. It’s a PDF and is very technical.
Seems to me a hardware maker could do something similar: you upload a short sample, and they return a patch for your synth, or an image of the settings to work from if the synth has no patch storage.
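Just to make the idea concrete, here's a toy sketch of the sample-to-patch pipeline: render a "spectral image" (magnitude spectrogram) of a sound, then fit a model that maps that image to a vector of patch parameters. Everything here is my own stand-in: the fake `render` synth, the three made-up parameters, and the single linear layer (the real projects use trained neural networks), so treat it as an illustration of the shape of the approach, not any project's actual method.

```python
import numpy as np

def spectral_image(audio, frame=256, hop=128):
    """Magnitude spectrogram: the 'spectral image' the model sees."""
    frames = [audio[i:i + frame] * np.hanning(frame)
              for i in range(0, len(audio) - frame, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def render(params, n=2048, sr=8000):
    """Stand-in for a synth: pitch and level follow the (fake) parameters."""
    t = np.arange(n) / sr
    freq = 100 + 400 * params[0]           # "osc tune"
    return np.sin(2 * np.pi * freq * t) * (0.5 + 0.5 * params[1])  # "level"

# Toy training set: random patches (3 normalized params) and their renders.
rng = np.random.default_rng(0)
patches = rng.random((64, 3))
images = np.stack([spectral_image(render(p)).ravel() for p in patches])

# One linear layer fit by least squares: spectral image -> patch parameters
# (a real system would use a CNN here, trained the same way end to end).
W, *_ = np.linalg.lstsq(images, patches, rcond=None)

# "Upload a sample, get a patch back": predict params for a new sound.
target = rng.random(3)
pred = spectral_image(render(target)).ravel() @ W
print(pred.shape)  # a 3-element predicted patch
```

The real systems differ mainly in scale: thousands of rendered patches instead of 64, a convolutional network instead of one linear layer, and a real synth engine instead of the sine stand-in.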