Resynthesis has been around for a while, too. With ML, it might become easier to apply it to more complex synthesis engines. Of course, there is also a cost trade-off here: why not simply sample the sound you want, or combine sampling with synthesis? So how many companies would be willing to invest in this area?
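For context, here is a minimal sketch of the classic (non-ML) approach: analyze a sound frame by frame with an FFT, keep the strongest partials, and drive a bank of sine oscillators with them. The frame size, hop, and partial count are arbitrary illustrative values, not taken from any real product, and a serious implementation would also track phase across frames.

```python
# Naive peak-picking resynthesis sketch (numpy only).
import numpy as np

def resynthesize(x, sr, frame=2048, hop=512, n_partials=20):
    window = np.hanning(frame)
    out = np.zeros(len(x))
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame] * window
        spectrum = np.abs(np.fft.rfft(seg))
        # keep the n_partials strongest bins as sinusoidal partials
        peaks = np.argsort(spectrum)[-n_partials:]
        t = np.arange(frame) / sr
        frame_out = np.zeros(frame)
        for k in peaks:
            amp = spectrum[k] / (frame / 2)  # rough amplitude normalisation
            frame_out += amp * np.sin(2 * np.pi * freqs[k] * t)
        out[start:start + frame] += frame_out * window  # overlap-add
    return out

# usage: y = resynthesize(some_mono_signal, 44100)
```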
Didn’t the devs of the Hydrasynth use machine learning (ML) internally to emulate the filters of various synths? I remember hearing something like that in the early YouTube videos. So this already seems to be a reality, and it wouldn't particularly need any “AI”, “only” some training data and a decent model that, after training, translates into relevant DSP settings.
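I have no insight into what the Hydrasynth team actually did, but the general shape of “training data in, DSP settings out” could look something like this sketch: fit a digital filter's coefficients by gradient descent so its magnitude response matches a measured target response. The target here is a synthetic stand-in; in practice it would be a response captured from the hardware filter being emulated.

```python
# Hypothetical sketch: fit one biquad's coefficients to a target response.
import torch

sr = 48_000
freqs = torch.linspace(20, 20_000, 256)
w = 2 * torch.pi * freqs / sr
z = torch.exp(1j * w)  # evaluation points on the unit circle

def biquad_mag(b, a, z):
    """|H(z)| for H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2)."""
    num = b[0] + b[1] / z + b[2] / z**2
    den = 1 + a[0] / z + a[1] / z**2
    return (num / den).abs()

# Stand-in "measurement": a gentle lowpass rolloff around 2 kHz.
target = 1.0 / torch.sqrt(1 + (freqs / 2_000) ** 2)

b = torch.tensor([0.5, 0.0, 0.0], requires_grad=True)
a = torch.tensor([0.0, 0.0], requires_grad=True)
opt = torch.optim.Adam([b, a], lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    # compare log magnitudes, roughly matching how we hear level differences
    loss = ((biquad_mag(b, a, z).log() - target.log()) ** 2).mean()
    loss.backward()
    opt.step()

# The fitted coefficients are the "DSP settings"; a real implementation
# would also constrain the poles to keep the filter stable.
print(b.detach(), a.detach())
```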
Generating music seems easier with deep learning today than generating sound, though: the training data already exists thanks to music notation and, more importantly, MIDI data. All you have to provide as input is music in MIDI format and then define the problem as: “predict the next note based on the previous note or sequence of notes”, similar to predicting text. That way you avoid expensive manual labelling of data. This is the same self-supervised idea behind NLP models these days: GPT-style models such as ChatGPT predict the next token, while BERT predicts masked tokens.
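As a toy illustration of that framing, here is a minimal next-note model in PyTorch. The NoteLM class, its dimensions, and the looped-scale “corpus” are all made up for the example; a real setup would tokenize actual MIDI files (pitch, duration, velocity) instead.

```python
# "Predict the next note" treated exactly like next-token language modelling.
import torch
import torch.nn as nn

VOCAB = 128  # MIDI pitch numbers 0..127 as the token vocabulary

class NoteLM(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, notes):            # notes: (batch, time) ints
        h, _ = self.rnn(self.embed(notes))
        return self.head(h)              # logits: (batch, time, VOCAB)

# Toy corpus: a C major scale looped, standing in for real MIDI data.
scale = [60, 62, 64, 65, 67, 69, 71, 72]
seq = torch.tensor(scale * 16).unsqueeze(0)

model = NoteLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(300):
    opt.zero_grad()
    logits = model(seq[:, :-1])          # predict note t+1 from notes <= t
    loss = loss_fn(logits.reshape(-1, VOCAB), seq[:, 1:].reshape(-1))
    loss.backward()
    opt.step()

# Generation: feed a seed note and repeatedly sample the next one.
notes = [60]
for _ in range(16):
    logits = model(torch.tensor([notes]))[0, -1]
    notes.append(int(torch.multinomial(logits.softmax(-1), 1)))
print(notes)
```

No labels anywhere: the sequence itself supplies the targets, which is exactly why this framing is so much cheaper than hand-annotated training data.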