I just saw this and I guess it falls under the ‘other gear’ category? Anyways, really cool!
I think ‘machine learning meets synthesizers’ is the inevitable future.
Machine Learning to design sounds?!?!
But that’s, like, the best part! This is like developing a program that will eat dessert for you!
khaled – Are you building one?
Think of it this way: if modulation can give you a pair of hands, AI (or machine learning) will give you a pair of brains.
But I am just happy that somebody is trying to marry the two. This project is tackling sound design, but the possibilities are endless.
No, I wish I were good at the hardware part, and from what I’ve seen this is not really an entry-level build.
It would be better if a group worked together making this. I looked over the GitHub repo for this and it’s laid out pretty clearly – better than most. You can hire out some of this too, particularly with a little volume (ten units would probably do it). I’ve wanted to experiment with a neural network synth for a while. Would be fun.
Interesting concept, but I have to say I didn’t hear anything particularly unique; it mostly sounded a bit like either FM or a ’90s tone generator. It will be interesting to see how it develops though.
You can check out the synth engine here:
To me it doesn’t seem to completely replace sound design. It morphs the acoustic properties of 4 sounds at a time to create new sounds out of the 4 (a rough sketch of that interpolation idea is below). You could still create the 4 sounds it’s morphing between…
Honestly a Kaoss Pad seems more destructive to your sound design, but that’s just my take on it…
If they teach these things to make tracks, we’d better get our skills ready, boys and girls, haha!
(Edit: maybe a Kaoss Pad is designed to destroy your sound, but I think people will get what I mean – just replace Kaoss Pad with something else people use a lot that drastically changes the sound.)
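To make the “morphing 4 sounds” idea concrete, here’s a minimal sketch of the kind of blend these NSynth-style instruments do: each corner sound is encoded offline into an embedding, and an X/Y position mixes the four embeddings with bilinear weights before decoding. This is only an illustration of the interpolation idea, assuming pre-computed embeddings – the shapes, names, and the idea of calling a decoder afterwards are placeholders, not the project’s actual API.

```python
import numpy as np

def bilinear_weights(x, y):
    """Weights for the four corner sounds, given a morph position x, y in [0, 1]."""
    return np.array([
        (1 - x) * (1 - y),  # corner A (bottom-left)
        x * (1 - y),        # corner B (bottom-right)
        (1 - x) * y,        # corner C (top-left)
        x * y,              # corner D (top-right)
    ])

def morph(embeddings, x, y):
    """Blend four pre-computed embeddings (shape: 4 x time x features) into one."""
    w = bilinear_weights(x, y)
    return np.tensordot(w, embeddings, axes=1)

# Hypothetical usage: embeddings would come from an encoder run offline,
# and the blended result would be handed to a decoder (the slow, pre-computed step).
corners = np.random.randn(4, 125, 16)   # stand-in for 4 encoded sounds
blend = morph(corners, x=0.3, y=0.7)    # morph position from the touch surface
```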
Fun fact: the shop they’re in at the beginning of that video is where I picked up my OT and Rytm…
Skynet.
They should have used more vocal sounds in the demos, I think. I’m guessing the freaky stuff comes out when you’re morphing a cymbal with a lion roaring with Whitney Houston, not as much with a palette of General MIDI vanilla.
Hmm, could be a fun build project
Yeah, I wonder – it’s also possible that the results from something like that just don’t turn out musically useful… Have to say my mind went to vocal synthesis when I saw this too. Also, it seems like most instruments combined might sound a bit like a broken accordion, going by what I’m hearing in this video.
Seems to me that with a little work one could just ditch the custom user-interface hardware. This project is really about the neural network code that processes the audio samples, so strip it back to that. There are lots of options for what to put in place of the UI – probably just a MIDI interface (sketched below). (I have other ideas too.) Then this could be mostly stock hardware and what should be straightforward code changes.
As for the sounds, one needs to keep in mind that we don’t have much to judge this by, and really this is a software technique – especially a neural system – that’s very open to change. One level up and you would be able to teach this to make the sounds you like and avoid the ones you don’t.
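If you did swap the custom touch hardware for MIDI, the control side could be as thin as mapping two CC numbers onto the X/Y morph position. A rough sketch using the `mido` library – the CC numbers and the `set_morph_position` callback here are my own assumptions for illustration, not anything defined by the project:

```python
import mido

CC_X, CC_Y = 20, 21               # assumed CC assignments for the morph pad axes
position = {"x": 0.5, "y": 0.5}

def set_morph_position(x, y):
    # Placeholder: hand the new position to whatever renders or plays the morph.
    print(f"morph position -> x={x:.2f}, y={y:.2f}")

with mido.open_input() as port:   # default MIDI input port
    for msg in port:
        if msg.type == "control_change":
            if msg.control == CC_X:
                position["x"] = msg.value / 127.0
            elif msg.control == CC_Y:
                position["y"] = msg.value / 127.0
            set_morph_position(position["x"], position["y"])
```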
This looks sweet. Agreed that this doesn’t really take away from sound design; it just gives more options to make sounds interact with each other in different ways. It’s like breeding sounds. I’d be putting all sorts through it, morphing samples together. Imagine sending 4 outs from the OT through it and morphing them while you do crossfader stuff. The sounds in the video show a very, very small amount of what’s possible – think bigger than synthesizers, you can put anything through it.
If it can create new sounds from 4 real-time inputs, then that is pretty impressive.
I looked at the web page and the sounds have to be analyzed beforehand…
I think it’s because the neural network / machine-learning code is too heavy to run on the device itself; this machine just takes some of what has already been processed and lets you play with it…
-Note the prototype is played by “Hector”
Please note that this box does not do any of the analysis of the input sounds; it’s more like a player with a nice human interface for the model once it has been computed.
Neural network stuff in general is slow to set up (the learning phase) and much faster to operate once the network is built. The N-Synth as shown here is implemented on a Raspberry Pi, which has only moderate power; faster processors or dedicated neural-network hardware would make this all faster.
That may be the future of this: getting closer to real time. (A toy sketch of the “player only” idea follows below.)
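In other words, the heavy lifting (training and decoding) happens offline, and the box just picks from results computed ahead of time. A toy sketch of that player role, assuming the morph space has been pre-rendered to a grid of WAV files – the 11x11 grid size and the file-naming scheme are made up for illustration:

```python
from pathlib import Path

GRID = 11                          # assumed resolution of the pre-rendered morph grid
SAMPLE_DIR = Path("precomputed")   # assumed layout: precomputed/x{i}_y{j}.wav

def nearest_precomputed(x, y):
    """Snap a continuous morph position to the nearest pre-rendered sample file."""
    i = round(x * (GRID - 1))
    j = round(y * (GRID - 1))
    return SAMPLE_DIR / f"x{i}_y{j}.wav"

# The real-time job is then just triggering playback of that file,
# which a Raspberry Pi handles easily; no neural network runs on the box.
print(nearest_precomputed(0.3, 0.7))
```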
Great that the N-Synth is available for exploration and invention.