This is an inexpensive physical keyboard that does generative music using artificial intelligence. You give it a melody and, using software in the cloud, the music gets completed. There are generative models for pop, rock, jazz, and classical, or you can build your own by training the system.
Whoa. Interesting for them to dive into this market, and in such a way. Amazon has huge infrastructure for deep learning, I'm just surprised that this is one of the first entry points that they chose. They launched a self-driving RC car this year, which seems more in line with the types of products I'd expect. Of course, that's a whole other sub-culture with a big following. Music-making just seems like a fringe-ey, frivolous subject for them. Then again, Amazon is so big, and has so many resources, they can kinda produce whatever they want.
Edit to add: Don't get me wrong - I'm hugely interested in machine learning and its various sub-genres. After skimming the website and reviewing the pricing structure, I'm confused about the end goal. Perhaps I should watch the launch video. It's almost like they're trying to make it possible for everyone to be a musician (quality notwithstanding). In a way, it points to one possible Utopian vision of the future. One where working for a living is obsolete, and we can all be free to make music. I digress though.
A generative musician - which I see as different from a "musician" meaning traditional musician. But the levels for generative musicians will also be expanding, so someone with experience and learning will be able to do things the beginning generative musician doesn't. These ideas are on topic, but headed toward a peripheral topic that perhaps will have its own thread soon…
I have done some self-guided learning at home in these fields, though I've stalled on that for the past year and a half. I'm planning on picking it back up for a specific project in 2020, but I'm still only slightly above neophyte level. At any rate, after watching the intro video, it definitely sounded like he was describing a GAN. Those types of models work well for "artistic" applications. Basically, you train two models. One (the generator) creates a bunch of images/music files/etc. to send to the other model (the discriminator). The discriminator then compares the generated file to what it "knows" based on training files. The discriminator then makes a decision as to whether the generated file is close enough to what it's been trained on. GANs have made huge strides in the past couple of years. I think the "Deep Fake" phenomenon is a product of GANs.
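To make that generator/discriminator loop concrete, here's a deliberately toy sketch in plain Python. It assumes a made-up 1-D "data distribution" (real samples cluster around `REAL_MEAN`), a hand-written scoring rule in place of a trained discriminator network, and a one-parameter generator. None of this is how a real GAN is implemented (those use neural networks and backpropagation on both sides); it just shows the adversarial shape: generate, get scored, reinforce whatever fooled the judge.

```python
import random

# Toy stand-in for the real data the discriminator was "trained" on.
REAL_MEAN = 5.0


def discriminator(x):
    """Score in (0, 1]: how plausible x looks as a sample of the real data.
    A trained discriminator network would learn this; here it's hard-coded."""
    return 1.0 / (1.0 + abs(x - REAL_MEAN))


def train_generator(steps=5000, lr=0.1, seed=42):
    """Nudge the generator's single parameter (its mean) toward the
    samples that the discriminator scored highly."""
    rng = random.Random(seed)
    mean = 0.0  # generator starts far from the real distribution
    for _ in range(steps):
        sample = mean + rng.gauss(0, 1)       # generator proposes a fake
        score = discriminator(sample)         # discriminator judges it
        mean += lr * score * (sample - mean)  # reinforce what fooled it
    return mean


if __name__ == "__main__":
    # The generator's mean should drift toward REAL_MEAN (around 5.0).
    print(f"generator mean after training: {train_generator():.2f}")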
I'm not familiar with SageMaker, but I think it's a cloud platform for deep learning. It's probably the underlying platform for the DeepComposer application.
Anyway, I hope that wasn't too much info (or not enough). It's fascinating to me.
Edit: I forgot to add - here's an example of generated music, a Recurrent Neural Network (RNN) in this case, with a GAN used for the imagery. The genre may not be appealing to most - hell, it could be downright offensive to some, considering the imagery. However, knowing a little about the behind-the-scenes of how this was created, it's still impressive.
That was very helpful and well stated, cold-fashioned. I looked up what G.A.N. stands for: Generative Adversarial Network - and it works exactly like you describe it.
This is what someone using this product would actually have to interface with, so that's totally on topic. Quality of this product, how you'd use it, personal experience with it, how it relates to other similar products, improvements you'd like to see, etc. - all are on topic.
I am anticipating discussion here heading into an AI-music-is-good/bad type discussion (related in type to the frequent Behringer-is-good/bad discussion) - which is what I hope to divert off to a new thread. It's valid as a topic, just peripheral to the DeepComposer in specific.
(Perhaps I'm incorrect to anticipate this. And perhaps there will be no discussion at all in this thread.)
Yeah, I'm wondering the same. Who will pay for it? Sounds like the professional market, but the quality of what they demo'd is really mediocre. Perhaps good enough for B-rate radio commercial jingles. Which could be the target market, though. Weird product.
I look forward to the Beatport generic house/techno version.
As someone who works at an AI startup, I find the topic extremely interesting and at the same time the provided example extremely underwhelming, on an "organs have more exciting accompaniment features than this" level.
Saw an advert for a free AI plugin for Ableton on Reddit last night. Tried to give it a go out of curiosity, but it didn't work for me. Seems to for other people, though. Most of the fun in making music for me is, well, making the music. Couldn't hurt to give it a go though. Link if anyone was curious
Peter Kirn @peterkirn (?) over on the CDM blog really roasts this product reveal - and he's right on all points. This might be why we're all so confused about this product release.
How did they manage to make it sound so terrible, when people have been making much better generative music for decades?
As @bitroast said when I showed this to him: "imagine spending time and money on AI Music and not using it to do IDM"
Speaking of IDM, Jamie Lidell mentioned this on Sonic Talk and reminded me that I'd not yet checked it out since Richard Devine brought it up on Lidell's podcast:
No idea how it was made, but it sounds great. Very Autechre-y, but it's a surprisingly interesting listen, considering each track is an hour long.
I think Nick Cave recently summed up the idea of an AI songwriter pretty well. Seems like a pretty dumb idea to me.
Edit: I suppose that isn't to say an AI-created track can't be interesting, though. As long as you're willing to sacrifice subjectivity as a value.