I have done some self-guided learning at home in these fields, though I've stalled on that for the past year and a half. I'm planning on picking it back up for a specific project in 2020, but I'm still only slightly above neophyte level. At any rate, after watching the intro video, it definitely sounded like he was describing a GAN. Those types of models work well for "artistic" applications. Basically, you train two models. One (the generator) creates a bunch of images/music files/etc. to send to the other model (the discriminator). The discriminator then compares the generated file to what it "knows" based on training files and decides whether the generated file is close enough to what it's been trained on. GANs have made huge strides in the past couple of years; I think the "Deep Fake" phenomenon is a product of GANs.
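To make the generator-vs-discriminator idea concrete, here's a minimal toy sketch of that adversarial loop in plain NumPy. This is an illustration under simplifying assumptions, not how DeepComposer or any real GAN library does it: the "real" data is just numbers drawn from a normal distribution centered at 4, the generator is an affine map of noise, the discriminator is logistic regression, and all the names and hyperparameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: g(z) = a*z + b, starting far from the real distribution.
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c), outputs "probability x is real".
w, c = 0.0, 0.0

lr, batch = 0.02, 32
for step in range(3000):
    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    real = rng.normal(4.0, 1.0, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # Gradients of the binary cross-entropy loss w.r.t. w and c.
    grad_w = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    # Gradient of -log d(fake) w.r.t. the generator params, via the chain rule.
    upstream = -(1.0 - d_fake) * w
    a -= lr * np.mean(upstream * z)
    b -= lr * np.mean(upstream)

gen_mean = float(np.mean(a * rng.normal(0.0, 1.0, 1000) + b))
print(f"generated mean after training: {gen_mean:.2f}")  # drifts toward the real mean of 4
```

The point of the sketch is the alternating structure: each model only improves by exploiting the other's current weaknesses, which is the same dynamic (at vastly larger scale) behind deepfakes and generated music.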
I'm not familiar with SageMaker, but I think it's a cloud platform for deep learning. It's probably the underlying platform for the DeepComposer application.
Anyway, I hope that wasn't too much info (or not enough). It's fascinating to me.
Edit: I forgot to add - here's an example of generated music, from a Recurrent Neural Network (RNN) in this case, with a GAN used for the imagery. The genre may not be appealing to most; hell, it could be downright offensive to some, considering the imagery. However, knowing a little about the behind-the-scenes of how this was created, it's still impressive.