Close enough
agreed - so probably the answer is "junked old laptops running Windows 7"?
Oh the horror.
maybe we're a little ways off from this, but when I was a kid I always thought about a future in which we had technology that can output the musical thoughts you have in your head…
check out this guy who uses his prosthetic limb to output CV from his thoughts to his eurorack
haha my bad. FIXED!
The internet.
It used to be that there would be a couple of really culturally dominant genres at any given time. In the last couple of decades, we've seen genres rise and fall in record time. A certain "sound" goes from underground, to mainstream, to played out, in maybe a couple of years rather than a decade. Plus we now have this crazy granularity where entire scenes pop up and develop followings in a decentralised way, existing almost totally online (e.g. synthwave, lo-fi beats, etc.). The dominance of a few monolithic genres has been replaced by a sea of particulate genres, all floating around and occasionally interfering with one another.
I know this isn't really what you were asking, but I think we've reached a point of diminishing returns with music technology - we aren't gonna have another sampler revolution, but the way people communicate and interact is absolutely creating fundamental shifts in the way music sounds and develops.
So true.
…it's been more than three decades now since the world has seen something really new…
techno and hip-hop were the last real progression in genre…
all the rest ever since has been nothing but sub-diversification…
we got more subgenres than ever before…while the basic story remains the same and can still be cut down to handmade vs not handmade, for moving your body or not moving your body, straight downbeats or broken downbeats, full-on approach or minimalistic approach, more dirt or more shine, attitude over everything or no attitude at all…
true progression always came from something undergroundish…and whenever something undergroundish gained enough attention, it became the next mainstream thing…
but what's mainstream, middle of the road, these days…?
i couldn't nail the answer…too many options.
since we entered the age of information, all the "good old" rules in music started to vanish or blur into many single pieces…
so whatever shapes how music sounds tomorrow, it's an ever-ongoing evolutionary process that will never again gain enough momentum to rule it all…
the diversity of genres will grow until our whole species finds itself in a sudden change of modern civilisation as we know it…
and then…well, we start it all from scratch again…in the best case…
it's no set of next tools…it's all new mindsets…
for the worse: AirPods, TikTok, click counters, AI-trained composers
This is the next thing. I just re-read my old comment here, where I said "we aren't gonna have another sampler revolution", which I now disagree with, in terms of how significant the impact of AI will be. Within a few years, I'm sure we'll start hearing AI-generated music that's better than a lot of music created by humans. When it gets really good, we won't necessarily even know that it wasn't a human - pop acts might boom overnight, with an entire career's worth of material generated on a server somewhere, ready for a marketing dept to trickle out. Once the tech becomes more accessible, the quality of music will increase dramatically, but it'll still require humans to steer the direction. Artistic vision will still be an inherently human trait, at least until the singularity kicks off, and then who knows what'll happen.
Whether the music is "better" or "quality will increase dramatically" is totally subjective, but you're right that we won't be able to tell the difference between AI-generated vs. human-generated music.
This is probably already much more prevalent than we think. If you want to put on your tin foil hat, you might argue that Spotify has been using AI-generated "artists" for years, packing their "songs" near the top of their curated playlists to save on royalty payouts on the millions upon millions of streams they get. You could, if that tin foil hat fits nice and snug, argue that Spotify's data collection practices make them the perfect home base for AI-generated music and that they've been making design/UX decisions in their app to systematically separate our experience of music from humans, artists, or albums, so we might more easily adopt AI as our musical future.
But, you know, that all could be dystopian nonsense. For now, suffice to say, AI will likely be quite important to the future of music.
I heard an interesting anecdote about the Yamaha DX-7: a Yamaha maintenance person noticed that a large percentage of the DX-7s serviced still had all the factory presets intact (I guess they would have been overwritten if the users had made their own patches).
I'd say most music is written by "AI" humans these days: ghost writers, formulaic down to a T in order to sell.
I have a different concern about AI and humans' relationship with it. I think computers will be made to compose music, and the humans will be assigned to focus groups whose job it is to either press or not press the like button. Kind of the logical extreme of current pop music.
I should have clarified, I was really just referring to the technical aspects of music. Humans won't require as much technical skill to make very technically competent music. Mixing, mastering, and to some extent music theory will be handled quite easily by AI. This is already happening to an extent.
I expect pretty soon to be able to feed an algorithm a bunch of music and have it spit out a brand new WAV file that resembles the music it was fed. There's absolutely no reason someone couldn't sit down today and train a network to do that. I bet techno would be so fucking easy to do it with.
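For anyone curious what "fit a model to audio, then generate new audio" looks like at its absolute smallest, here's a toy Python sketch (the signal, model order, and scale are all illustrative, not any real product or dataset): it fits a linear autoregressive model to a waveform with least squares, then generates new samples one at a time from its own predictions. Real systems use deep networks over raw waveforms, but the generate-the-next-sample idea is the same.

```python
import numpy as np

def fit_ar(signal, order=16):
    # Each row of X is `order` consecutive past samples; y is the sample
    # that followed them. Least squares finds the best linear predictor.
    n = len(signal)
    X = np.stack([signal[i:n - order + i] for i in range(order)], axis=1)
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def generate(coeffs, seed, n_samples):
    # Autoregressive generation: predict the next sample from the last
    # `order` samples, append it, and repeat.
    order = len(coeffs)
    out = list(seed[-order:])
    for _ in range(n_samples):
        out.append(float(np.dot(coeffs, out[-order:])))
    return np.array(out[order:])

# Stand-in "training audio": a decaying 220 Hz sine at 8 kHz.
t = np.arange(8000) / 8000.0
train = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)

coeffs = fit_ar(train)
new_audio = generate(coeffs, train[:16], 1000)
```

A linear model like this can only imitate very simple signals; swapping the least-squares predictor for a neural network trained on real recordings is what turns this toy into the kind of system described above.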
I don't even think Spotify needs to waste the R&D on that; they make a killing already, and as soon as it's technically accessible, people will just start doing it for them.
AI composers is an interesting idea. I'm not really sure that the actual music being composed (by humans) is even the most crucial aspect of music at all these days. I mean, we largely all work in the western scale (most often in 4/4 time) in modern music, and there are only so many choices to be made there. Where it gets interesting, and where music differentiates itself, is in the non-composition decisions made in the recording process. Instrument selection, recording practices, effects, etc. are kind of the important elements in creating an original work of art - not the scale or chord progressions (which have all been done to death). So does it really matter if a computer determines that aspect of the music, if at the end of the day a human or humans are shaping those decisions? I also feel like AI would be at a real disadvantage in replicating the feel and imperfections that are integral to humans making music together in a room. It seems far-fetched to imagine an AI composer being able to mimic the thousands of micro-decisions and "mistakes" humans make while interacting with each other playing music together in a room. Who knows though.
Gonna be controversial here.
Not gear, but DALL-E is going to be the game changer. It will also most likely ruin everything (because publishers are gonna spit out pop songs every 15 to Spotify and TikTok, and that's going to be the standard for the gen after Gen Z).
AI is already heavily embedded in music. There are AI engines that major labels are using to determine hit songs. The days of humans going by their gut to push a song through the label promo machine are over.
AI will be built into our hardware and software more and more. It will be faster and easier to craft songs.
The problem is AI is not creative and can't find new sounds. It is made to find established sounds and create established sounds for you. So there will always be a place for the human producer; it will just yield less and less income over time. There will be very few full-time producers down the road. Soundtracking will suffer the most from this, as AI has decades of film to pull from and is perfect for composing scores for Hollywood films.
As for gear, I believe the more immediate future of gear is handheld devices. Portability will be demanded by younger producers who don't want to sit in studios all day and want to make influencer vids with their shiny toy. These are tough to make, and the only two I have owned that fit the bill are the OP-1 and the Dirtywave M8. The M8 feels like a device from the future when you use it. I think the demand for devices like this is huge and will only get bigger.
Just wrote about DALL-E 1 min ago, before you
Are you an AI as well, automatically coming up with three paragraphs of factual proof for my tiny reply?!
I know people have mentioned AI, but the part of AI I think is interesting, and that we're going to see more of, is deepfaking vocals. There is software that can take the characteristics of someone's voice and superimpose them onto your own. I can see singers altering their voices to sound like more famous singers.