Which Gear or Software Do You Think Will Most Shape How Music Sounds in the Near Future?

Close enough :slight_smile:

agreed - so probably the answer is “junked old laptops running Windows 7”?

:rofl: Oh the horror.

maybe we’re a little ways off from this, but when I was a kid I always thought about a future in which we’d have technology that could output the musical thoughts you have in your head…

check out this guy who uses his prosthetic limb to output CV from his thoughts, to his eurorack :open_mouth:

https://www.youtube.com/watch?v=qSKBtEBRWi4

haha my bad. FIXED!

The internet.

It used to be that there would be a couple of really culturally dominant genres at any given time. In the last couple of decades, we’ve seen genres rise and fall in record time. A certain ‘sound’ goes from underground, to mainstream, to played out, in maybe a couple of years, rather than a decade. Plus we now have this crazy granularity where entire scenes pop up and develop followings in a decentralised way, existing almost totally online (e.g. synthwave, lo-fi beats etc). The dominance of a few monolithic genres has been replaced by a sea of particulate genres, all floating around and occasionally interfering with one another.

I know this isn’t really what you were asking, but I think we’ve reached a point of diminishing returns with music technology - we aren’t gonna have another sampler revolution, but the way people communicate and interact is absolutely creating fundamental shifts in the way music sounds and develops.

So true.

…it’s been more than three decades now since the world has seen something really new…
techno and hip hop were the last real progression in genre…

all the rest ever since has been nothing but subdivision…
we’ve got more subgenres than ever before…while the basic story remains the same and can still be cut down to handmade vs not handmade, for moving your body or not moving your body, straight beats or broken beats, full-on approach or minimalistic approach, more dirt or more shine, attitude over everything or no attitude at all…

true progression always came from something undergroundish…and whenever something undergroundish gathered enough attention, it became the next mainstream thing…
but what’s mainstream, middle of the road, these days…?
I couldn’t nail the answer…too many options.

since we entered the age of information, all the “good old” rules in music started to vanish or blur into many single pieces…

so whatever shapes how music sounds tomorrow, it’s an ever-ongoing evolutionary process that will never again gather enough attention to rule it all…

the diversity of genres will grow until our whole species finds itself in a sudden change of modern civilisation as we know it…
and then…well, we start it all from scratch again…in the best case…

it’s not about the next set of tools…it’s all about new mindsets…

for the worse: AirPods, TikTok, click counters, AI-trained composers

This is the next thing. I just re-read my old comment here, where I said “we aren’t gonna have another sampler revolution”, which I now disagree with, given how significant the impact of AI will be. Within a few years, I’m sure we’ll start hearing AI-generated music that’s better than a lot of music created by humans. When it gets really good, we won’t necessarily even know it wasn’t made by a human - pop acts might boom overnight, with an entire career’s worth of material generated on a server somewhere, ready for a marketing department to trickle out. Once the tech becomes more accessible, the quality of music will increase dramatically, but it’ll still require humans to steer the direction. Artistic vision will still be an inherently human trait, at least until the singularity kicks off, and then who knows what’ll happen.

Whether the music is “better” or “quality will increase dramatically” is totally subjective, but you’re right that we won’t be able to tell the difference between AI-generated vs. human-generated music.

This is probably already much more prevalent than we think. If you want to put on your tin foil hat, you might argue that Spotify has been using AI-generated “artists” for years, packing their “songs” near the top of their curated playlists to save on royalty payouts on the millions upon millions of streams they get. You could, if that tin foil hat fits nice and snug, argue that Spotify’s data collection practices make them the perfect home base for AI-generated music and that they’ve been making design/UX decisions in their app to systematically separate our experience with music from humans, artists, or albums, so we might more easily adopt AI as our musical future.

But, you know, that all could be dystopian nonsense. For now, suffice to say, AI will likely be quite important to the future of music.

I heard an interesting anecdote about the Yamaha DX-7: a Yamaha maintenance person noticed that a large percentage of the DX-7s they serviced still had all the factory presets intact (I guess those would have been overwritten if the users had made their own patches).

I’d say most music is written by quasi-AI humans these days - ghost writers, formulaic to a T, in order to sell.

I have a different concern about AI and humans’ relationship with it. I think computers will be made to compose music, and humans will be assigned to focus groups whose job is to either press or not press the like button. Kind of the logical extreme of current pop music.

I should have clarified, I was really just referring to the technical aspects of music. Humans won’t require as much technical skill to make very technically competent music. Mixing, mastering, and to some extent music theory will be handled quite easily by AI. This is already happening to an extent.

I expect pretty soon to be able to feed an algorithm a bunch of music and have it spit out a brand new WAV file that resembles the music it was fed. There’s absolutely no reason someone couldn’t sit down today and train a network to do that. I bet techno would be so fucking easy to do it with.
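To make that idea a bit more concrete, here’s a toy sketch (mine, not from anyone in the thread) of the “feed it a folder of WAVs, get a new WAV back” loop, assuming PyTorch and torchaudio; the `clips/` folder, chunk length, and tiny VAE are all made up for illustration, and a model this small trained this briefly won’t produce anything listenable.

```python
# Toy sketch only: fit a tiny VAE on raw waveform chunks, then decode random
# latents into a "new" WAV. Assumes PyTorch + torchaudio and some training
# WAVs in ./clips/ (hypothetical folder name).
import glob
import torch
import torch.nn as nn
import torchaudio

CHUNK = 16384   # samples per training example (~0.37 s at 44.1 kHz)
LATENT = 64     # size of the latent code we sample from later

def load_chunks(folder="clips"):
    """Slice every WAV in the folder into fixed-length mono chunks."""
    chunks = []
    for path in glob.glob(f"{folder}/*.wav"):
        wav, _sr = torchaudio.load(path)   # (channels, samples)
        wav = wav.mean(dim=0)              # mix down to mono
        for i in range(0, wav.numel() - CHUNK, CHUNK):
            chunks.append(wav[i:i + CHUNK])
    return torch.stack(chunks)             # (num_chunks, CHUNK)

class WaveVAE(nn.Module):
    """Minimal VAE over flat waveform chunks."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(CHUNK, 512), nn.ReLU())
        self.mu = nn.Linear(512, LATENT)
        self.logvar = nn.Linear(512, LATENT)
        self.dec = nn.Sequential(
            nn.Linear(LATENT, 512), nn.ReLU(),
            nn.Linear(512, CHUNK), nn.Tanh(),   # keep samples in [-1, 1]
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def train(epochs=50):
    data = load_chunks()                    # full-batch training, for brevity
    model = WaveVAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, mu, logvar = model(data)
        rec_loss = nn.functional.mse_loss(recon, data)
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        (rec_loss + 1e-3 * kld).backward()
        opt.step()
        opt.zero_grad()
    return model

if __name__ == "__main__":
    model = train()
    with torch.no_grad():
        z = torch.randn(8, LATENT)              # sample "new" latent codes
        audio = model.dec(z).reshape(1, -1)     # stitch chunks into one stream
    torchaudio.save("generated.wav", audio, 44100)
```

Real generative-audio systems work on compressed or symbolic representations with vastly more data and compute, but the overall shape of the loop - load audio, fit a generative model, sample it, write a WAV - is the same.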

I don’t even think Spotify needs to waste the R&D on that; they make a killing already, and as soon as it’s technically accessible, people will just start doing it for them.

AI composers is an interesting idea. I’m not really sure that the actual music being composed (by humans) is even the most crucial aspect of music these days. I mean, in modern music we largely all work in western scales (most often in 4/4 time), and there are only so many choices to be made there. Where it gets interesting, and where music differentiates itself, is in the non-composition decisions made in the recording process. Instrument selection, recording practices, effects, etc. are the important elements in creating an original work of art - not the scale or chord progressions (which have all been done to death). So does it really matter if a computer determines that aspect of the music, if at the end of the day a human or humans are shaping those other decisions?

I also feel like AI would be at a real disadvantage in terms of replicating the feel and imperfections that are integral to humans making music together in a room. It seems far-fetched to imagine an AI composer mimicking the thousands of micro-decisions and “mistakes” humans make while interacting with each other in the same space. Who knows, though.

Gonna be controversial here.

Not gear, but DALL-E is going to be the game changer. It’ll also most likely ruin everything, because publishers are gonna spit out pop songs to Spotify and TikTok every 15 minutes, and that’s going to be the standard for the generation after Gen Z.

AI is already heavily embedded in music. There are AI engines that major labels are using to determine which songs will be hits. The days of humans going by their gut to push a song through the label promo machine are over.

AI will be built into our hardware and software more and more. It will be faster and easier to craft songs.

The problem is that AI is not creative and can’t find new sounds; it’s made to recognise established sounds and recreate them for you. So there will always be a place for the human producer - it’ll just yield less and less income over time. There will be very few full-time producers down the road. Soundtracking will suffer the most, as AI has decades of film scores to pull from and is perfect for composing scores for Hollywood films.

As for gear, I believe the more immediate future is handheld devices. Portability will be demanded by younger producers who don’t want to sit in studios all day and want to make influencer vids with their shiny toys. These are tough to make, and the only two I’ve owned that fit the bill are the OP-1 and the Dirtywave M8. The M8 feels like a device from the future when you use it. I think the demand for devices like this is huge and will only get bigger.

Just wrote about DALL-E a minute before you :scream:

Are you an AI as well, automatically coming up with three paragraphs of factual backing for my tiny reply?!

I know people have mentioned AI, but the part of AI that I think is interesting, and that we’re going to see more of, is deepfaking vocals. There is software that can take the characteristics of someone’s voice and superimpose them onto your own. I can see singers altering their voices to sound like more famous singers.
