Rhythm Lab Breakbeats link

Does anyone have a link to a single compressed file of the Rhythm Lab Breakbeats from here?

https://rhythm-lab.com/breakbeats

Downloading each file individually is killing me :grimacing:

…I’d say, calm your collector’s/must-have-it-all horses…
that’s pretty counterproductive…
curate your loops… each time you start something new, you pick/choose one or two carefully curated ones, dedicated to the moment/state of mind you’re in, to get it started, instead of standing in front of an endless array of options every time…

I would just use something like the DownThemAll! extension.

Exactly. Treat mastering each break like a video game achievement. You can’t go to the next stage in Super Mario World until you’ve finished the current one.

Some of these breaks take a long time to learn. Once you learn everything about one, you can make any drum kit shine.

I have the entire Paul Nice libraries and I strategically use only a few breaks between two volumes. That’s how powerful being selective can be.

This is great! Thank you!

Sounds like OP’s needs have been met using the browser extension linked above, but for those who like “code golf” and want to avoid installing extensions, a command-line approach is:

wget https://rhythm-lab.com/breakbeats -O - | \
egrep -o 'https://.+/sstorage/[^"]+\.wav' | \
head -n 5 | \
wget -i -

This will download the first five files. Get rid of the line with head -n 5 to download 'em all – though be careful as there’s more than 900 of 'em.
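
If you want to verify that count for yourself before committing, the same fetch-and-grep can feed wc -l instead of a second wget, so nothing actually gets downloaded – a quick sketch using the exact pattern from above:

wget https://rhythm-lab.com/breakbeats -O - | \
egrep -o 'https://.+/sstorage/[^"]+\.wav' | \
wc -l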

You’re right. I’ve been doing this since 1993 and I still can’t help klepto-sample-mania :grimacing:

Does this work in the macOS Terminal?

macOS doesn’t come with wget by default, so if you don’t have wget on your system (say, installed via Homebrew), we can accomplish the same thing in a more verbose manner using curl, which does ship with macOS. This solution is more complicated; furthermore, curl is picky about what it considers a valid URL, so we need a little Python magic to percent-encode the URL strings.

Unlike the wget pipeline, this is probably best done in multiple steps:

  1. Download the URLs and encode them so curl doesn’t error out:
curl -s https://rhythm-lab.com/breakbeats | \
egrep -o 'https://.+/sstorage/[^"]+\.wav' | \
head -n 5 | \
python3 -c "import sys, urllib.parse as ul; [sys.stdout.write(ul.quote(l, safe=':/\n')) for l in sys.stdin]" \
> encoded_breakbeats.txt 
  2. Download all files via xargs and curl:
xargs -n 1 curl -O < encoded_breakbeats.txt

As before, remove the head -n 5 line to get every wav file.

Note that the names of the files downloaded via this approach are “URL encoded” (e.g. spaces are replaced by “%20”) – this is kinda ugly. A little more script fu could be used to fix the file names post hoc, though I leave that as an exercise for the reader :wink: The wget solution is definitely simpler.
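
That said, if you want a starting point for the cleanup, here’s a rough sketch (not from the original recipe) that decodes each name with Python’s urllib.parse.unquote – it assumes the encoded .wav files are sitting in the current directory and that none of the names decode to contain a slash:

for f in *.wav; do
  # decode %20-style escapes back into the original characters
  decoded=$(python3 -c "import sys, urllib.parse as ul; print(ul.unquote(sys.argv[1]))" "$f")
  # only rename when decoding actually changed something
  [ "$f" != "$decoded" ] && mv -- "$f" "$decoded"
done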
