When I say “understands,” it’s a metaphor: LLMs don’t literally understand concepts the way humans do. They statistically map input text to plausible output text, effectively simulating understanding. So while they have no genuine comprehension or consciousness, their statistical capabilities produce outputs aligned so closely with human intent that, practically speaking, they feel like databases that “understand.”
Also, I’m working on a fine-tuned version; I’m preparing preset data for the fine-tune here.
Cool project
I’ve been working on tools for the Elektron Machinedrum via Claude. So far I’ve tried to emulate parts of the MegaCommand, and built a patch CC mutator/randomizer.
I also had Claude create an MD user wave emulation.
I cannot seem to get MIDI working on my USB soundcard, even after setting all the MIDI permissions in Chrome/Edge.
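One way to narrow down whether the browser is even seeing the interface is to query the Web MIDI API directly from the console and list the ports it exposes. A minimal sketch, assuming Chrome/Edge with MIDI permission granted; `listMidiPorts` is a hypothetical helper, not part of any existing tool:

```javascript
// Hypothetical helper: collect the names of all MIDI ports from a
// MIDIAccess-like object (inputs/outputs are Map-like collections).
function listMidiPorts(midiAccess) {
  const names = [];
  for (const input of midiAccess.inputs.values()) {
    names.push(`in: ${input.name}`);
  }
  for (const output of midiAccess.outputs.values()) {
    names.push(`out: ${output.name}`);
  }
  return names;
}

// In the browser console (sysex: true triggers the full permission prompt):
// navigator.requestMIDIAccess({ sysex: true })
//   .then(access => console.log(listMidiPorts(access)))
//   .catch(err => console.error("MIDI access denied:", err));
```

If the promise rejects or the list comes back empty while the soundcard works in other apps, the problem is likely at the browser/OS permission layer rather than in the tool itself.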
Please report this to Elektron support (after making sure you’re running the very latest OS).
To me, what would be most interesting is being able to drop a sample into a UI and have the corresponding DNII patch sent to the machine.
Would be more or less like adding samples to the DNII ^^
My goal for this would be to explore new corners of the machine and get new starting points for sound design.