"It should be simple to code"

Yes, but once you have a sysex message and you know the block is 128 bytes in total, the 119-byte body can be fully made of 8-bit information, can't it?
I think the 7-bit thing applies when you have to keep the first bit 0, so you can distinguish a control byte (is it called that?) from the info byte…

Edit: sorry, they are called the status byte and the data byte.

Edit 2: On second thought, that may always apply, so you can catch up with the info even if you start listening in the middle?

No, the first bit of every sysex data byte is 0, and in most (possibly all) schemes the bits are rearranged, so you can't just strip out the first bit and jam them all together. If you look in the back of a Sequential manual you'll see an example.
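For the curious, here's roughly what that packing looks like in Python. This is a minimal sketch of the "packed MS bit" scheme described in the Sequential manuals; the exact bit ordering varies by manufacturer, so treat the order here as an assumption and check the manual for your device. It also shows why a 119-byte body only carries about 104 bytes of real data:

```python
def unpack_ms_bit(packed: bytes) -> bytes:
    """Unpack 'packed MS bit' sysex data (DSI/Sequential style).

    Each group of 8 transmitted bytes starts with a byte holding the
    high bits of the 7 data bytes that follow (bit 0 -> first byte,
    bit 1 -> second byte, and so on -- ordering assumed here).
    """
    out = bytearray()
    for i in range(0, len(packed), 8):
        group = packed[i:i + 8]
        ms_bits = group[0]
        for j, low7 in enumerate(group[1:]):
            out.append(low7 | (((ms_bits >> j) & 1) << 7))
    return bytes(out)

# 119 packed bytes carry 119 * 7 = 833 bits, i.e. 104 full 8-bit bytes.
```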

Just want to say that I really love where this thread has gone. More reverse-engineering Elektron stuff! Overbridge next, so we can make multi-output machines for the Overbridge devices!

Ahahahahaha, I really would not count on that :smiley:

One person has already done it, actually, but hasn't been able to put it together into a product. (Very relatable, as a person also trying to make music electronics for sale.)

Wow, this thread moved quickly! I’ve seen the doomsday predictions that AI is going to kill all the programming jobs. I’ve given it a good deal of thought, and come to the conclusion that it likely won’t. Some of the low level jobs at the bottom rung of the ladder, maybe, but almost certainly not the jobs higher up. Here’s why…

The big misconception is that writing code takes a lot of time. Not true! Good code (like a good song) typically happens quickly. What is time consuming is testing. That's the tricky part: deciding whether the thing you wrote does what you think it does/intended it to do, and figuring out how it reacts to edge cases you didn't think of (aka the real gotcha). That part is really hard. It's hard because the definition of success and failure is a multi-faceted thing that would be difficult to describe to a computer in language that is actionable to an AI. Then, assuming you could tell the AI what success and failure even look like, you may have to get it to interact with a number of analysis tools in complex ways to interpret the data. After all that, the AI needs to explain its analysis to whatever meatbag is actually giving this work the green light.
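To make the edge-case point concrete, a toy illustration (hypothetical, not from any real project): the happy-path test passes, and the failures live exactly where nobody thought to look.

```python
def mean(xs):
    """Average a list of numbers."""
    return sum(xs) / len(xs)

assert mean([1, 2, 3]) == 2       # the obvious test: passes
# mean([])                        # edge case: ZeroDivisionError
# mean([1e308, 1e308])            # edge case: the sum overflows to inf
```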

After that, you're done and you can trust it, right? No, that's where the really deep rabbit hole begins. Because then you have to ask yourself: did the AI analyze the data in the way you wanted, did its error metrics respect what you requested, did it test the edge cases, did it try to break the code, did it rig the implementation just to pass the given tests, and so on. Setting all of that up and then going back and double-checking it would be effort commensurate with just having a trained human write the thing.

This neglects the really scary stuff like AI hallucinations, which can have all manner of fun implications, and the computer security aspect (which is such a hilarious nonstarter as to render all previous discussion irrelevant). And assuming all of this worked (a really, really big IF), all you'd have is a black-box tool that no one on your team actually understands and that they can't modify or fix. How valuable is that to management? Oh yeah, and did the AI write coherent documentation? If not, you're shit out of luck.

I would trust an AI to do highly algorithmic, simple, linear tasks, similar to what I’d expect of a kid fresh out of undergrad. Which is to say not much and not without lots of oversight.

It's a huge mistake to judge and make predictions about the impact of AI tech based on the current performance of LLMs. The scientists and engineers at the forefront building this stuff are quite aware of the shortcomings of the current, rather brute-force approach and will continually adapt it, and of course once we get to the point of AI successfully optimising AI, we're into serious exponential territory. Plus, consider that military spending is involved for autonomous systems, so it's a true arms race. So many people are going to get completely blindsided by how it will change everything.

I would argue we haven't yet reached a healthy adaptation, at a societal level, to the internet.

EDIT: Time to spin off another thread maybe?

For me it’s “I don’t understand why Elektron doesn’t just do this - they would make millions/sell like hotcakes!”

I use Max for Live a lot. The other day I made a fairly complex device that involved a lot of Live API stuff; I thought it was going to be hard to build, but it wasn't, and 30 minutes later I had it down. For the same device I needed a function that generates a non-repeating random integer below a maximum value that could change at any time. Easy, right? Turns out it took me hours to get it right; I got all kinds of errors and stack overflows. With Max it happens quite a lot, at least for me, that seemingly simple things turn out to be quite hard to make.
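For what it's worth, here's one way that logic can look in plain code. This is a Python sketch of the algorithm, not Max code (in Max you'd build the same thing in [js] or with objects); the key design choice is retrying in a loop rather than recursively, since a recursive retry is a classic way to blow the stack:

```python
import random

class NonRepeatingPick:
    """Pick a random integer in [0, max_value) that never equals the
    previous pick, even when max_value changes between calls."""

    def __init__(self):
        self.last = None

    def pick(self, max_value: int) -> int:
        if max_value < 1:
            raise ValueError("max_value must be at least 1")
        if max_value == 1:
            return 0  # only one possible value, so a repeat is unavoidable
        n = random.randrange(max_value)
        while n == self.last:          # iterative retry, no recursion
            n = random.randrange(max_value)
        self.last = n
        return n

picker = NonRepeatingPick()
print([picker.pick(4) for _ in range(8)])  # no two adjacent values equal
```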

We already have AI on the network that analyzes traffic and sets a baseline for what normal traffic behaviour looks like; if the traffic deviates from this baseline you will be alerted, for example when a virus tries to access a certain DNS address.
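For anyone wondering what "baseline plus deviation" means concretely, here's a toy sketch over a single scalar metric; real systems model many features at once, and the threshold of 3 standard deviations here is an arbitrary choice:

```python
import collections
import statistics

def anomalies(samples, window=100, k=3.0):
    """Yield samples more than k standard deviations from a rolling baseline."""
    history = collections.deque(maxlen=window)
    for x in samples:
        if len(history) >= 10:  # wait for enough history to form a baseline
            mu = statistics.mean(history)
            sigma = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
            if abs(x - mu) > k * sigma:
                yield x
        history.append(x)

# Steady traffic with one spike: only the spike is flagged.
print(list(anomalies([10, 11, 9, 10, 12, 10, 11, 9, 10, 11, 500, 10])))
```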

AI will for sure generate these dynamic monitors as a feedback system; it will be artificial life, not needing us.

Just hit exactly this problem at work.

Product Manager: Why is this hard? It's just one thing to change.
Developer: No, it might be a single concept in your head, but in the code it's more than one thing to change.
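A made-up illustration of that gap (hypothetical file, names, and numbers):

```python
# synth.py -- "just change the voice count from 8 to 16", says the PM

voice_pool = [None] * 8                 # 1: the allocation assumes 8

PANEL_LEDS = 8                          # 2: so does the front-panel layout

def load_patch(patch: dict) -> None:
    if len(patch["voices"]) != 8:       # 3: ...and the saved-patch format
        raise ValueError("corrupt patch")
    # One concept, at least three places to change -- plus every old
    # patch file already sitting on users' disks.
```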

It happened already with chess:
"They said the computer would eventually reach a level where it could beat the world champion, and we are still waiting for that to happen. No machine has the cognitive abilities required to master that game."

This would be you if we were in the eighties.

Edit: you a-r-e a machine

“It should be easy”:

It should be easy!

I think what is important to discuss with all the LLM stuff is what timescale we are talking about. Technology seems to improve in fits and starts, and at the introduction of a technology it is easy to see its potential expansion as limitless. We made cars, and so the natural logical conclusion was flying cars. We made tiny transistors and thought Moore's law would go on forever, but we are reaching the limits of that. There is no telling exactly how long the road ahead of this technology is, or how long it will take to realize it. It could be like transistors, where we got 70-ish years of incredible development that revolutionized how we live. Or it could be like fusion energy: the revolution always just over the horizon.

This actually was very, very interesting to read… As I understood it, they sniffed the USB traffic, so several levels easier on the reverse-engineering side than going after the firmware (which I don't think is really achievable), but nonetheless great results.

Honestly, I never understood the usefulness of Overbridge; I just know the name, and I don't understand why this Overbridgy thingy exists instead of a compliant implementation like the multichannel audio interfaces on the market. Is there any difference? What advantages are there? To me, doing something non-standard always feels a bit wrong, for compatibility reasons.

On that thread there are also a couple of hints about a utility device that to me would 'revolutionise' the dawless music-making world. So apparently there is a need for it, and my ST dev boards have been ordered to make a prototype of this "ultra secret" device. :smiley:

Overbridge gives you complete DAW control over the device, plus multitrack outs. It basically turns your hardware into a VST you can control from the DAW. I don't use it since I don't do DAW stuff, but I would really love a breakout box to get separate tracks out of the Digi-shaped boxes.

Yes, but isn't that analogous to what a multichannel USB audio interface does (exposing separate channels) as a fully compliant USB device, plus a bidirectional standard MIDI implementation to set/get all the machine parameters?