There is a new $200 AI interface device that lets you teach it how to do stuff for you, the way you want it done, on a computer or in an app. It is coming very soon.
…don’t get lost in the hype…this large action model handheld is just another device to prompt the rest of ur other devices…with a pretty uncharming voice, subjectively…it still sounds pretty much like a siri with charm mode on steroids…
there’s a big rat race going on for next-level personal assistant concepts based on large language models, and we only see the beginning of this…
200 bux is no big deal for tech nerds to give it a try…but let’s face it…just what the next gen of phones at the end of this year will have to offer will make this shiny new tech toy feel old already, even with its “timeless” retro design finish…and no matter how hard this startup works on further updates…it remains a remote control for things that got out of control a long time ago…
walkie-talkie-style press-the-button-and-talk is already old school…
next level is a little personalized upfront training and giving “it” a name of ur liking…
and like “hey siri” it will be at ur service whenever u call it by the name u gave it…
answering/narrating/discussing with a truly fluent human voice u chose from a wider list of character and gender options…
If you want to make music with a handheld device designed by Teenage Engineering, the Playdate already exists. I haven’t seen any indication that the R1 is going to have music sequencing apps, but maybe I’m wrong. This device looks pretty limited, and I don’t expect it to be popular enough to draw too many developers to it, despite apparently selling out the first batch of preorders.
Large Action Model (LAM). Teaching it to use the software you already use, in the way you want it to be used. Being able to iterate and refine, under your careful supervision, without you needing to attend to every mouse click and the minutiae. Asking it to make suggestions, and then making your own decisions among the options presented.
I see the demand for the back-end applications, with the interface that allows this model to control them. Say you like Ableton Live: you have a licensed version that this operating system can control. Just as an experiment, you could take something you’ve done in Live and have it redo and refine the work in Bitwig.
If you’re not completely satisfied with the work done, you can take it, let’s say in Live, modify it to get exactly what you want, and teach the system that refinement.
Even hardware synths could sit somewhere as long as there was a way for this model to control them.
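The idea described above (teach it a workflow once, supervise, refine one step, replay the rest) boils down to record-and-replay of actions. A minimal toy sketch in plain Python, with all names and the fake "app" invented for illustration; a real LAM would drive actual application UIs:

```python
# Hypothetical sketch of the record/refine/replay idea behind a "large
# action model". The "app" here is just a dict of parameter values
# standing in for a DAW's controls; all names are invented.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Action:
    target: str          # e.g. a control in the DAW
    operation: str       # e.g. "set", "click"
    value: object = None


@dataclass
class Routine:
    """A named, editable sequence of recorded actions."""
    name: str
    actions: list[Action] = field(default_factory=list)

    def record(self, action: Action) -> None:
        self.actions.append(action)

    def refine(self, index: int, value: object) -> None:
        # "Teach the system that refinement": patch one step in place,
        # instead of re-demonstrating the whole workflow.
        self.actions[index].value = value

    def replay(self, perform: Callable[[Action], None]) -> None:
        for action in self.actions:
            perform(action)


# Demo: record two steps, supervise and adjust one, replay the lot.
state = {}
routine = Routine("warm-pad-patch")
routine.record(Action("filter.cutoff", "set", 0.4))
routine.record(Action("reverb.mix", "set", 0.25))
routine.refine(0, 0.55)
routine.replay(lambda a: state.update({a.target: a.value}))
print(state)  # {'filter.cutoff': 0.55, 'reverb.mix': 0.25}
```

The same `replay` could, in principle, be pointed at a different back end (the Live-to-Bitwig experiment above) by swapping the `perform` callback.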
If it had CV and/or MIDI out, that would be cool! But it seems most of the tasks are performed “in the cloud”, with Rabbit connected to your service accounts.
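For what “MIDI out” would actually mean at the wire level: a note-on message is just three bytes (status, note number, velocity). A self-contained sketch in pure Python, no hardware or library assumed:

```python
# MIDI channel-voice messages are three bytes: a status byte (message
# type in the high nibble, channel 0-15 in the low nibble), then two
# 7-bit data bytes. Note-on is 0x9n, note-off is 0x8n.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a MIDI note-on message. channel 0-15, note/velocity 0-127."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])


def note_off(channel: int, note: int) -> bytes:
    """Note-off is status 0x80; velocity 0 is conventional."""
    assert 0 <= channel <= 15 and 0 <= note <= 127
    return bytes([0x80 | channel, note, 0])


msg = note_on(0, 60, 100)   # middle C on channel 1, velocity 100
print(msg.hex())            # 903c64
```

Anything that can push those bytes out a DIN or USB port could, in theory, play the hardware synths mentioned above.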
When the R1 can perform a PerFourmer or A4 with confidence, I’m all in😁
I see it as a device to copy/paste information - and that alone could mean more free time to actually make music - but then you’re a free Tamagotchi trainer. That large action model opens so many doors - and may close so many doors again.
…alphabet calls it bard…microsoft calls it copilot…
don’t follow the rabbit…it’s all same same, not different at all…all trains roll on the same tracks to large language models…next stop at large action models…real data training was yesterday…synthetic data training is now…
and what cupertino will call their next-level siri next autumn, no one knows…for now they just get started with AR shaking hands with VR, via gestures, eye tracking and voice control all at once next month…and they’ll wait an extra while before they join that whole ai cake with their very own take on this…
so call it what u want…we all will tell the software tools of our choice what we want them to do for us…not tomorrow…around tea time, today, already…brave new world, here we come…
and no one knows how deep this rabbit hole will go…but i’m pretty sure it won’t be orange…
That dude really has a weird energy about him, a weird smugness of some sort…
But valid points raised… I don’t care too much about the venture capital aspect of this; they make a lot of money, they lose a bunch of money. They follow the hype. That was crypto/NFTs, and now it has moved on to AI, so Rabbit could potentially have seen the downfall of crypto, talked to the investors, and changed direction into emerging AI. But yeah, as a consumer it’s alarming behaviour.
I hate subscription models, but for crap like this I think it’s the only way they make sense: give the gadget away free and charge a monthly fee, so that when the crappy product and/or service eventually stops working, the consumer stops paying.
Feel sorry for anyone who shelled out £200 for this crappy Tamagotchi.
Oh man, that is a huge difference, and so brittle. Devs mostly only use things like Playwright for critical-path testing ON THEIR OWN SYSTEMS, because IT’S BRITTLE AND BOUND TO FAIL when used on the whole system. Imagine relying on someone else’s system… oh my god…
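To make the brittleness concrete: selector-driven automation hard-codes exact element identifiers, so a cosmetic change in someone else’s UI breaks the script. A toy stand-in for a Playwright-style script, where the “app” is just a dict of element ids (all ids invented):

```python
# Why selector-driven UI automation is brittle: the "script" below
# encodes an exact element id. When the app's developers rename that id
# in a routine redesign, the automation fails with no code change on
# the automating side. The dicts stand in for two versions of a web UI.

app_v1 = {"login-button": "Log in", "search-box": "Search"}
app_v2 = {"signin-button": "Log in", "search-box": "Search"}  # id renamed


def click(app: dict, element_id: str) -> str:
    """Simulate clicking an element by id; raise if the selector is stale."""
    if element_id not in app:
        raise LookupError(f"selector not found: #{element_id}")
    return f"clicked {app[element_id]}"


print(click(app_v1, "login-button"))   # works on the UI it was taught on
try:
    click(app_v2, "login-button")      # same script, updated UI: breaks
except LookupError as e:
    print("automation broke:", e)
```

On your own system you control when ids change and can fix the test in the same commit; against a third-party service you find out when the rabbit stops answering.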