Aaaaaand… it’s funded!
Seen in the comments:
Polyphony is limited by the number of fingers the surface can sense. For now it’s six, but we’re trying to push for more.
Yeah that’s to be expected. If you put your whole hand on the surface you wouldn’t expect this sort of capacitive system to make intelligible sense of that. But it will make sense of reasonable finger motion and playing. So I am not too worried on this score.
You need to experience and learn how these sorts of things work, and how to set up your controls to get sound and control that interests and pleases you. Practice, practice, practice.
I backed it on an impulse. Good price, I really like Aodyo, and the Phi was cool. Also looking forward to the Omega more than any other announced gear this year. Seems fun and I’m curious as to how it’ll pair with my Iridium core.
Doing aces on my NGNY this year so far, and I have until April to decide whether to pull out or not. Great bang for the buck, it seems; we’ll see how it stacks up against the Push 3 MPE.
On sales, the 3-octave version started clearly stronger, but the 2-octave seems to be catching up.
I’m skeptical, too. The Addictive Drums demo has no discernible dynamics—it sounds like all one velocity. At this price, it’s worth a try. If I can’t reliably control at least three levels of dynamics with drum patches, then it gets liquidated.
It’s a bit hard to convey, as many have said here, but here’s another very quick demo that could help. In practice, velocity is derived from a mix of “impact force” and contact-area size.
So it’s similar to the MicroFreak keys in terms of how to get velocity control.
there is no “impact force” sensing on the MicroFreak …
True, but the rate at which contact area is added as you touch determines the velocity.
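If it helps, here’s a toy sketch of what I mean (entirely made up on my end, not Aodyo’s firmware): the velocity comes from how fast the contact patch grows in the first few frames after touch-down.

```python
# Hypothetical sketch: derive note-on velocity from contact-area growth rate.
# Assumes the sensor reports a per-touch contact area (in sensor cells) at a
# fixed frame rate. All names and constants are invented for illustration.

FRAME_RATE_HZ = 500   # assumed capacitive scan rate
ATTACK_FRAMES = 5     # window after touch-down used for velocity
MAX_GROWTH = 12.0     # cells/frame that maps to velocity 127 (a tuning knob)

def note_on_velocity(area_samples: list[float]) -> int:
    """Map average area growth over the attack window to MIDI velocity 1-127.

    A hard, fast tap spreads the fingertip quickly (large area delta per
    frame); a slow, gentle touch grows the contact patch gradually.
    """
    window = area_samples[:ATTACK_FRAMES]
    if len(window) < 2:
        return 1
    # Average growth in contact area per frame during the attack.
    growth = (window[-1] - window[0]) / (len(window) - 1)
    normalized = max(0.0, min(1.0, growth / MAX_GROWTH))
    return max(1, round(normalized * 127))

# Example: a fast tap vs. a slow press (areas in sensor cells per frame).
print(note_on_velocity([2, 14, 27, 38, 44]))  # fast tap -> high velocity
print(note_on_velocity([2, 4, 6, 8, 10]))     # gentle touch -> low velocity
```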
This sounds like a nice dynamic range to me (disclosure: backer and percussionist in my non-electronic music endeavors).
I got that right, didn’t I? It’s a little blurry, but it does read ARP, doesn’t it?
Perhaps I missed it, but I didn’t see an arpeggiator feature mentioned anywhere.
Here’s what I’m thinking could be possible, besides a regular arp. You could arpeggiate between bent notes. You could apply per-note modifications like ratcheting, note length, velocity, or a MIDI CC value to only that note in the arpeggiated sequence.
I know that the Korg Keystage (thread) already has something similar to that second sort of arpeggiation I just described. And the Native Instruments S-Series Mk3 controllers are in the process of adding something like that to their standalone operation too.
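To make that second idea concrete, here’s a rough sketch of the kind of per-step data such an arpeggiator might carry. Pure speculation on my part; none of these names come from Aodyo, Korg, or NI.

```python
# Speculative sketch of per-step arpeggiator modifiers (ratchet, note length,
# velocity, per-step MIDI CC), as described above. Not anyone's actual design.
from dataclasses import dataclass, field

@dataclass
class ArpStep:
    ratchet: int = 1             # retrigger the note this many times per step
    gate: float = 0.5            # note length as a fraction of the step
    velocity_scale: float = 1.0  # per-step velocity modifier
    cc_overrides: dict = field(default_factory=dict)  # e.g. {74: 96}, this step only

def render_step(note: int, base_velocity: int, step: ArpStep, step_ms: float):
    """Expand one arp step into the MIDI events it implies."""
    events = []
    sub_ms = step_ms / step.ratchet
    vel = max(1, min(127, round(base_velocity * step.velocity_scale)))
    for i in range(step.ratchet):
        t = i * sub_ms
        for cc, value in step.cc_overrides.items():
            events.append((t, "cc", cc, value))
        events.append((t, "note_on", note, vel))
        events.append((t + sub_ms * step.gate, "note_off", note, 0))
    return events

# Example: a ratcheted step that also nudges CC 74 for that step only.
for ev in render_step(note=60, base_velocity=100,
                      step=ArpStep(ratchet=3, gate=0.4, cc_overrides={74: 96}),
                      step_ms=250.0):
    print(ev)
```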
The thing is, the “impact force” part of the equation is, as you say, only going to factor into initial velocity calculations. For aftertouch, there will be no impact force, just finger smoosh. So there’s the potential for much greater velocity resolution (great for finger-drumming) than general Z-force resolution (what you’d use for, say, CS-80-style pads), yeah?
Not a bad thing. Used as a ribbon, you’d have a keyboard to lend its aftertouch to the equation. On the Loom standalone, I think this is what the action zone is meant to provide (at least in a channel-aftertouch sense)?
And, I mean, all “flat” MPE controllers are trying to map 3-D space into two dimensions, so all will have different sets of tradeoffs. It’s just good to know the configuration of the tradeoffs the Loom is making.
Reading the responses from Aodyo in the Kickstarter comment section (a good place for info), they say:
Q: What about an arpeggiator function (always useful)?
Aodyo: We planned to discuss this topic later, but there is a clue on the front panel.
Good questions, jemmons.
The code that ties together the electronics and the mechanical parts of this sort of system gets complicated. A lot is still possible, though, especially now that higher-capability processors can work quickly through volumes of real-time data. Another aspect is that I expect there will be several sensitivity parameters users can dial in to set things up for their own choice of optimal feel.
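As a made-up illustration of what I mean by user-adjustable sensitivity parameters, a response stage might look something like this (all parameter names invented, not Aodyo’s):

```python
# Illustrative only: the kind of user-tunable response curve a player might
# dial in. Parameter names are assumptions, not Aodyo's.

def shape_response(raw: float, sensitivity: float = 1.0,
                   curve: float = 1.0, floor: float = 0.02) -> float:
    """Map a raw 0..1 sensor reading to a 0..1 control value.

    sensitivity: overall gain (heavy-handed players turn it down)
    curve: >1 = softer low end, <1 = more responsive to light touches
    floor: dead zone to reject sensor noise at rest
    """
    x = max(0.0, raw - floor) / (1.0 - floor)
    x = min(1.0, x * sensitivity)
    return x ** curve

# A light-touch player might choose higher gain and a flatter low end.
print(shape_response(0.30, sensitivity=1.4, curve=0.7))
```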
It’s early enough in the design that Aodyo is likely still tweaking the electronic, mechanical, and software parts of this system. The software, at least, can still be refined after release.
I mean, sure. But it also cannot transcend the limitations of its sensors, no matter how clever the code. Everything we’ve seen leads me to believe the wood panel behaves like a capacitive touch surface, much as a smartphone does (or, perhaps more aptly given the wood, the top of an Expressive E Touché, or the Morphée of a PolyBrute). Of course, the interesting thing about the Touché and the Morphée is that neither relies on finger smoosh for calculating Z-force; they each have an independent sensor for that.
The Moog Voyager has a similar touch surface that uses finger smoosh to calculate Z-force, and it’s… fine? Better than nothing, but not what I would call a precise expressive tool, and no amount of sensitivity adjustment will change that.
Then you have the iPad, with a touch surface and gyros. Between the two you can get decent velocity sensitivity in GarageBand, but no real attempt at non-instantaneous Z-force like aftertouch.
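Roughly, my understanding of how that trick works: pair each touch-down with the motion-sensor transient around the same instant. A toy version, assumed behavior on my part and not Apple’s actual code:

```python
# Toy illustration of touch + motion fusion for strike velocity, in the
# spirit of what the iPad appears to do. Assumed, not Apple's implementation.

def strike_velocity(touch_time: float,
                    accel_samples: list[tuple[float, float]],
                    window_s: float = 0.02,
                    full_scale_g: float = 2.0) -> int:
    """Use the peak case acceleration near the touch-down as strike strength.

    accel_samples: (timestamp, |acceleration| in g) pairs from the motion sensor.
    """
    nearby = [a for t, a in accel_samples if abs(t - touch_time) <= window_s]
    if not nearby:
        return 64  # no transient found: fall back to a middle velocity
    peak = min(max(nearby) / full_scale_g, 1.0)
    return max(1, round(peak * 127))

# A hard tap shows up as a sharp acceleration spike right at touch-down.
samples = [(0.000, 0.02), (0.008, 1.3), (0.012, 0.9), (0.040, 0.05)]
print(strike_velocity(touch_time=0.010, accel_samples=samples))
```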
So given what we know about the limits of these sensor technologies, it seems like we should expect GarageBand-like velocity sensitivity and Voyager-like Z-force, maybe a slightly improved version of each given technological progress over the years.
But what we should not expect is an order-of-magnitude improvement as seen in, say, a LinnStrument. The grid of resistive strips necessary to make that happen wouldn’t work with a rigid touch surface like wood. And the inclusion of a separate, independent sensor specifically for precise measurement of Z-force (the action zone) would seem to confirm this on some level.
But yeah, I’d love for Aodyo to set some explicit expectations around z-force/aftertouch. Letting customers’ imagination run wild with what’s possible is a good recipe for disappointment.
I’ve done touch-surface UIs since the late 1980s, and a lot of high-speed mechanical sensing systems as well. (I could do this one.) You are right, there is a wide range of possibilities depending on the implementation. I am also aware of how good this sort of thing can be when done right. The current state of the art is very good.
This is just part of the deal with crowdfunding: you are buying something that floats in the future, whose final form not even the developers completely know. It’s an important question.
I think Aodyo can, and likely will, do this so that it is considered quite special.
Seems like you’re in a good position to tell us what expectations should be, then. Assuming a state-of-the-art implementation, should we be expecting aftertouch performance akin to an Osmose? A LinnStrument? A MicroFreak? An iPad? Given your experience, what are the upper and lower bounds on “quite special”?
I backed. I’ve been looking for a ribbon controller lately, and my expectations are based around that: I assume this will be an excellent ribbon controller with bonus features. At the crowdfunding price, the cost is not much more than ribbon controllers from Eowave or Doepfer, but it offers some interesting customization.
Overall, I think the synth world is limited by the lack of novel interfaces, so I welcome anyone trying to provide a new experience or mode of expression.
Good range here, and also a mix of technology.
I still use my Sensel Morph and was an early adopter. That’s likely a good target, given its advanced sensing system. I’m not expecting better than that from a non-moving surface.
The iPad currently doesn’t do pressure. This is a different function with different requirements, so you have room to focus in with the Loom. And like I said, you can let users tune it to their specific needs: percussive versus flowing, fat fingers versus thin, etc. This is for musical expression, so I would approach it from that perspective.
The processor in this case is pretty much dedicated to the interface, unlike the iPad, where you’re doing the display, and calculations, and communications, and…
Human movement is sloooow when viewed by a processor running at today’s speeds. Even sound is slow from that perspective, and music slower still. A capacitive sensing system produces a lot of relatively high-precision data, so you can process it in a DSP sense, or even with an AI sort of approach (which is probably the state-of-the-art approach). The system could learn your patterns and use that as feedback to improve.
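To put numbers on “slow”: a finger gesture has useful bandwidth of maybe 20 to 30 Hz, while a scan rate of, say, 1 kHz gives you dozens of samples per wiggle, so even a simple one-pole smoother plus a derivative yields clean position and rate signals. A quick sketch, with rates and coefficients purely my assumptions:

```python
# Sketch of the headroom a fast scan rate buys you: smooth an oversampled
# capacitive reading and take its derivative. Numbers are illustrative.
import math

SCAN_HZ = 1000.0  # assumed sensor scan rate
ALPHA = 0.1       # one-pole smoothing coefficient

def smooth_and_rate(samples: list[float]):
    """Return (smoothed value, rate of change per second) for each sample."""
    out = []
    y = y_prev = samples[0]
    for x in samples:
        y = y + ALPHA * (x - y)        # one-pole low-pass
        rate = (y - y_prev) * SCAN_HZ  # derivative of the smoothed signal
        out.append((y, rate))
        y_prev = y
    return out

# A 25 Hz finger wiggle sampled at 1 kHz is 40 samples per cycle: plenty.
wiggle = [math.sin(2 * math.pi * 25 * n / SCAN_HZ) for n in range(40)]
for y, rate in smooth_and_rate(wiggle)[:4]:
    print(round(y, 4), round(rate, 2))
```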
The Haken system was mentioned earlier. Since that was tuned for musical use, I’d have a Continuum in the lab for the engineers to play with, to get a feel for it, though given its different base technology, it isn’t necessarily what you measure against.
I’d also set up benchmark objectives. For instance: put a left finger and a right finger on the surface, move them past each other left to right, and track both as they cross.
Striking two fingers close to each other at different velocities, forces, and timings would be another benchmark test.
In fact, I’d probably rig some sort of mechanical fingers to create repeatable tests to measure against, and scientifically improve the system with musical performance as the objective.
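That crossing-fingers test boils down to a touch-tracking (data association) problem. Here’s a naive version a test rig could score against, invented purely for illustration:

```python
# Toy frame-to-frame touch tracking, the thing the crossing-fingers benchmark
# above stresses. Greedy nearest-neighbor matching; invented for illustration.
import math

def match_touches(prev: dict, current: list) -> dict:
    """Assign stable IDs to this frame's touch points.

    prev: {touch_id: (x, y)} from the last frame
    current: [(x, y), ...] detections this frame
    Greedy nearest-neighbor works until two fingers cross closely, which is
    exactly why you'd test it with a rig.
    """
    assigned, unmatched = {}, list(enumerate(current))
    for tid, (px, py) in prev.items():
        if not unmatched:
            break
        # Pick the closest unclaimed detection for this existing touch.
        j, (x, y) = min(unmatched,
                        key=lambda c: math.hypot(c[1][0] - px, c[1][1] - py))
        assigned[tid] = (x, y)
        unmatched.remove((j, (x, y)))
    next_id = max(prev, default=-1) + 1
    for _, pt in unmatched:  # brand-new touches get fresh IDs
        assigned[next_id] = pt
        next_id += 1
    return assigned

# Two fingers approaching each other; IDs should stay put as they pass.
frame1 = {0: (10.0, 5.0), 1: (20.0, 5.0)}
print(match_touches(frame1, [(12.0, 5.0), (18.0, 5.0)]))
```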
Post too long, stopping here.