Imho there’s no practical way to do this with any real consistency, especially once you consider the modulation of sounds and the many ways sounds can be transformed. Even if you could somehow fetch a sound’s parameters by requesting a dump (assuming Elektron even supports that, and I wouldn’t expect them to document it), processing it in a timely manner would be onerous.
I think the best strategy for getting even a partial handle on the sound parameters is to define them externally and use that external control data as your source for further processing. There are obviously plenty of ways the Elektron sequencer can override these values, but it’s a practical way to explore the potential. You’ll need a means of sending CC values for the settings you want to animate audibly/visually.
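To make that concrete, here’s a minimal sketch of building raw MIDI Control Change messages to drive externally defined parameters. The CC number used is a placeholder, not an actual Elektron assignment; check the device’s MIDI implementation chart for the real mappings.

```python
def cc_message(channel: int, controller: int, value: int) -> bytes:
    """Return a 3-byte MIDI Control Change message (status 0xB0 | channel)."""
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("channel must be 0-15, controller and value 0-127")
    return bytes([0xB0 | channel, controller, value])

# e.g. set a hypothetical filter-cutoff CC (74) to 100 on channel 1:
msg = cc_message(0, 74, 100)
```

You’d hand these bytes to whatever MIDI output library you’re using; the point is that because you generated the value yourself, you already know the parameter state without asking the device.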
Just a thought: if you do most of your tweaking and playing live, you can send the user input out as MIDI, but not the sequenced data. A played note can be spied on, as can any tweak of a parameter, and those feeds are easy to monitor. Negotiating with the device itself is on another level of complexity and very far from trivial.
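Monitoring those feeds can be as simple as filtering the incoming byte stream for note-ons and CCs. A rough sketch, assuming raw 3-byte messages (in practice you’d read them from a MIDI input port, e.g. with a library like mido):

```python
def monitor(messages):
    """Track latest CC values and played notes from a stream of raw MIDI triples."""
    cc_state = {}   # (channel, controller) -> most recent value
    notes = []      # (channel, note, velocity) for each note-on
    for status, data1, data2 in messages:
        kind, channel = status & 0xF0, status & 0x0F
        if kind == 0xB0:                  # Control Change: a parameter tweak
            cc_state[(channel, data1)] = data2
        elif kind == 0x90 and data2 > 0:  # Note On (velocity 0 counts as off)
            notes.append((channel, data1, data2))
    return cc_state, notes
```

That’s the whole trick for the live-input side: the state you care about is just the last value seen per controller, with no negotiation with the device required.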
There’s no easy way to do what you want, imho, and anything beyond these trivial suggestions will involve a disproportionate amount of effort for the potential payoff. Sequencing externally affords you the simplest way to follow the parameter changes.
My 2c, but perhaps await a second opinion or alternative strategies.