already there, judging from the audio conference I went to last year. live source separation is all over DJ apps; turns out your phone has a chip powerful enough to do things like that

I think the question I'm more interested in is: what does it look like to have a CDJ with proper machine learning built in? It opens a lot of possibilities for how to collage or work between different material on the fly in ways that weren't possible before.
realtime separation -> realtime transfer is definitely on its way; I'm sure a beefier chip could already handle it

Source separation is nice, but I wonder about, say, rewriting lyrics in real time, or doing style transfer to make the drums in one track sound like the ones from another. Or bridging multiple tracks by synthesizing space between them. There's a ton you can do, especially if you don't quite need studio quality.
as in vector space embeddings?
if you have two different sounds encoded into their latent representations, they are essentially points in a space you can move between, i.e. interpolate between, think the Think (About It) break and the Amen break, or subtract them from each other.
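in numpy terms the arithmetic looks roughly like this, a sketch only: the latent vectors here are random stand-ins, whereas in a real setup they'd come from an audio autoencoder's encoder, and the 128-dim size is an arbitrary assumption:

```python
import numpy as np

# Stand-in latent vectors for two sounds. In practice you'd get these from
# an encoder, e.g. z_a = encoder(think_break), z_b = encoder(amen_break).
rng = np.random.default_rng(0)
z_a = rng.normal(size=128)  # hypothetical latent for the Think break
z_b = rng.normal(size=128)  # hypothetical latent for the Amen break

def interpolate(z1, z2, t):
    """Linear interpolation between two latent points; t in [0, 1]."""
    return (1 - t) * z1 + t * z2

# Walk from one sound to the other in latent space; decoding each step
# would (ideally) give you hybrids of the two breaks.
steps = [interpolate(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 5)]

# The "subtract them from each other" idea: the difference vector captures
# whatever separates the two sounds in the latent space.
diff = z_b - z_a
```

whether the midpoints actually sound like meaningful hybrids depends entirely on how smooth the model's latent space is, which is exactly what makes autoencoders trained for this interesting.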
will post some demos in a bit for those interested
afaik autoencoders are a bit deeper than simple vector embeddings, but I think the principle of the vector arithmetic is the same, or at least it looks like that to me
US air force denies running simulation in which AI drone 'killed' operator
Denial follows colonel saying drone used 'highly unexpected strategies to achieve its goal' in virtual test (www.theguardian.com)