hckrnws
From my brief testing in the playground, it is not very good. Maybe it needs better prompting than the one-word examples.
Funny that:
- This feature is awesome for sample-based music
- Sample-based music isn't what it used to be, because of legal-rights difficulties
- This model was probably built without giving a damn about said rights
I hope we keep making progress in isolating tracks in music. I love listening to stems of my favorite songs; I find all sorts of neat parts I missed out on. Listening to isolated harmonies is cool too.
It should also make it possible to re-record, in higher quality, stuff that is impossible to find in good quality. Like that cover that that band played only once at that obscure concert, recorded on an old tape. Or many very old reggae songs: although many from Jamaica/Kingston had great recordings (there was know-how, and there were great recording studios there), there's also a shitload of old reggae songs that are barely listenable because the recording is so poor (and, no, it's not an artistic choice by the artist: it's just, you know, a crappy recording).
I use moises frequently for track separation when learning songs. It does pretty dang well. I was shocked that moises is ranked way worse than just about everything else, including lalal.ai, which I also used before buying moises. Perhaps lalal.ai has gotten better since I last tried it.
Maybe I'm totally misinterpreting, but the chart I'm looking at says "Net Win Rate of SAM Audio vs. SoTA Separation (text prompted)", so perhaps a lower number means that the alternative model is better?
As someone who records myself playing music, I've been meaning to see if any of these tools are good enough yet to not only separate vocals from another instrument (acoustic guitar, for example), but to do so without any loss of fidelity (or at least not a perceivable one).
The reason I'm interested in this is that recording with multiple microphones (one on the guitar, one on the vocal) has its own set of problems with phase relationships and bleed between the microphones, which cause issues when mixing.
Being able to capture a singing guitarist with a single microphone placed in just the right spot, while still being able to process the tracks individually (with EQ, compression, reverb, etc.), could be really helpful.
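The phase problem mentioned above can be shown with a toy numeric sketch (entirely synthetic signals, nothing to do with any particular separation model): a 1 kHz tone that reaches a second microphone half a wavelength later arrives 180 degrees out of phase, and summing the two tracks cancels it almost completely.

```python
import math

SR = 48000      # sample rate in Hz (assumed for the example)
FREQ = 1000.0   # test tone in Hz
N = 4800        # 0.1 s of audio

# "Close" mic: a pure 1 kHz tone.
close_mic = [math.sin(2 * math.pi * FREQ * n / SR) for n in range(N)]

# "Far" mic: same tone delayed by half a period (about 17 cm farther
# from the source), i.e. 180 degrees out of phase.
half_period = SR / (2 * FREQ)  # 24 samples at 48 kHz
far_mic = [math.sin(2 * math.pi * FREQ * (n - half_period) / SR)
           for n in range(N)]

# Summing the two tracks in a mix: the bleed cancels the direct signal.
mix = [a + b for a, b in zip(close_mic, far_mic)]

peak_single = max(abs(x) for x in close_mic)  # ~1.0
peak_mix = max(abs(x) for x in mix)           # near zero: destructive interference
print(f"single mic peak: {peak_single:.3f}, mixed peak: {peak_mix:.6f}")
```

Real recordings have comb filtering (partial cancellation across many frequencies) rather than total silence, but this is why engineers obsess over mic placement and the 3:1 rule when tracking voice and guitar at once.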
Would be great for the hearing impaired and CAPD sufferers when combined with Meta glasses or the like.
very cool idea
Can this be used to nuke the laugh tracks?!?
Playing with the background demo, I tried to isolate just the espresso machine and the train sounds, and it seemed to fail. Maybe not the desired use case, but I thought it was odd that I could break it so easily on the sample material.
Footsteps worked pretty well when I tried that, on the other hand. I wonder if a lot of it has to do with how well the model understands what the English description of the sound should sound like...
Super amazing demo performance, being able to separate out music, voice, and background noises. Do you have to explicitly specify what type of noise to separate?
mSAMA haha, get it
"Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
Basically the same thing musicians said about the synth and music made by computers back in the day
100%. The music world has gone through the "but what will we do now?" moment at least 6-7 times. Music videos ("video killed the radio star"), sampling, the DAW (and time aligning), home studios, auto-tune, plugins and amp simulators, napster/piracy, etc, etc.
That’s pretty much been the story since the Neolithic revolution though?