pouët.net


LADSPA/NYQUIST challenge

category: music [glöplog]
 
I need a bit of help:
I have an analog recording of a radio broadcast, heavily polluted by another station drifting onto the same frequency. So basically it is a conversation between people, polluted by almost equally loud music from the other station. I have clean recordings of all the music pieces in the background, and now I would like to extract that very conversation (something like recovering the raw vocal when you have a song and a karaoke version of it).
The obvious and naive solution (mixing the music in opposite phase to cancel it out) won't work. I've figured this should be done in the FFT domain, yet Audacity's "noise removal" filter is a bit too stupid to do that, and my NYQUIST skills are nonexistent.

Can anyone point me to the right tool or script?
added on the 2011-03-30 18:40:12 by bizun_ bizun_
I wonder if you could take two radios and keep one of them at the right distance to get its audio 180° out of phase. I don't know if that's even possible.
added on the 2011-03-30 22:58:44 by sigflup sigflup
I just played about in Goldwave and I think that'll do what you want.

I mixed together 2 music tracks, and used the expression evaluator to subtract one of the originals from the mix, leaving me with the other original again. I just used "wave1(n)-wave2(n)". The instruction manual for the expression evaluator is simple enough to understand.
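The Goldwave experiment above amounts to plain sample-wise subtraction; a minimal sketch in Python (numpy assumed, with synthetic noise standing in for the two tracks):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 44100
track1 = rng.standard_normal(n)   # stands in for the music
track2 = rng.standard_normal(n)   # stands in for the speech
mix = track1 + track2             # the "polluted" recording

# wave1(n) - wave2(n): subtract a known original from the mix
recovered = mix - track1

# With perfect sample alignment and no processing, recovery is exact
print(np.allclose(recovered, track2))  # True
```

This only works because the mix was made digitally; any resampling, filtering, or phase shift in a real broadcast chain breaks the sample-wise alignment.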
Still, I think the waveforms from the broadcast music and the clean studio versions will differ enough to make it not so easy. It's no doubt possible, and probably rather easy, to clean it up a bit, but removing the music near 100% could get messy.
added on the 2011-03-31 01:14:04 by wysiwtf wysiwtf
Side note: if the music is 'stereoish' and the voices are in the middle, you can use the old 'karaoke'/'voicekiller' trick of phase-reversing one channel and adding it to the other, so you get a mono track where the stuff in the middle is louder than the stereo bits ... unfortunately almost all music has the majority of instruments in the middle or only slightly to the sides ... you never know, it might clean it up a bit ... my two cents
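A sketch of that phase-reverse trick on a synthetic stereo signal (numpy assumed; all signals here are made up for illustration). Note the result of summing one channel with the other inverted is L − R, which *cancels* the centre content rather than boosting it, as the next reply points out:

```python
import numpy as np

# Hypothetical stereo mix: a centred voice plus hard-panned music
n = 1000
t = np.arange(n)
voice = np.sin(0.05 * t)     # identical in both channels (centre)
music_l = np.sin(0.11 * t)   # panned left
music_r = np.sin(0.17 * t)   # panned right

left = voice + music_l
right = voice + music_r

# Phase-reverse one channel and add: the shared (centre) part cancels
side = left + (-right)       # equals music_l - music_r, voice is gone
```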
@weyland: In my mind, your recipe will cancel out the shared part, not amplify it...
I think you need to simulate the effect of radio modulation. The main effect of this should be low-pass filtering of the music at something like 8 kHz (but you may try other values around that). This is not counting the effect of the over-the-air broadcast, which depends on the weather, the number of walls the waves went through, and so on, so it may not be enough.
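A minimal sketch of that low-pass step in pure numpy (a windowed-sinc FIR; the 8 kHz cutoff is the value suggested above and purely a starting point, and the 44.1 kHz sample rate is an assumption):

```python
import numpy as np

fs = 44100       # assumed sample rate of the clean music
cutoff = 8000    # Hz, the value suggested above
taps = 101

# Windowed-sinc low-pass FIR at the suggested cutoff
t = np.arange(taps) - (taps - 1) / 2
h = np.sinc(2 * cutoff / fs * t) * np.hamming(taps)
h /= h.sum()     # unity gain at DC

music = np.random.default_rng(1).standard_normal(fs)  # placeholder signal
music_lp = np.convolve(music, h, mode="same")
```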

Then, you need to remove this from the recording. Brute subtraction in the time domain can work, but you need very good synchronisation. Subtraction in the FFT domain lets you analyse things over wider time windows (like 512 samples), so the time sync may not be as important. You don't need to care about the phase, since the subtraction should be done on the amplitude spectrum, not the phase one. I'm not sure what to do with the phase afterwards; I don't remember whether it matters for the ear and not the eye, or the reverse...

Anyway, that's assuming the radio modulation didn't introduce a frequency shift somewhere :)
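The magnitude-domain subtraction described above, sketched for a single 512-sample window (numpy assumed; the polluted recording's phase is reused for the inverse transform, a common choice since the ear is far less sensitive to phase than to magnitude):

```python
import numpy as np

N = 512
rng = np.random.default_rng(2)
recording = rng.standard_normal(N)   # one frame of the polluted recording
music = rng.standard_normal(N)       # the aligned clean-music frame

win = np.hanning(N)
R = np.fft.rfft(recording * win)
M = np.fft.rfft(music * win)

# Subtract magnitudes, clamp at zero, keep the recording's phase
mag = np.maximum(np.abs(R) - np.abs(M), 0.0)
cleaned = np.fft.irfft(mag * np.exp(1j * np.angle(R)), n=N)
```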
@hooverphonique: quite right .... I'm sorry
Could you not then subtract that from the original to do what you originally suggested?
added on the 2011-03-31 14:47:04 by psonice psonice
This sounds very much like "echo cancellation" which is commonly used in telephony systems. It is used to suppress sounds in the recording which were emitted by the loudspeaker.

Not that I could recommend any echo cancellation software, but maybe the info helps for your research.
added on the 2011-03-31 15:19:27 by chock chock
Thanks folks.
Simple phase cancellation and even homodyne detection don't work: the recording is a mono cassette tape, and the original signal was two closely spaced FM broadcasts.
The music (that needs to be removed) was put through an FM pre-emphasis filter, then an LPF at 19 kHz; but the music I have is in MP3 format, and slight phase rounding errors cause the time-domain methods to fail.
@Pulkomandy
What I came up with is much the same as your proposal, like this:
1) synchronize two tracks, one with original recording, second with flattened and filtered music pieces
2) for each frame:
--get FFT of both signals
--get the magnitude M[] from the sine and cosine components
--subtract the magnitude vectors: Y[] = M1[] - c*M2[], where c is a magic number to be found by trial and error,
--perform the inverse Fourier transform and save the signal to a third track
I expect the output to be heavily mutilated (the phase information will be lost), but for just speech it can work. If I can separate the useful signal from the background by at least 20-24 dB, I'll call it a success.
I heard NYQUIST is good for that, but my problem is that I lack proper skills in programming/coding.
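The recipe above can be sketched as a full pass over two synchronized tracks (numpy assumed; `spectral_subtract`, the frame size, and the hop are illustrative choices, `c` is the trial-and-error scaling factor from step 3, and the recording's phase is reused for the inverse transform):

```python
import numpy as np

def spectral_subtract(recording, music, c=1.0, n=1024, hop=512):
    """Subtract the magnitude spectrum of `music` from `recording`,
    frame by frame, keeping the recording's phase (overlap-add)."""
    win = np.hanning(n)
    out = np.zeros(len(recording))
    norm = np.zeros(len(recording))
    for start in range(0, len(recording) - n, hop):
        r = np.fft.rfft(recording[start:start + n] * win)
        m = np.fft.rfft(music[start:start + n] * win)
        # Step 3: Y[] = M1[] - c*M2[], clamped so magnitudes stay >= 0
        mag = np.maximum(np.abs(r) - c * np.abs(m), 0.0)
        # Step 4: inverse transform with the recording's phase
        frame = np.fft.irfft(mag * np.exp(1j * np.angle(r)), n=n)
        out[start:start + n] += frame * win
        norm[start:start + n] += win ** 2
    return out / np.maximum(norm, 1e-12)
```

As a sanity check, feeding the same signal in as both inputs with c=1 should return (near) silence.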
added on the 2011-03-31 17:43:14 by bizun_ bizun_
All of this can likely be done easily in MATLAB or Scilab, the latter being free software. There is an FFT function doing all the hard work there :)
The FFT is overkill in most cases and won't bring _any_ advantage to such problems. In fact, the FFT is like XML: instead of having one problem, you now have two.

I would suggest trying an adaptive filter to separate the two recordings. If you want, I could give it a try.
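An adaptive filter along those lines can be sketched as a basic normalized-LMS canceller (numpy assumed; `lms_cancel`, the tap count, and the step size are illustrative): the clean music is the reference input, the filter learns whatever linear filtering the broadcast chain applied, and its best estimate is subtracted from the recording:

```python
import numpy as np

def lms_cancel(recording, reference, taps=64, mu=0.5):
    """Normalized-LMS cancellation: estimate the reference's
    contribution to `recording`; the residual is the speech estimate."""
    w = np.zeros(taps)
    out = np.zeros(len(recording))
    for i in range(taps - 1, len(recording)):
        x = reference[i - taps + 1:i + 1][::-1]  # newest sample first
        y = w @ x                                # estimated music component
        e = recording[i] - y                     # residual = speech estimate
        w += mu * e * x / (x @ x + 1e-8)         # normalized LMS update
        out[i] = e
    return out
```

If the music really went through an unknown (but roughly linear) channel, the residual converges toward the speech alone without needing sample-exact pre-alignment of the two tracks.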
added on the 2011-03-31 20:31:54 by trc_wm trc_wm
