How to get into synth / executable music?
category: music [glöplog]
I think his point was to show he knew something really weird and all, and maybe he wanted to show us something new (though Bidule is far from new), or maybe he tried to troll (in which case, he won)
nope, I just didn't know Bidule, and since dataflow languages popped up I thought it was at least partly relevant.
4mat... come on, xm < it
knl: pffff.... so untrue. :)
pffff ff fff ffff fffffffffffff TO YOU!
pffff ffff ffff with volume envelopes and a nicer interface back to you!
PFFFFfffff with a nice lowpass and super control commands over my 64 channels, not even using my NNAs
pffffFFFfff XM guys can do it in half the channels and lowpass was a one-trick pony
ok so we all go back to protracker and stfu now? :)
no, now we move onto directx vs opengl
why not ribbon vs glow?
interface first, fx next. heh typical IT user wanting to do it the other way. ;)
hey, who needs a mouse cursor? come on, noobs...
Just wondering:
Has anyone here ever tried to use adaptive filters for vocoder like audio effects?
The way I understand adaptive filters is that they have two inputs: one signal that the filter constantly learns from, and one input that gets processed by the filter. IMHO the following setup could sound interesting:
Feed a melody line into the adaptive filter's learn input. Program the filter so that it adapts to the frequency response of that input (i.e. it learns to pass the harmonic content of the melody).
Then feed a harmonically rich signal into the filter input. Said signal could be a simple square wave, noise, or maybe just a thick reverb trail of the original signal.
On paper this method would imprint the harmonic content of the learning signal onto the second input. Somewhat like how a vocoder works, just different. The amplitude envelope will get lost, but it can be reconstructed with a simple envelope follower.
Just wondering... I've read a lot about alternative filter topologies these days, but I've never heard of anyone using adaptive filters for musical applications.
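For what it's worth, here's a rough Python sketch of that idea using a normalized LMS filter as an adaptive line enhancer: the filter learns to predict the melody from its own delayed copy, which makes it converge to a bandpass around the melody's partials, and that learned response is then imposed on a different carrier. The 440 Hz "melody", the tap count, and the step size are all made-up illustration values, not taken from any real implementation:

```python
import numpy as np

def nlms(x, d, taps=64, mu=0.5, eps=1e-8):
    """Adapt FIR weights w so that filtering x predicts d (normalized LMS)."""
    w = np.zeros(taps)
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]          # most recent input samples first
        e = d[n] - w @ u                 # prediction error
        w += mu * e * u / (u @ u + eps)  # NLMS weight update
    return w

sr = 8000
t = np.arange(sr) / sr
melody = np.sin(2 * np.pi * 440 * t)     # "learn" input: a bare 440 Hz tone

# Adaptive line enhancer: predict the melody one sample ahead from its own
# past, so the weights converge to a filter that passes the melody's spectrum.
w = nlms(melody[:-1], melody[1:])

# Impose the learned response on a harmonically rich carrier (square wave).
carrier = np.sign(np.sin(2 * np.pi * 220 * t))
out = np.convolve(carrier, w, mode="same")
```

As the reply above notes, in practice the adaptation may be too slow to track a moving voice spectrum; a sketch like this only shows the static case.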
Yes, I tried using adaptive filters for vocoding purposes back in the late '90s; they don't adapt quickly enough to track the incoming (voice) spectrum. You're better off using a regular filter bank.
SaneStation has been released for The Gathering 2012, with an associated SaneStation music compo. You get the complete VST in its latest incarnation (well, LPC is removed since we didn't want people to go nuts with samples, which sort of would defeat the purpose of the compo :-) ), full documentation, and pretty much everything you need to compose 64k music in all its glory.
Head over, play with the synth, squeeze out your best sounds and hand in your entry! You have two weeks (well, minus two days). Even if you won't be at The Gathering, you can surely find someone willing to hand in the entry for you.
Pouet entry for SaneStation as well.
On this very topic - consider going to Revision 2012 in a week or so. There'll be a seminar on next-generation softsynths for size-restricted compos. Not sure exactly how relevant it is to your question, but I'll sure as hell be seated in the ordience for that seminar! ;)
Audience. I keep misspelling that!
ornli ordience!
for some reason I started cloning 4klang in JavaScript a while back. I lost interest after doing only one or two ops. JavaScript is really annoying for things that would work well in binary.
May revisit this clone as an iPhone app someday, but that seems like an even more ill-advised idea.
oh, would you look at that, a src release of 4klang, and on my birthday too. Ill-advised or not, this seems like a sign.
On the subject of soft synths (specifically VSTI based ones), do people prefer a fixed structure (a la V2, Z3TA etc) or a stack based system?
I'm working on a synth at the moment, which will hopefully be useful for 64k intros when it's done (it will be too big for 4k stuff), and I've gone with a stack-based approach. I'm just curious what other people/musicians prefer using. IMHO, stack-based systems seem to be a bit more complicated for the user and make it much easier to totally screw up patch design :)
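For anyone unfamiliar with the distinction: in a fixed structure the signal path is hardwired (osc -> filter -> amp), while in a stack-based synth a patch is a linear list of ops that push and pop signals, 4klang-style. A toy Python illustration — the opcode names and the example patch are invented for this sketch, not taken from any actual synth:

```python
import math

# Hypothetical opcodes for a stack-based patch: each op pops its inputs
# from the value stack and pushes its result.
def run_patch(ops, t):
    stack = []
    for op, *args in ops:
        if op == "osc":                  # push one sine oscillator sample
            freq, = args
            stack.append(math.sin(2 * math.pi * freq * t))
        elif op == "gain":               # scale the top of the stack
            amount, = args
            stack.append(stack.pop() * amount)
        elif op == "add":                # mix the two topmost signals
            stack.append(stack.pop() + stack.pop())
    return stack.pop()

# A patch is just a linear op list; the routing is implicit in stack order,
# which is also why it's easy to screw up (pop the wrong signal and the
# whole patch changes meaning).
patch = [("osc", 440.0), ("gain", 0.5), ("osc", 660.0), ("gain", 0.5), ("add",)]
sample = run_patch(patch, t=0.0)   # evaluate one sample at time t
```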