How to get into synth / executable music?
category: music [glöplog]
kb: ...but then, there's the fact that most of today's popular software synths (e.g. Native Instruments) only expand in the types of oscillators (MASSIVE) and wavetables / a few different delay effects (Absynth), but other than that it's still the same old subtractive synthesis, and the music that comes out of it is still perfectly fine. (In fact, what I miss from my "normal" music production when doing 64k stuff is that in a DAW I can always bounce to wave and chop things up - but again, that's just a technique for the music I tend to like.)
djh0ffman, don't forget that nic0 considers this gay funk.
And on a different level, even if you have, say, Vocaloid-level voice synthesis in your stuff, it'll still be secondary to the visuals in an intro and completely overlooked in an executable music compo, so there isn't exactly a large motivation to innovate.
Gargaj: On the surface, yes.
But: Adding more waveforms to your oscillator increases the range of sounds you can make with it. In my book, this is good. Also, "real" synths don't alias like fuck as soon as you get into the upper octaves. Also, they've probably got FM, ring modulation and osc syncing and STILL don't alias like fuck. Or they try to model some analog circuitry, or at least use waveforms sampled from analog hardware, so their sawtooths don't sound so goddamn _boring_.
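To illustrate the aliasing point: a naive digital sawtooth wraps with a hard discontinuity, which sprays aliases all over the spectrum once the pitch gets high. One common fix is polyBLEP, which smooths the wrap with a small polynomial correction. This is only an illustrative sketch in Python, not code from any synth discussed here:

```python
import math

def polyblep(t, dt):
    # Polynomial band-limited step: a correction applied in the
    # two samples surrounding the sawtooth's wrap discontinuity.
    if t < dt:                       # just after the wrap
        t /= dt
        return t + t - t * t - 1.0
    elif t > 1.0 - dt:               # just before the wrap
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def saw_polyblep(freq, sr, n):
    """Generate n samples of a sawtooth with the wrap smoothed by polyBLEP."""
    dt = freq / sr                   # phase increment per sample
    phase = 0.0
    out = []
    for _ in range(n):
        naive = 2.0 * phase - 1.0    # naive saw in [-1, 1]
        out.append(naive - polyblep(phase, dt))
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out
```

Compared to the naive ramp, the corrected wrap rolls the aliases off dramatically at high notes while costing only a couple of branches per sample.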
How about filter modelling that gets beyond the cheap state-variable designs or Butterworth models ripped from musicdsp.org? Ladder filters? How about "extreme" conditions like saturation in the filter feedback loops? Even the really simple "oversample the standard SVF to get to the full sample rate" trick I employed in V2 isn't too commonly used. No wonder, if people just copy the formulas from somewhere and treat them as magic. Sorry, I kind of prefer products from people who have at least _heard_ the term "phase response", kthx.
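The oversampling trick mentioned above can be sketched as follows: the classic Chamberlin digital SVF goes unstable as the cutoff approaches roughly fs/6, so you run it at twice the host rate, stepping the filter twice per input sample. A rough Python sketch under those assumptions — this is not V2's actual code, and the class and parameter names are made up:

```python
import math

class OversampledSVF:
    """Chamberlin state-variable filter stepped at 2x the host sample rate,
    which pushes the usable cutoff range well past the single-rate limit."""
    def __init__(self, sr):
        self.sr = sr
        self.low = 0.0
        self.band = 0.0
        self.set_params(1000.0, 0.7)

    def set_params(self, cutoff_hz, q):
        # Frequency coefficient computed for the doubled internal rate.
        self.f = 2.0 * math.sin(math.pi * cutoff_hz / (2.0 * self.sr))
        self.q = 1.0 / q

    def process(self, x):
        # Two internal filter steps per output sample (2x oversampling);
        # the input is simply held constant across both sub-steps.
        for _ in range(2):
            high = x - self.low - self.q * self.band
            self.band += self.f * high
            self.low += self.f * self.band
        return self.low              # low-pass tap; band/high are also available
```

A proper implementation would also interpolate the input or filter the output when decimating, but even this crude version keeps the filter stable much closer to Nyquist.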
Distortion/Waveshaping that doesn't alias? EQs that don't royally fuck up the phase? Compressors with proper response curves and transient handling? Non-grainy-as-hell reverb? Tape delays? Exciters/enhancers? Speech synthesis that you can understand and that's not the Windows speech synth behind a vocoder? Physical modelling that at least employs a rudimentary model of the acoustic properties of the instrument it's supposed to emulate instead of "delay line+1st order lowpass"?
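For reference, the "delay line + 1st order lowpass" physical model being dismissed above is essentially plain Karplus-Strong string synthesis: a noise burst circulating in a delay line, with a two-point average as the loop's damping filter. A minimal sketch (illustrative only; the decay constant is an arbitrary choice):

```python
import random

def karplus_strong(freq, sr, n, seed=1):
    """Plucked-string tone via the basic Karplus-Strong loop:
    noise burst -> delay line -> two-point average (1st-order lowpass)."""
    rng = random.Random(seed)
    period = int(sr / freq)          # delay length sets the pitch
    buf = [rng.uniform(-1.0, 1.0) for _ in range(period)]  # excitation burst
    out = []
    for i in range(n):
        s = buf[i % period]
        nxt = buf[(i + 1) % period]
        # Averaging damps high frequencies; 0.996 adds overall loop loss
        # so the "string" actually decays to silence.
        buf[i % period] = 0.996 * 0.5 * (s + nxt)
        out.append(s)
    return out
```

That's the whole baseline model: it does sound string-like, but nothing in it knows about a body, a bridge, or pick position, which is exactly the complaint being made.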
And don't even get me started on how important the GUI look and feel is to a sound designer. Also, compare demoscene synths with "real-world" software in this regard.
There's SO much difference between a 4k/64k synth and proper VSTis that it's not even funny anymore. And with today's CPUs it should be perfectly possible to do something good in 32 or 64 kilobytes. Sadly everyone else seems even lazier than me. :)
And yeah, audio is always overlooked (haw haw)/underappreciated - so I can kinda understand why nobody's up for the task. :/
Quote:
Perhaps that's because people get discouraged when they find out that only 30% of the stuff on musicdsp.org actually works, or there's simply way less scientific papers to steal from than there is in graphics. I don't know.
just my 2 cents here ...
the lack of good tutorials/resources is for sure one thing. you find lots of simple examples and explanations about basic synthesis (similar to the nehe tutorials, if you want to compare that to graphics programming), but after a while of dealing with synth programming you could have found out most of that yourself anyway.
in my experience decent documents about new/advanced techniques are:
1. hard to find
2. even harder to understand (at least i don't want to get a mathematics degree first just to understand them)
also i personally find it much harder to experiment with sound code for getting fresh results than doing that with graphics.
and after all ... many papers exist and extensive research is done for graphics, especially by the vendors and game companies. the amount of work and brains spent there is not even remotely comparable to what is spent on synthesis/sound. if you have gigabytes of data for a game, why spend much time on research in that field when you usually only play back wave files anyway?
Well, I never expected to cause such a huge discussion on the subject!
In summary, I will be trying out the two synths mentioned and seeing what I come up with!
I think it's quite interesting how in computer games there has been so much improvement in graphics, and lately also in physics and other things, but not really in sound/music. What about music which actually adapts to the current situation in the game in a more complex way than just switching between a few prerecorded songs? I believe that is very rare; I remember that X-Wing in DOS had a nice way of mixing MIDI files, which I have not seen anywhere else since then.
Or, what about song synthesis - programs which can generate songs? There have been some promising results recently without really a lot of effort, so I think a program which can be told to make an e.g. "60% Tiesto, 30% Bach, 10% Nightwish" song, could be written by a medium-sized software company or some researchers, it's just that there is no market or interest for it.
I think that is not so much because people care less about the music, it's just that music is perceived differently. We can relatively easily tell apart ok from good, or good from great visuals, but what about music? We usually don't unconsciously listen to it, so it really only needs to be "ok". The same also applies at the lower level of the shaders/synthesizers. The Crysis of software synthesizers simply wouldn't be easily recognizable.
I meant "don't consciously" - and while we can say we like a certain song more or less, it's much harder to define "why".
Right, well the farbrausch V2 synth doesn't seem to want to let me set the patches for each MIDI channel in Cubase 5! :(
There are actually quite a few interesting books on the subject of sound synthesis.
For instance, "Musical applications of microprocessors" by Hal Chamberlin. Old & out-of-print, but very hands-on and easy to understand. Then there's Curtis Roads' The computer music tutorial, which covers many synthesis techniques.
Other interesting reads are user and service manuals for old synthesizers, such as the Oberheim Xpander (interesting filter), EMS VCS-3 (routing capabilities) and various Yamaha FM units and PPG/Waldorf units (REAL wavetable synthesis, not that sample playback crap).
Finally, there's Julius O. Smith III at CCRMA and the guys/gals at Helsinki University (hi, antti!)
So there are plenty of sources to "borrow" your ideas from!
Hmm, I forgot the downloadable proceedings of the DAFx conference and the (non-free) book by the same name.
Quote:
And yeah, audio is always overlooked (haw haw)/underappreciated - so I can kinda understand why nobody's up for the task. :/
And don't forget that boiling your dick in molten lead is more fun than coding a GUI for a VST. Oh, the agony.
If you have the right params at the right place, you don't really need a gui :3
kb: So, even though V2 is a nice piece of software, and in many ways is the canonical one, I find it interesting that you always feel the need to slam everybody else's synths as completely useless every time somebody touches the topic. :-) Is there _nothing_ in all the music that's been released that you like the sound of? Did you ever consider that people might have done something cool in a synth but the musician was unable to properly showcase it (or perhaps rather, wanted to make music instead of showcasing the synth)?
As I see it: Like any other demomaking, making decent 64k music involves combining the efforts of two roles, a coder and a musician. The problem (or at least, one of them) is that while most coders have some kind of feel for what _looks_ good, it's a lot harder to know what kind of tools a musician needs to make things _sound_ good. You need a closer collaboration than the usual “oh, here's the soundtrack, make cool visuals to it”. To make things worse, most musicians are simply not interested; why spend your time on some limited demo synth when you can let your creative abilities roam free with all the generators and effects in the world at your disposal? Of course, this problem becomes moot when the musician and the coder are the same person — witness the success of yourself, Gargaj and loaderror in this field.
Just ask @kb and @gargaj
If they're not busy fighting about which is the better method to do sound, they might TEACH you how to get some good basic sound out of a self-developed synth.
I'm sure they know what I mean. -.-
Quote:
witness the success of yourself, Gargaj and loaderror in this field
and spookysys!!
Sesse:
Quote:
Speech synthesis that you can understand and that's not the Windows speech synth behind a vocoder?
I think there's some "hidden" V2 criticism in there as well. ;)
Quote:
this problem becomes moot when the musician and the coder is the same guy — witness the success of yourself, Gargaj and loaderror in this field.
add brothomstates thank you :)
knl: It was not meant to be an exhaustive list. :-)
hopefully we are here to do it justice :)
well, I know at least one person working on a brainblasting new 64k synth at the moment.....
4mat, teasing is bad, except if you send the synth to france.
knl: it's not me :), but his tech is interesting, different approach to other ones I've seen.
4mat: s t o p teasing and DENOUNCE!