64k synths
category: music [glöplog]
glow: those waveform options look luverly.
I had some ideas for a softsynth that uses a seldom seen kind of "synthesis" but I'm a musician, not a programmer so....
Quote:
ton? lack of competition????
this 64k intro isn't enough for you? :(
Well, one thing is, the synth was made a year if not more before that prod iirc.
Another thing is, I'm not sure it's enough, but we should do our own 64k with it someday before saying anything. =D
Quote:
Ton: Is it from accept corp?
Nope, it's me and upi (throb vs t-rex)
Quote:
You are correct red. But I am still learning graphics too! I am just an amateur programmer, I just try to do what I can in my spare time and when I see what the top groups like Mercury can produce I just don't think I have the time or skill to do anything like that.
drift: nah. It's not about skill. It's about dedication/motivation, communication in the group and the will to challenge everything you are told (read all the stuff, but roll your own experiments). Also, the whole point of a production is to make it look like it would be impossible to achieve. But like a magic trick, when you are told how it's done, the magic fades away and you realize it's "just" work (plus some artistic vision, ideally). If you move the camera in the timeless, you see glitches and artifacts and, worst of all, how simple and repetitive and cheap all the things really are.
Quote:
Well, one thing is, the synth was made a year if not more before that prod iirc.
Another thing is, I'm not sure it's enough, but we should do our own 64k with it someday before saying anything. =D
First use of the synth was less than a year before that prod, at Evoke. Since then, red has been working on it some more, and you can hear it :)
I'm really happy to hear you say our 64k is not enough. I think the same.
Looking forward to your production - "someday" is not okay though. Get to work already. If you start now, there's just the right amount of time until revision.
Hey Cupe,
First use of the synth I'm talking about happened in 2012 actually (I was talking about ours, not yours =). And thanks for the kick! Though we'd have to contact the compo orgas before delivering, for a couple of reasons of our own.
current state of 64klang2
Quote:
drift: nah. It's not about skill. It's about dedication/motivation, communication in the group and the will to challenge everything you are told (read all the stuff, but roll your own experiments). Also, the whole point of a production is to make it look like it would be impossible to achieve. But like a magic trick, when you are told how it's done, the magic fades away and you realize it's "just" work (plus some artistic vision, ideally). If you move the camera in the timeless, you see glitches and artifacts and, worst of all, how simple and repetitive and cheap all the things really are.
I appreciate the motivational words Cupe. I should not have mentioned skill as that is something people earn through hard work. I didn't mean to imply that I don't want to put in effort, it's just that time is a big factor and I have to be selective in what I spend my time on. I will make a synth one day of course.
drift: I'm currently making a track with Farbrausch's v2. I've never actually completed anything with it, but I think it's a solid synth with great features and a nice sound. Even though it's old, I think it still has a lot to offer as long as you (or your musician) are willing to experiment to create something different from the same old.
v2 was great when I tried it as well (minus some really strange crashes followed by pure white noise storms :D), but I think there's a major fundamental flaw in the whole "everything in a single vst instance" approach that, outside of my synth and gargaj's (and surely some others, but I don't think any others previously mentioned in this thread), hasn't really been addressed. Sure, the "works in any daw" part is really cool, but I think there's much more to nailing the workflow than that. Especially for a lot of these synths that only a few musicians actually end up using. And many of them seem to end up Renoise-only anyhow, and I'm not exactly sure why :) . But yeah, I can rant more about that when I publicly release my synth, which I will likely do in the coming week. I should also probably try to get the video from when I gave a seminar on it at TG13; it seems to have been taken down from their youtube channel, but I'm sure someone has the raw video somewhere.
Quote:
Sure, the "works in any daw" part is really cool, but I think there's much more to nailing the workflow than that.
I think the issue is that while in graphics you have plenty of documented interchange formats to work with (PSD, AI, FBX, Collada), in audio that's not really available just yet.
That being said, theoretically it should be possible to use the "record track" trick V2 introduced, with every module / VST instance pooling into the same inter-process memory-mapped file, or to just have a "recording process" that receives data from the modules.
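A minimal sketch of that pooling idea on Windows, assuming a fixed-size shared buffer; the mapping name is made up, and a real version would also need per-track offsets and the host's transport position:
Code:
// Hypothetical sketch: every synth instance maps the same named
// memory-mapped file and appends its rendered samples there, so a
// separate process can collect everything afterwards.
#include <windows.h>
#include <cstring>

struct SharedRecordBuffer {
    volatile LONG writePos;       // append cursor, counted in floats
    float samples[1 << 22];       // ~16 MB of pooled audio
};

static SharedRecordBuffer* OpenSharedBuffer()
{
    // Same name in every instance -> same physical memory.
    HANDLE h = CreateFileMappingA(INVALID_HANDLE_VALUE, nullptr,
                                  PAGE_READWRITE, 0,
                                  (DWORD)sizeof(SharedRecordBuffer),
                                  "Local\\SynthRecordPool");
    if (!h) return nullptr;
    return static_cast<SharedRecordBuffer*>(
        MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 0));
}

// Called from each instance's audio callback after rendering a block.
static void PoolBlock(SharedRecordBuffer* buf, const float* block, int n)
{
    LONG start = InterlockedExchangeAdd(&buf->writePos, n);
    if (start + n <= (LONG)(sizeof(buf->samples) / sizeof(float)))
        memcpy(&buf->samples[start], block, n * sizeof(float));
}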
Yeah, that makes sense, but how would you cover track/device routing?
There's that, yeah, plus the fact that most musicians who work in a DAW will eventually want samples - which is doable with a sampler VSTi, but not with plain audio clips unless you know the format.
but why would you want to "address" that anyway? i'm personally very happy with having everything in a single vst instance. the only extra thing you have to design/implement then is the mixer and sends/inserts, which doesn't look like a big deal versus parsing the daw project's structure.
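For scale, a rough sketch of what that in-synth mixer with one send bus amounts to (names and structure are illustrative, not any particular synth's code):
Code:
// Illustrative only: each channel renders into a scratch buffer, sums
// into the master mix and optionally into one send bus (e.g. a reverb).
#include <vector>
#include <cstring>

struct Channel {
    float volume = 1.0f;
    float sendAmount = 0.0f;                 // level going to the send bus
    void Render(float* out, int n) {
        memset(out, 0, n * sizeof(float));   // synth voice rendering goes here
    }
};

struct Mixer {
    std::vector<Channel> channels;
    void Render(float* master, float* sendBus, int n) {
        std::vector<float> tmp(n);
        memset(master, 0, n * sizeof(float));
        memset(sendBus, 0, n * sizeof(float));
        for (Channel& c : channels) {
            c.Render(tmp.data(), n);
            for (int i = 0; i < n; ++i) {
                master[i]  += tmp[i] * c.volume;
                sendBus[i] += tmp[i] * c.sendAmount;
            }
        }
        // a send effect would process sendBus here and add it to master
    }
};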
It's also worth noting that unless you have a nice curve detection algorithm to apply to the recorded automation data, it's probably better to have the original curves anyways :)
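One common choice for that kind of curve detection is Ramer-Douglas-Peucker simplification applied to the recorded points; a sketch, assuming a simple time/value point type with strictly increasing timestamps:
Code:
// Sketch: Ramer-Douglas-Peucker simplification of recorded automation.
// Keeps only the points needed to stay within 'epsilon' of the original
// curve, so dense per-block recordings collapse back into a few nodes.
#include <vector>
#include <cmath>

struct AutomationPoint { double time, value; };

static double Deviation(const AutomationPoint& p,
                        const AutomationPoint& a, const AutomationPoint& b)
{
    // Vertical distance from p to the line through a and b.
    double t = (p.time - a.time) / (b.time - a.time);
    return std::fabs(p.value - (a.value + t * (b.value - a.value)));
}

static void Simplify(const std::vector<AutomationPoint>& in,
                     size_t lo, size_t hi, double epsilon,
                     std::vector<AutomationPoint>& out)
{
    size_t worst = lo;
    double worstDev = 0.0;
    for (size_t i = lo + 1; i < hi; ++i) {
        double d = Deviation(in[i], in[lo], in[hi]);
        if (d > worstDev) { worstDev = d; worst = i; }
    }
    if (worstDev > epsilon) {      // keep the worst offender, recurse
        Simplify(in, lo, worst, epsilon, out);
        out.push_back(in[worst]);
        Simplify(in, worst, hi, epsilon, out);
    }
}

// Assumes at least two points; output is in time order.
static std::vector<AutomationPoint>
SimplifyCurve(const std::vector<AutomationPoint>& in, double epsilon)
{
    std::vector<AutomationPoint> out;
    out.push_back(in.front());
    Simplify(in, 0, in.size() - 1, epsilon, out);
    out.push_back(in.back());
    return out;
}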
Quote:
i'm personally very happy with having everything in a single vst instance.
Your musician won't be :)
for reaper and live, project parsing was not that hard at all. Reaper is xml-like enough to do trivially, and live is gzipped xml with a bunch of crap you can ignore :)
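As an illustration of how little it takes on the Live side - a .als project is just gzipped XML, so zlib hands you the document in a few lines (parsing the XML itself is left to taste):
Code:
// Minimal sketch, builds against zlib (-lz): inflate a Live project
// and hold the plain XML in memory.
#include <zlib.h>
#include <string>
#include <cstdio>

static std::string ReadLiveProject(const char* path)
{
    std::string xml;
    gzFile f = gzopen(path, "rb");   // transparently inflates the gzip
    if (!f) return xml;
    char buf[4096];
    int n;
    while ((n = gzread(f, buf, sizeof(buf))) > 0)
        xml.append(buf, n);
    gzclose(f);
    return xml;                      // plain Ableton XML, ready to parse
}

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    std::string xml = ReadLiveProject(argv[1]);
    printf("%zu bytes of XML\n", xml.size());
}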
+1 for gargaj
seriously, why? i am my musician, i've so far done test projects that way in reaper, reason, mpt and live; quite sure it's possible in bigger daws like cubase/studio one. i think multitimbral single instance vst is quite a common workflow actually, even for commercial products.
-reason+renoise. =)
I've actually used zero all-in-one vst's outside of 64k synths in the demoscene, and the ones I did use I hated XD It's much simpler if I can just write music how I normally do (use the daw + a bunch of little plugs I can combine how I want in the normal way), run the project through a little program and bam, header. No fuss, and it works :)
can stick with a minimalist ui for each plug too, then things stay nice and comfy :D
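A sketch of what that "little program" step might boil down to: serialize the converted song into a C array the intro can #include. The actual data format is whatever the replayer expects and isn't shown here:
Code:
// Sketch of the "project in, header out" step: dump the converted song
// blob as a C array. Names and layout are made up for illustration.
#include <cstdio>
#include <vector>

static void WriteHeader(const std::vector<unsigned char>& song,
                        const char* path)
{
    FILE* f = fopen(path, "w");
    if (!f) return;
    fprintf(f, "// generated - do not edit\n");
    fprintf(f, "static const unsigned char g_song[%zu] = {\n", song.size());
    for (size_t i = 0; i < song.size(); ++i)
        fprintf(f, "%s0x%02x,%s", (i % 16) ? "" : "    ",
                song[i], (i % 16 == 15) ? "\n" : "");
    fprintf(f, "\n};\n");
    fclose(f);
}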
Quote:
seriously, why? i am my musician, i've so far done test projects that way in reaper, reason, mpt and live; quite sure it's possible in bigger daws like cubase/studio one.
"Test projects" or actual finished work? :)
Quote:
i think multitimbral single instance vst is quite a common workflow actually, even for commercial products.
Name 5, 'cos I can't.
Seriously, the more monolithic your setup is, the less creative flexibility you get, and you really don't want to hit the "aww I wish I could take the high frequencies of this signal and put them through a reverb" wall WHILE you're working.
Well yes, it's true that the workflow of both demoscene synths I've used is quite terrible and far, far away from how I normally make music in Renoise (easy envelopes, automation, retrigger, portamento etc.). And as Gargaj said, being able to use samples makes a big difference in terms of getting it sounding polished and 'commercial', though in a way I quite like having the restriction and being forced to synthesize everything - it's like coding for musicians then. But props to ferris for coding his synth, which sounds amazing and has an excellent workflow concept - I'm interested to see how it will do for other styles of music.
Quote:
Name 5, 'cos I can't.
Don't think it is anyhow meaningful in this context, but that should be every vst sampler/module that claims to work in GM mode - edirol's gs vst, ni kontakt and all products powered by it, sampletank, halion, hypersonic, etc, etc. I'd say you can count all the GM era modules in as well actually, since it was usually the one and only GM instrument to compose on.
I mean, I can see Ferris' point - he's just used to things and wants to keep it that way. But I'm certainly not sure it's any more flexible. Would be interesting to try this workflow in Reaper, if it goes public, to get a feeling for it. But I'm pretty much able to send my high frequencies to my compressor sidechain and to the wall right now. Doesn't make the music any better though =)
Quote:
Seriously, the more monolithic your setup is, the less creative flexibility you get, and you really don't want to hit the "aww I wish I could take the high frequencies of this signal and put them through a reverb" wall WHILE you're working.
But isn't that more a question of how your "monolithic" setup actually works and what features it provides?
I mean, many scene synths (and not only those) seem to look/work like vintage hardware synths: a predefined maximum number of units producing and processing sound for one channel, with some predefined path switches and modulation routes.
Take that one step further and allow the user to define their own unit pipeline/tree and you already get much more flexibility.
Take that even further and allow (basically) arbitrary unit networks, including feedback and across channels, and you get even more possibilities and creative options like the one you mentioned above, sidechaining, ...
And automation isn't really an issue. Just provide nodes which listen to the CC commands and use them as input for other nodes (see the sketch below).
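A toy sketch of that node idea - units pull from their inputs, and a CC listener is just another node, so automation needs no special case. A real graph would cache each node's output per sample to handle fan-out, and feedback needs a one-sample delay:
Code:
// Everything is a node; Process() pulls one sample through the graph.
#include <vector>
#include <cmath>

struct Node {
    std::vector<Node*> inputs;
    virtual float Process() = 0;
    virtual ~Node() {}
};

struct CCNode : Node {                  // a MIDI controller as a node
    float value = 0.0f;
    void OnMidiCC(int cc) { value = cc / 127.0f; }
    float Process() override { return value; }
};

struct ConstNode : Node {
    float value;
    explicit ConstNode(float v) : value(v) {}
    float Process() override { return value; }
};

struct SineOsc : Node {                 // inputs[0] = frequency in Hz
    float phase = 0.0f;
    float Process() override {
        phase += inputs[0]->Process() / 44100.0f;
        if (phase > 1.0f) phase -= 1.0f;
        return std::sin(phase * 6.2831853f);
    }
};

struct OnePoleLP : Node {               // inputs[0] = audio, inputs[1] = cutoff 0..1
    float state = 0.0f;
    float Process() override {
        float c = inputs[1]->Process();
        state += c * (inputs[0]->Process() - state);
        return state;
    }
};

// Wiring: a 440 Hz sine through a lowpass whose cutoff follows a CC.
// ConstNode freq(440); SineOsc osc; CCNode cutoff; OnePoleLP lp;
// osc.inputs = { &freq }; lp.inputs = { &osc, &cutoff };
// per sample: float s = lp.Process();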
In the end you cannot use all the nifty 3rd party plugins around anyway if you target demoscene compos like 32k/64k with your synth.
So then why not simply allow the musician to use any combination of nodes at any stage to get the most possibilities? The rest is more a question of good presets, UI/Usability and having an open minded approach to new things.
And besides, for demotools that very same concept seems to work pretty well for the visuals and creativity, so why wouldn't it for audio?