Experimental music from very short C programs

category: code [glöplog]
I would probably like to hear the final results, but I'd strictly stay out of the hall during the compo itself...
added on the 2020-05-25 19:06:26 by Preacher
weren't there a few IBNIZ livecoding compos about a decade ago?
added on the 2020-05-26 17:56:02 by porocyon
Quote:
Maybe a sonic pi battle??

This looks cool! With that, it's also much less likely to accidentally create eardrum-rupturing sounds.

I also understand (and expected) your concerns regarding the viewer experience. But with some moderator/DJ ^^ doing the switching/mixing between both contenders (if the BPM matches, maybe with some timeshift buffer) and also adjusting the volume, this could work out nicely. The contenders could also flag when they have created a new listenable variant, which might then play "on air" while they modify the code further or keep tuning the on-air version.

The resulting amount of music/melody might be low (like 1-2 patterns), more like the main part of a track.
If we start doing 4klang battles, then within a few years we can probably merge them with the shader showdown and just go for team intro showdowns.
added on the 2020-05-27 06:24:08 by pestis
Now THAT would be awesome to watch.
added on the 2020-05-27 08:05:43 by Preacher
(Shameless plug) A couple of years ago I hacked together a very crude set of shell scripts to enable live-coding music in C, for prototyping what eventually became ikubun.
It's so much fun to play with! Being full C, it's not limited to bytebeat expressions; you can do whatever you want.
added on the 2020-05-27 10:09:54 by provod
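For readers wondering what "music from very short C programs" actually looks like, here is a minimal bytebeat-style sketch in the spirit of the thread title, using one of the well-known formulas from the original bytebeat experiments (this is not provod's setup, just an illustration):

#include <stdio.h>

int main(void)
{
    /* one byte per iteration = one unsigned 8-bit sample;
       play at ~8 kHz, e.g.:  ./a.out | aplay -r 8000 -f U8 */
    for (unsigned t = 0;; t++)
        putchar(t * ((t >> 12 | t >> 8) & 63 & t >> 4));
}

The whole "instrument" is the expression inside putchar; changing a constant or an operator immediately changes the piece, which is what makes this format live-codable.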
I am currently extending 4klang, and my plan is to have a browser-based interface for cross-platform support. Sending the current patch & song data to a server periodically should not be too difficult, and as long as the BPM / pattern size were fixed for both contestants, the server could render both contestants' songs and play them synced through a multichannel sound card. The DJ could then just mix the two songs, and perhaps cue which part of the song to play next for each contestant. Importantly, what the audience hears would not be what the contestant is hearing.

Verdict: doable. With BPMs automatically synced, maybe even enjoyable, as the DJ could concentrate on making nice fades and playing the parts of the song that start sounding nice.
added on the 2020-05-27 10:22:56 by pestis
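A minimal sketch of the server-side mixing step described above, assuming both contestants' songs have already been rendered into BPM-synced float buffers; all names here are hypothetical, not any actual 4klang API:

#include <math.h>
#include <stddef.h>

#define HALF_PI 1.57079632679f

/* Equal-power crossfade between two synced renders.
   fade = 0.0 plays only contestant A, fade = 1.0 only contestant B. */
void dj_mix(const float *a, const float *b, float *out,
            size_t n, float fade)
{
    float gain_a = cosf(fade * HALF_PI);
    float gain_b = sinf(fade * HALF_PI);
    for (size_t i = 0; i < n; i++)
        out[i] = gain_a * a[i] + gain_b * b[i];
}

An equal-power law (rather than linear gains) keeps the perceived loudness roughly constant while the DJ fades between the two contestants.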
From there, the intro showdown would be "just" a massive shit ton of integration work with bonzomatic.
added on the 2020-05-27 10:29:17 by pestis
I don't even know where to begin explaining what you're underestimating, overlooking and/or ignoring. Maybe the best way to find out is to try to organize such a compo... Good luck with that, you'll need it.
added on the 2020-05-27 14:15:40 by havoc
Speaking of liveplay music tools, I have a C64 sequencer coming up that uses direct control of SID registers through text editing. There are some recent videos on my Twitter:
https://twitter.com/4mat_scenemusic/status/1260355771017166849
added on the 2020-05-27 15:08:20 by 4mat
This was more of a "note to self" kind of thing: since I'm writing the interface for live rendering of sound anyway (one needs to hear the results live when tracking a song), I'll make the interface flexible enough that the song data could come from different sources, e.g. two different contestants.

I agree, the "integration with bonzomatic" part is probably in the realm of "no-one really has the time needed to do this".

But the live tracking part, with a DJ doing the mixing, is definitely in the realm of the doable.
added on the 2020-05-27 15:45:48 by pestis
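A sketch of what "flexible enough that the song data could come from different sources" might mean in C; the callback type and all names are invented for illustration:

#include <stddef.h>

/* The renderer pulls mono samples through a callback, so the source can
   be a local tracker, a network stream, or either contestant's song. */
typedef float (*sample_source_fn)(void *state);

void render_block(sample_source_fn source, void *state,
                  float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = source(state);
}

Swapping contestants is then just a matter of passing a different callback/state pair to the same renderer.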
So why don't you go ahead and organize a music compo like that?
added on the 2020-05-27 16:30:24 by havoc
No promises, but I'm really really intrigued by the possibility and I will definitely try to write the software to make this possible. For organizing a compo though, I'd need a party with orgas interested in this; or someone who's more experienced with streaming. I'm just a coder.
added on the 2020-05-27 16:55:19 by pestis
Quote:
No promises, but I'm really really intrigued by the possibility and I will definitely try to write the software to make this possible. For organizing a compo though, I'd need a party with orgas interested in this; or someone who's more experienced with streaming. I'm just a coder.

Kudos for your motivation! And there are actually two approaches:

  • Tools like Sonic Pi or 4mat's SID sequencer, which lead to listenable music very quickly and permit significant changes with low effort, but whose results might depend more on the taste of the listeners, or
  • Bytebeat variants, i.e. creating sound sample values with C/C++/JS or ASM code, e.g. in provod's environment (I've already watched your interesting livestream recording about creating sound in Ikubun twice), where it might be more challenging to create something cool.

So we might even try both.
I'm sure it's been posted here already, but have you heard of Orca? https://100r.co/site/orca.html That was one of the main inspirations for my sequencer.
added on the 2020-05-28 14:34:54 by 4mat
Honest question that I'm trying to ask as neutrally as possible: if live-coded music is such a good fit for the demoscene, why is there apparently so little crossover between the algorave scene and the demoscene? Why do we not have Sonic Pi live sets all the time at demo parties?

It seems to me that if you want this to take off, you should start by building the community and establishing interest - get people in the demoscene interested in algorave and vice versa, play those live sets - and then think about building the infrastructure for a compo. That's how the 256b compo at Revision happened - not from people saying "you should totally do a 256b compo, it'll be really great", but from a whole load of people proving demand for it by submitting intros into the wild compo.
added on the 2020-05-28 19:52:19 by gasman
Quote:
Honest question that I'm trying to ask as neutrally as possible: if live-coded music is such a good fit for the demoscene, why is there apparently so little crossover between the algorave scene and the demoscene? Why do we not have Sonic Pi live sets all the time at demo parties?


What I had in mind is more of a live tracking thingy with modular synthesizers, not writing text-based code. I think tracking is far better for making actual music than coding, but that's probably a personal preference :) But still, I _am_ more interested in the live tracking / algorave side of it; the discussion was just about organizing it as a compo. The "scene" part of it is that you can compile it easily into a tiny executable.
added on the 2020-05-28 20:24:21 by pestis
Quote:
What I had in mind is more of a live tracking thingy with modular synthesizers, not writing text-based code. I think tracking is far better for making actual music than coding, but that's probably a personal preference :) But still, I _am_ more interested in the live tracking / algorave side of it; the discussion was just about organizing it as a compo. The "scene" part of it is that you can compile it easily into a tiny executable.

I think this is a matter of taste. But Sonic Pi and algoraves were valuable inputs here. I had never heard of either of them before (so thanks to those who brought them up). Watching how such an algorave takes place, I'm somehow reminded of a shader compo, or at least of some YT live coding stream. So I think it is what I was looking for.

Compared to tracking, the process of coding this stuff might offer both faster progress (just change a few values or commands and you get kind of a whole new pattern) and a bigger chance of surprises, unlike using a tracker, where you can see the sample being picked and put into several rows, or how some effect is being modulated. Because of that, it would be easy to check whether a live coding compo would satisfy the viewers' interests (being surprised, being astonished, getting lots of different impressions, whatever they are) by watching YT videos about livecoding music or about creating patterns in a tracker.

In the end it might just depend on the use of effects and generators. I don't know.

P.S.: Your post was #666 in this thread (according to pouet frontpage).
One thing that could sort of merge the two is that 4klang songs are already .asm files (with heavy use of macros); that's how you compile them. It might actually make sense to be able to switch from the visual tracker into writing asm, because text is faster when you know exactly what you want.

GUIs are better at giving hints about what is possible. The few times I've tested Sonic Pi, I spent most of my time reading the manual to figure out which commands exist and what their syntax is.

Coming back to the competition discussion, a text-based format is probably best for viewers, as they can see a nice overview of the synth & data. In a GUI, the complexity would normally be hidden in tabs or something, and that's not good, because the viewers can't see everything and figure out what's going on.
added on the 2020-05-29 06:27:26 by pestis
There have already been Shader Showdowns with algorave performers providing the audio: at Cookie party, in 2018 and 2019, to be precise. Collaborative sessions (so multiple coders working on the audio) have also been tried. But nobody I ever talked to while working on those events had a clear idea of how to combine the Showdown with an audio competition in realtime. How is the crowd going to know what they're listening to while simultaneously paying attention to the shaders? To what extent is it still a fair competition when a DJ is needed to mix those signals; what if he emphasizes the good or bad parts/samples/whatever the audio coders produce? And how much do algorithmic music creators want to compete in such a format anyway?

Please don't let my questions distract you from creating new tools, that's not the point. But at the same time, please don't assume the tool you're making will be the "missing link" to be able to mix in an audio competition with the Showdown, it might be, it might not be, or maybe you'll end up creating something that's useful for a completely different format. Or maybe it really is a dead end... I can't say for sure, so there's only one way to find out I suppose ;)
added on the 2020-05-29 06:42:16 by havoc
Yeah, I fully agree that there might be too much going on in a compo with shaders and music both being written; that is a good argument for why it would not work in practice. And even algoraves / live tracking events / a Sonic Pi battle / friendly collaborative tracking etc. are a question mark, both in technology (how to actually run it) and in viewer experience.

Some kind of double buffering mechanism, as proposed by Dresdenboy, is needed: both contestants' streams have "a version on the air", and the latest version of the song/algorithm is swapped in while the mixing volume is down and the mix is on the other contestant, preferably at the command of the DJ. The DJ can monitor when the latest version has something decent and is not ear-killing. I believe these streams should be renderings of the entire song as a waveform, so the DJ can cue parts of the song to play.

It would be funny if, at the start of the compo, both contestants wrote down their proposed BPM; these proposals would then be revealed, and the compo would run at the average of the two BPMs. So if the contestants had some BPM in mind, they would have to adapt :)
added on the 2020-05-29 08:41:30 by pestis
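A hypothetical sketch of the double-buffering idea above: each contestant's stream keeps an "on air" render and a pending one, and the swap only happens while that contestant is faded out. The struct and names are invented for illustration:

#include <stdbool.h>

typedef struct {
    float *on_air;    /* render the audience currently hears */
    float *pending;   /* latest version, waiting for a safe moment */
    bool   has_new;   /* contestant flagged a new listenable variant */
} contestant_stream;

/* Called by the mixer whenever this contestant's fader is at zero. */
void try_swap(contestant_stream *s)
{
    if (s->has_new) {
        float *tmp = s->on_air;
        s->on_air  = s->pending;
        s->pending = tmp;
        s->has_new = false;
    }
}

Swapping only at fader-zero means the audience never hears a half-finished edit, which is the whole point of the "on air" / pending split.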
i share all the euphoria and doubts for a "sound showdown"; it sounds interesting and hard to pull off. I have small objections: in a shader showdown, one also only sees a part of the code, depending on how much the coder writes, so that wouldn't rule out any GUI that "hides" stuff. And the BPM agreement could be "hacked" (i suggest 400 BPM because i know you chose ~100, then we land at 250 and i happily do my Gabba :D)
Personally, i'd like to see pure code, in a "bytebeat + X" manner. That said, i'd just leave "you can look away, but you can't hear away" here. I know there are suggestions, but i'll wait with my hat off until i see (hear) that live =)
added on the 2020-05-29 09:29:11 by HellMood
...and I know your usual game and propose -300 BPM to average at 100 BPM, but this time you thought that it was I who was going for the gabba, so you also propose -300 BPM. Now we both have to track 300 BPM gabba with the song running backwards, even though we both actually wanted to do a mellow 100 BPM slow beat :(((( Serves us right.
added on the 2020-05-29 09:40:48 by pestis
That assumed you were going to propose 500 BPM, not 400 BPM. But anyway, the gaming could be prevented simply by limiting the proposals to one binary order of magnitude, i.e. 80-160; you can make 200 BPM gabba while the other guy is doing 100 BPM just by playing more notes per beat.
added on the 2020-05-29 09:47:00 by pestis
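As a worked example of that rule, clamping each sealed proposal to 80-160 before averaging makes extreme bids pointless (a hypothetical sketch, not part of any existing tool):

/* Clamp a proposal to one binary order of magnitude. */
int clamp_bpm(int bpm)
{
    if (bpm < 80)  return 80;
    if (bpm > 160) return 160;
    return bpm;
}

/* Compo BPM = average of the two clamped proposals, e.g.
   proposals of 400 and 100 give (160 + 100) / 2 = 130,
   and -300 vs -300 gives a plain 80 instead of backwards gabba. */
int compo_bpm(int a, int b)
{
    return (clamp_bpm(a) + clamp_bpm(b)) / 2;
}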
Yeah, that could work ;) No mindgames though :( ;)
I'd really like to see/hear something like this, but i don't know how long that would take from scratch, and live
added on the 2020-05-29 09:58:41 by HellMood
