Live Coding Compo Framework
category: code [glöplog]
uhm. topy just posted this picture from wecan on irc:
Is that... the same tool? Because it appears to have a text overlay. Did I just miss it somehow? I am confused.
the one on the picture is the last tool that we used on wecan. It had three outputs:
1. VS IDE
2. clean view
3. view with code overlay
Then we split views 2 & 3 and passed them to a video mixer (4 inputs). The mixer operator was doing fancy mixing like in the picture above.
The problem was that using a second display automatically reduced the GPU performance by 50%.
The other thing was that people actually weren't paying much attention to the views with the code overlay. Besides the main projector we had two big LCDs above the contestants' heads, and those drew most of the crowd's interest. Those interested in code could also look at the contestant's monitor.
That's why we decided to do it the other way around at Revision. But now I'm aware that we probably won't have a video mixer for the compo, so we will need to come up with something.
We will probably use MIDI and allow the VJs to control the code overlay. I know that it may piss off the contestants since they will not see clean output. Any other ideas?
As for FFT - it will be done, if not today, then tomorrow for sure.
Quote:
I know that it may piss off the contestants since they will not see clean output. Any other ideas?
Press F11 to hide code.
As a contestant I'm pretty afraid of the MIDI thing actually. It's 16 controls and you don't know how they are used. Toggle buttons? Faders? Push buttons? Without control over that, it's probably damn hard for the coder to make proper use of it. But on the other hand it's just 25 minutes, so playing around with MIDI in between takes some time...
cupe: that "overlay" is the screen of the _other_ contestant, they used a video mixer with PiP :)
ah, too late to the party.
Ok, but then the contestant will hide it and the crowd will not see it. The contestant will need to be aware that the crowd wants to see what he's doing. We are not going to drop the VS IDE. It has too many editing options that we just don't want to implement. If you have seen the code base for that tool, then you know that it's super simple, and let's keep it this way. You have undo/redo, copy/paste, editing on several lines at once etc. - everything that you like about VS.
skomp: as for MIDI, we will bring an MPD32: http://www.akaipro.com/product/mpd32
We are just using the tap buttons. Someone will have to tap the buttons to the rhythm. If the DJs agree then we can plug the MPD into their instruments, but that's not necessary. The good thing about it is that the coders will be able to use it. For example, when finishing, they can spend the last minute on stage with the DJ and make proper use of his shader :).
We will also bring a backup MPD26, so we can plug a different MPD in for each of the contestants, or we can split the signal from one MPD to both contestants (at WeCan we were splitting it).
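To give an idea of how a contestant could use the taps from the shader side - this is only a sketch, the actual uniform names and pad behaviour in the tool may differ:
Code:
#version 330 core
uniform float fGlobalTime;   // seconds since start (name assumed)
uniform float fMidiPad[16];  // one value per MPD pad, e.g. 1.0 on tap, decaying to 0.0 (assumed)
out vec4 outColor;

void main()
{
    vec2 uv = gl_FragCoord.xy / vec2(1280.0, 720.0);  // assumed output resolution
    // pad 0 flashes the whole screen, pad 1 pushes a ring outwards from the centre
    float flash = fMidiPad[0];
    float ring  = smoothstep(0.02, 0.0, abs(length(uv - 0.5) - fMidiPad[1] * 0.5));
    vec3 col = vec3(0.1, 0.2, 0.4) + flash * 0.5 + ring * vec3(1.0, 0.6, 0.2);
    outColor = vec4(col, 1.0);
}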
Quote:
Someone will have to tap the buttons to the rhythm.
Skomp, want me to tap yours? :D
*tap tap tap tap*
that person better be sober *thinks back to sync-by-spacebar demo scripting after beers*
Quote:
You have undo/redo, copy/paste, editing on several lines at once etc. - everything that you like about VS.
So does https://github.com/sopyer/ScintillaGL after ~5 minutes of hacking. It would even have autocomplete if I could be arsed. FFT etc. is probably not an issue, but I don't speak GL.
I might as well fork the thing and see if I can get it up to speed / put the textures / FFT in.
+1 to what kb said. PiP worked miracles during the compo back at wecan.
kudos to the brave and silly souls planning to do this.
What cupe said.
Text on screen: this is what makes it live coding. How about splitting the bigscreen into four quadrants and having a layout like this:
Upper left: first contestant's effect.
Lower left: first contestant's code.
Upper right: second contestant's effect.
Lower right: second contestant's code.
I am looking forward to writing comments in my code commenting on my opponent's code. ;)
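On the compositing side that layout would be a single fullscreen pass - just a sketch, the texture names are made up:
Code:
#version 330 core
uniform sampler2D texEffectA;  // first contestant's effect (names made up)
uniform sampler2D texCodeA;    // first contestant's code view
uniform sampler2D texEffectB;  // second contestant's effect
uniform sampler2D texCodeB;    // second contestant's code view
out vec4 outColor;

void main()
{
    vec2 uv = gl_FragCoord.xy / vec2(1920.0, 1080.0); // assumed bigscreen resolution
    vec2 q  = fract(uv * 2.0);                        // local 0..1 coords inside each quadrant
    if (uv.x < 0.5)
        outColor = (uv.y >= 0.5) ? texture(texEffectA, q) : texture(texCodeA, q);
    else
        outColor = (uv.y >= 0.5) ? texture(texEffectB, q) : texture(texCodeB, q);
}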
FFT/Controls: A central element of such input is history - the shader should be able to access the values not just as they are now, but also as they were a shader-specified distance back in time (somehow rounded or interpolated, of course). This enables an event to have an effect on the visuals lasting some time, rather than just being another VU meter / spectrum analyzer. :)
(Looking into the future can be useful as well, but that is a bit tricky at a live event...)
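For example (sketch only - the texture name and layout are assumptions, here x = frequency bin and y = how far back in time):
Code:
#version 330 core
uniform sampler2D texFFTHistory;  // assumed: x = frequency bin, y = age of the sample (0 = now)
out vec4 outColor;

void main()
{
    vec2 uv = gl_FragCoord.xy / vec2(1280.0, 720.0);  // assumed resolution
    // bass level right now drives overall brightness...
    float bassNow = texture(texFFTHistory, vec2(0.05, 0.0)).r;
    // ...while the same bin some distance back in time drives a delayed accent,
    // so a single kick keeps influencing the picture instead of being a vu-meter blip
    float bassOld = texture(texFFTHistory, vec2(0.05, 0.5)).r;
    vec3 col = vec3(uv, 0.5) * bassNow + vec3(0.8, 0.3, 0.1) * bassOld;
    outColor = vec4(col, 1.0);
}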
And another thing: When (not if) a contestant writes an infinite loop, and the graphics driver is restarted, the visualizer should come back automatically where it left off (with the culprit's output disabled until the next compile).
Quote:
I am looking forward to writing comments in my code commenting on my opponent's code. ;)
Comments??? in code?? what are they?
Ok,
First of all: contestants, please try Gargaj's branch. Here's a compiled build: http://dl.dropboxusercontent.com/u/3417034/ScintillaGL.zip
If this is something that would be better in your opinion, then please elaborate. It's pretty solid in my opinion, but I don't know if it locks up with longer shaders during compilation. Also, some may prefer Visual Studio editing.
Blueberry: now I get why you need the longer FFT history - I think we can handle it. Actually, if you have a spare minute, you can do it in the exact way you think it should work. The idea of releasing the code is to make changes and branches until we come up with something that satisfies most of us and could be used in the future.
Will try this evening. Which GL version is that?
Gargaj's fork seems to lack the SDL DLLs.
...and BASS; I didn't commit any of the binaries.
Ok,
So here's next iteration:
http://plastic-demo.nazwa.pl/LiveCoding/liveCoding_v0_91.rar
Added what BoyC requested: window parameters exposed to the config file, including always-on-top.
Added FFT smoothing in 8 bins with history, like Blueberry requested. Please check whether this is what you wanted.
Also added a noise texture.
Right now, by default, wave files are being processed, just for testing.
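A quick sketch of how a shader could use the new inputs - the uniform names here are only illustrative, check the example shader in the package for the actual ones:
Code:
#version 330 core
uniform float fGlobalTime;   // illustrative name
uniform sampler2D texFFT;    // illustrative: 8 smoothed bins along x, history along y
uniform sampler2D texNoise;  // illustrative: the added noise texture
out vec4 outColor;

void main()
{
    vec2 uv = gl_FragCoord.xy / vec2(1280.0, 720.0);   // assumed resolution
    // pick one of the 8 bins based on horizontal position, current value at y = 0
    float bin   = (floor(uv.x * 8.0) + 0.5) / 8.0;
    float level = texture(texFFT, vec2(bin, 0.0)).r;
    // scroll the noise texture and modulate it by that bin's level
    float n = texture(texNoise, uv * 4.0 + vec2(0.0, fGlobalTime * 0.1)).r;
    outColor = vec4(vec3(n * level), 1.0);
}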
Hmm, I guess I need some clarification here: how many textures will be accessible?
Also, the question is whether it's good to provide a noise texture or anything like that. I know, I know, I said it would be nice. On the other hand it kills an advantage for the coders who actually can implement noise. If there are many usable textures like tiled Perlin noise, cellular noise and so on, there will probably be a lot of texture usage and less actual code.
Maybe, after having thought about it, it's really better if every contestant can bring exactly one texture with a specified size and such, maybe a different one for every battle. Dunno, it's up to you to decide that anyway :)
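To illustrate what I mean by implementing noise in code - classic hash-based value noise is only a few lines of GLSL, nothing here is specific to the compo tool (the time uniform name is assumed):
Code:
#version 330 core
uniform float fGlobalTime;   // assumed time uniform
out vec4 outColor;

// classic hash-based 2D value noise, no texture needed
float hash(vec2 p)
{
    return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);
}

float valueNoise(vec2 p)
{
    vec2 i = floor(p);
    vec2 f = fract(p);
    vec2 u = f * f * (3.0 - 2.0 * f);  // smooth interpolation weights
    return mix(mix(hash(i),                  hash(i + vec2(1.0, 0.0)), u.x),
               mix(hash(i + vec2(0.0, 1.0)), hash(i + vec2(1.0, 1.0)), u.x), u.y);
}

void main()
{
    vec2 uv = gl_FragCoord.xy / vec2(1280.0, 720.0);  // assumed resolution
    float n = valueNoise(uv * 8.0 + fGlobalTime);
    outColor = vec4(vec3(n), 1.0);
}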