Coding assistants
category: code [glöplog]
Quote:
there were just some locals that were asking me if they would be *allowed* to enter that sort of thing because "they didn't know how to code".
This is what fucks my mind up every time. Why on earth would someone who doesn’t know how to code want to release a computer demo? Why? That sort of thing baffles me. It will not gain him /her/ points with the chicks /jocks/ or something.
Like how some long-time sceners of swapper fame discovered that they were in fact talented artists in the last 2 years or so, since Midjourney and such took off. It's not the "great quality" of this work that I see as threatening; on the contrary, it's the abysmal quality of it and the fact that the people who made it, plus some of the audience, don't seem to understand just how bad it is. That's what's disappointing and threatening. And all these years, all these decades, you thought these people understood what they were looking at and applauding when looking at great digital art along with you. BTT.
Quote:
Why on earth would someone who doesn’t know how to code want to release a computer demo?
Perhaps because a demo can be something else than just a showcase vehicle for shiny code.
Tool-made demos (with the original tool coders barely or not at all involved) were a thing before LLMs took off.
Quote:
Why on earth would someone who doesn’t know how to code want to release a computer demo? Why? That sort of thing baffles me. It will not gain him /her/ points with the chicks /jocks/ or something.
Not sure about the last part - sure, in the end it's Internet points, but it IS Internet points; we have built a fairly large and effective system to promote the scene and its work as a whole, while also lowering the barrier of entry, so it would be stupid not to exploit it.
I have had a running theory for a while now that a sizeable portion of the scene doesn't actually enjoy the process of making stuff - the learning, the collaboration, the challenge, the fastidious progress of building a project - but they love the high of releasing something and getting the crowd reaction.
I like both.
FWIW this seems like it's been the most civil discussion on this topic. A lot of that probably has to do with the way Blueberry started everything off by immediately pointing out that this most likely isn't a black and white issue, but like everything there are many shades of grey.
(How many shades of grey may depend on your platform though ;P)
Quote:
Perhaps because a demo can be something else than just a showcase vehicle for shiny code.
It should be both, ok. Shiny code and kick-ass art. We had plenty of discussions about it here.
I personally don't care about AI usage in the professional context - your employer owns your code anyway, and you just have to get the job done.
But in the demoscene context it would just be cooler if people strove for 100% AI-free productions in the mainstream compos. If you want to create an AI-friendly scene, go for it, why not? In fact, fragmentation of the demoscene would be beneficial IMHO. I bet some people will participate in both "movements".
Quote:
I like both.
same.
second that
Quote:
I like both.
Me too, of course.
On that note, I've actually found that in some situations, using an AI code assistant enhances the development experience for me. It's shiny new technology, and I'm having fun figuring out how to utilize it best. It tickles my brain in a different way than my usual coding flow.
Quote:
- Coding assistants mainly accelerate editing tasks, rather than provide creative input. In my experience, the code resulting from a tab completion or prompt is basically what I would have written anyway, just produced faster.
Are you sure you don't mean decelerate editing tasks?

Quote:
- The primary task that coding assistants do provide novel input for is using unfamiliar APIs. While this can take up a (frustratingly) substantial portion of many coding tasks, it is not my impression that this work is something that is particularly prized in the Demoscene.
I don't think having an LLM generate boilerplate for you is any worse or better than copying someone else's boilerplate code. Both have the same questionable "is this actually copyrightable or not" murkiness around it, and I don't think it's clear if you'd properly own the resulting code in either case.
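To make the comparison concrete, here is the kind of routine being talked about - a hypothetical sketch of classic "slurp a file into memory" boilerplate in C that a human and a coding assistant would write almost identically (the function name and error handling are illustrative choices, not from the thread):

```c
#include <stdio.h>
#include <stdlib.h>

/* Classic C boilerplate: read an entire file into a heap buffer.
   There are only so many reasonable ways to write this, which is
   exactly why the "who owns it" question is murky either way.
   Caller frees the returned buffer. Returns NULL on any error. */
char *read_whole_file(const char *path, long *out_len) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;

    fseek(f, 0, SEEK_END);          /* find the file size... */
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);          /* ...and rewind to the start */

    char *buf = malloc((size_t)len + 1);
    if (!buf) { fclose(f); return NULL; }

    if (fread(buf, 1, (size_t)len, f) != (size_t)len) {
        free(buf);
        fclose(f);
        return NULL;
    }
    buf[len] = '\0';                /* NUL-terminate for convenience */
    fclose(f);

    if (out_len) *out_len = len;
    return buf;
}
```

Whether this came out of a tab completion or off a blog post, the result is near-identical - which is the point about the murkiness above.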
Quote:
- In contrast with graphics, generating code with AI is typically not viewed as stealing the work of specific individuals. You don't ask your coding assistant to produce code "in the style of John Carmack" or anything like that.
Weeeeell... Bad example, perhaps?
More seriously, anywhere other people's work somehow gets fed into your code, be wary. You never know with an LLM how much of it comes from a single source or is somehow "put together" by the LLM. But the LLM has no agency; this is all someone else's code, and the result is a derivative work. Good luck adhering to the licenses of all the code your code now leans on.
If you feel confident that all the LLM did was help you avoid some common mistakes and not invent anything of substance for you, that's probably fine.
But there's also something to be said about turning off your brain a bit and letting the LLM take the steering wheel; I've read multiple reports of people feeling like they've "lost something" over time when using CoPilot and whatnot. Staying sharp, especially while aging, requires constantly pushing yourself.
Quote:
- The line between non-AI and AI coding assistance is blurry. Tab completion and other high-level editing features have gradually increased in sophistication over the years. Many demo coders have probably been using AI features for several years already without thinking that this might be something that contaminates their code with respect to compo rules.
I don't think there's a blurry line here at all. Either an LLM trained on other people's work produced the output, or it didn't. That's where the line goes, and pretending it's blurry is honestly doing a bit of a disservice to the discussion.
Quote:
- Many demos use third-party libraries under various permissive licenses. With no tradition (let alone requirement) for libraries to declare any use of AI, it can be difficult to guarantee that your demo is completely free of AI generated code.
I've heard variations on this argument before, but it's not really new or unique to LLMs. When you contribute to an open source project, you need to make sure that your contribution adheres to the license. And you don't have the legal authority to re-license code you don't own. This also goes for code that was generated by an LLM.
So this is really a problem of people not respecting licenses, and this is a "problem" that predates LLMs. But it's clear what the problem is, and who's at fault. Some projects deal with it a bit more explicitly than others. Some require that their developers sign the DCO.
But you can never be 100% certain that a third-party project only contains code that it is allowed to distribute. See for instance the SCO-Linux dispute. I don't really think LLMs add anything new to the problem here; they just make it a bit... easier to do bad things?
Quote:
- Coding assistants can be a very powerful assistive technology. If a person has a disability that makes it difficult to type fast on a keyboard or look at a screen for extended periods, AI code generation can be a game changer, enabling creative work that would otherwise be infeasible.
The hard part of programming isn't the typing, it's coming up with clear, concise and correct logic that does what you want it to and only that. LLMs do not help with this at all, if anything they make this harder.
Assistive technology is great, and I work with several developers who depend on them daily. But I have not yet seen compelling evidence that LLMs help in a meaningful way there. I'd love to be proven wrong, though.
Quote:
I am interested in hearing views on this, especially from compo organizers who are restricting AI use in competitions involving code.
Most of these are just my thoughts, and are not representative of Black Valley or any other party I've organized compos at (or might in the future). There's a bit at the end that relates directly to Black Valley, though.
I don't think it's practical to vet the source code of productions in any of the compos I organize, and I'm hesitant to add explicit anti-LLM rules to limit vibe coding etc. It's certainly possible to put in "good faith" rules that you expect people to adhere to, though.
But most parties include a requirement that the organizers can redistribute your entry, so technically speaking you might be in very theoretical "trouble" when using an LLM to generate code like this: you can't distribute code (in either source or binary form) that you don't own without a license to do so. LLMs can't grant such a license, at least not currently.
Again, this is not new with LLMs. People could (knowingly or unknowingly) use improperly licensed code before as well. And as I say, I don't think it's practical to police this.
At Black Valley this year, we disqualified an entry that was entirely generated by AI but was not handed into the one compo we had that explicitly allows AI. It was a difficult decision to make, and there was some disagreement within the compo team about it. The entry made no attempt to hide the fact - in fact, it was very clear about it on the beam slide. But in the end we felt it violated the rule we have about all productions needing to stand on their own creative legs.
We'll certainly have a discussion about how to make the rules more clear about this next year. I doubt the result will explicitly cover LLMs for programming, though. It might implicitly do so, though. And I don't expect we'll ask to vet anyone's source code or stand over their shoulders while they program either!
leaving out the moral, ethical implications etc etc of all this (not that that's even possible) and speaking just from the pov of a coder trying to get things done ..
as blueberry said up front, llms are handy for annoying api work, particularly where the docs + examples on the internet are sketchy; or for some "give me the code for this math" kind of problems.. if you check the result enough. it generates example code one would otherwise have to trawl the depths of misery of stack overflow for.
but in general, i'm already working with a massive codebase built up over years which is well architected, abstracted and has loads of useful stuff in it to work with. i'm not building stuff "from scratch". doing things is fast anyway, i don't need an llm to make it faster; the biggest problem is the architecture + logic, not the typing, and the llms don't know my api so they can't really help - i didn't give them all the lovely training data they need to know what they're doing.
.. which is why 4k/8k is particularly likely to be affected by this. smaller code, repetitive, loads of training data on the internet (/shadertoy). it's the ideal candidate for vibe coding. here the arguments comparing to ai generation of images probably stand up a bit more: ripping shaders off shadertoy (more obvious and easier to catch) vs having an llm do it for you (obfuscated).
the touch designer community for example is already full of examples of this, and some software products are advertising using an llm to generate shaders as "content generation tools".
one raymarched forest scene in the style of iq, but with dutch colour scheme and audio reactive bouncing trees please mr llm.
If there is a good use of AI in coding, then why not? However, everyone has a different idea of what good use means. Some praise some kind of mythical 10x productivity, which I doubt. But even if it were true, something must be lost in the process.
In very few cases AI has helped me, like asking it for some API function, or giving it code to analyze or to suggest clean-up or performance improvements. It also made me revisit some old code and notice mistakes myself that it didn't, more like rubber duck programming. It found something I forgot to free after malloc but didn't find that I included a header twice - something a static code analyzer could have figured out anyway. And most of its suggestions on performance improvements are either things I have already thought of and tried, or junk.
That's ok, maybe others have found better cases or made more proper use of it, as they claim. But the one annoying thing is the push, where they tell you you're missing out if you don't use it and that you should fully integrate it into your development pipeline (to me it's just an extra obnoxious step).
The other thing that bothers me is that new generations will easily rely on these tools and never learn to code from a blank slate. One of the harder things for young people starting to program is: given a problem, how do I start figuring out the solution? It's easier to get a piece of code, look at it, and see the patterns between the problem and what's written already. But what if you have to start from nothing? Some young people I've met who finished computer science but never did this as a hobby or self-taught told me they can't write code from a blank slate given an idea; they have to see an example of already written code, copy it and modify it. It's a skill you need to develop (some never manage to). Now, with AI generating the code for you, that goes into the toilet for the new generations.
Quote:
some software products are advertising using an llm to generate shaders as "content generation tools"
ref : https://madmapper.com/extensions/madai
ref : https://innovation.disguise.one/projects/generative-visuals-renderstream-shader-toolkit-chatgpt
Quote:
one raymarched forest scene in the style of iq, but with dutch colour scheme and audio reactive bouncing trees please mr llm.
At some point Shadertoy started to fill up with uninteresting LLM-generated shaders which look like they came from coding tutorials. I think those were being posted for the sake of it without much concern for whether or not they would actually be worth showing off - and no-one seemed to be complaining. If people are fine with that then they will probably be fine with the exact thing you just described, unfortunately.
Pretty much what smash said...
Quote:
ripping shaders off shadertoy (more obvious and easier to catch) vs having an llm do it for you (obfuscated)
Yeah, this. Also, it's sad that over-sharing our tricks for "all the good reasons" ultimately backfired.
Quote:
On that note, I've actually found that in some situations, using an AI code assistant enhances the development experience for me. It's shiny new technology, and I'm having fun
Come on, Blueberry, what "they" have done to you ;-) blink twice if you need help.
In 30 years time, youngsters will be like "What? Actual people used to program computers?! No way!"
Quote:
- In contrast with graphics, generating code with AI is typically not viewed as stealing the work of specific individuals. You don't ask your coding assistant to produce code "in the style of John Carmack" or anything like that.
Quote:
Weeeeell... Bad example, perhaps?
Not refuting your example, but NB that the fast inverse square root technique was not invented by John Carmack.
Quote:
which is why 4k/8k is particularly likely to be affected by this. smaller code, repetitive, loads of training data on the internet (/shadertoy).

The shader code of 4ks is barely repetitive apart from the two lines of raymarching loop (if employed) – everything else is highly specific to their visual output over time, a crucial context LLMs don't have, and while the code is smaller, it's orders of magnitude more dense than "normal" code. Particularly the SDF becomes a big blob intractable to LLMs; hence, last time I checked, LLMs were incapable of generating anything but artifact-ridden mashups of simple example raymarchers, and I don't expect them to graduate from that any time soon, if ever.
Apart from providing more ergonomic access to API docs, it proved rather lackluster when questioned about more left-field coding problems, even hurting the process by narrowing one's perspective and tempting one to stop thinking. I'm truly concerned about what this will (and already does) do to society in the long run, but that's off-topic here.
LJ: By the same token (:D) i expect similar outcomes for anything that is not boilerplate.
"Boilerplate" esp. in demoscene context may mean everything done before (logo - scroller - music), however, so here's to hoping that this whole conundrum will help somewhat in breaking out of old molds. =)
Quote:
everything else is highly specific to their visual output over time, a crucial context LLMs don't have
...yet. It's actually fairly easy to add with a multimodal LLM - their vision capabilities are improving as we speak.
Last time I tried vibe coding, this was exactly what was missing, so it's the logical next thing to add (with multiple tries, views, etc...).
Quote:
The hard part of programming isn't the typing,
This. If LLMs make you feel more productive then why not, but the tricky part of programming, what makes people pay you money for doing it, remains correctly understanding the problem you are trying to solve. I like to say that if you can give an LLM a precise enough description of the problem you are trying to solve that the solutions it provides are reliable, then what you have is basically another programming language, in the form of an LLM prompt.
From a PC demo coder's perspective, that category has been killed by allowing the use of game engines and non-self-developed demo tools in the same compo anyway, so who cares about AI in the context of PC democoding.
Nobody gives a fuck about the whole DIY coding / "use your brain" aspect of demo coding anymore it seems.
Of course, on non-PC and intro categories things are still different, fortunately.
nobody suggested any vibe coded productions would be any good.. :)
quality of result has been vastly dominated by visual/directorial/musical ability for years now in all but the most extreme categories. even with the incredible capabilities of publicly available game engines and demo tools nowadays people are still able to use them to produce total garbage
In short, the current dilemma seems to be: (A) will “AI” be used by super talented people that bring us excellent prods, so that they will now be able to give us double the quantity, or (B) “AI” will be used by not-so-talented people/beginners to up the volume of garbage produced. Of course one does not completely exclude the other, but only one will go down in history as the essence of the first wave of “artistic AI demoscene productions”.
Recently I listened to Shulman raving about how (paraphrasing here): “today, music production sucks and people hate making music because it’s a pain in the ass”, and how all that will come to an end once we all take up “AI music creation”. Reminded me of how ‘chores’ were mentioned here. Tweaking knobs on synths is not a chore. Tweaking intricate parameters of code, seeking total sync with the metal, is not a chore. For some. If it’s a chore, perhaps you should think of doing something else with your time.