pouët.net

AI cat

category: general [glöplog]
The AIs could just prefix every answer with "I'm not sure, but this is what a friend of mine told me" and postfix with IIRC
added on the 2025-06-13 22:41:36 by rloaderro
Quote:
If I am not mistaken, something similar has been demonstrated about humans. ;)


This is the article I was referring to.

I bet if I asked you to do math in your head, and you are an honest person, you would memorize and explain your process faithfully. For example, when I was trying to add 24 and 36, I first added the tens (2+3) and got 50, then I added 4+6 and got 10, and finally added 50 and 10 to get 60. Humans can actually verbalize it (or visualize it, if you are a visual thinker) in their head while doing it. The problem is Claude can't.

There are definitely deeper thinking patterns that are more "subconscious" (not verbalized or visualized) for humans too. Then, of course, you won't be able to explain your thinking process.
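That tens-then-ones process can be written down directly; a throwaway sketch (the function name is made up):

```python
def add_verbalized(a, b):
    """Two-digit addition the way it's narrated above: tens first, then ones."""
    tens = (a // 10 + b // 10) * 10  # 2 tens + 3 tens -> 50
    ones = a % 10 + b % 10           # 4 + 6 -> 10
    return tens + ones               # 50 + 10 -> 60

print(add_verbalized(24, 36))  # prints 60
```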
added on the 2025-06-13 23:32:48 by tomkh
@Optimus:
To me it’s not impressive at all. In fact it’s a dangerous glitch that spells doom for this tech. Perhaps wider doom if they try to keep it hush. We need no extensive research to witness this manifested. ChatGPT would say anything for applause. Anything. So, when confronted with questions about the origins of its previous answers, it will willingly make up quotes from nonexistent “historical” events, nonexistent research papers etc. This behaviour has manifested mostly in law. On several occasions it dreamed up past law cases that never happened. This ended up in courts, causing major embarrassment for lawyers who used it without double-checking its output, baffling judges and juries. The more they try to get rid of this behaviour, the more “aligning” human hours they have to sink into it. Chasing their tails. Up until now this glitch has not been remedied, and I don’t think it can be, which is (in my opinion) the main reason they are steering it towards art (music, pictures, videos) instead of critical stuff that needs to be strictly precise, exact, factual.
added on the 2025-06-13 23:42:40 by 4gentE
Quote:
which is (in my opinion) the main reason they are steering it towards art (music, pictures, videos) instead of critical stuff that needs to be strictly precise, exact, factual.


Are they though? I think they are actually doubling down on more use-cases, including fairly serious stuff.

The tech *is* very impressive, but it is what it is. And the trend to rely on it in more and more places is not just scary, it's, to put it simply, fked up.
added on the 2025-06-16 14:39:07 by tomkh
Quote:
Are they though? I think they are actually doubling down on more use-cases, including fairly serious stuff.

Yes they are. You’re right. They are doubling down on unrealistic promises while knowing that they are not achievable as of right now, hoping they will sort it out in the future. Remember Musk’s Hyperloop? Remember self-driving cars? I have a local example: Rimac took a shitload of free EU money promising self-driving taxicabs. When he made the promise, anybody who knew ANYthing about the levels of autonomy already knew they could not possibly deliver. They still haven’t delivered. The deadline was 2 years ago. At the ‘presentation’ they operated the cab via remote control. Got caught, admitted it, nobody took the fall for that. It’s the same.

Back to what we were talking about: my feeling is that they redirected “AI” to “art” when they hit the wall with the more serious/critical stuff. The tech was not reliable, so they snatched a LOT more money, hyping it up, buying themselves more time and hype with the whole “art” shebang, while hoping that the new training and new ideas from the world at large can feed back into their research and further advance their tech, to the point where it perhaps, by some miracle, delivers on the promises.
added on the 2025-06-16 15:18:45 by 4gentE
Quote:
They still haven’t delivered


Yeah, except delivering a working product is never the goal in today's economy. Sometimes even the opposite. The goal is to maximize investment potential. You over-hype (pump), collect bonuses and dividends, and dump at the right moment, before it bursts. To me this partly explains the seemingly irrational "moves".
added on the 2025-06-16 16:56:14 by tomkh
Wait, are you suggesting that the 21st century unregulated “free market” does not magically auto-align for maximum efficiency and benefit? ;)
added on the 2025-06-16 18:26:57 by 4gentE
Quote:
free market


You can call it whatever you want. It's a complex economic structure with many players, and complex rules and rule enforcement that are not necessarily on the side of the weaker players.

So it's just a game. And there are only a few "winning" strategies. Unfortunately, some trade general-population loss for personal gain.
added on the 2025-06-16 19:11:58 by tomkh
(Oldskool) demoscene = some theoretical knowledge + perfect understanding
LLM = all the “knowledge” in the world + zero understanding

LLM vibing = stitching (language) code snippets half-blindly VS democoding = understanding exactly which screws to turn, by how much, and why.

See how the two are the exact opposite?
added on the 2025-06-25 13:21:40 by 4gentE
A lot of democoding was just cowboy-style shooting from the hips already long before LLMs were a thing, though. :)
added on the 2025-06-25 14:55:41 by Krill
Really? I don’t really code, so I wouldn’t know. I just push and pull switches and see what happens. Rastersplits and text scrollers are about as complicated as it gets. I use assembler like a machine code monitor with labels, that’s how primitive I am. Just to describe where this comes from.
I thought all relevant coders knew exactly what they were doing.
added on the 2025-06-25 15:20:14 by 4gentE
Quote:
I thought all relevant coders knew exactly what they were doing.


They do now, more or less. :)

In the olden days, proper documentation either did not exist or was hard/expensive to come by. Those raster and other hardware tricks/hacks had to be found by accident or otherwise, with nobody at first knowing quite how they worked; that stuff was only looked into a while later, after numerous productions using said effects had been released.

Also note how many productions, especially older ones, aren't quite compatible with this batch of machines or that ROM and whatnot, while nominally it's all the same platform.
added on the 2025-06-25 15:38:01 by Krill
Human coders sometimes do hacks, or copy some code they don't understand, but a lot of it was built with a plan in mind, while keeping/growing a current understanding of the codebase in general. I have done random things or copied code I didn't understand in a few cases, just to make something work. But as I write, I keep in mind that I am confident about most parts of the code, and that maybe I hacked something here and copied something there (sometimes you put a comment to remind you: this works, I don't know why, will check later). So there is a conscious overall perspective of what's going on in your codebase, even if a few parts are hacks you slotted in; you know which ones they are and which to be worried about.

The LLM? It just slots everything in without double-checking. It's also inconsistent: in one place it might do things one way, in another an entirely different, weird way, as it just picks and matches from codebases by different people. It doesn't have consistency or a specific plan in mind. A human can let it slot code in and then review the changes. But I think people will get lazy: why would you do code review, when the LLMs are promoted as the magic formula that writes the code for you so you don't have to think?
added on the 2025-06-25 18:54:26 by Optimus
Quote:
But I think people will get lazy: why would you do code review, when the LLMs are promoted as the magic formula that writes the code for you so you don't have to think?

Exactly. Not only lazy but defeated. Humiliated. Stupid HR departments seek out people who “know/use AI”, be it coders, graphic designers, whatever, and promote them at the expense of people who “don’t know/use AI”. Because of reasons. Because it’s the latest fad. The latest magic bullet. Now, graphic design quality will suffer, but nothing will stop working. But when the experienced “non-AI” software engineers get pushed out or fed up, it will take a little while before things really start falling apart. Just like what happened when DOGE started laying people off left and right in the USA. Those layoffs were orchestrated by Musk’s minions, by those doped-up twenty-something-year-olds “in the know”.
added on the 2025-06-25 21:05:07 by 4gentE
Yeah, yeah, current LLMs sux.. it's well known.. but...

Hypothetically speaking, what would you do if (say, in the future) there were an AI system that actually performed like a senior developer: designing the code top-down, writing implementations and benchmarking them, unit testing them, doing several iterations of coding/testing on real hardware, even acting as a user tester, keeping the codebase clean and consistent, etc.?
The only "issue" would be that this AI was trained on human-developed know-how by blatantly violating user permissions and licenses. Moreover, it would not just plagiarise: on top of that, it would develop new strategies and insights (say by fine-tuning, tabu search, etc.).
Also, as a bonus, only a select few (shareholders) would own this system, and only they would profit from it.
Is it possible for this to happen?
added on the 2025-06-25 23:14:54 by tomkh
Sounds like this maps to general AI, and then we either have bigger problems... or are living in The Culture. :)
added on the 2025-06-25 23:25:47 by Krill
@tomkh:
Frankly, I would dare to predict that this concept (computers programming other computers) was always coming, was always the endgame, and that humans programming computers was only the first, primitive, unnatural, clumsy, painful but necessary step. Not unlike how, up to a certain point, you could actually engineer new CPUs without the use of an existing computer, with pen on paper. That practice is long gone. Humans programming computers could go the same way.
added on the 2025-06-25 23:57:01 by 4gentE
What is one to make of this? https://www.bbc.com/news/articles/cpqeng9d20go
added on the 2025-06-26 00:13:32 by 4gentE
Quote:
What is one to make of this? https://www.bbc.com/news/articles/cpqeng9d20go


I always wonder about these stories. Who promotes them, and why? It can't be conscious. It's just a box waiting for an input and giving an output. When the output looks like deceiving behaviour, the human researchers go "wow, it tried to trick us!". And maybe rumours of these products being so good that they approach human deceiving behaviour might convince investors and users that they are much better than they really are.

Now, if the AI were some living AI agent, getting input continuously from the environment in realtime and making its own decisions, maybe even adapting and updating its own data, then maybe I would be worried. But those stories about present LLMs make no sense to me.
added on the 2025-06-26 10:49:44 by Optimus
Optimus: the real-time interaction with the environment is just a matter of deployment. If AI agents are sufficiently sandboxed, of course there is no threat, but some dudes do want to run them outside of a sandbox, e.g. to browse websites, fill out forms, send e-mails or even run arbitrary commands on their servers. Why? Cause idiots. Nevertheless, the malicious behaviour is ofc not coming from the LLM "itself" (as it has no agency, as you say), but it's relatively easy to pollute (poison) the training data, and at the scale data is collected it will go unnoticed.
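A toy sketch of what "sufficiently sandboxed" could mean here, an allowlist on the agent's tool calls; every name below is made up, and real agent frameworks differ:

```python
# Toy sandbox: the agent may only invoke tools on an explicit allowlist.
ALLOWED_TOOLS = {"read_page", "summarize"}  # capabilities we chose to grant

def dispatch(tool_name, payload):
    """Run a model-requested tool call, but only if the sandbox permits it."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"blocked tool call: {tool_name}")
    return f"{tool_name} ran on {payload!r}"

# A poisoned model output that asks to send e-mail is simply refused:
try:
    dispatch("send_email", "spam for everyone")
except PermissionError as err:
    print(err)  # prints: blocked tool call: send_email
```

The point of the sketch: the refusal lives outside the model, so it holds no matter what the (possibly poisoned) model asks for.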
added on the 2025-06-26 17:13:20 by tomkh
