pouët.net

AI cat

category: general [glöplog]
(Aren't women in niqabs colloquially called "ninjas" in the Arab world? :))
added on the 2025-06-01 17:29:33 by Krill Krill
didn't you get the memo ? fusion energy is just 20 years away ! :)

but.. there are so many things ppl previously deemed impossible.

Quote:
The only thing we can be sure of about the future, is that it will be absolutely fantastic.
-- Arthur C. Clarke

(and that may even be a false quote, heh)
added on the 2025-06-01 17:33:25 by bsp bsp
(scratch that: it's a factually correct quote)

on our way to a type III civilization (on the Kardashev scale), AI is inevitable.

that being said (and my point still stands), before dreaming that big, we (as a human race) need to get our things in order, first.

if it's just used for exploitation and minimizing costs, then no, you are "holding it wrong".
added on the 2025-06-01 17:47:43 by bsp bsp
Quote:
I'm as sceptical as you are, here's one link


You should be 100% sceptical about any AI "news" today. If you read it through, they don't say "3d printing" but actually "just like 3d printing", which only means pouring concrete layer by layer, and by "AI" they mean centrally controlled unmanned vehicles.

This is a good example of marketing bullshit. Big fat AI label slapped onto everything.

And to be honest, as long as it's a cynical marketing thing, it should be "fine", but unrelated to this Chinese nothing-burger, I actually started to worry recently - it seems big tech is indeed getting high on their own supply.
added on the 2025-06-01 20:18:10 by tomkh tomkh
Quote:
on our way to a type III civilization (on the Kardashev scale), AI is inevitable.

Do you think we stand any chance of getting there with a broken 400-year-old socioeconomic system that, after being held in place with duct tape for a long time, has finally metastasized all over and is currently in its final phase of killing the host organism/planet?
added on the 2025-06-01 20:42:31 by 4gentE 4gentE
I'm keeping an eye on this, and in case they *do* succeed, I would call that an "AI win" (in the sense of: put to good use).

Also not a fan of polarization ("cynical marketing", "nothing-burger"). I know it's popular these days but I don't think it ever leads to anything good. "It's easy to wreck, but hard to build!"

and hey, just a story on the side, legend has it that Amazon switched to AI development, cut half of their staff, and the other half is now complaining that all they're doing is baby-sitting (code-reviewing) the AI, and their juniors don't even get the chance to develop the necessary skills anymore.
added on the 2025-06-01 20:42:31 by bsp bsp
Quote:
Do you think we stand any chance of getting there with a broken 400-year-old socioeconomic system that, after being held in place with duct tape for a long time, has finally metastasized all over and is currently in its final phase of killing the host organism/planet?

yes and no. maybe in another 400 years. not now, for sure.
added on the 2025-06-01 20:44:42 by bsp bsp
Thing is, this planet cannot survive another 400 years of capitalism.
added on the 2025-06-01 20:46:01 by 4gentE 4gentE
Actually it can but *humans* cannot.
added on the 2025-06-01 20:57:49 by bsp bsp
but you touched on a (good) point. we need to change if we want to survive.

not to mention that what we are doing to our fellow creatures is beyond cruel. but they don't have a lobby, so who cares (*sigh*)
added on the 2025-06-01 21:03:07 by bsp bsp
Quote:
on our way to a type III civilization (on the Kardashev scale)
Wait... last time i checked, we weren't even at Type I. =)
added on the 2025-06-01 21:18:43 by Krill Krill
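(For reference, a back-of-the-envelope check of that "not even Type I" point, using Carl Sagan's interpolation formula K = (log10(P) - 6) / 10 with P in watts; the ~2e13 W figure for humanity's current power consumption and the other reference values are rough order-of-magnitude assumptions:)

import math

def kardashev_type(power_watts: float) -> float:
    # Sagan's continuous Kardashev rating for a given power consumption.
    return (math.log10(power_watts) - 6.0) / 10.0

print(kardashev_type(2e13))    # humanity today -> roughly 0.73, i.e. still below Type I
print(kardashev_type(3.8e26))  # the Sun's total output (Type II territory) -> ~2.06
print(kardashev_type(1e36))    # roughly a galaxy's total output (Type III territory) -> 3.0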
Quote:
and hey, just a story on the side, legend has it that Amazon switched to AI development, cut half of their staff


My point exactly - they got high on their own supply.
added on the 2025-06-01 21:30:39 by tomkh tomkh
What's "funny" is that my prediction for this was 20+ years ahead, and here we go, the're already trying to cut corners, no matter what.

One day they'll (hopefully) wake up and recognize that 'monetary efficiency' is not what life is all about.
added on the 2025-06-01 22:07:42 by bsp bsp
Show me a good (= not-evil) corporation and i'll show you a good PR department. :)

As for replacing human code monkeys with AI (= SI, LLM)... i still maintain that the amount of actual LLM usage on code you can do is pretty much a proxy for how much boilerplate shite you need/have in your code. 50% sounds quite realistic in many fields.
added on the 2025-06-01 22:13:07 by Krill Krill
I feel that big, serious, complex code shouldn't be written by humans anyway. The work is inhumane, and no human being should be put in the position of earning his/her bread that way. Think about it. AI or no AI, logic says that computers should be much better at writing instructions for computers than humans could ever be. I don't think any coder of that (big, serious, complex) profile would lose much sleep over being replaced. Could be wrong about that tho. But why musicians? Why writers? Why graphicians? Why Ghibli?
added on the 2025-06-01 22:30:33 by 4gentE 4gentE
It's the same question for all those professions, really.
added on the 2025-06-01 22:34:58 by Krill Krill
are we just looking for the most efficient way, ideally phasing out all humans in the process ?

I think AI is a great aid, but look at the world: the majority of us are not rocket scientists. Would you just shrug them off, saying "well, you'd better become an AI expert, it's your fault you're not, now back to the bushes, you homeless no-good-son-of-a-*"?

see where I'm going ?
added on the 2025-06-01 22:41:30 by bsp bsp
Quote:
I don’t think any coder of that (big, serious, complex) profile would lose much sleep over being replaced. Could be wrong about that tho. But why musicians? Why writers? Why graphicians?

Code *is* art. And any coder worth a damn knows that. Not sure why you're drawing that arbitrary line there.
added on the 2025-06-01 22:44:51 by bsp bsp
Quote:
Code *is* art. And any coder worth a damn knows that. not sure why you're drawing that arbitrary line there.

No, you're right, I don't know that much about coding. But Krill mentioned "code monkeys", and I see someplace like Microsoft where you are forced to perform complex tasks with no overview of what it is you're really doing; there's absolutely no art in that. And I don't think the human brain is really built to be performing the stuff you perform there. Could be just me, but I did hear a few experienced coders express themselves similarly. Same with underpaid Japanese animation in-betweeners.
added on the 2025-06-01 23:10:15 by 4gentE 4gentE
No hard feelings. Just wanted to make myself clear.

another funny side note: "they" always say it's "just software" when, in fact, that's the hardest and most expensive part!
added on the 2025-06-01 23:20:59 by bsp bsp
4gentE: The implication was that current "AI" (LLM) isn't up to the task of writing complex code (humans still are).
added on the 2025-06-01 23:25:44 by Krill Krill
(social implications aside), let me chime in for a minute:

(proper) coding means that you have a mental model of the software.

if you auto-generate the code, it may work, but you have no idea why and how.

just like that beaver image. it looks great, but you don't know how it came to be.
added on the 2025-06-01 23:35:22 by bsp bsp
Kinda like how LLMs themselves work in general, from what I've gathered. They know the theory of HOW LLMs operate. In detail. I mean, of course, they've built them, right? But they don't really understand WHY LLMs (seem to) work. "They" in this case being the LLM developers. It's no wonder they don't understand why LLMs fail and hallucinate if they don't fully understand why they succeed.
added on the 2025-06-02 00:10:58 by 4gentE 4gentE
Yes, that's the "major danger" of AIs. We marvel at them but don't really understand what we're actually dealing with here.
added on the 2025-06-02 00:22:53 by bsp bsp
Quote:
It’s no wonder they don’t understand why LLMs fail and hallucinate if they don’t fully understand why they succeed.
That "hallucination" part is well understood, afaik. :)

It's mostly the seemingly deeper understanding parts that are somewhat puzzling. Because it's still just basically averaging over a large sample of training data, not really drawing actual conclusions and implications and projections etc. from that. "Shallow" reasoning, as tomkh so aptly put it. :)
added on the 2025-06-02 00:25:13 by Krill Krill
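(To make that "averaging over training data" point a bit more concrete: here's a deliberately tiny Python caricature, a bigram model that just counts which word follows which in a made-up corpus and samples from that averaged distribution. It strings together fluent-looking output with no notion of truth, which is the flavour of "hallucination" being discussed. Real LLMs are vastly more sophisticated, so treat this purely as a sketch of the statistical idea; the corpus is invented for illustration:)

import random
from collections import defaultdict, Counter

corpus = ("the demo runs on amiga . the demo runs on c64 . "
          "the effect runs on gpu . the compo starts at midnight .").split()

# "Training": count successor frequencies for every word.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        nxt = follows.get(word)
        if not nxt:
            break
        # Sample proportionally to observed frequency -- pure statistics,
        # no model of whether the resulting sentence is actually true.
        word = random.choices(list(nxt), weights=list(nxt.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the demo runs on gpu ." -- fluent, possibly false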
