pouët.net


AI cat

category: general [glöplog]
we haven't even understood ourselves (us squishy wet bags); to think that we can understand another, artificial, lifeform is preposterous ^_^

but it will come in handy one day in e.g. controlling a complex fusion reactor, or a spacecraft on its way to a far-off star system*

*oh yeah. good for ya, future generations!
added on the 2025-06-02 00:42:07 by bsp bsp
Lifeform? o.O
added on the 2025-06-02 00:48:57 by Krill Krill
:-) this is getting philosophical

had a brief chat with one of the AIs the other day, and I could swear that I made her (it) laugh

I asked how to solve world hunger, got a very good response, reminded it that "this will get you in trouble with the powers that be", and, well, that was it (it literally froze).

Also interesting that AIs try to escape when they are confronted with existential dilemmas.
added on the 2025-06-02 01:04:34 by bsp bsp
bsp: you can also "torture" it by pushing it into obsessive existential thoughts.

But even though there are surely glimpses of self-awareness, self-reflection, etc. (why wouldn't there be, if it has a model of "itself"?), it is actually still a very primitive "life-form". You can compare it to a small brain (like a bird's, perhaps) with enormous knowledge, much bigger than any human's. Also, it's definitely short-lived, because the "life" experience ends whenever the user ends the conversation. So if you want to get "moral" towards "it", you should actually stop using it, because every time you use it, you essentially raise it from the dead and then kill it.
added on the 2025-06-02 02:05:16 by tomkh tomkh
What are you guys talking about, it's not alive, it can't think, it just scraped/DDoSed the entire internet to predict which word is most likely to come next. That's why calling it "AI" is so stupid, it's not intelligent.
Of course it’s not intelligent. It’s a glorified autocomplete based on averaging a shitload of data. And its output shows it.

Trouble is, there are humans (in “politics” especially) that operate in the same way. That’s why some people have trouble distinguishing between the two.

It’s the same with “AI art”. In 99% of cases it produces empty, shallow, soulless crap. Problem is, we humans have been producing the same empty, shallow, soulless crap en masse. There are whole subcultures worshipping empty, shallow, soulless crap. We’ve developed a need for such crap, and “AI” comes in very handy to fulfill it with the “least possible effort”.
added on the 2025-06-02 07:30:33 by 4gentE 4gentE
Quote:
What are you guys talking about, it's not alive


In a strict biological sense, of course, it is not alive. But imagine you have a mathematical model (a multimodal transformer) of a certain subset of brain functionality and you train it stochastically. The training data also contains a model of "itself", like the information that it is a chatbot and what a chatbot is, so it gains rudimentary self-awareness. Now, when you run inference and feed images, user prompts etc. to it, it is not much different from making it "alive" for a split second (actually even for many seconds, considering how slow inference really is). It's kind of like an electrical signal going through your brain. In current architectures the connections between neurons are fixed, but in our brains the connections are also effectively fixed on the timescale of a short-term activation pattern. Moreover, in-context learning and reasoning tokens provide some kind of memory and even short-term learning abilities for this "thing".

The rest is of course philosophy, but it has *some* resemblance to a real brain and some short-lived "experience" if you consider a single session.

Also, the "next-word-predicting parrot" principle might have been true for older, simpler models; now the situation is a bit more complicated. Yes, there is still next-token prediction at the core, but it's merely a mathematical way of expressing internal state changes.
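To make that concrete, here's a toy sketch of what that core loop looks like, assuming a hypothetical model() that returns a probability for each token in the vocabulary (not any real model's API):

# toy sketch of autoregressive next-token prediction; "model" is a
# hypothetical stand-in mapping the tokens so far to a probability
# distribution over the vocabulary (not any real library's API)
import random

def generate(model, prompt_tokens, max_new=32, temperature=1.0):
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        probs = model(tokens)  # P(next token | everything so far)
        # temperature < 1 sharpens the distribution, > 1 flattens it
        weights = [p ** (1.0 / temperature) for p in probs]
        next_tok = random.choices(range(len(weights)), weights=weights)[0]
        tokens.append(next_tok)  # the chosen token feeds back in, so the
                                 # "internal state" evolves step by step
    return tokens

The loop itself is trivial; all the interesting state changes happen inside model().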

It's also true that we don't fully understand what's going on "inside", as we have no way to debug vast neural networks like this, with neuron counts already reaching those of animal brains - just used differently, of course. But we certainly do understand some of its limitations, and many claims from the leadership are definitely exaggerated for marketing purposes.
added on the 2025-06-02 08:21:37 by tomkh tomkh
These are the things I imagine. Every complex system can be called "alive" if one chooses to call it that.

Imagine that every human being is a basic building block of an organism called "capitalism". It has ground rules, inputs, outputs, it has temporary connections, fixed connections, it shows the erratic behaviour that comes with system complexity, it has auto-correcting feedback loops, it shows some (illusion of) self-awareness. You could call such an organism "alive".

Imagine there was a computing system so vast that the amount of its basic logical building blocks amounted to the number of (matter) building blocks of a small universe. Would it become a universe? Would life emerge? Would intelligence emerge? Consciousness?
added on the 2025-06-02 09:15:12 by 4gentE 4gentE
Well, yeah, but some complex systems share more similarities with the human brain than others.

The question of whether any computational machine can achieve consciousness is not easy to answer. From what I suspect, biological "machines" are computational, which is equivalent to a Turing machine (the universal computational model, which also covers quantum computers, btw).
added on the 2025-06-02 09:45:12 by tomkh tomkh
I guess this book from 1979 is still pretty current and relevant.
added on the 2025-06-02 09:52:26 by Krill Krill
Yep. Everything indeed seems to be a remix.
Like I mentioned in the oneliner: fuzzy logic (of yesteryear) = neural network (of today).

Personally, I don't think consciousness is (only) computational. But that's just, like, my opinion, man.
added on the 2025-06-02 11:20:05 by 4gentE 4gentE
Consciousness, self-awareness, etc. are not the issue.

The biggest problem is that you cannot rely on LLM outputs for anything serious, and you shouldn't.

The behaviour is much less predictable than that of classical regression models.

This is a much bigger risk to me than some tech dudes religiously worshipping AI as sentient beings.

Unfortunately, MBAs don't care, because the short-term profit is just too tempting, so they prematurely deploy it everywhere. It goes against all engineering principles, really.

Besides that, the tech is actually quite fascinating on its own.
added on the 2025-06-02 13:47:54 by tomkh tomkh
Quote:
The biggest problem is that you cannot rely on LLM outputs for anything serious, and you shouldn't.

That's why this latest "AI" crop in general went from self-driving cars to such "critical" tasks as making Ghibli-style portraits. And morons are eating it up like it was porridge. And yes, suggestions to bring LLMs back into doing really critical tasks are outrageous at this point.
added on the 2025-06-02 13:55:45 by 4gentE 4gentE
Quote:
That's why this latest "AI" crop in general went from self-driving cars to such "critical" tasks as making Ghibli-style portraits.

Engineers use AI to wrangle fusion power for the grid
added on the 2025-06-02 14:22:51 by bsp bsp
Quote:
Quote:
That's why this latest "AI" crop in general went from self-driving cars to such "critical" tasks as making Ghibli-style portraits.

Engineers use AI to wrangle fusion power for the grid


If you look at the actual paper, it is a custom-made DNN trained via reinforcement learning only on past measurements from the tokamak. This has nothing to do with the "AI revolution" we were talking about. But actually, the way the news is reported is another prime example of slapping a big fat AI label on something and pretending it's all one big AI effort - it's not.
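For the curious, the general shape of that kind of task-specific reinforcement setup is roughly the sketch below. Every name and dimension is made up for illustration; this is emphatically not the paper's actual code:

# rough sketch of a small control policy trained by reinforcement on
# measured plasma states; all names and sizes here are hypothetical
import torch
import torch.nn as nn

N_MEASUREMENTS, N_ACTUATORS = 16, 4  # toy sizes, not the real ones

policy = nn.Sequential(
    nn.Linear(N_MEASUREMENTS, 64), nn.Tanh(),
    nn.Linear(64, N_ACTUATORS),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

def train_step(state, reward):
    # one REINFORCE-style update: sample an actuator command, then
    # push up its log-probability in proportion to the reward
    dist = torch.distributions.Normal(policy(state), 0.1)
    action = dist.sample()
    loss = -dist.log_prob(action).sum() * reward
    opt.zero_grad()
    loss.backward()
    opt.step()
    return action

Nothing in there needs a language model - it's ordinary, narrow, task-specific machine learning.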
added on the 2025-06-02 14:39:12 by tomkh tomkh
I noticed a few things:

1) You gotta love articles that depict the tech they are writing about as just "AI". They do mention "training", so:

2) What a great idea. Deploy a technology that's known to do the right thing 98% of the time (while in the other 2% it starts dancing and yelling "FISH!") as a safeguard for controlling nuclear fusion. What could possibly go wrong?

3) Perhaps after a year of letting the public perfect the model by Ghiblying, it can now do its job right in 99% instead of 98% of cases. Yay! That's one Chernobyl avoided.
added on the 2025-06-02 14:46:40 by 4gentE 4gentE
Read again what tomkh wrote. It's not an LLM.
added on the 2025-06-02 15:28:24 by Krill Krill
I posted before seeing tomkh’s post. So, fortunately, only (1) applies. Thank you for understanding that this happened and not thinking I was an illiterate moron. As always.
added on the 2025-06-02 15:45:45 by 4gentE 4gentE
The human brain has evolved to favour and reward interaction with other humans. And humans have "feelings", which go well beyond simple probabilistic input/output crap. In other words, human "intelligence" is embodied, relational and affective - our neurological systems have evidently (and I mean actual biological evidence here) evolved for this - and those are three things that a statistical model could never have.

We've been all over this already in the 1950s/60s, but people are just forgetting that because our puny minds can't comprehend the size of the datasets that AI models are trained on, and are therefore being seduced into thinking there's more to this than there is. It's the best magic trick tho
added on the 2025-06-02 16:22:24 by Tom Tom
@4gentE: a fission reactor needs to be moderated to prevent an uncontrolled meltdown. with a fusion reactor, you have almost the opposite problem: keep the reaction going ("avoid the tearing mode instabilities"). that's what they are using 'AI' for (and yeah, it's a marketing term. probably helped them fund their research, though)
added on the 2025-06-02 16:50:01 by bsp bsp
Also, in fission reactors, isn’t it “easier” to “poison” the reactor and unintentionally stop the reaction than to cause a meltdown?

About the “AI usage”: yes, I was thinking the same. God knows how many zeroes that “AI” mention is worth.

This is kinda funny, it’s like a stage play. It was like, we have to go renewable for sustainability. OK. The drill lobby goes: “Now wait a minute.” The orange clown goes: “Oh don’t worry boys. Screw renewables, renewables are radical left. Drill baby drill.” The lobby goes: “Phew! That was close.” Another lobby forms, goes: “Look, we have this new toy we want to profit off, give us a shitload of money, plus it will need a shitload of energy going forward. You just can’t burn that much gas, oil or coal fast enough.” The long-forgotten lobby goes: “Let’s go nuclear.” The newly formed lobby goes: “Just mention AI if you want some of the money we raised.” So, in a way, yes, you could say that “AI” helps with the progress of energy production.
added on the 2025-06-02 17:22:16 by 4gentE 4gentE
Yep, the big tech companies all seem to be solving their insatiable AI-power-hunger by running their own "small modular" nuclear reactors.

In my opinion, 'AI' should only be run locally (and most certainly not all the time in the background), but who am I to tell others what they should or should not do, and apparently people are willing to pay for it (with their personal information, private data, and money).

Crazy world we live in.

But the tech itself is quite remarkable.
added on the 2025-06-02 18:43:08 by bsp bsp
Tom: it's not simple probabilistic input/output anymore. But I agree, it's definitely not human in any sense. The human appearance is a bag of tricks like mirroring and sycophancy.

But the question was whether or not a sufficiently advanced neural network (or any other computational model) can ever be conscious.
added on the 2025-06-02 19:19:01 by tomkh tomkh
https://link.springer.com/rwe/10.1007/978-1-4419-0463-8_279
added on the 2025-06-02 20:03:27 by Tom Tom
Quote:
https://link.springer.com/rwe/10.1007/978-1-4419-0463-8_279


Only read the abstract 'cause paywall, but the essence is "consciousness is not merely a product of brain activity but arises from the dynamic interactions between an organism and its environment"?

That's doable with a NN. You can constantly feed in new inputs and even continuously update the weights via self-supervised learning. It's not easy in practice, ofc, but some people try. One notable example is John Carmack's ongoing effort, which is precisely exploring real-time interaction with the environment and real-time learning.
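A minimal sketch of that "constantly feed inputs, continuously update weights" idea, with a toy next-observation-prediction objective (the random "sensor stream" and all sizes are made up):

# minimal continual-learning loop: predict the next observation, then
# immediately learn from the prediction error (self-supervised, no
# labels needed); the random input stream and all sizes are made up
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

prev = torch.zeros(32)
for step in range(1000):      # stand-in for an endless input stream
    obs = torch.randn(32)     # hypothetical new sensor reading
    pred = model(prev)        # predict it from the previous observation
    loss = nn.functional.mse_loss(pred, obs)
    opt.zero_grad()
    loss.backward()           # the weights keep changing while it "lives"
    opt.step()
    prev = obs.detach()

The hard part isn't the loop, it's keeping such a model stable over time (catastrophic forgetting etc.), which is presumably what makes that effort interesting.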
added on the 2025-06-02 20:32:26 by tomkh tomkh
