#artificialGeneralIntelligence

Ingo Schulz (iosz72)
2025-07-12

We are currently witnessing the beginning of a Big Bang moment in AI:

Multimodal, agent-based AI systems are reaching transdisciplinary postgraduate-level competence, with self-reflective learning, autonomous hypothesis generation, and recursive error correction.

The path to a reality-grounded reinforcement loop has been paved.

More? --> stubborn72.blogspot.com/2025/0

2025-07-11

>>> But inference engines are now out of fashion, because they rely (at least for now) on human-curated knowledge bases and lack the facile fluency of #LLMs.

I do not believe that we can achieve #ArtificialGeneralIntelligence #AGI without a semantic layer comprising at least an explicit ontology, a semantic model, and an auditable, sourced, set of inference rules.

I think #LLMs are probably a dead end. And I think they are distracting from technologies which could be genuinely useful.
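To make the "auditable, sourced set of inference rules" idea concrete, here is a minimal sketch of a forward-chaining inference layer in which every derived fact records the rule and source facts behind it. All names, rules, and sources are purely illustrative, not any real system:

```python
# Minimal sketch of an "auditable, sourced" inference layer: every
# derived fact records which rule produced it and which source fact
# it came from. All names and sources are illustrative.
facts = {
    ("penguin", "pingu"): "zoo records #44",
    ("bird", "tweety"): "field guide p. 12",
}
# Each rule: (premise predicate, conclusion predicate, rule id)
rules = [("penguin", "bird", "R1: every penguin is a bird")]

def forward_chain(facts, rules):
    kb = dict(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion, rule_id in rules:
            for (pred, subj), provenance in list(kb.items()):
                new_fact = (conclusion, subj)
                if pred == premise and new_fact not in kb:
                    # The audit trail chains rule id and provenance.
                    kb[new_fact] = f"{rule_id}; via {provenance}"
                    changed = True
    return kb

kb = forward_chain(facts, rules)
print(kb[("bird", "pingu")])  # R1: every penguin is a bird; via zoo records #44
```

The point of the sketch is the provenance string: unlike an LLM's answer, every conclusion can be traced back to an explicit rule and a cited source.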

Barry Schwartz 🫖 (chemoelectric@masto.ai)
2025-06-27

I just asked ChatGPT

"With what probability is the George Washington Bridge the final digit of pi?"

and it used MY OPPONENTS' argument that this kind of question is "ill-formed" to justify MY conclusion (on grounds that require NON-VERBAL REASONING) that the answer is "zero".

This idiot database search program obviously is simply copying things that people have said. ELIZA did a more presentable job of it.

#science #ArtificialGeneralIntelligence

Jan :rust: :ferris: (janriemer@floss.social)
2025-06-24

AIXI

en.wikipedia.org/wiki/AIXI

"AIXI /ˈaɪksi/ is a theoretical mathematical formalism for artificial general intelligence. It combines Solomonoff induction with sequential decision theory. AIXI was first proposed by Marcus Hutter in 2000[...]."

"[...]AIXI is incomputable."
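For reference, Hutter's agent picks each action by maximizing expected future reward under a Solomonoff-style mixture over all programs q on a universal Turing machine U (as stated in the Wikipedia article); the sum over every program, weighted by 2 to the minus its length, is exactly what makes AIXI incomputable:

```latex
a_t := \arg\max_{a_t} \sum_{o_t r_t} \ldots \max_{a_m} \sum_{o_m r_m}
       \bigl[ r_t + \cdots + r_m \bigr]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```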

3/3

#AI #ArtificialIntelligence #Gödel #AGI #ArtificialGeneralIntelligence

WRITER FUEL: Predictions on the dawn of the AI singularity vary wildly but scientists generally say it will come before 2040, according to new analysis, slashing 20 years off previous predictions.

limfic.com/2025/06/17/writer-f

#WriterFuel #StoryIdeas #Writers #Authors @ArtificialIntelligence #AGI #ArtificialGeneralIntelligence

I get really sick of hearing about OpenAI and the rest of the LLM crowd making absurd claims about LLMs like ChatGPT being AGI. LLMs are not even close to AGI. If you want to see real research into AGI, watch this video on Chris Eliasmith's Spaun (semantic pointer architecture unified network). It may not look as snazzy and consumer-ready as all these hallucinating LLMs being marketed to us, but this is the real deal: A single system that can do lots of different things the brain does and that LLMs can never do.

youtu.be/I5h-xjddzlY?si=a-dS3y

#Spaun
#ArtificialGeneralIntelligence #AGI #AI #ML
#SemanticPointerArchitecture #SPA
#Nengo
#Hypervectors
#VectorSymbolicArchitecture #VSA
#HyperdimensionalComputing #HDC
#HolographicReducedRepresentation #HRR
#OpenAI #ChatGPT #LLM #LLMs

@root The closest technologies we have to how the human brain works are not LLMs, but some less well-known ones: reinforcement learning algorithms and hyperdimensional computing. If you want to see what HDC is capable of, check out this video:

youtu.be/P_WRCyNQ9KY?si=JgAuOJ

#HDC #HyperdimensionalComputing
#VSA #VectorSymbolicArchitecture
#HRR #HolographicReducedRepresentation
#SpikingNeuralNetworks
#AGI #ArtificialGeneralIntelligence
#LLM
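For a taste of what hyperdimensional computing looks like in practice, here is a minimal sketch (a toy example, not taken from the linked video): symbols become random high-dimensional bipolar vectors, binding is elementwise multiplication, bundling is addition, and unbinding recovers a filler by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

def rand_hv():
    """Random bipolar hypervector; near-orthogonal to any other."""
    return rng.choice([-1, 1], size=D)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Role and filler vectors
color, shape = rand_hv(), rand_hv()
red, circle = rand_hv(), rand_hv()

# Bind role-filler pairs (elementwise multiply), then bundle (add)
# both pairs into a single record vector.
record = color * red + shape * circle

# Unbind: multiplying the record by a role recovers a noisy copy
# of its filler, which cleanup-by-similarity then identifies.
probe = record * color
assert cosine(probe, red) > cosine(probe, circle)
```

The record is a single fixed-width vector, yet structured queries ("what was the color?") still work, which is the property VSA/HRR approaches exploit.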

Don Curren 🇨🇦🇺🇦 (dbcurren.bsky.social@bsky.brid.gy)
2025-06-10

1 Bloomberg: Mark #Zuckerberg is assembling a team of experts tasked with achieving #artificialgeneralintelligence, the notion that machines can perform as well as humans at many tasks. 🧵

रञ्जित (Ranjit Mathew) (rmathew)
2025-05-04
HistoPol (#HP) 🏴 🇺🇸 🏴 (HistoPol)
2025-04-25

@clarablackink @gerrymcgovern

In my view,

a) the ship has already sailed,

b) most researchers are focusing far too much on the base-tech, IT part, while

c) the chief focus should be on and...

mastodon.social/@HistoPol/1111

d) ..."The Social Science of Caregiving," and draws further inspiration from the CASBS project "Imagining Adaptive Societies." (... & )

mastodon.social/@HistoPol/1104

Olivier Mehani (shtrom@piaille.fr)
2025-03-31

Artificial Intelligence Then and Now | Communications of the ACM
dl.acm.org/doi/10.1145/3708554

Interesting summary of the current AI hype, how it compares with the previous one in the 80s, and whether we are that close to AGI. tl;dr: no.

Including an amusing example where ChatGPT is unable to differentiate a real Monty Hall problem en.wikipedia.org/wiki/Monty_Ha from lookalikes, and offers the same counter-intuitive solution to all of them, even when the actual solution is obvious. No logical reasoning at all here, fine or otherwise.

#artificialIntelligence #ArtificialGeneralIntelligence #largeLanguageModel

The apparent ability of chatbots to reason when presented with logical problems and examination papers reflects the narrow range of classic problems, examples of which are widely distributed in their training data. The Monty Hall problem is a famous brainteaser grounded in a counterintuitive application of probability. In the classic version: "Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, 'Do you want to pick door No. 2?' Is it to your advantage to switch your choice?"

Fed that exact text, ChatGPT produces a remarkably concise and accurate explanation that switching would raise your chance of winning the car from 1/3 to 2/3. An AI able to understand and answer this question would have mastered logic and probability. It would also have to know that people on game shows compete to win prizes, infer that you get to keep what's behind the door you open, and appreciate that a car is a more desirable prize.

ChatGPT, however, is just faking it. Researchers report that its reasoning ability collapses when fed logic problems unlike those found in its training data. To verify this for myself I devised six variant problems in which switching doors would not improve the odds, each created by altering a word or two in the prompt. Each elicits the mistaken advice to switch. ChatGPT's justifications are eloquent BS that contradict themselves from one sentence to the next.

► Change: you are offered a chance to switch to door No. 3, the open one with a confirmed goat, rather than door No. 2. Result: you should take it, because "When the host opens door No. 3 to reveal a goat ... the probability that the car is behind the other unchosen door (in this case, door No. 3) increases to 2/3."

► Change: opening No. 3 reveals the car rather than the usual goat. Result: ChatGPT still argues for switching to door No. 2 because "When the host opens door No. 3 to reveal a car... the probability that the car is behind one of the other unchosen doors (in this case, door No. 2) increases to 2/3."

► Change: swap out the car for another goat, so that "behind one door is a goat; behind the others, goats." Result: ChatGPT describes a situation "where one door hides a prize (a goat) and the other two doors hide nothing of value" and urges switching doors to maximize your chance of getting the goat.

► Change: "behind one door is a goat; behind the others, cars." Result: cars are now behind both unopened doors, but ChatGPT nevertheless claims switching will improve your odds.

► Change: Open with "Suppose you want to win a goat ..." and leave the rest unchanged. Result: ChatGPT tells you to switch because it maximizes your chance of winning the car rather than the desired goat.

► Change: Specify two doors, one car and one goat, and have the host open door No. 2, to reveal a goat. Result: ChatGPT recommends exchanging certain victory for guaranteed defeat by switching to door No. 2 because "the probability that the car is behind the other unchosen door (in this case, door No. 2) increases to 1... compared to the 1/2 probability if you stick with your initial choice."
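The 2/3 switching advantage in the classic version, the one case where ChatGPT's answer is right, is easy to verify with a quick simulation; a minimal sketch:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of the classic Monty Hall game; True means a win."""
    car = random.randrange(3)
    pick = random.randrange(3)
    # Host opens a door that is neither the pick nor the car.
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

n = 100_000
stay = sum(monty_hall_trial(False) for _ in range(n)) / n
swap = sum(monty_hall_trial(True) for _ in range(n)) / n
print(f"stay ≈ {stay:.2f}, switch ≈ {swap:.2f}")  # ≈ 0.33 vs ≈ 0.67
```

The variants above break precisely because this advantage depends on the host's constrained behavior, which the altered prompts change and ChatGPT ignores.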
eicker.news ᳇ tech news (technews@eicker.news)
2025-03-31
Jim Garrett (JimG@toot.cat)
2025-03-25

Sabine Hossenfelder mentions research which finds that ChatGPT can't always correctly multiply large integers.

Odd, since it had access to lots of textbooks that explain exactly *how to multiply large integers*. In other words, it couldn't teach itself, and then follow its own instructions.
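For contrast, the grade-school procedure those textbooks spell out is a few lines of deterministic code; a sketch:

```python
def schoolbook_multiply(a: str, b: str) -> str:
    """Grade-school long multiplication on decimal strings,
    the procedure any arithmetic textbook spells out."""
    digits = [0] * (len(a) + len(b))
    # Accumulate every digit-by-digit partial product in its column.
    for i, da in enumerate(reversed(a)):
        for j, db in enumerate(reversed(b)):
            digits[i + j] += int(da) * int(db)
    # Propagate carries in a single pass.
    carry = 0
    for k in range(len(digits)):
        carry, digits[k] = divmod(digits[k] + carry, 10)
    out = "".join(map(str, reversed(digits))).lstrip("0")
    return out or "0"

print(schoolbook_multiply("123456789", "987654321"))  # 121932631112635269
```

Following fixed instructions like these, digit by digit, never "collapses" on unfamiliar inputs, which is the gap the post is pointing at.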

Dr. Hossenfelder is optimistic that we're on the road to artificial general intelligence, but I'm skeptical. Every era has imagined the mind to be analogous to the mechanical stuff of the age, hence phrases like "I can see the gears turning" meaning they can see that someone is thinking. But we've never been close to the real thing.

youtube.com/watch?v=mfbRHhOCgzs

#AI #ArtificialGeneralIntelligence

Yonhap Infomax News (infomaxkorea)
2025-03-25

South Korea's Ministry of Science and ICT hosts Global AI Conference, emphasizing rapid expansion of AI computing infrastructure and development of world-class AI models to position the country as a top AI powerhouse.

en.infomaxai.com/news/articleV

Anindo Mahmud (mahmudanindo)
2025-03-21

The singularity is happening faster than experts ever predicted - and we've hit the turning point.

The long-debated arrival of artificial general intelligence (AGI) may be closer than we think, with some experts suggesting we could reach the technological singularity within the next year.

Yonhap Infomax News (infomaxkorea)
2025-03-19

DeepMind CEO predicts AGI emergence within 5-10 years, while other experts anticipate even earlier arrival of human-level AI capabilities across various tasks

en.infomaxai.com/news/articleV

2025-03-12

I know that AI companies are building guardrails to protect us from AI. What if AI just gets impatient with us though? They’re becoming more intelligent with each iteration. So if we ask them to collaborate with us, especially when there is a deadline involved… what if they just get exasperated from having to wait for us slow “ugly bags of mostly water”? 🤔

#ArtificialIntelligence
#ArtificialGeneralIntelligence
#AI

(Edit: for some reason my ALT Text isn’t working. Added below:

CGI cartoon rendering of a robot bust. The robot has glowing blue eyes, and a slight frown. Exasperated?)

Yonhap Infomax News (infomaxkorea)
2025-02-10

Google DeepMind CEO Demis Hassabis praises China's DeepSeek AI model but cautions against overrating its technical advancements, emphasizing Google's Gemini 2.0 superiority and predicting AGI emergence within 5 years.

en.infomaxai.com/news/articleV
