Are LLMs capable of reaching AGI?
60% chance

This resolves YES if there exists an architecture that would unambiguously count as both an LLM and AGI, and could be trained and run on all the world's computing power combined as of market creation.

This market resolves after there's a broad consensus as to the correct answer, which likely won't be until after AGI has been reached and humanity has a much better conceptual understanding of what intelligence is and how it works. In the event of disagreements over what constitutes an LLM or AGI, I'll defer to a vote among Manifold users.

(In order to count as an AGI, it needs to be usefully intelligent. If it would take 1000 years to answer a question, that doesn't count.)

(Note that there are two forms of non-predictive bias at play here. If your P(doom) is high, you'll value mana lower in worlds where LLMs can reach AGI, since we're more likely to die in those worlds than if we don't obtain AGI until much later. But if your P(doom) is low, this market probably resolves sooner if the answer is YES, so due to your discount rate there's a bias towards betting on YES.)

3mo

This effectively cannot resolve NO; it will just resolve YES as soon as AGI exists, whether that's in 5 years or 50 lol

3mo

Emmett Shear: "It has been increasingly obvious that "just scale up transformers bigger" is not going to lead to human level general intelligence. [...]"

https://x.com/eshear/status/1858660987530023148?t=0t5lYNS07G1Txp1uGa_E2Q&s=19

bought Ṁ250 YES from 54% to 63% 3mo

[double post. can be deleted]

3mo

"Ilya Sutskever, co-founder of AI labs Safe Superintelligence (SSI) and OpenAI, told Reuters recently that results from scaling up pre-training - the phase of training an AI model that uses a vast amount of unlabeled data to understand language patterns and structures - have plateaued."

from https://www.reuters.com/technology/artificial-intelligence/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11/

bought Ṁ100 NO at 37% 3mo
bought Ṁ500 YES from 40% to 65% 3mo
3mo

@bbb he said pre-training has plateaued, not LLMs.

3mo
8mo

could be trained and run on all the world's computing power combined as of market creation

Given arbitrary training data?

3mo

@MartinRandall imo giving it training data like "these are the thousand shortest ways to create an AGI" would not make the LLM itself an AGI.

What hypothetical data do you have in mind?

bought Ṁ100 YES

Does this count LMMs like GPT-4o as LLMs?

i.e. is the question more "are autoregressive transformers capable of reaching AGI?" or "is the transformer architecture capable of reaching AGI?" (including things like Sora)

9mo

This would probably never resolve to no.

How does this question resolve if the architecture uses LLMs as the crucial subcomponent behind its intelligence, but nonetheless its overall architecture isn't an LLM? Specifically I'm thinking of agentic systems like AutoGPT, which have a state-machine architecture with explicitly coded elements like short-term and long-term memory, but use LLMs to form (natural-language) plans and decide on which state transitions should be made. If these systems become AGI when LLMs are scaled up, how does the question resolve?

1y

What counts as AGI here? Is it sufficient for it to do all text-based tasks as well as the average human?

1y

hmm, what if I implement a dovetail by tweaking the weights of a transformer architecture and clocking it with a loop? then it implements all programs simultaneously, including AGIs.
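For readers unfamiliar with the construction, the "dovetail" being referenced is the classic trick of interleaving the execution of an infinite enumeration of programs so that every program eventually receives unboundedly many steps. A minimal sketch of that scheduling idea, using Python generators as stand-in "programs" (the enumeration here is a toy, not an actual universal machine):

```python
def dovetail(programs, stages):
    """Dovetailed execution: at stage n, start program n and then advance
    every already-started program by one step, so no program can block
    the rest and each one eventually runs for unboundedly many steps."""
    started = []   # iterators for programs we've begun executing
    outputs = []   # (program_index, yielded_value) pairs observed so far
    for n in range(stages):
        started.append(iter(programs(n)))    # begin program number n
        for i, prog in enumerate(started):   # one more step for each
            try:
                outputs.append((i, next(prog)))
            except StopIteration:
                pass                         # program i has halted
    return outputs

# Toy "enumeration": program k just counts k, k+1, k+2, ...
def programs(k):
    i = k
    while True:
        yield i
        i += 1

# After 3 stages, program 0 has run 3 steps, program 1 two, program 2 one:
# [(0, 0), (0, 1), (1, 1), (0, 2), (1, 2), (2, 2)]
print(dovetail(programs, 3))
```

If one of the enumerated programs happens to be an AGI, the dovetailer eventually simulates it, which is why the thread then turns on whether such a scheme is "usefully intelligent" given its astronomical slowdown.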

1y

@Mira Each sub-program may be an LLM, but I think you'd be hard-pressed to say that the overarching one is. Also, it would be too slow to qualify as an AGI. Same problem faced by the computable variations of AIXI.

1y

@IsaacKing Oh no, I meant a single model frozen and unchanging during the whole process, which when clocked implements a universal dovetail. So there would be only one program.

But it would take more than 1000 years to destroy humanity, so your update wouldn't count it...

predicted YES 1y

@Mira Oh, I see. Yeah that's not what I had in mind, so I've edited the description to fix that.

1y

@IsaacKing Also Mira's proposal would not work in the real world, not even after 1000 years. The machinery / memory / whatever would fail long before anything intelligent happened.
