EDIT
Let's operationalize it this way:
"If Eliezer either expresses that the model may be very helpful for alignment research, or Eliezer strongly implies that he feels this way (e.g., by indicating that it is more useful than an additional MIRI-level researcher), then we consider this market resolved to YES."
I think this market may be able to resolve YES.
Eliezer Yudkowsky was impressed by the work OpenAI published in "Language models can explain neurons in language models": https://openai.com/research/language-models-can-explain-neurons-in-language-models
https://twitter.com/ESYudkowsky/status/1656146336687828992.
I'm quite confident he's said something about being impressed by GPT-4 in the past.
@RobertCousineau He's encouraged that people at OpenAI are trying. That's different from thinking their work or the models they are using are effective.
Related market, which, unlike this one, is not "muddied" by any possible biases that Eliezer Yudkowsky may have:
Would doubling our chance of survival from 0% to 0% count as being very helpful, or does it need to actually make a difference?
@MartinRandall He does not behave as if the chances were 0%. Are you rounding? I have trouble imagining Yudkowsky assigning a pure 100% or 0% even to a mathematical theorem being correct or incorrect.
@MikhailDoroshenko OK, lol. What percentage of the nonsense on this site is a joke, and what part is serious?
@MartinRandall The date gave him an opportunity to say some things he knew would not be received well. It is good to mix those with silly jokes, so that the whole text can be interpreted as humor.
@EzraSchott Thank you. I apologize for being rude back. I may be wrong about why he did it, but given what he preached in the Sequences, he is unlikely to claim he is infallible.
Depends on Alana, but I would guess that doubling the current hope should resolve YES. That would be impressive, given that alignment research is not new, and in the Bankless video he named a lab he considered interesting and might join after his sabbatical.
Differentially helpful, as in he believes it helps alignment more than capabilities, or just helpful? And how helpful does it have to be? Would Copilot/ChatGPT helping researchers run experiments or communicate results faster, etc., count?
@agentofuser Just helpful.
Would Copilot/ChatGPT helping researchers run experiments or communicate results faster, etc. count?
Only if the speedup is significant.
Eliezer Yudkowsky is impressed by a machine learning model, and believes that the model may be very helpful for alignment research, by the end of 2026
