Wednesday, May 18, 2022

Human Level AI

Yann LeCun:

About the raging debate regarding the significance of recent progress in AI, it may be useful to (re)state a few obvious facts:
(0) there is no such thing as AGI. Reaching "Human Level AI" (HLAI) may be a useful goal, but even humans are specialized.
(1) the research community is making *some* progress towards HLAI.
(2) scaling up helps. It's necessary but not sufficient, because...
(3) we are still missing some fundamental concepts.
(4) some of those new concepts are possibly "around the corner" (e.g. generalized self-supervised learning; a toy sketch follows this list).
(5) but we don't know how many such new concepts are needed. We just see the most obvious ones.
(6) hence, we can't predict how long it's going to take to reach HLAI.
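To make "generalized self-supervised learning" concrete, here is a minimal sketch of its most common form, masked prediction: a model is trained to reconstruct the hidden parts of its input from the visible parts. Everything here (the tiny network, the patch and batch sizes, the random toy data) is an illustrative assumption, not LeCun's proposal.

    import torch
    import torch.nn as nn

    DIM = 32   # assumed size of one input "patch"
    SEQ = 16   # assumed number of patches per example

    # Tiny stand-in for a large model: it sees all visible patches at
    # once, so masked content can be inferred from visible context.
    net = nn.Sequential(
        nn.Flatten(),
        nn.Linear(SEQ * DIM, 256), nn.ReLU(),
        nn.Linear(256, SEQ * DIM),
        nn.Unflatten(1, (SEQ, DIM)),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(200):
        x = torch.randn(8, SEQ, DIM)          # toy batch of "observations"
        mask = torch.rand(8, SEQ, 1) < 0.5    # hide roughly half the patches
        pred = net(x * ~mask)                 # reconstruct from visible parts
        # The learning signal comes from the data itself: the error is
        # measured only on the patches the model could not see.
        loss = ((pred - x) ** 2 * mask).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

The recipe is generic: the supervision comes from the data itself rather than from labels, so the same loss applies equally to text tokens, video frames, or any other modality.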

I really don't think it's just a matter of scaling things up. We still don't have a learning paradigm that allows machines to learn how the world works, like human and many non-human babies do.


Some may believe that scaling up a giant transformer trained on sequences of tokenized inputs is enough.
Others believe "reward is enough".
A few others believe that explicit symbol manipulation is necessary.
A few don't believe gradient-based learning is part of the solution.

I believe we need to find new concepts that would allow machines to:
- learn how the world works by observing, like babies.
- learn to predict how one can influence the world by taking actions.
- learn hierarchical representations that allow long-term predictions in abstract representation spaces.
- properly deal with the fact that the world is not completely predictable.
- enable agents to predict the effects of sequences of actions, so as to be able to reason and plan.
- enable machines to plan hierarchically, decomposing a complex task into subtasks.
- do all of this in ways that are compatible with gradient-based learning (see the sketch after this list).
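As one way to make two of these points concrete, here is a toy sketch, under purely illustrative assumptions (a 4-dimensional state, linear toy dynamics, a tiny network), of an action-conditioned world model trained by gradient descent on prediction error, then used to score candidate action sequences, i.e. a crude form of planning. It is not LeCun's actual proposal.

    import torch
    import torch.nn as nn

    S, A = 4, 2                       # assumed state / action dimensions

    world_model = nn.Sequential(      # predicts next state from (state, action)
        nn.Linear(S + A, 64), nn.ReLU(), nn.Linear(64, S)
    )
    opt = torch.optim.Adam(world_model.parameters(), lr=1e-3)

    def env_step(s, a):
        # Toy "true" dynamics: the first two state dims move by the action.
        s = s.clone()
        s[:, :2] += 0.1 * a
        return s

    # (a) Learn the dynamics from observed (s, a, s') transitions.
    for step in range(500):
        s = torch.randn(32, S)
        a = torch.randn(32, A)
        s_next = env_step(s, a)
        pred = world_model(torch.cat([s, a], dim=-1))
        loss = (pred - s_next).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # (b) Plan: roll out random action sequences inside the learned model
    # and pick the one whose imagined end state lands nearest the goal.
    goal = torch.tensor([1.0, 1.0])
    plans = torch.randn(64, 5, A)     # 64 candidate plans, 5 actions each
    s = torch.zeros(64, S)
    with torch.no_grad():
        for t in range(5):
            s = world_model(torch.cat([s, plans[:, t]], dim=-1))
        best = (s[:, :2] - goal).pow(2).sum(-1).argmin()
    print("best plan:", plans[best])

The planning step is deliberately crude, scoring random action sequences inside the learned model; the hierarchical planning in abstract representation spaces that the list calls for is exactly what such a toy lacks.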

The solution is not just around the corner. We have a number of obstacles to clear, and we don't know how to clear them.

A research program ...
