Wednesday, May 18, 2022

Human Level AI

Yann LeCun:

About the raging debate regarding the significance of recent progress in AI, it may be useful to (re)state a few obvious facts:
(0) there is no such thing as AGI. Reaching "Human Level AI" may be a useful goal, but even humans are specialized.
(1) the research community is making *some* progress towards HLAI.
(2) scaling up helps. It's necessary but not sufficient, because...
(3) we are still missing some fundamental concepts.
(4) some of those new concepts are possibly "around the corner" (e.g. generalized self-supervised learning).
(5) but we don't know how many such new concepts are needed. We just see the most obvious ones.
(6) hence, we can't predict how long it's going to take to reach HLAI.

I really don't think it's just a matter of scaling things up. We still don't have a learning paradigm that allows machines to learn how the world works, like human and many non-human babies do.


Some may believe that scaling up a giant transformer trained on sequences of tokenized inputs is enough.
Others believe "reward is enough".
A few others believe that explicit symbol manipulation is necessary.
A few don't believe gradient-based learning is part of the solution.

I believe we need to find new concepts that would allow machines to:
- learn how the world works by observing, like babies.
- learn to predict how one can influence the world through taking actions.
- learn hierarchical representations that allow long-term predictions in abstract representation spaces.
- properly deal with the fact that the world is not completely predictable.
- enable agents to predict the effects of sequences of actions so as to be able to reason and plan.
- enable machines to plan hierarchically, decomposing a complex task into subtasks.
- all of this in ways that are compatible with gradient-based learning.

The solution is not just around the corner. We have a number of obstacles to clear, and we don't know how.

A research program ...

Friday, May 13, 2022

Poisoned AI

Bloomberg on the problem of data poisoning in machine learning. To plant a backdoor in an ML-based system, it may be enough to modify, in a specially crafted way, as little as 0.7% of the data.
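The mechanics of such an attack are easy to illustrate. Below is a minimal, purely illustrative sketch of backdoor poisoning via a trigger pattern plus label flipping; the function name, the feature-vector representation, and the `{index: value}` trigger encoding are my own assumptions, not anything from the Bloomberg piece:

```python
import random

def poison_dataset(samples, labels, target_label, trigger, rate=0.007, seed=0):
    """Backdoor-poisoning sketch: stamp a trigger pattern onto a small
    fraction of training samples and relabel them with the attacker's
    target class. A model trained on the result can learn to associate
    the trigger with target_label while behaving normally otherwise.

    samples: list of feature vectors (lists of numbers)
    trigger: dict {feature_index: value} overwrites (assumed encoding)
    rate:    fraction of samples to poison (0.7% by default)
    """
    rng = random.Random(seed)
    n_poison = max(1, int(len(samples) * rate))
    idxs = rng.sample(range(len(samples)), n_poison)

    # Work on copies so the clean dataset is left untouched.
    poisoned_samples = [list(s) for s in samples]
    poisoned_labels = list(labels)
    for i in idxs:
        for pos, val in trigger.items():
            poisoned_samples[i][pos] = val   # stamp the trigger
        poisoned_labels[i] = target_label    # flip the label
    return poisoned_samples, poisoned_labels, idxs

# Usage: poisoning 0.7% of a 1000-sample dataset touches just 7 samples.
samples = [[0.0] * 10 for _ in range(1000)]
labels = [0] * 1000
ps, pl, idxs = poison_dataset(samples, labels, target_label=1,
                              trigger={0: 9.9, 1: 9.9}, rate=0.007, seed=42)
print(len(idxs))  # 7
```

The point of the sketch is how small the footprint is: seven altered rows out of a thousand, which is why such manipulation is hard to spot by inspecting the data.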

Such is life

The real-world performance of machine-learning applications. And what if adversarial examples show up as well?

Sunday, May 01, 2022

INJOIT vol. 10, no. 5

The fifth issue of the INJOIT journal for 2022 is out. The journal is now in its tenth year of publication.

Article topics:

  • Semilattices of subsets of potential roots in problems of formal language theory. Part II. Constructing the inverse morphism
  • A Survey of Adversarial Attacks and Defenses for Image Data in Deep Learning
  • Applying a probabilistic algorithm to spam filtering
  • A Prediction Model for Lung Cancer Levels Based on Machine Learning
  • On a formal verification of machine learning systems
  • A decision support system for choosing the best alternative (using the wholesale purchase of cow's milk as an example)
  • Contrast and Contrast Enhancement (in Logic of Visual Perception of Graphic Information)
  • A simulation model of wind farm data processing based on a neural network
  • Multi-Objective Model Predictive Control
  • Practical application of functional programming and regular expressions in bibliometric analysis
  • Classification of data flows in control complexes and principles of separating the elements of such systems into modules
  • Constructing a family of use-case scenarios for analyzing the functional safety of control systems
  • On the foundations of a methodology for assessing the quality of large technical systems during operation

The journal archive is available here.

/via OIT Lab