THE FUTURE IS HERE

François Chollet on OpenAI o-models and ARC

François Chollet discusses the results of the 2024 ARC Prize competition on the ARC-AGI (Abstraction and Reasoning Corpus) benchmark, where top accuracy on the private evaluation set rose from 33% to 55.5%. The conversation explores the two core solution paradigms, program synthesis ("induction") and direct prediction ("transduction"), and how the most successful solutions combine both. Chollet emphasizes that human-like reasoning requires both fuzzy pattern matching (deep learning) and discrete, step-by-step symbolic processes. He also reveals his departure from Google to establish a new research lab focused on program synthesis, and provides insights into the next-generation ARC-2 benchmark.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options ranging from small models to large-scale deployments.
https://centml.ai/pricing/

Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on o-series-style reasoning and AGI. Are you interested in working on reasoning or getting involved in their events?

They are hosting an event in Zurich on January 9th with the ARChitects; join if you can.

Go to https://tufalabs.ai/
***

Read about the recent o3 result on ARC here (Chollet knew about it at the time of the interview but wasn't allowed to discuss it):
https://arcprize.org/blog/oai-o3-pub-breakthrough

TOC:
1. Introduction
[00:00:00] 1.1 Is o1 reasoning?

2. Interview Starts: ARC Competition 2024 Results and Evolution
[00:02:30] 2.1 ARC Prize 2024: Reflecting on the Narrative Shift Toward System 2
[00:05:33] 2.2 Comparing Private Leaderboard vs. Public Leaderboard Solutions
[00:08:21] 2.3 Two Winning Approaches: Deep Learning–Guided Program Synthesis and Test-Time Training

3. Transduction vs. Induction in ARC
[00:11:08] 3.1 Test-Time Training, Overfitting Concerns, and Developer-Aware Generalization
[00:14:39] 3.2 Gradient Descent Adaptation vs. Discrete Program Search

4. ARC-2 Development and Future Directions
[00:18:55] 4.1 Ensemble Methods, Benchmark Flaws, and the Need for ARC-2
[00:20:39] 4.2 Human-Level Performance Metrics and Private Test Sets
[00:24:48] 4.3 Task Diversity, Redundancy Issues, and Expanded Evaluation Methodology

5. Program Synthesis Approaches
[00:25:22] 5.1 Induction vs. Transduction: Different Solutions for Different Task Types
[00:27:15] 5.2 Challenges of Writing Algorithms for Perceptual vs. Algorithmic Tasks
[00:29:27] 5.3 Combining Induction and Transduction (Kevin Ellis’s Paper)
[00:32:09] 5.4 Multi-View Insight and Overfitting Regulation

6. Latent Space and Graph-Based Synthesis
[00:33:21] 6.1 Clément Bonnet’s Latent Program Search Approach
[00:35:14] 6.2 Decoding to Symbolic Form and Local Discrete Search
[00:36:19] 6.3 Graph of Operators vs. Token-by-Token Code Generation
[00:40:54] 6.4 Iterative Program Graph Modifications and Reusable Functions

7. Compute Efficiency and Lifelong Learning
[00:43:09] 7.1 Symbolic Process for Architecture Generation
[00:45:37] 7.2 Logarithmic Relationship of Compute and Accuracy
[00:47:24] 7.3 Learning New Building Blocks for Future Tasks

8. AI Reasoning and Future Development
[00:48:19] 8.1 Consciousness as a Self-Consistency Mechanism in Iterative Reasoning
[00:51:34] 8.2 Reconciling Symbolic and Connectionist Views
[00:55:17] 8.3 System 2 Reasoning Necessitates Awareness and Consistency
[00:58:09] 8.4 Novel Problem Solving, Abstraction, and Reusability

9. Program Synthesis and Research Lab
[01:00:57] 9.1 François Leaving Google to Focus on Program Synthesis
[01:04:59] 9.2 Democratizing Programming and Natural Language Instruction

10. Frontier Models and o1 Architecture
[01:09:42] 10.1 Search-Based Chain of Thought vs. Standard Forward Pass
[01:11:59] 10.2 o1’s Natural Language Program Generation and Test-Time Compute Scaling
[01:14:39] 10.3 Logarithmic Gains with Deeper Search

11. ARC Evaluation and Human Intelligence
[01:17:59] 11.1 LLMs as Guessing Machines and Agent Reliability Issues
[01:20:06] 11.2 ARC-2 Human Testing and Correlation with g-Factor
[01:21:20] 11.3 Closing Remarks and Future Directions

SHOWNOTES PDF:
https://www.dropbox.com/scl/fi/epf2pysdd9uxc77c9shqr/CHOLLETNEURIPS2.pdf?rlkey=9knnyuj9o28ke7qpezrtmspsd&dl=0