
I’ve recently started going back into neuroscience research after reading some of Yann LeCun’s arguments that current approaches to GenAI are a dead end. LeCun’s perspective is that the field’s focus should shift to objective-driven AI. For anyone looking for a quick summary, here it is:
Current GenAI mimics only a small part of the brain’s architecture. It was inspired by how the brain forms networks of neurons, but GenAI, a subset of deep learning, is a far simpler version; a biological brain is vastly more complex.
LeCun’s AI framework is a closer approximation of a functioning brain, but most of it is still theoretical. There are some really interesting parallels between how objective-driven AI and deep learning models work and how a biological brain works, but the brain’s complexity far exceeds that of current AI systems. In practice, this means a fly brain can still outthink current AI systems when faced with the real world.
Now, into some more detail. Being able to map out the entire neural network of a fly’s brain might seem trivial, but it gives us an opportunity to compare our approaches to AI against a real biological neural network. Here goes:
A quick summary of what Yann LeCun’s objective-driven framework for AI is about: in this model, an AI system achieves goals through a hierarchical structure of objectives. There is a world model that predicts future states of the world, a cost function that scores how desirable those predicted states are, and an actor that selects actions to minimize the cost. The system learns to represent the world and make decisions without extensive human-labeled data; instead, it relies on intrinsic objectives and interactions with the environment. This sounds a lot like how any living creature figures out how to find food when hungry, which is what makes these AI systems intriguing. (A minimal code sketch of this loop follows the list below.) Here’s where they align:
- Hierarchical organization – LeCun’s model has a hierarchical structure, which aligns well with the brain’s organization. The brain processes information through hierarchical pathways, particularly in the neocortex (which only mammals have).
- Predictive coding – LeCun’s model predicts future states, something the brain constantly does through mechanisms like predictive coding.
- Reward systems and cost functions – The cost function in LeCun’s model evaluates the desirability of predicted states, which closely parallels the brain’s reward system. Anyone familiar with dopamine will recognize how the brain reinforces certain behaviours to guide learning and decision-making.
- Action selection – The actor in LeCun’s framework is responsible for selecting actions, comparable to the role of the motor cortex and basal ganglia in a biological brain.
- Self-supervised learning – LeCun emphasizes self-supervised learning for AI systems: they learn from unlabeled data and from interactions with the environment, which is similar to how the brain learns. A biological brain continuously learns from its environment without explicit external supervision (the second sketch after this list shows the idea in miniature).
- Distributed representations – Both LeCun’s AI framework and the brain use distributed representations to encode information. In the brain, memories and concepts are thought to be represented by patterns of activity across large groups of neurons rather than in single cells.
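
To make the world model / cost function / actor loop concrete, here is a minimal sketch in Python. This is a toy under my own assumptions, not LeCun’s actual architecture: the world model is a hand-coded linear predictor, the cost is a squared distance to a goal state, and the actor simply picks the best of a handful of randomly sampled candidate actions. The function names (world_model, cost, actor) and all the numbers are hypothetical.

```python
import numpy as np

# Toy version of the objective-driven loop: predict, evaluate, act.
# Everything here is a stand-in for what would be large learned modules.

rng = np.random.default_rng(0)

def world_model(state, action):
    """Predict the next state given the current state and a candidate action.
    Here: fixed linear dynamics standing in for a learned predictor."""
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])      # state transition
    B = np.array([[0.0],
                  [0.1]])           # effect of the action
    return A @ state + B @ action

def cost(state, goal):
    """Cost function: how undesirable a predicted state is, relative to a goal.
    This plays the role the reward/dopamine system plays in the brain analogy."""
    return float(np.sum((state - goal) ** 2))

def actor(state, goal, candidates):
    """Actor: choose the action whose *predicted* outcome minimizes the cost.
    A real system would optimize over action sequences, not a random sample."""
    best_action, best_cost = None, np.inf
    for a in candidates:
        c = cost(world_model(state, a), goal)
        if c < best_cost:
            best_action, best_cost = a, c
    return best_action

# One perception -> prediction -> evaluation -> action cycle.
state = np.array([0.0, 0.0])                              # current observed state
goal = np.array([1.0, 0.0])                               # intrinsic objective, e.g. "reach the food"
candidates = [rng.normal(size=(1,)) for _ in range(32)]   # sampled candidate actions

action = actor(state, goal, candidates)
print("chosen action:", action,
      "predicted cost:", cost(world_model(state, action), goal))
```

In the full framework, each of these pieces would be a learned, hierarchical module, and the actor would plan over sequences of actions rather than single steps.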
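
The self-supervised point can be sketched the same way. The world model above is never given human labels; its training signal is just the system’s own next observations. In this toy version, the “learning” is an ordinary least-squares fit that recovers the linear dynamics from (state, action, next_state) triples collected by interacting with a simulated environment; in practice this would be a large network trained on raw sensory data.

```python
import numpy as np

# Self-supervised world-model learning, toy version: the targets are the
# environment's own next states, not human-provided labels.

rng = np.random.default_rng(1)
A_true = np.array([[1.0, 0.1],
                   [0.0, 1.0]])
B_true = np.array([[0.0],
                   [0.1]])

# "Experience": random states/actions and the environment's actual responses.
states = rng.normal(size=(500, 2))
actions = rng.normal(size=(500, 1))
next_states = states @ A_true.T + actions @ B_true.T + 0.01 * rng.normal(size=(500, 2))

# Fit next_state ~ [state, action]; the prediction target is the observation itself.
inputs = np.hstack([states, actions])                          # shape (500, 3)
params, *_ = np.linalg.lstsq(inputs, next_states, rcond=None)  # shape (3, 2)
A_learned, B_learned = params[:2].T, params[2:].T

print("learned A:\n", np.round(A_learned, 2))
print("learned B:\n", np.round(B_learned, 2))
```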
Although there are a lot of intriguing parallels between LeCun’s AI framework and the brain, there are a couple of things to consider:
1. LeCun’s full system is still theoretical and has yet to be built.
2. The brain’s complexity still far exceeds that of current AI systems.
The brain’s ability to generalize, adapt, and operate across multiple timescales and domains is still unmatched by artificial systems. Insights from neuroscience can certainly inform AI research, and vice versa; there are many opportunities for dialogue between the two fields.

