What a fly brain can tell us about AI

Credit: Leonardo.ai and Affinity Photo

I’ve recently started digging back into neuroscience research after reading Yann LeCun’s argument that current approaches to GenAI are a dead end. LeCun’s view is that the field’s future lies in objective-driven AI. For anyone looking for a quick summary, here it is:

Current GenAI mimics only a small part of the brain’s architecture. It was inspired by how the brain forms networks of neurons, but GenAI, a subset of deep learning, is a much simpler version; a biological brain is far more complex.

LeCun’s AI framework is a closer approximation of a functioning brain, but most of it is still theoretical. There are some really interesting parallels between a biological brain and the way objective-driven AI and deep learning models work, but the brain’s complexity far exceeds that of current AI systems. This means a fly brain can still outthink current AI systems when faced with the real world.

Now, into some more detail. Being able to map out the entire neural network of a fly’s brain might seem trivial, but it is an opportunity to compare our approaches to AI against a real, fully mapped biological neural network. Here goes:

First, a quick summary of what Yann LeCun’s objective-driven framework for AI is about. In this model, an AI system achieves goals through a hierarchical structure of objectives. There is a world model, a cost function, and an actor that selects actions to minimize the cost function (a toy sketch of this loop follows the list below). The system learns to represent the world and make decisions without extensive human-labeled data; instead, it relies on intrinsic objectives and interactions with the environment. That sounds a lot like how any living creature figures out how to find food when hungry, which is what makes these AI systems so intriguing. Here’s where they align:

  1. Hierarchical organization – LeCun’s model has a hierarchical structure, which aligns well with the brain’s organization. The brain processes information through hierarchical pathways, particularly in the neocortex (a structure found only in mammals).
  2. Predictive coding – LeCun’s model predicts future states, something the brain does constantly through mechanisms like predictive coding.
  3. Reward systems and cost functions – The cost function in LeCun’s model evaluates the desirability of predicted states, much like the brain’s reward system. Anyone familiar with the role of dopamine in the brain can understand how it rewards certain behaviours to guide learning and decision-making.
  4. Action selection – The actor in LeCun’s framework is responsible for selecting actions, comparable to the motor cortex and basal ganglia in a biological brain.
  5. Self-supervised learning – LeCun emphasizes self-supervised learning for AI systems. These systems learn from unlabeled data and interactions with the environment, much as a biological brain continuously learns from its surroundings without explicit external supervision (see the second sketch after this list).
  6. Distributed representations – Both LeCun’s framework and the brain use distributed representations to encode information. In the brain, memories and concepts are thought to be represented by patterns of activity across large groups of neurons rather than in single cells.

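To make the moving parts concrete, here is a minimal sketch of that objective-driven loop. To be clear about assumptions: this is plain Python of my own devising; the additive world model, distance-to-goal cost, and greedy one-step actor are toy stand-ins for what would, in LeCun’s proposal, be learned neural networks planning over multi-step rollouts.

```python
# A toy "world model": given a state and an action, predict the next state.
# In LeCun's framework this would be a learned predictor; here it is a
# hand-written stand-in with simple additive dynamics.
def world_model(state, action):
    return state + action

# A toy "cost function": how undesirable is a predicted state?
# It stands in for intrinsic objectives (hunger, safety, task goals).
def cost(state, goal=10.0):
    return abs(goal - state)  # distance from the goal state

# A toy "actor": pick the action whose *predicted* outcome minimizes cost.
# A real system would search over multi-step plans at several levels of
# the hierarchy; a single greedy step keeps the idea visible.
def actor(state, candidate_actions):
    return min(candidate_actions, key=lambda a: cost(world_model(state, a)))

state = 0.0
for step in range(5):
    action = actor(state, candidate_actions=[-1.0, 0.0, 1.0, 2.0])
    state = world_model(state, action)  # act, then observe the new state
    print(f"step {step}: chose {action:+.1f}, state is now {state:.1f}")
```

The division of labour is the point: the actor never decides blindly; it asks the world model “what would happen if?” and lets the cost function judge the answer. That is the find-food-when-hungry loop in miniature.
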
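And since self-supervised learning (parallel 5) can sound abstract, here is the smallest version of it I can write. Again, the setup is my own illustration, not LeCun’s method: a stream of unlabeled observations where the training target for each step is simply the next observation, so the data supervises itself.

```python
import math

# An unlabeled stream of observations; no human annotated anything.
observations = [math.sin(0.5 * t) for t in range(1000)]

# Fit a single coefficient w so that w * x_t approximates x_{t+1}.
w, learning_rate = 0.0, 0.1
for x_now, x_next in zip(observations, observations[1:]):
    error = w * x_now - x_next          # the data supplies its own target
    w -= learning_rate * error * x_now  # one gradient step on squared error

print(f"learned w = {w:.2f}")  # settles near cos(0.5) ≈ 0.88 for this stream
```

Swap the single coefficient for a deep network and the sine wave for video, text, or sensor data, and you have essentially the self-supervised recipe behind most of today’s large models.
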
Although there are a lot of intriguing parallels between LeCun’s AI framework and the brain, there are a couple of things to consider:

1. LeCun’s system is theoretical and has yet to be built.

2. The brain’s complexity still far exceeds that of current AI systems.

The brain’s ability to generalize, adapt, and operate across multiple timescales and domains is still unmatched in the world of artificial systems. Insights from neuroscience can certainly inform AI research and vice versa. There are many opportunities for dialogue between the two fields.

The day an algorithm started painting…

I’ve been reading and writing a lot lately on the future of work, specifically about how many jobs may no longer exist in the future because of automation: algorithms and robots that can do the work more efficiently and cheaply than any human alternative. I’ve started to see some examples of this in the news. One was a robotic boat that could travel up and down a river inspecting bridge infrastructure, alerting someone if it found a crumbling concrete pillar, work that had previously been handled by human divers. There are other examples of robots that use magnets to cling to a bridge, allowing them to crawl over the structure and inspect it for defects. Both are great examples of robots taking over tedious and potentially dangerous jobs that are currently done by people.

For many people researching the future of work, this isn’t such an issue. It makes sense that these kinds of jobs will become a hybrid of people and machines working together to solve a problem. In some cases it might mean a machine completely takes over the job.

There has always been a sense of security that the more “human” jobs would be free from any encroachment by machines. Those were the more complex jobs that required creativity and complex problem solving. It seemed like a good partnership: machines could handle the tedious tasks while we tackled the real problems of the world.

Then I heard about “The Next Rembrandt” project. The project analyzed all of Rembrandt’s paintings using 3D scanning that could capture the brushstrokes and the painting itself, pixel by pixel. When the team was done, it asked the algorithm to take what it knew about a real Rembrandt:

Image: a Rembrandt portrait

Credit: H. O. Havemeyer Collection, Bequest of Mrs. H. O. Havemeyer, 1929

and used it to create a completely new Rembrandt, using 3D printing to replicate the colour and brushstroke texture of an original painting. The team then told the algorithm to take what it had learned and paint something. They gave it some parameters: the program should paint a portrait of a 30- to 40-year-old Caucasian male with facial hair, wearing dark clothing, with a collar around his neck and a hat on his head. The program took all this information and returned this:

Image: the portrait generated by The Next Rembrandt

Credit: The Next Rembrandt

It seems I was a bit smug about what machines wouldn’t be able to tackle in the future. Art has traditionally been considered an exclusively human domain, and on the surface it would seem that domain has been breached. In truth, it hasn’t happened yet. The machines that created this painting were carefully directed by human programmers and designers. The algorithm didn’t decide it needed to become a painter out of some intrinsic need; it was following orders. Still, it managed to do this in an original and unpredictable way, which is the most important takeaway from this project. If humans are able to design these kinds of machines, we are going to be able to experiment and innovate in much more interesting and accelerated ways.

The real thrill from this kind of experiment is projecting it forward to the next challenge. Imagine an algorithm being fed decades of diagnostic information on cancer, along with all of the available pharmaceutical and medical techniques, and then being told “cure cancer”. Initially it won’t be that expansive; more likely the command will be “cure my cancer”. But you can start to see how these tools can begin to actually change the way we tackle problems.