Building brains - is bigger always better?

Because chemical connections between neurons are slightly unreliable, it doesn’t always help to have more...
20 May 2019

Interview with

Timothy O'Leary, Cambridge University

A pyramidal neurone in the human cerebral cortex

This month, Cambridge University scientists have discovered that having a bigger brain doesn’t necessarily make you smarter. Engineers used a mathematical model to show that, because the chemical connections between neurons are slightly unreliable, it doesn’t always help to have more of them. They also say that having redundant connections in the brain can make you learn faster. Timothy O’Leary is one of the authors of this study, and he took Phil Sansom out for a stroll on Coe Fen in Cambridge to explain more.

Tim - Like lots of discoveries in science, you set out looking at one particular question, and then as you dig deeper you find that there's a related question that ends up being more interesting. The question we set out to address was to understand how the brain rewires itself to learn new information.

So what we were trying to do was find a learning rule that was sufficiently simple that a bag of chemicals, like the connection between two neurons, or a synapse as it’s called, could actually carry out this rule. Of course there are these fancy, clever learning algorithms that are used in artificial intelligence, for example, but currently the field doesn't really believe that these can be implemented in the brain, because they require all of the connections essentially to know what all of the other connections are doing at any point in time. And that just didn't seem realistic.

Phil - Simple learning rules and algorithms have been tested on neural network models of the brain before. But what Tim and his colleagues did was have one neural network learning from another. That way the teacher network could use a completely random rule that the learner would have to figure out.
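
For readers who like to see the idea concretely: this teacher-student setup can be sketched in a few lines of NumPy. The network sizes, learning rate and update rule below are illustrative choices of ours, not the ones used in the study; the point is just that a "student" network adjusts its connections to copy an arbitrary rule fixed inside a "teacher" network.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed "teacher" network with random weights: the arbitrary rule to be learnt.
n_in, n_hidden = 10, 20
W1_t = rng.normal(size=(n_hidden, n_in))
W2_t = rng.normal(size=(1, n_hidden))

def teacher(x):
    return W2_t @ np.tanh(W1_t @ x)

# A "student" network of the same shape, starting from different small weights.
W1_s = 0.1 * rng.normal(size=(n_hidden, n_in))
W2_s = 0.1 * rng.normal(size=(1, n_hidden))

lr = 0.01
for step in range(5000):
    x = rng.normal(size=n_in)
    target = teacher(x)

    # Forward pass through the student, then compare with the teacher's output.
    h = np.tanh(W1_s @ x)
    err = W2_s @ h - target

    # Gradient-descent update for this two-layer student network.
    grad_W2 = np.outer(err, h)
    grad_W1 = np.outer((W2_s.T @ err) * (1 - h**2), x)
    W2_s -= lr * grad_W2
    W1_s -= lr * grad_W1

print("final squared error:", float(np.sum(err**2)))
```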

Tim - So what we found was that, almost independent of the actual learning rule that any neural circuit uses, if you simply add redundant connections, the network will learn faster. So, to take an example, I could think of a map of the country and I could think of all of the roads between two points. And we could have a connection between neuron A and neuron B, and instead of just being a single connection there'd be multiple alternative routes, just like there'd be multiple possible roads you could take from one city to another.

Phil - Scientists have known about these redundant connections in our brains for a while but this result might finally explain them. They actually seem to help us learn quicker, but that's a confusing idea. How can redundant connections be useful? Well funnily enough Tim says it's a little bit like hiking.

Tim - So if you imagine you're out here and it's very, very foggy, and that does happen, and all you can see is the few square feet around your feet, and you're standing on a slope but you'd like to get to the bottom of that slope. What do you do? You follow the steepest slope down.

Phil - Yeah. You go down where you can see it goes down.

Tim - Exactly. But then if you think about the wider landscape, what you might be doing is following the slope down into a local gully.

Phil - Right. You might end up in, like, a sort of dip.

Tim - Exactly. End up in some kind of gully.

Phil - What does this have to do with learning?

Tim - The way learning works is you can think of the height of the hill as a measure of how badly you're doing a task. The higher up the hill, the worse you're doing, so what you want to do is get to the bottom. This is essentially how all learning rules work. And what we found was that adding these redundant connections to the brain actually smooths out the error landscape, so it makes the crinkliness smaller relative to the immediate slope underfoot.

Phil - It would be like it is here, a very simple slope down to the bottom.

Tim – It’s a very simple slope and the slope doesn't change much as you move around.
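
The foggy-hillside picture is essentially gradient descent: at each step you move a little in whichever direction lowers the error, using only the slope underfoot. Here is a minimal sketch on a deliberately "crinkly" one-dimensional landscape; the function, starting point and step size are made up for illustration and are not the model from the paper.

```python
import numpy as np

# A deliberately "crinkly" one-dimensional error landscape:
# a broad bowl (w**2) with small gullies (the sine term) superimposed.
def loss(w):
    return w**2 + 2.0 * np.sin(5.0 * w)

def grad(w):
    return 2.0 * w + 10.0 * np.cos(5.0 * w)

# Gradient descent: like walking downhill in fog, each step uses only the
# local slope.
w, lr = 3.0, 0.02
for _ in range(500):
    w -= lr * grad(w)

print(f"stopped at w = {w:.3f}, loss = {loss(w):.3f}")
# Depending on where you start, this can settle in a local gully rather than
# the true bottom of the bowl; a smoother landscape, which Tim says the
# redundant connections provide, has fewer such traps.
```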

Phil - This idea that redundant connections help us learn: that is result number one. Number two has to do with another part of Tim's model, and that's the noise. In real life, the chemical transmissions between our neurons aren't perfect, and sometimes they mess up. That's what's called intrinsic synaptic noise. People don't include this noise when they program neural networks, and with those networks, the bigger they are the better they do. But Tim and his colleagues did include it. And when they did, they got to a point where bigger stopped equalling better. And that is where the noise started to drown out the signal, the signal being how we learn. What they found is that there's actually an optimal size for a neural network that's trying to learn something.
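
One very rough way to picture intrinsic synaptic noise is to corrupt every weight update with random jitter. The sketch below does only that, with arbitrary numbers of ours; it is not the study's simulation, but it shows how the useful "signal" in an update (the intended change) can be swamped by per-synapse noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_step(W, grad, lr=0.01, noise_std=0.05):
    """One learning step in which every synapse's change is corrupted by noise."""
    return W - lr * grad + rng.normal(scale=noise_std, size=W.shape)

# Compare the size of the intended change with the size of the noise for a
# network of 100 x 100 = 10,000 synapses (all numbers arbitrary).
W = rng.normal(size=(100, 100))
grad = rng.normal(size=W.shape)          # stand-in gradient, just for scale
intended = 0.01 * grad
W_new = noisy_step(W, grad)
print(f"signal ~ {np.linalg.norm(intended):.2f}, "
      f"noise ~ {np.linalg.norm(W_new - (W - intended)):.2f}")
# When the noise drowns out the intended change, adding yet more synapses
# stops speeding up learning, which is where the optimal size comes from.
```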

So, what does that mean? Say I'm God. I want to create a brain to learn a task. Here is how I do it. I'm making neurons and adding brand new connections to those neurons until the brain is physically able to learn this rule. Then I start adding redundant connections between neurons that are already connected, and this makes the learning quicker. But I only add them up to the point where this synaptic noise becomes too great. And there you go. This is only for one learning task but I've made the perfect brain.
