Getting on the same wavelength
Thanks to neuroscience, the ways in which we are educated and trained have become much easier to understand.
by Moran Cerf
The German philosopher Immanuel Kant believed that education differs from training in that the former involves thinking while the latter, in his view, does not. Kant was a staunch advocate of public education, but was he right to draw this distinction? Are education and training not two sides of the same learning coin? Further still, how does one learn?
How the Brain Learns
The knowledge we now have on learning is mostly based on research on animals whose brains we could investigate directly to observe the traces of learning. A tiny worm, C. elegans, has only a few hundred neurons. Still, it has taught us a lot about learning. Because of its simplicity, we can track its entire learning process, and from this humble creature we have learned many complex things about how learning works.
What we have been able to observe is that every time the worm makes a choice, a feedback loop begins. If the action is perceived as positive (rewarding, pleasant, pain-reducing, etc.), then the pathway and connections that determined the action are strengthened. Otherwise, they are weakened.
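The loop can be caricatured in a few lines of code. This is a minimal sketch, not a model of any real neural circuit: the two actions, the connection strengths, and the reward values are all invented for illustration.

```python
import random

random.seed(1)

# Hypothetical "connection strengths" for two competing actions.
weights = {"approach": 1.0, "avoid": 1.0}
LEARNING_RATE = 0.2

def choose_action():
    # Choose an action with probability proportional to its strength.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for action, strength in weights.items():
        r -= strength
        if r <= 0:
            return action
    return action  # fallback for floating-point edge cases

def update(action, reward):
    # Positive outcomes strengthen the chosen pathway; negative ones
    # weaken it, with a small floor so a pathway is never erased entirely.
    weights[action] = max(0.1, weights[action] + LEARNING_RATE * reward)

# "Hand in the fire": approaching is always punished, avoiding rewarded.
for _ in range(50):
    action = choose_action()
    update(action, reward=-1.0 if action == "approach" else 1.0)
```

After a few dozen trials the punished pathway withers and the rewarded one dominates, which is the whole trick: behavior shifts without any rule ever being written down.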
A simple example of this would be placing your hand in a fire. In this case, a negative result (pain) would lead you to refrain from repeating that behavior in the future. This is something we learn quickly, yet there are other things, such as deciding between two meals we like almost equally at a restaurant, which we learn much more slowly. The connections are still reinforced, but it takes more time due to both having positive outcomes.
Overall, this form of learning is called reinforcement learning, and it is how much of the learning in the brain occurs. Why is it relevant to technology? Because in today’s world, technology tries to mimic the brain in order to replicate our ability to learn. The most well-known and common technology attempting to do so is machine learning (ML). Machine learning is effectively an implementation, in machines, of the same basic rule of learning. No instructions and rules. No supervision. Instead of me teaching it how to identify cat videos, I show it millions of examples of videos with and without cats and hope that it will derive rules that enable it to classify any future video into the right group, based on its similarity to the previous sets. In a world where data exists in abundance, we can do this and let computers do the heavy lifting.
However, what is less known is that this technology is actually also teaching us a lot about how our own brains work. And the best way to explain this involves learning about chickens and sex.
Sex and Learning
When a chick is born, we have to wait for it to grow into an adult chicken before we can eat it. This costs time and money. If a farmer spends money raising a rooster rather than a hen, both money and time are wasted, since hens, unlike roosters, can also lay eggs: you get more bang for your cluck, so to speak. Because of this, there is a process called sexing (note: not sexting) which is done just after hatching, in which workers determine the sex of each chick to avoid the above problem.
To find out if a baby chick is male or female, workers use their fast hands to check. However, there is an alternative to simply checking the chick’s genitals. Hold tight. As it turns out, if you hold a baby chick tight – as in, squeeze the chick a little – it makes a little squeaking sound. This sound tells you if it is male or female and is the fastest way to distinguish the two. If you take a bag of little chicks and show them to an expert sexer, they can do this sorting with no problem: they will be fast and accurate.
Now, if we took two experts, we would expect them both to be able to tell us which chick is male and which is female. But if you asked them to explain the logic they used to determine the sex, you would more likely than not be surprised. While they agree on the outcome (nearly 99% agreement between two experts classifying chicks), they are unlikely to agree on the rules they used. One would say it is related to the length of the sound, or its pitch. The other may argue it was the duration and vibrations in the sound. Many options. Very few agreements. Yet near-perfect synchronicity in successful classification.
Why? When training new workers on the job to acquire this poultry talent, you get them to squeeze the chick and guess whether it is male or female. If the worker makes a mistake, an expert taps them (for example) on the shoulder twice, without telling the worker why they were incorrect. The rules of the game are never explained, and every time the new worker makes a mistake they are tapped this way. This may seem strange but, by the end of the day, the person who began with no knowledge that same morning is now an expert at sexing chicks. The worker somehow learns to become an expert, but never understands how. Tomorrow they will already be in 99% agreement with their trainer. They will also have their own set of rules as to how they do it. Those rules may well be very different from their trainer’s. But they work. This is basically how machine learning works. You do not explain the rules. Instead, you provide many examples and let the computer work out its own rules for obtaining the correct answer.
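The tap-on-the-shoulder regime can be mimicked with one of the oldest learning algorithms, the perceptron, which is corrected only when it is wrong and is never told why. Everything here is invented for illustration: the sound features (pitch, duration) and the trainer's hidden rule are hypothetical stand-ins, not real sexing criteria.

```python
import random

random.seed(0)

def hidden_truth(pitch, duration):
    # The trainer's "real" rule, never shown to the learner (hypothetical).
    return 1 if 2 * pitch - duration > 0 else -1

weights = [0.0, 0.0]
bias = 0.0

for _ in range(1000):
    pitch = random.uniform(-1, 1)
    duration = random.uniform(-1, 1)
    guess = 1 if weights[0] * pitch + weights[1] * duration + bias > 0 else -1
    truth = hidden_truth(pitch, duration)
    if guess != truth:
        # The "tap on the shoulder": wrong, with no explanation.
        # The learner just nudges its own internal rule toward the truth.
        weights[0] += truth * pitch
        weights[1] += truth * duration
        bias += truth

# Measure agreement with the trainer on 200 fresh, unseen examples.
tests = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
correct = sum(
    (1 if weights[0] * p + weights[1] * d + bias > 0 else -1) == hidden_truth(p, d)
    for p, d in tests
)
accuracy = correct / 200
```

After a day's worth of taps the learner agrees with the trainer on nearly every new chick, yet its weights encode its own version of the rule, which need not resemble the trainer's.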
And the incredible surprise is that we have recently learned to reverse engineer this process: to bring it back to our biological brain and take lessons from the computer. In a field I would call “sensory addition”, we show that we can train humans to learn using only positive and negative feedback, without ever stating any rules. This eliminates the common need humans have to build a narrative, a story, around their choices. It lets us solve complex problems using the power of the brain, specifically our senses, to find patterns and signals in complex data – without having a rationale for what we actually learned.
Learning With Your Feelings
To explain how this is done we will use another very intriguing example: an experiment where participants were asked to wear a vest that was fitted with motors that created pressure on the participants’ upper body. The vest delivered a specific pressure pattern in every trial. At the end of the brief tactile experience, the participant was asked to choose a pattern on a tablet placed in front of them: left or right. Participants had no idea why they were being asked this question, nor which direction would be correct. But they tried anyway because that was the experiment.
The participants picked a direction and were told if it was correct or not – but were never told why. If they were right, they won a dollar. If they were wrong, they lost one. This went on for a while and, over time, participants got better and gave more correct answers. They then started to find order and meaning in the patterns – an order that predicted the correct choice. Just as with the sexers, our participants came up with their own rules that worked. At the beginning of the experiment there were growing pains but, over time, the participants’ bodies became attuned to what was and was not correct, even as their conscious minds struggled to explain why.
So what is the catch here? What determined whether a participant was correct? The truth is that the participants were not just getting random patterns through their vest. They were actually receiving live data from the S&P 500 index, translated into a feeling on their body. They were, in fact, sensing the market, and their choices were actually buying and selling stocks. In a short period, the participants were able to get to grips with the stock market. So much so that they performed much better with the vest than when they were given the data in the standard form – a screen with loads of stock tickers running quickly. In fact, some of them performed better than savvy brokers who sat in front of a Bloomberg screen and tried to analyze the same data in the standard format.
What this shows is that the sheer power of the brain to quickly identify patterns and make sense of them is at the core of learning. Instead of thinking about the market, our brains can learn to tap into varying forms of learning. Some are buried deep under the hood and are not fully accessible to us. Instead of “thinking about the market” we can instead begin “feeling the market”.
Once we realize this power of the brain – to find meaning in complex data through nuanced signals buried deep within – we can begin to do lots of fun things that benefit from this remarkable tool: letting you tell if a film is going to be good, for example (and outperform Hollywood executives in gambling on film successes based on common attributes); navigating the complex cockpit of a plane; or feeling your car so you know if it is running properly (something we are now implementing together with big car companies as a way to help race car drivers gain an advantage by ‘feeling’ the track, car, competition, weather, etc.). Essentially, if you can feel it, you can tell immediately if something is not right.
Practically, the sky’s the limit when it comes to using sensory learning to solve analytical problems. Our senses can take on far more complex tasks than we used to think, sometimes outperforming our deliberate cognitive abilities. This is good for both workers and businesses. Why? Because businesses increasingly use data analytics to solve complex problems, and this is why they frequently turn to machines. By transforming data analytics into a sensory affair, we bring those capabilities back into human hands.
Aligning Teachers and Students to Transform Education
But our renewed understanding of how learning happens, and the variety of tools that can aid it, are not limited to analytics. The novel tools that neuroscience now gives us have many additional applications that benefit learning and ‘meaning-creation’ across other fields. Our understanding of learning through neuroscience could completely transform education. In the past, teaching was frequently done in a ‘one-to-many broadcast’ format: one person dictating information to many other people in a classroom. New teaching methods have not only failed to change this; they have actually entrenched it through amplification. E-learning, podcasts, and other forms of online content delivery simply increase the number of listeners. They do not substantially change the way content is delivered or accessed.
This model is littered with problems. If, for example, I were to teach a class of 200 students, I would probably speak too fast for many, too slow for a few, and just right for some. Information would get lost within these disconnects or become harder to grasp for some people at different times during the lesson. The problem here is that I can’t see where these gaps occur; there is currently no real-time feedback model available for teachers in a classroom. The current model relies heavily on teachers reading their audience, but with neuroscience, they can do it so much more accurately.
One way this can be done is by analyzing the professor’s brain and figuring out whether the students’ brains are aligned with it. Comparing the professor’s brain with those of the students can help us understand if they match. By matching, I mean whether they speak and understand the same language, idioms, metaphors and all the other nuances of communication. Do they communicate at the same bandwidth and speed, for instance? If we match students to professors through brain alignment, we can then tailor classes to enhance learning. These bespoke lessons will be based not on competency or age but on different brain profiles, aligning classes with professors’ minds and their understanding of one another. If we understand how learning takes place in the brain, we can start aligning the communication paths and optimize learning: not only making sure the brain finds meaning in patterns by itself – as we did with the vest – but actually making sure that the signal being sent (the content) is optimal for the receiving brain.
And, of course, once we have brain data from teachers and students, we can do more. We can also solve the delayed-feedback problem students face by having their neural signals decoded instantly and fed back to the professor. A teacher would know immediately whether or not the lessons and ideas actually landed in the students’ brains. If everyone’s memory is underperforming when a message is communicated, the teacher will be aware immediately and can try a different way of explaining the idea – perhaps a different language or a different example. If an idea has already landed in everyone’s brains, the teacher can move on. This allows teachers to spend the optimal amount of time on each idea instead of dwelling on topics the students have already digested: making learning more efficient and allowing those in charge of education to identify the real problems within the system.
In our lab, we have also developed tools that allow us to see how engaging a topic is, not only whether or not it was understood. By using this tool, the professor can see which is the most engaging way to teach, and adapt their teaching behavior accordingly.
Ultimately, these are all variations of the idea that learning is effectively transferring content to the brain in a way that allows the recipient of the knowledge to find meaning in it, i.e., planting it, in ways that match their way of thinking. What we need to do is use the brain’s mighty powers of encoding content and channel the content optimally so that the brain can quickly find order in the content delivered.
What these technologies amount to is a measure we call cross-brain correlation: seeing in the classroom what is interesting and understandable to students by checking whether their brains are responding in the same way – or, more colloquially, ‘getting on the same wavelength’. To expand this to the realm of corporate learning, we can now look at different team members’ brains and match them accordingly, making sure that teams are aligned and can think the same way – or the opposite, if that is what is desired. Outside of learning, this can be, and is being, used for other things such as advertising, movies, and even the assessment of politicians’ speeches: whatever correlates across the most brains is what is then used. Thanks to neuroscience, we can find the best way to deliver ideas, match professors with students and gain real-time feedback, leading to better value in the content delivered. In short, as we begin to better understand how the brain learns, we can make learning more efficient, which leads to even more understanding.
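At its simplest, cross-brain correlation boils down to correlating two listeners' signal traces over time, for instance with a Pearson correlation. The sketch below uses invented numbers standing in for brain activity samples; real studies work with EEG or fMRI time series.

```python
import math

def pearson(x, y):
    # Pearson correlation: covariance of the two traces divided by the
    # product of the norms of their mean-centered deviations.
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    norm_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    norm_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (norm_x * norm_y)

# Two students "on the same wavelength" track the lecture similarly...
student_a = [0.1, 0.5, 0.9, 0.4, 0.2, 0.8]
student_b = [0.2, 0.6, 0.8, 0.5, 0.1, 0.9]
# ...while a distracted student's trace drifts on its own.
student_c = [0.9, 0.1, 0.2, 0.9, 0.8, 0.1]

aligned = pearson(student_a, student_b)      # close to +1
misaligned = pearson(student_a, student_c)   # negative
```

A high correlation across the class suggests the content is landing the same way in every brain; a trace that decouples from the rest flags a student the lesson has lost.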
The Capacity to Learn
There is currently no evidence that there is a limit to the brain’s capacity for learning, so in theory, there is no reason why a person should not be able to speak many more languages or remember many more ideas. The only limits we have found are a time limit (how many hours you are willing to give to learning new ideas versus using those you already have), an effort limit (how much energy you have to give to learning), and… a boredom limit (ultimately, sitting in a room with a vest that delivers pressure and making choices is quite boring…). But right now, capacity-wise, we are underusing our resources.
This means that maybe neuroscientists should start looking at ways to amplify learning at times when time and energy are in abundance and boredom is not an obstacle. One of the timeframes we are exploring now is… sleep. What can we learn while we are sleeping?
As it turns out, when you go to sleep your brain is, well, awake. It actually works hard. It does a lot of things in preparation for the coming days: removing unnecessary leftovers from previous days, shuffling ideas, rethinking them. A lot. One of the things the brain does at certain moments in the night is strengthen the connections that were already made before you went to sleep.
However, even when we are sleeping, there are certain time frames when information can flow in; when external input can alter thinking, make our brain strengthen one memory at the expense of another, and select topics to rehash or reverse. Recent studies on learning during sleep show that while we can hardly teach you new content while you are asleep, we can certainly make your brain better at retaining things you learned during the day, when you were awake. The idea is to harness all of these ‘down’ moments to improve your knowledge. You read about the French Revolution while you are awake, and while you sleep your brain does the heavy lifting of solidifying the connections that will make you know it tomorrow. Early studies in our lab and others in progress have shown remarkable results. Maybe soon enough you will be able to go to sleep and wake up knowing Kung-Fu!
In short, we are now learning more about learning than ever before. From the tiny C. elegans learning to find a sugar drop in a room full of distractions, to our complex brains learning complex ideas through senses, engaging content, sophisticated teachers, or repeated effort, what is clear is that technology and neuroscience can help learners tap a seemingly limitless capacity, help businesses improve their performance and analytics, and help all of us unlock our brains’ potential for finding meaning in information.