Your deepest personal life lies between your ears: your experiences, desires and memories, your politics, your emotional problems and mental ailments.
It’s for you to keep secret or to share. But could scientists soon open up your mind for anyone to read?
Researchers armed with super-high-tech brain scanners and artificial intelligence programs claim they are forging new keys that may unlock our inner worlds.
They are developing technology to read our minds with such accuracy that it may reveal exactly what we’re looking at or imagining, discover our voting and buying intentions, and even explain why we may be wired for illnesses such as depression and schizophrenia.
In fact, this technology is already being used to understand how our brains work in order to diagnose and treat a range of conditions, for instance enabling surgeons to plan how to operate on brain tumours while sparing as much healthy tissue as possible.
It’s also enabled neurologists and psychologists to map how cognitive functions such as vision, language and memory operate across different brain regions.
Scientists use it to track how our brains produce experiences such as pain, and are developing the technology to fathom problems such as addiction — and to test drugs for treating these and illnesses such as depression.
But now some experts are asking whether the results produced by this technology are robust enough to be relied on, with implications for how patients with mental health problems, for instance, are being treated.
The technology at the heart of it all is MRI (magnetic resonance imaging), a scanning system first developed in the 1970s.
MRI uses a strong magnetic field and radio waves to excite tiny particles called protons inside our cells.
These protons respond differently according to the chemical nature of the tissue they sit in, and those differences enable physicians to discriminate between various types of tissue.
One of the most common types of MRI application is called fMRI (functional magnetic resonance imaging), which is used to watch how our brains are operating.
It relies on the fact that when regions of the brain become active, they demand energy in the form of oxygen-rich blood.
fMRI can detect the difference between oxygen-rich and oxygen-poor blood, so the scanners can see where in the brain our neurons (brain nerve cells) work hardest, and draw the most oxygen, while we have thoughts or emotions.
MRI technology is becoming ever more accurate. Late last year scientists working on the Iseult project, a Franco-German MRI machine-building initiative based in Paris, switched on the world’s strongest scanner.
At its heart is an extraordinarily powerful 132-ton magnet rated at 11.7 Tesla. Standard NHS hospital MRIs used for diagnostic scanning are typically 1.5 to 3 Tesla.
This titanic power enables the Iseult scanner to picture things in our brain as small as 100 microns, the size of our larger individual brain cells.
The MRI can also picture the connections between these brain cells, which are typically some 700 microns long.
Such clarity can enable scientists to see which brain cells are firing, and how they interact within vast networks.
But what do such interactions mean? To find out, investigators are using another cutting-edge technology, artificial intelligence (AI), whose algorithms (sets of mathematical instructions in a computer program) can interpret this electrical brain-cell activity.
In January, researchers at Radboud University in the Netherlands published startling results in the Nature group journal Scientific Reports from an experiment in which they showed pictures of faces to two volunteers inside a powerful brain-reading fMRI scanner.
As the volunteers looked at the images, the fMRI scanned the activity of neurons in the areas of their brain responsible for vision.
The researchers then fed this information into a computer’s artificial intelligence (AI) algorithm.
The recreated portraits were astonishingly similar to the original faces, so much so as to appear uncanny.
So how did the scientists do this? To ‘train’ the AI system, the volunteers had previously been shown a series of other faces while their brains were being scanned. The key is that the pictures they saw were not photographs of real people but essentially paint-by-numbers images created by a computer: each tiny dot of light or darkness was given a unique computer-program code.
What the fMRI scan did was detect how the volunteers’ neurons responded to these ‘training’ images.
The artificial intelligence system then translated each volunteer’s neuron reaction back into computer code to recreate the photographic portrait.
In the test, neither the volunteers nor the AI system had ever seen the faces that were decoded and recreated so accurately.
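For readers curious about what that ‘training’ amounts to, the decoding step can be illustrated in a few lines of code. This is a deliberately simplified Python sketch, not the Radboud team’s actual system: the data are simulated, the decoder is plain ridge regression, and every variable name and number here is an assumption made for illustration. The idea is the same, though: learn a mapping from brain responses to image codes on known pictures, then apply it to the response evoked by a never-seen picture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, n_voxels, n_code = 200, 50, 16

# Each 'image' is a vector of code values (the paint-by-numbers numbers).
codes = rng.normal(size=(n_train, n_code))

# Assume voxel activity is a noisy linear function of the image code.
mixing = rng.normal(size=(n_code, n_voxels))
voxels = codes @ mixing + 0.1 * rng.normal(size=(n_train, n_voxels))

# 'Training': learn a decoder mapping voxel patterns back to image codes,
# using closed-form ridge regression.
lam = 1.0
decoder = np.linalg.solve(voxels.T @ voxels + lam * np.eye(n_voxels),
                          voxels.T @ codes)

# 'Test': decode an image the decoder has never seen, from its simulated
# brain response alone.
new_code = rng.normal(size=(1, n_code))
new_voxels = new_code @ mixing + 0.1 * rng.normal(size=(1, n_voxels))
reconstructed = new_voxels @ decoder

corr = np.corrcoef(new_code.ravel(), reconstructed.ravel())[0, 1]
print(f"correlation between true and decoded code: {corr:.2f}")
```

In this toy setting the decoded code correlates almost perfectly with the true one; the real experiment faces far noisier responses and a far richer image code, which is what makes the published reconstructions so striking.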
Thirza Dado, an AI researcher and a cognitive neuroscientist, who led the study, told Good Health that these highly impressive results demonstrate the potential for fMRI/AI systems to read minds effectively in future.
‘I believe we can train the algorithm not only to picture accurately a face you’re looking at, but also any face you imagine vividly, such as your mother’s,’ she says.
‘By developing this technology, it would be fascinating to decode and recreate subjective experiences, perhaps even your dreams.
‘Such technological knowledge could also be incorporated into clinical applications such as communicating with patients who are locked within deep comas.’
Her work is focused on using the technology to help restore vision in people who, through disease or accident, have become blind.
‘We are already developing brain-implant cameras that will stimulate people’s brains so they can see again,’ she says.
In an as-yet unpublished study, macaque monkeys were fitted with camera-vision implants and then underwent fMRI scans while they looked at facial photographs.
Thirza Dado’s AI system was able to translate these images back just as accurately as with her human tests, suggesting the camera implants work effectively.
As AI brain-decoding systems become more sophisticated, Thirza Dado believes they could enable police forces to scan witnesses’ brains for memory-pictures of people involved in crimes.
‘In future, we may also be able to look at the ability to picture people’s ideas,’ she says.
Such mind-reading technologies present serious ethical questions about privacy.
Indeed, earlier this year another study showed how computers may, in future, even eavesdrop on what is perhaps the most personal and profound moment of our lives: the thoughts that may flash through the mind around the moment of death (see box).
But already, U.S. scientists say they can tell people’s political ideology with an accuracy rate of some 80 per cent using fMRI.
In a study published in May involving 174 adults, researchers at Ohio State University were able to predict accurately whether each participant was politically conservative or liberal.
‘Can we understand political behaviour by looking solely at the brain? The answer is a fairly resounding “yes”,’ said study co-author Skyler Cranmer, a professor of political science at Ohio State.
‘The results suggest that the biological and neurological roots of political behaviour run much deeper than we’d thought.’
The study, published in the journal PNAS Nexus, examined how different regions of individuals’ brains communicated with each other, either when looking at pictures or simply doing nothing.
REVEALED: WHAT WE THINK ABOUT IN OUR DYING MOMENTS
Advances in technology and the ability ‘to read’ minds are already testing ethical boundaries.
In February, doctors reported how they had unintentionally recorded the brain activity of an 87‑year-old patient at the point of his death: they were performing an electroencephalogram (EEG) on his brain to study his epileptic seizures when he had a sudden heart attack and died.
In an EEG, sensors are attached to the scalp to pick up electrical signals produced by brain nerve cells as they communicate. This can reveal what activity is occurring in the brain.
Writing in the journal Frontiers in Aging Neuroscience, the doctors explained that because the EEG machine was kept running, they’d recorded the man’s brain activity at the end of his life — and found that we may experience a flood of memories when we die.
For some 30 seconds before and after his heart stopped, the scans showed increased activity in the brain areas associated with memory recall, meditation and dreaming.
Dr Ajmal Zemmar, a neurosurgeon at the University of Louisville in Kentucky, who published the report, speculates: ‘Through generating brainwaves involved in memory retrieval, the brain may be playing a recall of important life events just before we die.’
He adds: ‘Something we may learn from this research is: although our loved ones have their eyes closed and are ready to leave us to rest, their brains may be replaying some of the nicest moments they experienced in their lives.’
A supercomputer’s AI system monitored this brain activity and compared it with the volunteers’ self-reported political ideology on a six-point scale from ‘very liberal’ to ‘very conservative’.
It then identified patterns of brain networking to predict political leanings.
Three areas — the amygdala, inferior frontal gyrus and hippocampus — were most strongly associated with political affiliation.
The amygdala is believed to be key in detecting and responding to threats, while the inferior frontal gyrus is key to our understanding and processing language; the hippocampus is central to learning and memory.
While this study did find a link between brain signatures and political ideologies, it can’t establish the direction of causation: is the brain pattern a result of the ideology a person has adopted, or did the pattern cause the ideology?
Whatever the case, it’s chilling to think such technology could be developed by authoritarian regimes to detect people’s inner beliefs and punish them for opinions they’ve never voiced.
Yet some commentators argue that the technology’s mind-reading abilities are being seriously overclaimed.
The starkest example is its use as a lie detector, pioneered in 2001 by Daniel Langleben, a professor of psychiatry at Stanford University, California.
He theorised that the brain has to work harder to tell lies, as it has to construct a story and suppress the truth.
His fMRI studies showed increased activity during deception in areas such as the anterior cingulate cortex, thought to be in charge of monitoring errors, and the dorsal lateral prefrontal cortex, linked to behaviour control.
But its efficacy is yet to be convincingly proven. Moreover, studies, such as one by Plymouth University in 2019, show that MRI lie detectors can be beaten with simple mental evasion techniques: making up new memories about a lie, or focusing mentally on a superficial aspect of the story, can alter brain activity patterns enough to render the detector tests inaccurate.
But the doubts go much further. Researchers have questioned whether MRI scanning can give reliable results about individuals’ mental states.
This throws into question the view that psychologists can accurately infer patients’ mental conditions from fMRI scans: whether, for example, their mood is happy or depressed, or whether a medication for low mood is working, as judged by changed activity in a specific area of the brain.
Two years ago, psychologists at Duke University in North Carolina reviewed 56 studies involving repeated fMRI scans of 90 people’s brains and found that the results differed vastly from test to test, even when the tests were repeated within a few days or weeks.
This means that the fMRI brain scan results of a person completing a memory task or watching a film, for example, could easily be entirely different when tested under the same circumstances a week later, even though they’re feeling and thinking the same.
This lack of consistency means that the scans can’t give reliable data on people’s mental functioning or health, reported the journal Psychological Science.
The study’s lead author, Ahmad Hariri, a professor of psychology and neurology, says: ‘If a measure gives a different value every time it is administered, it can hardly be used to make predictions about a person. Better measures are needed to achieve clinically useful results.’
Such concerns were reinforced in June by a major report in the journal Nature. Its researchers argued that fMRI brain scanning studies produce such highly complex and variable results that even large projects involving hundreds or thousands of patients are still too small to reliably detect most links between how people’s brains function and how they behave.
Scott Marek, an assistant professor of psychiatry at Washington University, discovered the problem when he scanned the brains of 2,000 children to try to establish links between their brain activity and their IQ.
To double-check that his results were consistent, Scott Marek split them into two equal sets and analysed them in the same way.
If the results were consistent, the two sets would produce broadly the same data. But they did not. They were very different.
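Marek’s double-check is what statisticians call a split-half analysis, and the instability he found is easy to reproduce with simulated numbers. This Python sketch is purely illustrative (no real brain data; the sample sizes and the tiny true effect size are assumptions): it computes the same brain–behaviour correlation in two halves of a large sample, then in small subsamples of the kind many fMRI studies actually use.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 2000
true_r = 0.06  # assume a genuinely tiny brain-behaviour effect

# Simulate one brain measure weakly correlated with a behaviour score.
brain = rng.normal(size=n)
behaviour = true_r * brain + np.sqrt(1 - true_r**2) * rng.normal(size=n)

def corr(idx):
    """Pearson correlation within a subsample of participants."""
    return np.corrcoef(brain[idx], behaviour[idx])[0, 1]

# Split-half check: do two halves of the sample tell the same story?
perm = rng.permutation(n)
half_a, half_b = perm[:n // 2], perm[n // 2:]
print(f"half A (n=1000): r = {corr(half_a):+.3f}")
print(f"half B (n=1000): r = {corr(half_b):+.3f}")

# The same check with small subsamples, closer to a typical fMRI study,
# gives far less stable answers from draw to draw.
for k in range(3):
    small = rng.choice(n, size=25, replace=False)
    print(f"subsample {k} (n=25): r = {corr(small):+.3f}")
```

With a tiny true effect, even two halves of 1,000 people can disagree noticeably, and 25-person subsamples bounce around wildly, which is the heart of the argument that most fMRI brain–behaviour studies have been too small.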
‘I was shocked,’ he says.
Even with large studies such as his, the individual brain scans showed such variable results that broad-scale conclusions about relationships between brain activity and behaviour or intelligence could not be made reliably.
Dr Joanna Moncrieff, a psychiatrist and professor of critical and social psychiatry at University College London, agrees.
‘While fMRI scans can show something dynamic is going on in a brain, they don’t provide any evidence for establishing what causes this dynamic activity to happen.
‘The scan shows a brain that is active at that moment, without explaining why.
‘Similarly, no one has ever clinically demonstrated with fMRI scans any identifiable biological mechanism in the brain that consistently underlies depression or any other mental disorder,’ she adds.
‘So to claim, as drug researchers do, that they can show in fMRI scans that drugs such as antidepressants, or psychedelics, can rectify mental disorders such as depression makes no sense.’
Karl Friston, a professor of neuroscience at University College London and a global authority on brain imaging, says Scott Marek’s results reveal the complexity in getting reliable results from MRI studies.
Even those that involve thousands of patients can produce apparently convincing but bad results, he told Good Health.
This is due to something Professor Friston calls ‘the fallacy of classical inference’: if there’s lots of data swirling around, it is easier to ‘see’ patterns even though they are coincidental.
It’s rather like seeing faces in randomly patterned carpets.
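Friston’s point can be demonstrated in a few lines: correlate one purely random ‘behaviour’ score against enough purely random ‘brain measures’ and some will look like striking findings by chance alone. This Python sketch is a toy illustration (all noise, no real data; the counts and the 0.35 threshold are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

n_subjects, n_measures = 30, 1000

# Pure noise: a fake behaviour score and many fake 'brain measures',
# with no true relationship anywhere.
behaviour = rng.normal(size=n_subjects)
brain = rng.normal(size=(n_subjects, n_measures))

# Correlate the behaviour score with every measure at once
# (z-score each variable, then average the products).
bz = (behaviour - behaviour.mean()) / behaviour.std()
mz = (brain - brain.mean(axis=0)) / brain.std(axis=0)
rs = (mz * bz[:, None]).mean(axis=0)

# Even with no real effect, the best-looking 'finding' is sizeable.
print(f"strongest spurious correlation: {np.abs(rs).max():.2f}")
print(f"measures with |r| > 0.35: {int((np.abs(rs) > 0.35).sum())}")
```

With only 30 ‘subjects’ and 1,000 measures, dozens of correlations clear a threshold that would look publishable, exactly the faces-in-the-carpet effect Friston describes.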
Where MRI-scanning science can prove useful, he says, is in deep-dive studies of individuals’ brains to establish how they pass messages around.
‘If we can understand the connectivity between different brain regions and how that might go awry in disorders such as schizophrenia, autism, depression or Parkinson’s, we may be able to understand the failures of this message-passing and find drug therapies to address the problems,’ says Professor Friston.
He adds that while Thirza Dado’s research is definitely reputable science, a fundamental problem is that our brains don’t see things such as faces in photographic style but rather like a Picasso painting: our brains note the side of a nose, an eye, a fringe, and put them together.
So MRI scans will never be able to pull a real-life picture of a face out of someone’s brain.
But he agrees this approach may be a vast help in creating artificial vision: ‘It’s similar to hearing aids,’ he says.
‘If you can identify the visual information that matters, you can emphasise it. This could help people with, for example, partial blindness, by finding out what frequencies produce the best representations and enhancing them.’
So reading minds may still be a long way off. But it appears that the super-tech world of MRI and artificial intelligence could one day restore sight to the blind.