I see what you’re thinking – Scientists’ Pursuit of Mind-Reading

BY Alicja Krawczun-Rygmaczewska

The concept of mind-reading has tantalised scientists for decades, even centuries. It was pursued by Hans Berger in the early 1900s, work which led to the development of electroencephalography (EEG), and it is no less intriguing to researchers today. Many contemporary studies have explored the possibility of decoding other people's thoughts with Blood Oxygen Level Dependent (BOLD) functional magnetic resonance imaging (fMRI), some of which obtained quite promising results. This piece explores whether fMRI is an appropriate technique with which to attempt mind-reading, and how the BOLD signal can serve as a proxy for a subject's thoughts and intentions.

In his work on vegetative patients, Adrian Owen demonstrated that the BOLD signal may in fact be used to detect a patient's brain activity and infer their intentions. Unsurprisingly, his work stirred plenty of controversy and raised just as many questions. Dr Parashkev Nachev from Imperial College London was one of the critics, arguing that Owen's results rest largely on the assumption that activity in the brain implies consciousness, which is not always the case. He further pointed out the general limitations of fMRI as a technology. Indeed, BOLD fMRI is not as reliable a technique as we would wish it to be. It doesn't measure neuronal activity directly, but instead reports changes in oxygenated blood flow, which is at best a proxy for neurons firing. It also relies on statistical techniques that, if misused, can lead to incorrect results.

Despite those obvious limitations of BOLD, some groups went a few steps further than Owen and his team. There has been a series of interesting studies at the Gallant Lab at UC Berkeley. This group has essentially been working on reading thoughts: they aimed to use fMRI to predict people's decisions during a game of Counter-Strike. The researchers wanted to predict participants' intentions about moving their weapons in the game. While the hypothesis was promising, the results weren't satisfying: fMRI proved too slow to detect dynamic changes in the participants' brains, especially during such an intense, fast-moving game. However, Gallant's group pressed on. They drew conclusions, adopted new methods, and in 2011 published a study showing BOLD fMRI-based reconstructions of images from movies viewed by participants. To compensate for the slowness of fMRI, they presented a new encoding model that maps visual information onto the haemodynamic response, which arises much more slowly. In other words, with this model they were able to predict the BOLD signal in the brain areas they were interested in (primary visual cortex). After feeding this into a machine learning pipeline, they were able to reconstruct the movies shown to the subjects with surprising fidelity, just from their brain activity.
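
To make the encoding-then-decoding logic concrete, here is a minimal sketch in Python. It is not the Gallant lab's actual pipeline: in the real study the stimulus features came from motion-energy filters and the reconstruction drew on a large library of natural movie clips, while everything below (the feature counts, ridge regression, and correlation matching) is a simplified stand-in.

```python
# Minimal sketch of the encoding-then-decoding idea: learn how stimulus
# features predict BOLD, then identify a new stimulus by comparing the
# observed BOLD pattern to the model's predictions. All data are simulated.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical data: stimulus features (stand-ins for visual-filter outputs)
# and the BOLD responses they evoke in a handful of visual-cortex voxels.
n_train, n_test, n_features, n_voxels = 200, 20, 50, 30
X_train = rng.standard_normal((n_train, n_features))      # training stimuli
true_weights = rng.standard_normal((n_features, n_voxels))
Y_train = X_train @ true_weights + 0.5 * rng.standard_normal((n_train, n_voxels))

# 1) Encoding: fit, per voxel, a map from stimulus features to BOLD.
encoder = Ridge(alpha=1.0).fit(X_train, Y_train)

# 2) Decoding: given an observed BOLD pattern, pick the candidate stimulus
#    whose *predicted* BOLD pattern matches it best.
candidates = rng.standard_normal((n_test, n_features))    # candidate stimuli
observed_bold = candidates[7] @ true_weights              # brain "saw" no. 7
predicted_bold = encoder.predict(candidates)              # one prediction each

correlations = [np.corrcoef(p, observed_bold)[0, 1] for p in predicted_bold]
print("decoded stimulus index:", int(np.argmax(correlations)))  # ideally 7
```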

Those discoveries sparked yet another series of doubts. Deciphering visual perception might be possible, but how can we talk about deciphering someone's thoughts when they are entirely subjective? Quoting Oliver Sacks, “Consciousness […] is charged with feelings and meaning uniquely of our own, informing our choices and interfusing our perceptions.” How can a machine even begin to analyse an experience as singular as human consciousness? Aren't our thoughts shaped by an entire life of past events? A group based in Kyoto dared to tackle these questions by attempting to decode visual imagery during sleep in healthy participants. Horikawa and his team also used a combination of fMRI and machine learning. While they could not reconstruct the actual visual content of dreams, they managed to predict the categories of objects their subjects dreamt of. However, these predictions were accurate in only 60% of cases, and only three subjects took part in the study. It was one of the first attempts to test such a hypothesis, and while it didn't achieve the expected results, it exposed more complicated questions and paved the way for future experiments in this field. For example, as Jack Gallant put it in an interview for Wired, “There's the classic question of when you dream are you actively generating these movies in your head, or is it that when you wake up you're essentially confabulating it”.
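
A toy version of this category decoding might look like the sketch below: train a classifier on fMRI patterns recorded while subjects view objects, then apply it to patterns recorded just before waking. The two categories and all the voxel patterns here are invented for illustration; the actual study worked with a large lexical hierarchy of object categories checked against the subjects' verbal dream reports.

```python
# Toy category decoder: fit on simulated "awake viewing" voxel patterns,
# then classify a noisy "pre-awakening" pattern. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_voxels, n_per_class = 100, 40

# Prototype voxel patterns for two object categories seen while awake.
pattern_a = rng.standard_normal(n_voxels)
pattern_b = rng.standard_normal(n_voxels)
X = np.vstack([pattern_a + rng.standard_normal((n_per_class, n_voxels)),
               pattern_b + rng.standard_normal((n_per_class, n_voxels))])
y = np.array([0] * n_per_class + [1] * n_per_class)   # 0 = "car", 1 = "face"

clf = LogisticRegression(max_iter=1000).fit(X, y)

# A much noisier scan taken just before waking, resembling category A.
dream_scan = pattern_a + 2.0 * rng.standard_normal(n_voxels)
print("dreamt category:", ["car", "face"][int(clf.predict([dream_scan])[0])])
```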

Despite all the obvious limitations of the above studies – or perhaps precisely because of them – researchers continue to tackle the challenge at hand, often in collaboration with each other. The trickiest part has proved to be creating a universal model for decoding any individual going through an fMRI scanner. In order to translate brain activity into images, both the Gallant lab and Horikawa's team first had to feed the machine with information specific to each participant of their studies: they used brain activity evoked in the subjects by simple picture presentations as the training base for the machine learning analysis. Hence, there was never one universal decoding model. Every single participant had their own personalised decoding model, built from their own brain. This success at the individual level shows promise for further work towards the development of a universal model.

Currently, several groups are working on overcoming the model issue, including the Haxby Lab, where researchers are trying to build as generalised a model as possible. Other studies are exploring methods that could improve on BOLD fMRI's sensitivity. One of those is superconducting quantum interference device amplified fMRI (SQUID-fMRI). A SQUID can detect magnetic fields in the microtesla range (far below the several-tesla fields used by conventional scanners), theoretically allowing us to bypass the indirect BOLD signal and measure neuronal activity more directly. Though this technique still has a long journey ahead, it is an extremely promising avenue for the future of fMRI-based mind-reading.
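
One concrete idea from that line of work is functional alignment: mapping different subjects' response spaces onto a shared one, so that a decoder trained on one brain can be reused on another. The sketch below is a toy version of this using an orthogonal Procrustes fit on two simulated subjects; it is loosely inspired by, and far simpler than, the Haxby lab's published hyperalignment method.

```python
# Toy cross-subject functional alignment: two simulated subjects "watch"
# the same movie, and we learn a rotation taking subject B's voxel space
# into subject A's. All data and mappings are simulated.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(2)
n_timepoints, n_voxels = 300, 50

# A shared functional response to the movie, expressed through a different
# (orthogonal, for simplicity) voxel mapping in each subject's brain.
shared = rng.standard_normal((n_timepoints, n_voxels))
map_a, _ = np.linalg.qr(rng.standard_normal((n_voxels, n_voxels)))
map_b, _ = np.linalg.qr(rng.standard_normal((n_voxels, n_voxels)))
subj_a = shared @ map_a + 0.1 * rng.standard_normal((n_timepoints, n_voxels))
subj_b = shared @ map_b + 0.1 * rng.standard_normal((n_timepoints, n_voxels))

# Fit the rotation on the shared stimulus, then carry B into A's space;
# a decoder trained on subj_a could then be applied to aligned_b.
R, _ = orthogonal_procrustes(subj_b, subj_a)
aligned_b = subj_b @ R

rel_err = np.linalg.norm(aligned_b - subj_a) / np.linalg.norm(subj_a)
print(f"relative alignment error: {rel_err:.2f}")  # small if alignment works
```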

Some researchers have decided to go beyond BOLD fMRI altogether. The BrainNet project managed to create a brain-to-brain interface (BBI) by utilising electroencephalography (EEG) as a decoding measure and transcranial magnetic stimulation (TMS) to deliver information between participants. In a recently published study, the researchers describe how sets of three participants played a game of Tetris together in a BBI setting. Two of the participants (Senders) could see the entire screen but were not able to make any decisions in the game. The third participant (Receiver) could move the blocks but had no information about the bottom part of the screen. The BBI picked up information about the bottom of the screen from the Senders using EEG and conveyed it to the Receiver with TMS. These triads were not only able to perform the task accurately; when artificial noise was introduced into the feed, the Receiver was able to filter it out and still perform well. If it were not for all the cables attached to the participants' heads, one might be tempted to call this a “telepathy project”. This type of information could not be correctly interpreted by a computer alone: it is the passing from brain to brain that makes the signal useful, almost organic. Decoding these signals on a computer screen, at least with currently available technology, could only be done in the most superficial way.
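
On the Sender side, a BBI like this can work by having each Sender attend to one of two lights flickering at different rates, which drives a matching oscillation in their EEG that a simple frequency analysis can pick out. The following is only an illustrative toy decoder on simulated data; the frequencies, sampling rate, and thresholding are my assumptions, not the BrainNet team's actual values.

```python
# Toy Sender-side decoder for a BrainNet-style BBI: the Sender's binary
# choice is read out as spectral power at one of two flicker frequencies.
import numpy as np

FS = 250              # sampling rate in Hz (assumed)
F_YES, F_NO = 17, 15  # flicker frequencies for "rotate" / "don't rotate"

def decode_choice(eeg: np.ndarray) -> str:
    """Compare EEG spectral power at the two flicker frequencies."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1 / FS)
    p_yes = spectrum[np.argmin(np.abs(freqs - F_YES))]
    p_no = spectrum[np.argmin(np.abs(freqs - F_NO))]
    return "rotate" if p_yes > p_no else "don't rotate"

# Simulate 2 s of EEG from a Sender attending the 17 Hz light, plus noise.
t = np.arange(0, 2, 1 / FS)
eeg = np.sin(2 * np.pi * F_YES * t) \
    + 0.8 * np.random.default_rng(3).standard_normal(len(t))
print(decode_choice(eeg))  # expected: "rotate"

# On the Receiver side, the decoded bit would gate a TMS pulse that either
# does or does not evoke a phosphene, conveying the decision brain-to-brain.
```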

Another possible way of addressing fMRI's limited temporal resolution could be to combine EEG and fMRI. Such a project could marry EEG's high temporal resolution with fMRI's spatial resolution, giving us the ultimate non-invasive neuroimaging method. As always, there are technical difficulties, such as the degradation of signal quality in both modalities when they are recorded simultaneously. If those were overcome, it could revolutionise the entire field of brain imaging.

It seems obvious that when it comes to mind-reading we are still merely peeking through a keyhole. Current studies using BOLD fMRI seem to be missing a component that would allow a better analysis of the acquired data. A person's thoughts are incredibly complex events, influenced by a lifetime of experiences and constantly evolving. All existing technology is simply too primitive to translate them, merely scratching the surface of what we might one day decode. Perhaps at this moment we have reached the limits of BOLD-based fMRI. Nonetheless, if Moore's law is any guide, we should not have to wait long for the technology to improve. While fMRI alone might be an insufficient measure for decoding something as complicated as thoughts and feelings, augmenting it with other methods such as machine learning, SQUID, or EEG could give us more answers and raise new questions to answer. Every study on this topic puts us one step closer not only to mind-reading but also to understanding the sophisticated machinery that is the human brain. Will we ever be able to effortlessly read people's minds? Considering all the existing evidence, quite possibly. There is no other way to find out than to do it ourselves. After all, in the words of Alan Kay, the best way to predict the future is to invent it.
