We know what you're thinking. We read your brain


29 Jan 2019; DW: Researchers at Columbia University say they've translated brain signals directly into speech. This could help people recovering after a stroke. Ostensibly.

It was only a matter of time before scientists claimed they could read our minds. And it was certain to come from the US, especially from the Zuckerman Institute at New York's Columbia University, which is renowned for its research into neuroscience and the mechanics of the human brain.

Scan their most recent headlines and you get the gist:

"A new party trick: A hearing aid that reads minds" (…Studying how your brain picks individual voices from a crowd […] to build a better hearing aid that reads your mind.)

Or:

"Election Day: The brain science of making decision" (…Take a moment to ask yourself: How did my brain make this decision?)

And:

"Unlocking the memories inside our minds" (A fresh approach to studying how [human] brains remember.)

So it should come as no surprise to read today:

"Columbia engineers translate brain signals directly into speech."

The facts as we know them

A group of neuro-engineers at the Zuckerman Institute has been working on a technique called "auditory stimulus reconstruction" for some time. In the jargon of their paper, "Towards reconstructing intelligible speech from the human auditory cortex," which has just been published in Scientific Reports, "reconstructing speech from the human auditory cortex creates the possibility of a speech neuroprosthetic to establish a direct communication with the brain and has been shown to be possible in both overt and covert conditions."

So basically they have developed a technology that can translate "brain activity" — signals sent from one part of the brain to another — into clear, intelligible speech.
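For the technically curious, the idea boils down to a mapping problem: learn a function from patterns of neural activity to the sound that accompanied them, then turn the predicted sound representation back into audio. Below is a deliberately toy sketch in Python, with a plain linear model standing in for the deep networks the researchers describe, and with random arrays and invented shapes in place of real recordings; none of this is the team's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
import librosa

rng = np.random.default_rng(0)

# Stand-in data: 2,000 time frames of 128-channel neural features
# (one activity value per electrode) paired with the 65-bin magnitude
# spectrogram of the speech heard at those same moments. Both are
# random noise here, purely so the sketch runs end to end.
neural = rng.standard_normal((2000, 128))
spectrogram = np.abs(rng.standard_normal((2000, 65)))

# "Stimulus reconstruction": fit a model mapping brain activity to
# sound on the first 1,500 frames...
model = Ridge(alpha=1.0).fit(neural[:1500], spectrogram[:1500])

# ...then predict the spectrogram for brain activity it has never seen.
predicted = np.clip(model.predict(neural[1500:]), 0, None)

# Invert the magnitude spectrogram to a waveform. The paper used a
# vocoder; Griffin-Lim is a simpler, common stand-in.
audio = librosa.griffinlim(predicted.T, n_iter=32)
```

Swap the ridge regression for deep networks and the Griffin-Lim step for a vocoder, and you have, in spirit, the setup behind the counting demo below.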

There's a nifty little example of it counting up to 10. The audio quality is not great. But you can recognize the numbers, and all of this was extracted from a person while they were, as it were, thinking to themselves.

The technology uses a speech synthesizer, rather like the one through which the late astrophysicist Stephen Hawking communicated, and an artificial intelligence. What doesn't use an AI these days, even if the acronym "AI" gets planted in an application just to make a project seem cutting edge? But whatever, right?

Actually, there's no direct mention of artificial intelligence in the paper, and its sibling field, machine learning, crops up only in the references. But the reconstruction itself is performed by deep neural networks, and given that scientists routinely rely on AI to automate the analysis and processing of data, there is, in effect, an AI in there.

Ostensibly, this technology could help people who, for instance, have suffered a stroke and are having trouble communicating through speech. Or people who, like Stephen Hawking, have a motor neuron disease such as amyotrophic lateral sclerosis (ALS).

The future of now

The thing that stands out, however, is how the scientists say the technology can be used "in both overt and covert conditions."

No need to fear just yet. The scientists at Columbia used a technique called electrocorticography to read the brain signals in their research — and that would be pretty hard to do without a human subject knowing about it.

Electrocorticography requires exposing part of the brain and attaching electrodes directly to that surface in order to record its electrical activity, or brain signals.
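What those electrodes yield is a stack of voltage traces, one per channel, from which researchers typically extract the so-called high-gamma band (roughly 70 to 150 Hz) as a proxy for local brain activity. A minimal illustration in Python, with an invented channel count and sampling rate and random noise standing in for real recordings:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                   # assumed sampling rate, Hz
rng = np.random.default_rng(1)
ecog = rng.standard_normal((64, 10 * fs))   # 64 channels, 10 s of noise

# Band-pass each channel to the high-gamma range (~70-150 Hz), the
# band this literature commonly treats as local cortical activity.
b, a = butter(4, [70, 150], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, ecog, axis=1)

# The envelope of the analytic signal tracks that band's power over
# time: one "brain activity" trace per electrode.
high_gamma = np.abs(hilbert(filtered, axis=1))
```

Features like these, time-aligned with the audio a subject hears, are what a reconstruction model such as the earlier sketch would be trained on.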

However…

It's not a giant leap from there to imagine a time when this very technique will be possible without exposing your brain in all its rawness.

And who knows today what the technology could or would be used for then, and by whom, under which regulation?

Consider the research that suggests high blood pressure can be used as an indicator of dementia and other cognitive impairment. Consider all the sensors we carry around with us in digital devices, like phones, tablets, watches, fitness trackers and even wireless headphones.

Finally, but far from exclusively, consider how scientists have used ultrasound to monitor blood flow in the neck vessels of more than 3,000 people to track cognitive decline. If they can do that for cognitive decline, they will be able to do it for cognitive ability and, indeed, any brain activity.

Certainly, in the long run.

It's only a matter of time before our watches and in-ear headphones are reading our pulses for signs of electrical brain activity and spitting out reams of fully formed, "recognizable speech."

When that happens, whether it's covert or overt, don't say we didn't flag it as a possibility here.

There will, at least, be one upside to it all: fewer people walking down the street immersed in elaborate and loud phone conversations, replete with all the physical and facial gestures, while their interlocutors are in another location.

No. Instead the world will fall silent.

You'll simply think your conversations, arguments, slanging matches, party tales, and horizontally held voice messages.

The rest of us will be able to hear ourselves think again — and who knows who else will, too.

It's only a matter of time.