Can AI principles make neurotechnology more ethical?

By ITU News

Last year, neurotechnology company Neuralink released images of a macaque playing a video game using only his mind. Scientists had inserted a coin-sized disc into the animal’s brain that converted the signals emitted by his nervous system into movements on the screen.

Pager, the macaque, was rewarded for winning moves with sips of a banana milkshake through a straw.

The experiment was the latest to demonstrate the vast potential of neurotechnology – the field that explores how to collect, interpret, and modify information from our brains. But it stirred up familiar ethical concerns about the use of artificial intelligence (AI) tools.

What would happen if Neuralink’s technology were connected to a human brain? Under what conditions should this happen, and with what precautions in place?

“The context of creation and use significantly impacts how we approach and think about neurotechnologies and neurodata,” says Sara Berger, a researcher at IBM’s Thomas J. Watson Research Center.

In a recent AI for Good keynote, Berger and her colleague Francesca Rossi, IBM’s AI Ethics Global Leader, unpacked the many ethical challenges surrounding these technologies, including social effects, potential misuse, transparency, accountability, and impact on human agency.

Common ethical concerns

Neurotechnology is not, strictly speaking, new. Scientists have been exploring how to implant and use electronic devices in our brains for more than 50 years. But recent advances, not to mention growing overlaps with AI, have made discussions of its pros and cons increasingly urgent.

On the plus side, stimulating specific areas of the brain through electrodes has proven valuable in treating Parkinson’s disease and epilepsy. Other neurotechnological approaches have helped to alleviate symptoms of Alzheimer’s disease.

The use of any invasive device that necessitates surgery tends to be regulated heavily by medical authorities. But this is not the case for other neurotechnology applications.

Berger mentioned the smart helmets marketed by private companies to monitor driver fatigue and warn of accident risks.

“The data collected is not necessarily representative, or it might be especially noisy or inaccurate,” Berger says.

To work properly, such products need to be developed and trained on a sufficiently diverse range of people, including those with different head sizes and non-normative neurodata. Otherwise, a helmet could misinterpret a wearer’s neurodata or even discriminate against certain social groups.
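To make that concrete, one simple safeguard is to audit who is actually represented in the training data before a model ships. The Python sketch below is a minimal, hypothetical illustration: the field names, thresholds, and toy records are invented for the example and do not describe any real helmet product.

```python
# Hypothetical sketch: auditing who is represented in the training data for a
# fatigue-detection model. Field names, thresholds, and records are invented.
from collections import Counter

def audit_representation(records, attribute, min_share=0.05):
    """Return attribute values whose share of the data falls below min_share."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items() if n / total < min_share}

# Toy training records for an imagined smart-helmet model.
training_records = [
    {"head_size": "small",  "neuro_profile": "typical"},
    {"head_size": "medium", "neuro_profile": "typical"},
    {"head_size": "medium", "neuro_profile": "typical"},
    {"head_size": "large",  "neuro_profile": "atypical"},
]

flagged = audit_representation(training_records, "head_size", min_share=0.3)
print("Underrepresented groups:", flagged)  # {'small': 0.25, 'large': 0.25}
```

A check like this catches only the crudest gaps, of course: it says nothing about how noisy or inaccurate each group’s signals are once collected.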

Governance mechanisms

AI ethics is a multidisciplinary field in which experts come together seeking to optimise the benefits of increasingly responsive, creative technologies. However, when it comes to neurotechnology, applying such principles can be difficult.

Concerns about mental privacy, identity, and free choice can be “heightened or intensified or expanded in some ways” with embedded devices gathering and responding directly to people’s neurodata.

Still, AI ethics has laid the groundwork for tackling such questions.

“Some existing technical or governance strategies, models or mechanisms, might be able to be reused or applied – adapted to neurotech applications to mitigate some of these concerns,” Berger says.

Authorities and institutions thinking about governance mechanisms for neurotechnology need to recognise its increasing convergence with AI.

While existing applications already show this convergence, “more will come,” says Rossi.

“We need to be prepared technically to embed AI in neurotech systems in the right way, but also ethically.”

Maintaining transparency can be tricky with continual machine learning, she adds. “Some of the most successful deep learning techniques are kind of a black box. It is not clear how the output relates to the input that is given to the system.”
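One common way practitioners try to pry open that box is to measure how strongly the output depends on each input. The sketch below is a hypothetical Python illustration using a toy two-layer network and finite differences; it stands in for, rather than reproduces, any real neurotech model.

```python
# Minimal sketch of a transparency probe: estimate how strongly each input
# feature influences a model's output. The toy network is an invented stand-in.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

def model(x):
    """A toy two-layer network: the 'black box' whose behaviour we probe."""
    return np.tanh(x @ W1) @ W2

def sensitivity(x, eps=1e-4):
    """Approximate d(output)/d(input_i) for each input feature i."""
    base = model(x)
    grads = []
    for i in range(x.size):
        nudged = x.copy()
        nudged[i] += eps
        grads.append(((model(nudged) - base) / eps).item())
    return grads

x = rng.normal(size=4)  # e.g. four imagined neurodata features
print(sensitivity(x))   # larger magnitude => output leans more on that feature
```

Probes like this show which inputs matter, not why; they mitigate, rather than resolve, the opacity Rossi describes.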

From principles to action

The ethics of AI, according to Rossi, has gone through different stages. After an initial awareness-raising phase came a second stage focused on developing ethical principles, which produced the multitude of principles and guidelines available today.

The third stage, where the industry finds itself now, is characterised by the introduction of regulations, practical standards, certificates, and auditing methods to ensure proper use.

The European Commission’s proposed Artificial Intelligence Act, currently under discussion, sets out transparency obligations that scale with the expected risk of each technology.

For neurotechnologists, as for other AI professionals, this stage also means considering the voices of end users and anyone else who could encounter new systems and behaviours.

“We need to learn how to update the values, principles, frameworks, and tools to involve neuroscience and neuroethics experts in AI ethics venues, and to engage with those people most likely to be affected,” Rossi explained.

“Principles are important but not sufficient, and tech ethics issues are not only technical.”

To learn more about efforts to build Trustworthy AI, watch the AI for Good webinar series.
