A Research Team Plans to Merge AI with Human Brain Cells

The research team has been awarded a government research grant worth just over $400,000 (USD).

A group of researchers in Australia have been awarded just over $400,000 (USD) in government funding to study the possibilities of merging AI with human brain cells, a research project likely to bring a number of AI-related ethical and existential questions to the fore.

The team, working in collaboration with Monash University and Melbourne-based startup Cortical Labs, is the same group behind the DishBrain project, which involved teaching human brain cells how to play the retro video game “Pong”.

While ChatGPT, Bard and other AI tools powered by large language models continue to be used in increasingly inventive ways by businesses, research like this may open the door to artificial intelligence systems that learn in entirely different – and more human-like – ways.

Research Team Behind DishBrain Wins Funding

Monash University released a statement last week confirming that it had been awarded a grant worth hundreds of thousands of dollars to continue its research into “growing human brain cells onto silicon chips, with new continual learning capabilities to transform machine learning”.

The funding has come from the National Intelligence and Security Discovery Research Grants Program, which is part of the Australian Department of Defence.

Associate Professor Adeel Razi, who is leading the group, says that the project “merges the fields of artificial intelligence and synthetic biology to create programmable biological computing platforms”.

He predicts that the capabilities of this new technology “may eventually surpass the performance of existing, purely silicon-based hardware.”

The sort of AI-powered technology many of us envisage being widely used in the near future – from autonomous vehicles to “intelligent handheld and wearable devices” – “will require a new type of machine intelligence that is able to learn throughout its lifetime,” Associate Professor Razi explained.

Many AI systems are currently prone to “catastrophic forgetting” – losing previously learned skills when they are trained on new tasks – something the learning capabilities of biological neural networks may help to alleviate.
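To make the idea concrete – as a loose illustration only, not code from the Monash or Cortical Labs research – the short sketch below trains a simple NumPy-based classifier on one hypothetical synthetic task (Task A), then retrains it only on a second task (Task B), and checks how much of Task A it retains. The data, task names and learning settings are all invented for the example.

```python
# Toy illustration of "catastrophic forgetting" (hypothetical synthetic data,
# not from the DishBrain project): a simple classifier trained on Task A,
# then retrained only on Task B, loses most of its accuracy on Task A.
import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    # Points clustered around `center`; the label is 1 if the first
    # coordinate lies above the center, 0 otherwise.
    X = rng.normal(center, 1.0, size=(200, 2))
    y = (X[:, 0] > center).astype(float)
    return X, y

def train(w, b, X, y, lr=0.05, epochs=2000):
    # Plain gradient descent on the logistic (cross-entropy) loss.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

X_a, y_a = make_task(0.0)   # Task A: decision boundary near x = 0
X_b, y_b = make_task(5.0)   # Task B: decision boundary near x = 5

w, b = np.zeros(2), 0.0
w, b = train(w, b, X_a, y_a)
print("Task A accuracy after training on A:", accuracy(w, b, X_a, y_a))

w, b = train(w, b, X_b, y_b)  # continue training on Task B only
print("Task A accuracy after training on B:", accuracy(w, b, X_a, y_a))
```

Running it, the first accuracy figure should come out close to 100% while the second drops to roughly 50% – chance level – because optimising for Task B overwrites the weights that encoded Task A. Continual-learning research, including the Monash team's work on biological neural networks, is aimed at avoiding exactly this kind of collapse.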

Ethical Implications and Existential Fears Around AI R&D

The launch of ChatGPT in November 2022, and the wave of AI chatbots that have sprung up since, has seen governments across the world scrambling to implement legislation and kickstart initiatives that promote the responsible development of AI systems.

It’s not just governments that are spooked, though – a letter signed in March by over a thousand tech leaders and company representatives called for a pause on the development of any AI technology more powerful than GPT-4.

In the context of AI research, the highest-risk areas – such as healthcare – often also hold the biggest rewards. So, many governments are trying to strike a delicate balance: promoting safety and transparency without stifling innovation.

There are also wider, more fundamental ethical considerations surrounding hyper-intelligent systems – such as whether it would be ethical or responsible to create a machine that we may one day be unable to control – that, eerily, feel more pressing by the day.

When it comes to merging human biological matter with silicon chips, we’re unlikely to see ethical quandaries jumping out of the petri dish any time soon.

But it does point to a specific frontier in the development of AI – the merging of human bodies with machines – that will be fraught with ethical dilemmas. And, like many areas of AI research, while it could turn out to be very high-risk, it may also prove very high-reward.
