A pair of researchers from Columbia University recently built a self-replicating AI system.

Instead of painstakingly building a neural network layer by layer and guiding its development as it grows more advanced, they have automated the process. The researchers, Oscar Chang and Hod Lipson, published their fascinating paper, “Neural Network Quine”, earlier this month, and with it a novel method for “growing” a neural network.

Here’s how they put it in the paper, which they posted to arXiv:

“The primary motivation here is that AI agents are powered by deep learning, and a self-replication mechanism allows for Darwinian natural selection to occur, so a population of AI agents can improve themselves simply through natural selection – just like in nature – if there was a self-replication mechanism for neural networks.”

The researchers compare their work to quines, a type of computer program that produces a copy of its own source code as its output. In a neural network, however, there is no source code to clone; instead it’s the weights, the numbers that determine the strength of the connections between neurons, that get copied.
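For context, here is the classic trick in Python form. Run this two-line program and its only output is its own source, reproduced exactly:

    # A template string holds the whole program; printing the template
    # filled in with itself reproduces the source character for character.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

A neural network quine pulls off the analogous feat with numbers rather than text: its output is a reproduction of its own weights.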

The researchers set up a “vanilla quine” network, a feed-forward system whose one job is to produce its own weights as outputs. An extended version, which the paper calls an auxiliary quine, replicates its weights and solves a task at the same time. For that task they chose image classification on the MNIST dataset, where a computer has to identify the correct digit from a set of handwritten numbers from zero to nine.
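How can a network output its own weights? One way to picture it is the minimal sketch below, an illustration under simplifying assumptions rather than the authors’ exact setup: each weight gets a fixed random coordinate vector, and the network is trained so that, fed a weight’s coordinate, it outputs that weight’s current value.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # A small feed-forward network; its 577 weights are both the machinery
    # making the predictions and the values being predicted.
    net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

    def flat_params(model):
        # Every trainable weight of the model, flattened into one vector.
        return torch.cat([p.reshape(-1) for p in model.parameters()])

    # One fixed random 16-dimensional coordinate per weight index.
    coords = torch.randn(flat_params(net).numel(), 16)

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(500):
        pred = net(coords).squeeze(-1)       # guess at each of its own weights
        target = flat_params(net).detach()   # the weights' actual current values
        loss = ((pred - target) ** 2).sum()  # self-replication error
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(f"final self-replication loss: {loss.item():.4f}")

Note the chicken-and-egg flavour: every training step that improves the prediction also changes the very weights being predicted, so the network is chasing a moving target, which is part of what makes neural quines hard to train.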

Accuracy?

The network was trained on 60,000 MNIST images and tested on a further 10,000. After 30 runs, the quine network had an accuracy rate of 90.41 per cent. It’s not a bad start, but it falls well short of larger, more specialized image-recognition models, which routinely top 99 per cent on MNIST.

The paper states that the self-replication “occupies a significant portion of the neural network’s capacity.” In other words, capacity the network spends on reproducing its own weights is capacity it cannot spend on recognizing digits.

“This is an interesting finding: it is more difficult for a network that has increased its specialization at a particular task to self-replicate. This suggests that the two objectives are at odds with each other,” the paper said.
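You can see why the two objectives compete by writing them as a single training loss. The sketch below is illustrative only: the two-headed architecture, the stand-in data, and the lam trade-off weight are all assumptions, not the authors’ design. The point is that one shared set of parameters has to minimize both terms at once.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # One shared trunk with two heads: one predicts a weight value from its
    # coordinate, the other classifies a 28x28 image into ten digit classes.
    trunk = nn.Linear(16, 64)
    weight_head = nn.Linear(64, 1)
    class_head = nn.Linear(64, 10)
    img_embed = nn.Linear(28 * 28, 16)   # projects images into coordinate space

    params = (list(trunk.parameters()) + list(weight_head.parameters())
              + list(class_head.parameters()) + list(img_embed.parameters()))

    def flat_params():
        # All trainable weights as one vector: the self-replication target.
        return torch.cat([p.reshape(-1) for p in params])

    coords = torch.randn(flat_params().numel(), 16)  # fixed coordinate per weight
    images = torch.randn(32, 28 * 28)                # stand-in for an MNIST batch
    labels = torch.randint(0, 10, (32,))
    lam = 0.1                                        # assumed trade-off weight

    opt = torch.optim.Adam(params, lr=1e-3)
    for step in range(200):
        # Objective 1: predict every one of your own weights from its coordinate.
        rep_pred = weight_head(torch.relu(trunk(coords))).squeeze(-1)
        rep_loss = ((rep_pred - flat_params().detach()) ** 2).sum()
        # Objective 2: classify images, through the very same trunk weights.
        logits = class_head(torch.relu(trunk(img_embed(images))))
        task_loss = F.cross_entropy(logits, labels)
        # A single parameter budget serves both losses.
        loss = task_loss + lam * rep_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

Any gradient step that reshapes the trunk to classify better also changes the weight vector the replication head must reproduce, so gains on one objective tend to cost the other.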

If this line of work pans out, AI may one day create itself, advance itself, and fold in new neural networks through a natural-selection process. What’s the worst that could happen?
