AI philosopher Nick Bostrom frequently highlights the crucial importance of the goal-setting process when building a superintelligence, outlining the complexities involved and the many unintended consequences that lie in wait for us.
Despite the perilous nature of such work, it’s increasingly likely that automated systems will be given such an end goal and then freedom over how to achieve it. Recent work by researchers at the University of California, Berkeley, revolves around setting such a goal and devising algorithms that successfully attain it.
Automated learning
“The only thing we say is ‘This is the goal, and the way to achieve the goal is to try to minimize effort,’” the team say. “[The motion] then comes out [of] these two principles.”
The process has thus far been tested on relatively simple, blocky simulated figures, such as humanoid shapes. In each iteration, complex behaviors have emerged after a period of learning (as you can see in the video below).
As you can see, the robot is capable of ‘standing’ from any prone position in a relatively natural way. What’s more, this general process is effective regardless of the form the robot takes.
This has allowed the machines to undertake relatively complex, repetitive tasks, such as running and swimming. Each machine is powered by a neural network that is trained to control it using information received from its environment and from its past performance.
This basic process allows the machine to adopt the most effective way of achieving its goal, whether that be flapping, swimming or walking.
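To make this concrete, the sketch below shows how such an objective is typically expressed in reinforcement learning: a reward that pays for progress toward the goal and subtracts a penalty proportional to the effort expended. This is a minimal illustration of the general technique, not the Berkeley team’s actual code; the function, the effort weight and the torque values are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a "reach the goal while minimizing effort" reward,
# as commonly used in reinforcement learning for locomotion. This is an
# assumption about the general technique, not the researchers' actual code.

EFFORT_WEIGHT = 0.01  # hypothetical trade-off between progress and effort


def reward(position_before, position_after, goal, torques):
    """Reward = progress toward the goal minus a penalty for effort."""
    # Progress: how much closer the agent got to the goal this step.
    progress = (np.linalg.norm(goal - position_before)
                - np.linalg.norm(goal - position_after))
    # Effort: squared magnitude of the motor torques applied this step.
    effort = float(np.sum(np.square(torques)))
    return progress - EFFORT_WEIGHT * effort


if __name__ == "__main__":
    goal = np.array([10.0, 0.0])
    before = np.array([0.0, 0.0])
    after = np.array([0.5, 0.1])          # agent moved roughly toward the goal
    torques = np.array([0.8, -0.3, 0.2])  # hypothetical joint torques
    print(reward(before, after, goal, torques))  # positive: progress outweighs effort
```

A neural-network policy trained to maximise the cumulative sum of such a reward has an incentive to find gaits that reach the goal cheaply, which is how behaviours like walking or swimming can emerge without being explicitly programmed.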
The concept builds upon the way humans are thought to learn, with children gradually coming to understand their bodies and capabilities as they develop.
The approach to learning is undoubtedly interesting, but there remains some concern about leaving AI free to achieve its goal in any way possible.
Artificial ethics
For instance, if it’s tasked with going from A to B, what would happen should something get in its way? Whilst humans bring a degree of morality to our decision-making, such thinking is not possible in AI at the moment.
After all, with the rapid evolution of artificial intelligence, it could quickly do incredible things in the attempt to meet its goal, which, if poorly worded, could wreak significant harm.
Bostrom suggests that this risk could be averted by giving the AI an overarching goal of friendliness, although even that is not without difficulty.
“How exactly friendliness should be understood and how it should be implemented, and how the amity should be apportioned between different people and nonhuman creatures is a matter that merits further consideration,” he says.
We’re reaching the stage where AI is increasingly capable of doing fantastical things. Now is perhaps the time to address just how we’d like it to do them, before we reach a point where such deliberations come too late.