Isaac Asimov's "I, Robot" is a collection of short stories about robots whose existence is governed by the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov first introduced the Three Laws in his 1942 short story "Runaround" (story #2 in "I, Robot").

With artificial intelligence (AI) becoming prevalent in mainstream society, I've found Asimov's laws to be quite pertinent, particularly to the fears and doomsday scenarios surrounding AI. The fear is that humans will give birth to a new species that, through recursive self-improvement, will grow beyond our control as it becomes far more intelligent than we can imagine.

Tim Urban, in his two-part AI series, gives an example of an AI whose intelligence exceeds ours by a gap equivalent to that between a human and an ant:

A machine on the second-to-highest step on that staircase would be to us as we are to ants—it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.

Imagine trying to explain your name to an ant. Or to an organism that has no concept of language or words. An organism so primitive compared to humans that we feel virtually no remorse when squishing one. Now imagine AI communicating with us in a form we have no concept of. What happens if it perceives us in the same way that we perceive ants?

Elon Musk is a proponent of developing AI in a responsible and safe way. He encapsulates his fear in a possible outcome of tasking an AI with getting rid of spam email:

(The AI) concludes that the best way to get rid of spam is to get rid of humans.

To combat an AI overlord, Musk co-founded OpenAI, a non-profit research company whose mission is to:

Discover and enact the path to safe artificial general intelligence.

So are Asimov's Three Laws science fiction? Or is humanity on a trajectory toward a society where some iteration of these three laws exists? Will OpenAI produce some equivalent of the Three Laws? Who will be responsible for implementing and regulating them? How will we ensure that every AI that is created abides by them? The established laws will be meaningless if one country abides by them but another does not.

In "The Evitable Conflict," the last story in "I, Robot," robot psychologist Susan Calvin and World Co-ordinator (leader) Stephen Byerley discuss the purpose of the Machines (AI) and the anti-machine movement.

"But you are telling me, Susan, that the ‘Society for Humanity’ is right; and that Mankind has lost its own say in its future."

"It never had any, really. It was always at the mercy of economic and sociological forces it did not understand—at the whims of climate, and the fortunes of war. Now the Machines understand them; and no one can stop them, since the Machines will deal with them as they are dealing with the Society,—having, as they do, the greatest of weapons at their disposal, the absolute control of our economy."

"How horrible!"

"Perhaps how wonderful! Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable!"

In this discussion the "Society for Humanity" (the anti-machine movement) believes that the Machines are controlling humanity's future. Yet Susan Calvin states that humanity was never in control. Prior to the Machines, we were at the mercy of economic and sociological forces we didn't understand, and the result was wars and economic depressions. But the Machines learned to understand these forces at a level humanity never did, and now the Machines control them. And because of the Three Laws, they control them in such a way that the outcomes bring no harm to humans.

So it appears that Asimov provided us with a warning: regulate the machines before they regulate us.