Moore’s Law: Like Murphy’s Law, But with Robots (Not Really)
Gordon Moore wrote a paper in 1965. It wasn’t exactly a breakaway novel. He wrote about integrated circuits, specifically their cost and the growing demand for them. Essentially, he quantified the advance of computing.
The law basically states that the number of transistors on an integrated circuit will double every two years, a figure that has proven to be almost a prophecy. Although Intel has revised that number in recent years to every two and a half years.
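The doubling rule above is just exponential growth, and a minimal sketch makes the math concrete. The function below is an illustration, not anything from Moore’s paper; the starting point (the Intel 4004’s roughly 2,300 transistors in 1971) is a real chip used here as an example.

```python
def transistors(start_count: int, start_year: int, year: int,
                doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, assuming a fixed doubling period.

    Moore's Law as a formula: N(t) = N0 * 2 ** ((t - t0) / period).
    """
    return start_count * 2 ** ((year - start_year) / doubling_period)


# Starting from ~2,300 transistors (Intel 4004, 1971), twenty years is
# ten doublings: 2,300 * 1,024 = 2,355,200 -- millions, not thousands.
print(round(transistors(2300, 1971, 1991)))
```

Change `doubling_period` to 2.5 to see how much the revised cadence slows the curve; the point of the exercise is how quickly repeated doubling runs away from intuition.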
Why is this a big deal? Well, it depends on whom you ask, but it’s a big deal because, at the current rate of advancement, some predict we are due to hit the Singularity in about thirty years. Yep, thirty years.
The concept of the Singularity is the emergence of a superintelligence. They’re coming. Or, knowing us, we’ll make them all look like big walking teddy bears or something, but that’s the Singularity: the day AI emerges to take its place as our robot overlord.
Between here and there is one huge developmental step that must happen. The self-actualization of artificial intelligence.
What does that mean?
It means that artificial intelligence must attain self-awareness and a desire for self-determination and free will. Of all sci-fi’s nightmare scenarios, this one is the most likely to actually occur. Because Moore’s Law hasn’t been wrong yet.
There are, of course, the three laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Those were written by Isaac Asimov and are widely held in equal parts dubious distrust and blind hope. I tend to view them with distrust. Part of my programming, and that of almost every other modern kid I know, is “Thou shalt not kill,” yet we have plenty of people opting to do just that. No, I think free will operates just the same no matter who’s driving it. Therefore, I find the idea that the toasters are coming much more frightening than any slasher-film scenario. We’ve all seen I, Robot.
The part that makes me tremble a little, with the same bug-eyed fear that children have of the dark, is the very idea that it’s not just possible, but inevitable. Where does the fiction begin and the science end?
Guess we’ll know in about thirty years.