Unless you’ve been living under a rock, you’ve heard about the “suicide” of the mall security robot. If you haven’t heard, but prefer rocks, you can read about it here: Rock.
There are many technical reasons why this could happen; my favorite is a false-positive reading caused by the reflection off the water.
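To see why a reflection can fool a robot, here's a minimal, entirely hypothetical sketch. The idea: water acts like a mirror, so a range sensor can report the distance to the *reflected* scene rather than to the water surface itself, and a naive planner then reads "clear path" where there is actually a fountain. All names and numbers below are invented for illustration; this is not Knightscope's actual code or sensor data.

```python
# Hypothetical sketch: how a specular reflection can produce a
# false-positive "path is clear" reading. Water mirrors the scene,
# so the sensor measures the range along the reflected path, which
# looks much farther than the water surface actually is.

WATER_SURFACE_RANGE_M = 0.5   # actual distance to the fountain edge (invented)
REFLECTED_RANGE_M = 3.2       # range the sensor reports via the mirror path (invented)
SAFE_CLEARANCE_M = 1.0        # planner treats anything farther as free space (invented)

def path_is_clear(measured_range_m: float) -> bool:
    """Naive check: the path counts as 'clear' if the nearest
    obstacle appears farther away than the safety clearance."""
    return measured_range_m > SAFE_CLEARANCE_M

# Ground truth says stop; the reflected reading says go.
print(path_is_clear(WATER_SURFACE_RANGE_M))  # reality: obstacle ahead
print(path_is_clear(REFLECTED_RANGE_M))      # sensor: spurious "all clear"
```

The robot drives on the second answer, and the rest is viral-news history.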
But what if it was more?
I’m not saying it is, I just want to play with this a little bit.
The singularity has gotten a lot of attention of late, and for good reason. Moore’s law is about to explode. Things are on the verge of getting seriously interesting. Or maybe, totally collapsing, but I’m an optimist so we’re going with interesting in a good way.
Mission accomplished, but that was probably not the result it was looking for.
A lot of people are worried about computers becoming sentient and taking over. On this I disagree with Elon Musk (gasp; you can read about him and Zuckerberg duking it out here: Elon -vs- Mark).
I think it’s far more likely that computers will, in fact, continue to get smarter but “becoming” is a different thing. We tend to superimpose human values when we envision the future of AI, the Singularity and a potential for Skynet. Yes, we need self-directing controls, boundaries even, but I don’t see computers circumventing those anytime soon.
For now, I suspect Robocop above was simply wrong, but it’s still fun to contemplate.