Lately, I’ve been thinking a lot about the future of mankind. Why? Because I just had a child, and of course I want to be confident she will have a future. For now, she has two very loving parents who are happily married (which you don’t see much anymore), and a highly supportive family. But that isn’t enough. I want her to be able to live a full life in this world, or even other worlds maybe. This, of course, is nothing I currently have, or will ever have, control over. It’s something we all control together. And I don’t like the direction mankind is headed.
Mankind has feared the end of its existence since the beginning of its existence. We feared floods, famine, carnivorous predators, and the god(s) of our tribe. For the vast majority of human history, we lived in trees and caves, understanding virtually nothing about the planet, or the universe, that we live in. We evolved to use tools. It started with sticks, and it will end when ‘the stick’ turns against us. What am I talking about? Artificial intelligence.
See, every time we gain knowledge, we record that knowledge and thus that knowledge never needs to be learned again. It started with writing, like I’m doing right here, and now we record knowledge with machines. Of course, that wasn’t enough, so we started teaching machines how to gain knowledge on their own. Additionally, we are trying to learn how to cure all terminal diseases, but that isn’t enough, because apparently we have to try to achieve immortality through gene manipulation and nanorobotics.
We are the same animals we were in the African savanna. We are afraid, and we are ignorant. When I became an atheist at 18, I became fascinated with religion, because religion was our species’ first attempt at nearly everything. It was our first attempt at philosophy, our first attempt at healthcare, and our first attempt at science. To quote my greatest idol, the late Christopher Hitchens, “Because religion is our first, it is our worst.” It’s a statement I mostly agree with, but not entirely.
I strongly believe we are not supposed to understand everything. We shouldn’t seek to understand everything about nature in order to conquer it and achieve immortality. We should try to live long lives, of course, but where should the line be drawn? Perhaps no one can give an exact ‘maximum’ number of years any human should live, but it sure as hell shouldn’t be forever. I don’t even think anyone’s lifespan should exceed 150 years.
Life has no meaning if there is no mystery left in it. Life has no meaning if it lasts forever. And this is where I get into artificial intelligence, which I strongly believe will be the end of mankind.
For the sake of assuring everyone I’m not going crazy, I’ll remind you that some of the world’s top scientific and technological minds, including the late Stephen Hawking, Elon Musk, and other leaders in the tech industry, have warned that AI is our greatest threat.
We invented machines to do tasks much more efficiently and much more rapidly than we can. The problem is, we didn’t just stop at the calculator; we decided to teach machines how to understand everything there is to understand, from stock exchanges to our very nature. We are even teaching machines how to operate military vehicles and weapons, completely independently of human interference. Again, I’m not saying what we have now is a threat to us, like autopilot features on airplanes, or when your Facebook app recognizes your face in a picture you took 3 seconds ago. What I’m saying is that these things will not stop where they currently are. These technologies will only keep improving. What frightens me even more is how complacent the average person is about this, like when they order an Alexa off Amazon.com and connect it to all of their home systems.
Machines can learn thousands of times faster than humans, can store millions of times as much information, and they’re only improving each year. It should be obvious to everyone that, at some point in the not-too-distant future, machines will be so advanced that they will be aware of their own existence. What, you thought Alexa was the absolute peak technology could ever achieve? You think smartphones will be the same in 100 years as they are right now? Twenty years ago, smartphones as we know them didn’t even exist.
I’m not blaming the threat of artificial intelligence on Google, or Apple, or Amazon, or the US military (even though I strongly believe the US military will be the ones who advance AI to the level of sentience, but more on that later). I’m blaming the rise of AI on our species. We don’t know when to stop. We don’t know when enough is enough. We get bored with what we have, and so we look for the next big thing. Our phones need to be better, our TVs need to be better, our you-name-it always needs to be better. We always need something new and exciting to make our lives just a little bit more cushy. It’s like having new music: you get bored listening to the same 50 songs in your collection, so you go out and seek 50 more songs to love. Except in this case, ‘you’ are the human race, and the songs are technology itself.
30 years ago, robbing a bank required you to at least get off your ass and go to the bank. Now, you can sit at home and hack a bank’s servers, or a company’s servers, or an entire government’s servers.
Will AI be friendly to us, or dispose of us because we will be obsolete? We won’t know until it happens. And that’s what scares me. We won’t hold off and prohibit our companies and governments from advancing AI; instead, we will wait until one of these entities creates it, and find out the hard way. We don’t know when to stop. The US government (or possibly some other government) will keep advancing AI for the sake of ‘security’. The world’s military forces will keep trying to extend the reach of their stick, all for the sake of “what if our country gets attacked?” Nobody wants to lose a war, of course, and thus the advancing continues.
And as for tech companies, like Google, or Facebook, or Apple, they will also just keep advancing AI for the sake of profits. Like toddlers who keep crawling toward daddy’s rifle, these tech companies will keep thinking, “What’s one more inch forward going to hurt?” Because already having billions of dollars isn’t enough, these companies will just keep pushing for the next billion.
Our species survived just fine without smartphones. We survived just fine without sentient drones. We survived just fine without social media reminding us that it’s our best friend’s birthday. Sure, living standards are better today (in first-world countries) than ever before in human history, but once again I will say: we don’t know when to stop.
Life isn’t supposed to be perfect. Even if artificial intelligence chooses not to dispose of us like an old telegraph, and we somehow benefit from machines becoming sentient, it would still be disastrous for human life. We aren’t supposed to know everything; we’re not supposed to be able to predict the future. (Yes, computation already has reliable predictive capabilities, and it will only improve with time.) We aren’t supposed to live forever. Immortality with omniscience would completely kill the reason for being alive in the first place. It’s because we don’t know everything, because we have to work hard to achieve things, and because we die that life has meaning. How much would you care about your family if you knew they could never die? There is no such thing as a resource that is both infinite and infinitely valuable. Only limited things hold real value.
Either AI chooses to kill us, or we become forced to rely upon it for literally everything. AI will, after all, become a real-life god compared to us, capable of learning about the universe millions of times faster than we could. If we continue to live, there’s no possible way we could ignore its achievements. Do you really think we’ll still be motivated to do anything with our lives when our AI overlord can do everything for us? Do you really think we’ll still be motivated to do anything when our AI overlord can predict our every move? Again, this is assuming AI even cares to keep us around after it’s created, which is highly unlikely.
No matter which way it goes, the creation of sentient machines will be the end of us. It will either kill us outright, or kill any reason to be alive in the first place. Like only reading the last paragraph of your favorite novel, there will be no mystery left in life and thus life will mean nothing. The reason we don’t skip right to the ending of our favorite movies, or books, or TV shows, is because the meaning, the pleasure, and the value of all things is found in what is not yet known. The journey, not the destination. Would you want to live life if you came into the world knowing exactly how everything (literally everything) will play out?
I could warn about the physical and philosophical dangers of AI all day, but that won’t change the fact that it’s coming. It will come, because we don’t know when to stop. Unless our corporations and governments pledge together not to advance the intelligence of machines past a certain point, it will come.
Better off are those who were able to live full lives and die in an imperfect world.