Life 3.0: Being Human In The Age Of Artificial Intelligence by Max Tegmark
About 4 billion years ago, life appeared on Earth. It was simple and quite boring, just bacteria following a programmed series of events to reproduce. Life, at this stage, was evolved rather than designed. Max Tegmark, a physicist and cosmologist at MIT, calls that Life 1.0. Then humans appeared. We could learn things and design our own software instead of being stuck with the software evolution gave us. This enabled us to develop culture and dominate Earth. Tegmark calls this period Life 2.0. It is where we live right now. But it seems that gradually we are heading towards Life 3.0, where life, if you think of it as a self-replicating information-processing system, will be able to design not only its software but also its hardware. Life 3.0 may arrive during the coming century, perhaps even during our lifetime, spawned by the recent progress in AI. What will happen, and what will this mean for us? That’s the topic of Max Tegmark’s book, Life 3.0: Being Human In The Age Of Artificial Intelligence.
So far, the intelligence exhibited by machines has been put there by human programmers. Computer systems like Deep Blue beat Kasparov at chess because they could remember more and think faster. But recently we have moved into an era in which much of a computer system’s intelligence is not put there by humans at all: the system learns it on its own.
Deep Learning, a technique for implementing machine learning, is based on neural-net architectures and is inspired by our understanding of the biology of the brain, more specifically the interconnections between neurons. But, unlike the biological brain, where any neuron can connect to any other neuron, these artificial neural networks are organised into layers of nodes, each node connected to several nodes in the layer below it and in the layer above it. In this way, they are able to receive and send data, be trained, and process information. Progress in machine learning and Artificial Intelligence (AI) has been impressive over the last few years.
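To make the layered structure concrete, here is a minimal sketch (not from the book) of a tiny feedforward network in Python with NumPy: each node receives a weighted sum from the layer below, applies a nonlinearity, and passes the result up to the layer above. The layer sizes and random weights are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum of the
    layer below, passed through a tanh nonlinearity."""
    return np.tanh(inputs @ weights + biases)

# A toy network: 3 input nodes -> 4 hidden nodes -> 2 output nodes.
# Unlike a biological brain, each node connects only to nodes
# in the adjacent layers.
w1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
w2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

x = np.array([0.5, -0.1, 0.3])   # data received by the input layer
hidden = layer(x, w1, b1)        # sent up to the hidden layer
output = layer(hidden, w2, b2)   # and on to the output layer
print(output.shape)              # (2,)
```

Training would then consist of nudging the weights and biases so the output layer produces the desired answers, which is what the learning algorithms behind deep learning automate.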
Let’s see how far AI can go. As in every revolution, there’s a dystopian and a utopian vision of the future. The techno-skeptics think we don’t have to worry too much about AI because it’s not going to happen, or it won’t happen for hundreds of years. Others think that we don’t have to worry because the outcome will be just awesome; Max Tegmark calls them digital Utopians. Then there are people who think that we’re steadily meandering toward an AI apocalypse in which humans are obliterated by a super-intelligent entity. Finally, there are the people who position themselves somewhere in the middle. This is the beneficial AI movement: people who think that AI could be awesome and beneficial for society, or it could be dangerous, and that what we have to do right now is steer it in a good direction.
"I have exposure to the very cutting-edge AI, and I think people should be really concerned about it," ElonMusk told attendees at the National Governors Association on July 15, 2017. "I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal." But Google’s artificial intelligence scientist Peter Norvig, thinks that's far-fetched. "I don't buy into the killer robot [theory]," he told CNBC last year. But even he can “see that there will be disruptions in employment”.
In Life 3.0, Max Tegmark explores our collective journey into the future of Artificial Intelligence. He says that it is not enough to make our technology powerful; we also have to focus on figuring out how to control it and where we want to go with it. In order to explore these questions, Max Tegmark, his wife Meia Chita-Tegmark, Jaan Tallinn and others founded The Future of Life Institute. Their goal is to do what they can to help make sure that technology will be beneficial for humanity.
As technology gradually gets more powerful, there are a lot of things that can go wrong, says Max Tegmark. It is therefore crucial “that we learn to make AI more robust, doing what we want it to do.” We need, as humanity, to try to develop the wisdom to steer things in the right direction.
This raises many more questions. It is clear to me that in order to make social progress towards a more enlightened world, scientific knowledge must be combined with wisdom. But how do we develop that wisdom? Is wisdom a kind of knowledge that can be developed?
Life 3.0 is an accessible and engaging book. It covers a wide variety of scenarios concerning the impact of AI on our lives, and the promises and perils of the AI revolution. There are no answers in the book, but there are a lot of possibilities that make you think about the future and about life overall.
What are my thoughts on this? I am a realistic optimist. I believe that the future will be very different from what we expect. I try to remain positive, and I am hoping for the best. To quote Stephen King, "There's no harm in hoping for the best as long as you're prepared for the worst."