Nick Bostrom and the future of humanity

01/02/2017

Philosophy



The Future of Humanity Institute is a task force of scientists, mathematicians, physicists and other versatile, skilled intellectuals, led by Swedish philosopher Nick Bostrom and dedicated to humanity’s biggest questions. While most of the human race lives immersed in the contingencies of the present, the researchers at the Future of Humanity Institute contemplate the destiny of our species, the risks threatening our future presence on this planet, and what we can do to neutralise them.

The Future of Humanity Institute is housed in a building belonging to the University of Oxford, which has supported Nick Bostrom’s work in the past by publishing many of his papers. His most famous book, ‘Superintelligence: Paths, Dangers, Strategies’, became a bestseller in 2015, consolidating his reputation as one of the most renowned thinkers in the world, not least in the new technology sector.

What is the biggest threat to humanity at the moment? Are we going to die because of the environmental transformations brought on by climate change, or in a nuclear war? Acknowledging the threats that could jeopardise our future on this planet could actually enable us to devise an intervention plan and prevent the worst from happening. And that’s not all: thinking from a perspective set in an extremely remote future, thousands of years from now, exposes us to a plethora of complex questions revolving around the meaning of life and the metaphysical.

According to Bostrom, one should leave nothing to chance, carefully weighing the probability that even unlikely events will come to pass. To tackle with lucidity and diligence topics that go well beyond science fiction, one needs to master all the logical categories of theoretical and moral philosophy and statistical mathematics, as well as command an in-depth knowledge of the latest technology. Doing so has made Bostrom, at 43, one of the most influential philosophers of our time, as the New Yorker pointed out in a lengthy and multifaceted article published shortly after the extraordinary success of Superintelligence.

The main threat is the chance that a superior intelligence will annihilate our descendants, enslave them or relegate them to a marginal role in the universe. The prime suspect is artificial intelligence. Bill Gates and Tesla’s Elon Musk seem to agree, echoed by Stephen Hawking. Artificial intelligence will end up developing itself, growing at an ever faster pace, while human beings, limited by the slowness of biological evolution, will be unable to compete with the machines and will eventually be replaced. A Darwinian scenario indeed, yet not an abstract one.

While Bostrom’s book was being launched, a thousand eminent scientists signed an open letter calling for a halt to the development of weapons able to operate autonomously. According to Bostrom, humanity is in fact handling artificial intelligence the way children would handle a bomb, holding it to their ear to hear the ticking sound it makes, completely unaware of its deadly potential.

We need more awareness, not only of the limits of modern technology and the opportunities it provides, but also of what it means to be alive and the purpose of this limited yet still extraordinary existence of ours. By looking at the future, then, we might come to better understand both the present and ourselves.