From Artificial Intelligence to Superintelligence: Nick Bostrom on AI & The Future of Humanity

Uddipan Barman
7 min readJan 2, 2021

I want to introduce you to my present and the rest of the world’s future. Some people think that some of these things are sort of science-fictiony, far out there, crazy. But I like to say: okay, let’s look at the modern human condition. If we think about it, we are recently arrived guests on this planet.

If the earth had been created one year ago, the human species would be 10 minutes old, and the industrial era would have started two seconds ago. Since our last common ancestor with the apes, there have been only about 250,000 generations, and we know that complicated mechanisms take a long time to evolve.

So a bunch of relatively minor changes took us from broken-off tree branches to intercontinental ballistic missiles. It seems pretty obvious, then, that pretty much everything we’ve achieved, and everything we care about, depends crucially on some relatively minor changes that made the human mind.

The corollary is that any further changes that could significantly alter the substrate of thinking could have potentially enormous consequences. Some of my colleagues think we are on the verge of something that could cause a profound change in that substrate, and that is machine superintelligence.

Artificial intelligence is a rapidly growing field of technology with the potential to make huge improvements in human well-being. However, the development of machines with intelligence vastly superior to humans will pose special, perhaps even unique, risks.

The United States has identified artificial intelligence as one of the key technologies that will ensure it is able to fight and win the wars of the future. Potential international rivals in the field of AI are also creating pressure for the United States to compete for innovative military AI applications; China is a leading competitor in this regard.

In 2017, the Chinese government released a strategy detailing its plan to lead in AI by 2030. Less than two months later, Vladimir Putin publicly announced Russia’s intent to pursue AI technologies, stating that whoever becomes the leader in this field will rule the world.

Most AI researchers surveyed expect machines to eventually be able to rival human intelligence, though there is little consensus on how this will likely happen. Artificial intelligence used to be about putting commands in a box: human programmers would painstakingly handcraft knowledge items. Since then, a paradigm shift has taken place in the field of artificial intelligence.

Today, the action is really around machine learning. Rather than handcrafting knowledge representations and features, we create algorithms that learn, often from raw perceptual data, basically the same thing the human infant does. The result is AI that is not limited to one domain.

Now, of course, AI is still nowhere near having the same powerful cross-domain ability to learn and plan as a human being. The cortex still has some algorithmic tricks that we don’t yet know how to match in machines.

Whichever government or company succeeds in inventing the first artificial superintelligence will obtain a potentially world-dominating technology. By superintelligence, we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.

There is an illusion that the time horizon matters. If you feel that this is 50 or 100 years away, that is consoling, but there’s an implicit assumption: that we know how long it will take to build this safely, and that 50 or 100 years is enough time.

Sam Harris is a neuroscientist, philosopher, and New York Times best-selling author. He explains that one only needs to accept three basic assumptions to recognize the inevitability of superintelligent AI: intelligence is a product of information processing in physical systems; we will continue to improve our intelligent machines; and we do not stand on the peak of intelligence, or anywhere near it.

The problem with the race for AI is that it may result in poor-quality AI that does not consider humanity’s welfare. Sam Harris worries that the power of superintelligent AI will be abused.

Suppose governments and companies perceive themselves to be in an arms race against one another. In that case, they could develop strong incentives to create superintelligent AI first, or to attack whoever is on the brink of creating it.

The Chinese, Russians, Indians, Israelis, Koreans, Japanese, and Europeans are at least as motivated as Americans to develop advanced AI. Currently, China is primarily focused on using AI to make faster and more well-informed decisions and to develop a variety of autonomous military vehicles.

Russia is also active in military AI development, focusing on robotics. An arms race for AI could lead to authoritarianism: it can make it easier for political groups to shut down speech critical of them, and easier for businesses to assert their interests over those of workers, consumers, and the general public.

Philosopher Nick Bostrom has expressed concern about what values superintelligence should be designed to have. A biological neuron fires maybe 200 times a second (200 hertz), while even a present-day transistor operates at gigahertz speeds. Neurons propagate signals slowly along axons, at 100 meters per second at most, but in computers signals can travel at the speed of light.

There are also size limitations: a human brain has to fit inside a skull, but a computer can be the size of a warehouse or larger. So the potential for superintelligence lies dormant in matter, much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945.

In this century, scientists may learn to awaken the power of artificial intelligence, and I think we might then see an intelligence explosion. A superintelligent AI could proceed rapidly toward its programmed goals, with little or no distribution of power to others.

It may not take its designers into account at all. The logic of its goals may not be reconcilable with human needs. The AI might end up making humans its servants, rather than vice versa. If it were to succeed in this, it would rule without competition.

To avoid a dictatorship of one, there is a need to transition to more democratic forms of political decision-making. It cannot be assumed that AI will not work to strengthen the power of those who control it.

Elon Musk warned that the global race towards AI could result in a third world war. To avoid the worst mistake in history, it is necessary to understand the nature of an AI race and to steer clear of development paths that could lead to unfriendly artificial superintelligence. Researchers in the field who believe that ASIs can be friendly want to help create an environment that will make it possible for ASIs to be positive.

However, many experts have suggested limiting the power of ASIs. To determine whether that would be acceptable, it is important to understand what it means to limit a superintelligence: permanently keeping any AI system at a human level of cognition, or otherwise making it less capable.

In other words, the superintelligence stays below a certain level of capability permanently. Machine intelligence is the last invention that humanity will ever need to make: the machines will then be better at inventing than we are, and they’ll be doing so on digital time scales.

Now, a superintelligence with such technological maturity would be extremely powerful, and at least in some scenarios, it would be able to get what it wants. We would then have a future shaped by the preferences of this AI. Several technological capabilities could form the basic elements of a workable ASI.

The cognitive element of the AI would be much like our own brains. These elements would have not only the ability to learn and comprehend but also the memory to store all of that knowledge. The ability to think could include morality, ethics, rational thought, artistic creation, scientific experimentation, and especially the ability to work with logic.

Another important element is human-like emotional response: the ability to reason and to communicate in terms meaningful to human beings. Whoever succeeds in creating the first superintelligent entity needs to make sure that this new type of intelligence is democratized, understands people, and can communicate with them somehow.

Bostrom argues that we might make a mistake and give this new entity a goal that leads it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. The way to avoid this mistake is to create an open system and help it develop.

Artificial intelligence is undoubtedly the future. Creating a friendly ASI is essential, as it will have a profound effect on our future; we must work to ensure the friendly nature of artificial superintelligence.

World leaders should work to ensure that such an ASI benefits all of humanity. It is also important to test all logical goals before developing such an AI. The AI’s programming would have to be open to access and restricted to positive uses, such as curing diseases or producing resources.


Uddipan Barman

I am a medical student, but I am really interested in technology. I like to talk about new gadgets.