Artificial Intelligence is something we hear a lot about. A great deal of this centers around the doom and gloom that Hollywood likes to put out. It focuses upon the man versus machine scenario, something that is unlikely to happen.
That is not to say that AI is not a threat. It is. However, we will probably see things unfold in a different manner. Instead of a battle against the machines, humans will do what they do best: fight each other.
We can already conclude that our position as dominant species on the planet will be threatened. If AI comes into being with the capabilities that many describe, it will be able to upgrade itself millions of times faster than humans. Thus, we simply will not be able to keep up.
Are we prepared for that? It appears we are only left with a few choices in this matter.
1. Cease all research and development of AI systems so that we remain unthreatened and at the top of the food chain.
2. Accept that we will lose the top spot, ceding it to the synthetic systems we helped to design.
3. Merge with these systems so that we are a part of any upgrade.
The first one can be crossed off the list because it is not going to happen. Countries, companies, even individuals are not going to stop writing algorithms and experimenting with different code. Therefore, we simply have to presume that, at some point, these systems will advance to the point of singularity.
As a side note, the overwhelming majority of AI experts believe that we will see this happen. What is debated is the when, not the if.
Looking at the list, we are left with two choices. Here is where we can surmise that the divide among humans will take place. In fact, we are already seeing it to a minor degree.
There are people who are already experimenting with different ways to further merge with our technology. They believe it is a natural evolution of humanity, something we have always done. The latest iteration of this is the smartphone: billions of people now have access to the entire information base via a handheld device. Why stop there and not take the next step?
Of course, many others do not share this view. Instead, they believe it is a pathway to losing our humanity. Many feel we are already too dependent upon our technology and that adding more is simply a form of insanity.
Whichever side people are on in this discussion, it is going to come to a head at some point. Presently, it is not much of an issue since our AI systems are still rather stupid. They lack any real intelligence, or at least what we consider to be intelligence. "Smart" systems are really just computational power, memory, and data. They are very narrow in focus.
For example, the "smart" thermostat is capable of reading the temperature in the house while making adjustments. It can do this instantly, on the fly, throughout the day. However, it cannot tell you what a horse is and it will never be able to.
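To make the "narrow" point concrete, here is a minimal sketch (purely illustrative, not any real thermostat's firmware) of what a smart thermostat's core logic amounts to: a feedback rule over a few numbers, with no understanding of anything outside that loop.

```python
def thermostat_action(current_temp: float, target_temp: float, tolerance: float = 0.5) -> str:
    """Return 'heat', 'cool', or 'idle' based on a simple threshold rule."""
    if current_temp < target_temp - tolerance:
        return "heat"
    if current_temp > target_temp + tolerance:
        return "cool"
    return "idle"

# It can adjust instantly, on the fly, all day long...
print(thermostat_action(18.0, 21.0))  # heat
print(thermostat_action(23.5, 21.0))  # cool
print(thermostat_action(21.2, 21.0))  # idle
# ...but ask it anything outside this loop ("what is a horse?") and it has nothing.
```

However sophisticated the sensors around it, the decision itself is just this kind of comparison. That is the gap between today's "smart" devices and general intelligence.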
This might not remain the case as the gap between humans and AI closes. There is no doubt AI systems are advancing. How soon they will approach human-level capability is still anyone's guess. Nevertheless, each year the progress pulls that point closer.
How will humans react? Are we going to see massive fear-mongering spread by the media? Will politicians and leaders of companies that develop and support AI be assassinated? Are we going to see "Luddite" moments where people set out to destroy the machines that are already in place, like some residents did in Arizona against the Waymo self-driving taxis?
The AI arms "race" is on. That is why the first option on the list is not feasible. China and the United States, as well as most other developed countries, are putting a lot of money and resources into AI research. Unfortunately, much of it comes through the militaries, seeking more efficient ways to kill each other.
What this means is that these systems will keep getting more powerful. At this point, we have to conclude that is pretty much a given.
So, we each have a choice as to what side of the debate we end up on. Of course, this is really something that everyone is a part of already. The problem is few see how interconnected this all is.
AI systems run off data. What is fed into them is what they train on. Each day, humanity generates an ever-increasing amount of data. Much of it is harmless, such as geographic locations or web searches. However, it is vital to know that everything we post is part of the material that trains AI. Thus, are we feeding it information that we do not want it to mirror?
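The mirroring idea can be shown with a toy model (a deliberately tiny bigram generator, nothing like a real AI system): whatever it produces can only be recombinations of what it was fed.

```python
from collections import defaultdict
import random

def train_bigrams(text: str) -> dict:
    """Record, for each word, the words that followed it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model: dict, start: str, length: int = 5, seed: int = 0) -> str:
    """Walk the bigram table from a starting word, picking followers at random."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

model = train_bigrams("we help each other and we build together")
print(generate(model, "we"))
# Every word the model can ever emit came from its training text.
```

Scale that up by many orders of magnitude and the lesson holds: feed a system our hostility and it has hostility to draw on.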
Humanity's appetite for hatred, anger, and destroying others could be establishing the training ground for future AI systems. They might not care about us and simply leave us alone, or they might learn our behavior completely. If the latter is the case, then humanity is likely in some hot water.
In the meantime, the divide will keep growing. The number of those who want to merge with our synthetic systems will grow. There will remain, however, a sizeable number who resist this. A problem will arise when either of these sides (or both) turns violent.
If human history is any indication, it is almost guaranteed that it will come to this.
Of course, there is an upside to this. We have already seen how productivity can be increased by orders of magnitude. Presently, the most valuable companies in the world all deal with technology. As the capabilities grow, the wealth generated is going to far exceed anything we have seen before.
Automation is a bad thing from the jobs perspective yet that is about the only downside. Our ability to handle tasks while also creating more output is a direct result of technological advancement. We are likely to see great benefits, from a human standpoint, as fields such as medicine and construction improve.
Mundane and dangerous jobs are being taken over by AI and robotic systems. In the near-term, this is a negative especially if those replaced lack the skillset to switch to other jobs. However, when factoring in things such as safety, stress, and a lack of fulfillment, it could be better to get humans out of these positions.
Of course, that always comes back to the discussion of income and wealth inequality, topics outside the scope of this article.
However humanity decides to deal with this problem, there is little doubt that AI systems are going to keep getting more powerful and "smarter". This means we will each have to decide which side of the debate we fall on.
What are your thoughts? Where do you fall on this issue? Are you going to fight the development of this or opt to merge with it?
Let us know in the comment section.
If you found this article informative, please give an upvote and rehive.