The Future With Artificial Intelligence

in LeoFinance · 3 months ago

Artificial Intelligence is something we hear a lot about. A great deal of this centers around the doom and gloom that Hollywood likes to put out. It focuses on the man-versus-machine scenario, something that is unlikely to happen.

That is not to say that AI is not a threat. It is. However, things will probably unfold in a different manner. Instead of a battle against the machines, humans will do what they do best: fight each other.


We can already conclude that our position as the dominant species on the planet will be threatened. If AI comes into being with the capabilities that many describe, it will be able to upgrade itself millions of times faster than humans can. Thus, we simply will not be able to keep up.

Are we prepared for that? It appears we are only left with a few choices in this matter.

They are:

  • cease all research and development regarding AI systems so that we remain unthreatened and at the top of the food chain.

  • accept the fate that we will lose the top spot while ceding it to the synthetic systems we help to design.

  • merge with these systems so that we are a part of any upgrade.

The first one can be crossed off the list because it is not going to happen. Countries, companies, even individuals are not going to stop writing algorithms and experimenting with different code. Therefore, we simply have to presume that, at some point, these systems will advance to the point of singularity.

As a side note, the overwhelming majority of AI experts believe that we will see this happen. What is debated is the when, not the if.

Looking at the list, we are left with two choices. Here is where we can surmise that the divide among humans will take place. In fact, we are already seeing it to a minor degree.

There are people who are already playing around with different ways to further merge with our technology. They believe it is a natural evolution of humanity, something we have always done. The latest iteration of this was the smartphone: billions of people now have access to the entire base of human information via a handheld device. Why stop here and not take the next step?

Of course, there are many others who do not share this view. Instead, they believe it is a pathway to losing our humanity. Many feel we are already too dependent upon our technology and that adding more is simply a form of insanity.

Whichever side people are on in this discussion, it is going to come to a head at some point. Presently, it is not much of an issue since our AI systems are still rather stupid. They lack any real intelligence, or at least what we consider intelligence. "Smart" systems are really just computational power, memory, and data. They are very narrow in focus.

For example, the "smart" thermostat is capable of reading the temperature in the house while making adjustments. It can do this instantly, on the fly, throughout the day. However, it cannot tell you what a horse is and it will never be able to.
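The narrowness of that kind of "smart" device can be illustrated with a toy sketch. This is not any real product's firmware, just a hypothetical rule-based controller: it handles its one tightly defined domain instantly and endlessly, and has nothing to say about anything else.

```python
# Minimal sketch of narrow, rule-based "smart" thermostat logic
# (hypothetical, for illustration only -- not a real device's code).
def thermostat_action(current_temp: float, target_temp: float,
                      tolerance: float = 0.5) -> str:
    """Decide what the heating/cooling system should do right now."""
    if current_temp < target_temp - tolerance:
        return "heat"      # too cold: turn the furnace on
    if current_temp > target_temp + tolerance:
        return "cool"      # too warm: turn the air conditioning on
    return "idle"          # within tolerance: do nothing

# It can make these adjustments instantly, all day long...
print(thermostat_action(18.0, 21.0))  # heat
print(thermostat_action(23.5, 21.0))  # cool
print(thermostat_action(21.2, 21.0))  # idle
# ...but ask it what a horse is and there is simply no code path for that.
```

The entire "intelligence" is three comparisons; everything outside that domain is undefined, which is the point being made above.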


This might not remain the case as the gap between humans and AI closes. There is no doubt AI systems are advancing. How soon they will approach human-level capability is still anyone's guess. Nevertheless, each year's progress pulls that day closer.

How will humans react? Are we going to see massive fear-mongering spread by the media? Will politicians and leaders of companies that develop and support AI be assassinated? Are we going to see "Luddite" moments where people set out to destroy the machines that are already in place, as some residents in Arizona did against the Waymo self-driving taxis?

The AI arms "race" is on. That is why the first option on the list is not feasible. China and the United States, as well as most other developed countries, are putting a lot of money and resources into AI research. Unfortunately, much of it comes through the militaries, seeking more efficient ways to kill each other.

What this means is that these systems will keep getting more powerful. At this point, we have to conclude it is pretty much a given.

So, we each have a choice as to what side of the debate we end up on. Of course, this is really something that everyone is a part of already. The problem is few see how interconnected this all is.

AI systems work off data. Hence, what is fed into them is what they train on. Each day, humanity generates an ever-increasing amount of data. Much of it is harmless, such as geographic location or web searches. However, it is vital to know that everything we post is part of the material used to train AI. Thus, are we feeding it information that we do not want it to mirror?
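The "mirror" effect can be seen even in a deliberately trivial model. The sketch below (purely illustrative; real training pipelines are vastly more complex) "learns" by counting word frequencies in a made-up corpus of posts: whatever tone dominates the input dominates the output.

```python
# Toy illustration of "garbage in, garbage out": a frequency model
# mirrors whatever its training text contains. The corpus below is
# invented for the example.
from collections import Counter

def train(corpus: list[str]) -> Counter:
    """Build a word-frequency 'model' from a list of posts."""
    words = []
    for post in corpus:
        words.extend(post.lower().split())
    return Counter(words)

posts = ["I hate this", "hate hate hate", "this is fine"]
model = train(posts)
print(model.most_common(1))  # [('hate', 4)]
```

Feed it hostility and its most prominent "learned" word is hostile; the dependence on training data is the same in principle for far more capable systems.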


Humanity's appetite for hatred, anger, and destroying others could be establishing the training ground for future AI systems. They might not care about us and simply leave us alone, or there is the possibility they will learn our behavior completely. If the latter is the case, then humanity is likely in some hot water.

In the meantime, the divide will keep growing. The number of those who want to merge with our synthetic systems will grow. There will remain, however, a sizeable number who resist this. A problem will arise when either of these sides (or both) turns violent.

If human history is any indication, it is almost guaranteed that it will come to this.

Of course, there is an upside to this. We have already seen how productivity can be increased by orders of magnitude. Presently, the most valuable companies in the world all deal with technology. As the capabilities grow, the wealth generated is going to far exceed anything we have seen before.

Automation is a bad thing from the jobs perspective, yet that is about the only downside. Our ability to handle tasks while also creating more output is a direct result of technological advancement. We are likely to see great benefits, from a human standpoint, as fields such as medicine and construction improve.

Mundane and dangerous jobs are being taken over by AI and robotic systems. In the near-term, this is a negative especially if those replaced lack the skillset to switch to other jobs. However, when factoring in things such as safety, stress, and a lack of fulfillment, it could be better to get humans out of these positions.

Of course, that always comes back to the discussion of income and wealth inequality, topics outside the scope of this article.

However humanity decides to deal with this problem, there is little doubt that AI systems are going to keep getting more powerful and "smarter". This means we all have a choice to make about which side of the debate we fall on.

What are your thoughts? Where do you fall on this issue? Are you going to fight the development of this or opt to merge with it?

Let us know in the comment section.

If you found this article informative, please give an upvote and rehive.

gif by @doze


logo by @st8z

Posted Using LeoFinance Beta


Unfortunately, much of it comes through the militaries, seeking more efficient ways to kill each other.

Some people say killing is a crime. Some people say killing is a sport. But it will probably be an ironic art, a self-fulfilling prophecy: people create machines to kill people; machines will kill people. The future shown in the Terminator movies (or something similar) will become real if the military continues doing what it does now.

While not an issue at the moment, it can QUICKLY become one, since all it takes is a single spark or "mistake" that turns on the light, and all of a sudden AI systems start advancing like crazy.

This has happened throughout our history. While it might not seem as significant as AI, it actually is.

Take, for instance, the printing press, the cotton gin, electricity, and other innovations and inventions. All of these things had drastic implications in their times.

Being that all it takes is a single spark or "mistake" that turns on the light and all of a sudden AI systems start advancing like crazy.

Like technology itself in the past: a technological revolution.
History repeats itself. This time it will be the AI revolution.

If I get time I’m hoping to write a post on this sometime in the next week to make some crude comparisons to how Hollywood portrays this vs the reality on the ground, especially how it relates to “The War” between humans and machines, in that there won’t be one.

We’ll keep adding layers of functionality to give a little more control, because who wants to track steps when a device can do it for you? Or upload that data and examine it to optimize our health when an algorithm can do it instantly and make recommendations?

No one will “wake up” from the matrix ready to fight for the salvation of humanity. Why would we? By then our trust in those systems will have been proven. If the code is less than functional it’ll have been patched long before it’s problematic on a massive scale.

Before we know it we’ll just roll with whatever because we’ve come to trust it’s in our best interest to do so, likely because it is.

I don't know.

The more intelligent you make these systems and androids, the more they can potentially formulate their own thoughts and actions. Is it that much of a stretch really? The advances in biotechnology, nanotechnology, AI and automation are astounding.

In my opinion, there should be worldwide pacts and agreements on what can and cannot be programmed into these AI systems.

As a species we have enough to worry about with global warming and other threats. The last thing we need is another 100% avoidable potential catastrophe.

I worry that we have become too advanced and we may well annihilate ourselves within 100 years unless we put clear safeguards and rules in place at an international level.

I’d posit you worry because you think we have control over the evolution of life.
Ideally, I agree with you, but that's too clean-cut. Life is messy. It's only getting messier. Embrace the chaos and ride it out. 😊

Those Hollywood cinematic creations where wars between humans and machines drive the action of the stories - I believe that in the distant future similar events could be possible in reality. Just think of the two world wars (when technology was not as evolved as it is today), in which each nation tried to impose itself by destroying the lives of many innocent people. Hollywood does nothing but offer us, from the imagination of some creators, possible alternatives for the future of humanity - we can perceive those science-fiction creations as small simulations of the future. Globally, there are enough people who think of doing bad things around them, causing colossal damage to a society or a nation using advanced technologies.

Agreed, but I was referring to the overall theme that all of humanity is united against the machines. I appreciate the "small simulations" perspective, especially considering some of them could play out in reality, albeit on a small scale, comparatively speaking. We see it now. Holdouts happen at every level. The more tech outpaces those who don't want to keep up, the more aggressively they'll try to hold the rest of us back. The extent to which they can do that is still generally limited, even considering the rapid advancement in some (scary in the wrong hands) medical/biotech that's coming sooner than we think, not to mention what's already here.

We’re learning pretty big lessons along the way. Just like the boomers were pretty much traumatized by the Cold War and threat of nuclear annihilation, younger generations are seeing what happens when an entire society shifts to a short term outlook. I hope both will serve us well going forward.


I love your response. Given our current state of technology, there is no argument against your logic. If you could, please consider my perspective on the matter.

The Hollywood position on AI is certainly entertaining and fatalistic, but the concept of runaway technologies is possible with real consequences.


Take, for instance, the portrayal of nuclear power in the late 1970s. A movie called The China Syndrome was released, starring Jane Fonda, Jack Lemmon, and Michael Douglas. It was a fanciful drama about nuclear power and the dangers it posed. The nuclear industry went on its own publicity tour, touting its safety and how nothing like what the movie proposed could ever happen.

Two weeks after Hollywood released The China Syndrome, the Three Mile Island plant in Pennsylvania suffered a partial meltdown of its Unit 2 reactor core. After studying this event in detail through my training, I can tell you that we were lucky at the time. The events at Three Mile Island jarred the industry into implementing the safety standards we follow and improve today.


Let's take a look at a different example. The nuclear arms race between the U.S. and the Soviet Union was a nightmare I'm glad to have only read about. There have been approximately 2,000 nuclear explosions on this planet since 1945, nearly all of them tests...tests. We've had more than one instance where we could have experienced Armageddon were it not for a single solitary decision.

Long Story Short (Too Late, Sorry)

Hollywood jests for a profit. The reality of runaway technology getting out of our hands is certainly possible, but as you mentioned with regard to AI, it won't happen in our lifetimes. I hope that when technologies become as advanced as Hollywood portrays, we won't have idiots running our global governments.


I would probably opt to go with AI. To me, it seems it could free up more time, so people could spend it with family or on something else they enjoy. We as humans are the ones who put the data and knowledge into AI, so wouldn't it depend on what you have it do? Maybe it could be programmed to go for a walk with you, so you would not be alone. I really don't know; I'm just putting things out there.

I don't care who you are or what you want: you cause harm to a machine that is increasing my quality of living, and that's an act of war. Any reasonable person would have found a non-destructive way to deal with their problems. I'll shoot to kill once respect isn't the top priority anymore, especially on my property.



The AI arms "race" is on. That is why the first option on the list is not feasible. China and the United States, as well as most other developed countries, are putting a lot of money and resources into AI research. Unfortunately, much of it comes through the militaries, seeking more efficient ways to kill each other.

This is true. Mostly, they do it for their own advantage, and it doesn't matter if it compromises the lives of other people, mostly civilians.


You chose an interesting and, at the same time, controversial topic. I believe that in reality there really is an advanced form of artificial intelligence. For example, Bitcoin's blockchain technology - I still tend to believe that some form of artificial intelligence played an important role in the development of blockchain technology. In general, technology is used to help humanity develop a society and, of course, to destroy whatever does not suit someone. It is human nature to create and offer good and bad things. This will also happen with artificial intelligence, or maybe not... it depends - in the end, it is created and developed by human beings.

I kind of mentioned it in @leomarkettalk, but yes, I see us merging with machines. I see no reason why people won't choose to become better by using AI to our advantage.

As for stopping AI research? I don't think it will happen because whoever doesn't use it will be at a disadvantage.

I think it may also be possible for people to just take a back seat. At least with the current landscape of people, I can definitely see it, since there are too many who don't even verify what they hear and just follow whatever the news tells them.


When we deconstruct the human mindset, it comes down to fear and loss of control. Many are fearful of the unknown or of change, while others don't want to lose their power in society. AI will have huge ramifications on the world, just like blockchain. However, we must approach this coming advancement with neutrality. I can already see the "bots are taking my job" protest nonsense, or politicians passing draconian laws regulating machine learning.


I think many people will become unemployed in the future.

I believe that for some mundane, boring jobs, AI will be a saviour. Too many people are sad and bring misery to others because they remain stuck in such a job. Robots could do those activities while humans use their minds to figure out what they really want to do. The question is: will people want to use their minds, or will they get lazy and simply merge with AI? It is a possibility.
Faced with novelty and job loss, many people will go Luddite-style, until mass understanding becomes possible. But there is one statement in your post that will go beyond time and beyond AI:
humans will do what they do best: fight each other.

Nobody and no technology will be able to force humans to change from within. The key is in the human mind, and it is easy to believe otherwise.

I'll either retire and live in the woods at some point or become a cyborg if that prolongs my life span. But the woods option is more to my liking :)


I disagree with your conclusions, but upvoting because it’s an important discussion to have.

Hopefully I’ll have time to elucidate the what, why, and how of my specific agreements and disagreements with your assessments.



I can see companies having their products boycotted by the masses at some point, once it starts becoming obvious and affecting millions. There will be advantages and disadvantages, as robots are a disruptive technology depending on what you do for a living. I welcome them in many ways, as a majority of the people I have worked with in the past have been useless and generally lazy. The working population has brought this on itself; if workers were reliable and good, this wouldn't be happening with so much urgency.


I think the future with artificial intelligence is kind of a mixture of happiness (for the ease) and concern (for the bad intentions).

There won't be any "fight" as the movies depict. AI, if it's trying to enslave or kill us, simply won't engage in such a transparent and complicated scenario. It would be much better for it to silently take over infrastructure and media platforms, doing the absolute minimum to progressively infect our minds, or to let us become more and more dependent on its systems (remember, it's an artificial thing without biological limits like lifespans, so it's got plenty of time).

When the time comes, it can do whatever it pleases.

And since it's intelligent, with more resources and time than any of us, it can easily find the best path toward its intentions.

