As I was walking through an airport one day, making my way from security to my gate so I could plop my tired self into a chair well before boarding, a particular ad on display caught my attention. It was one of those quilt-sized ads that are very hard to ignore, and this one held my interest not because it was funny or because it was something I needed. It simply had a picture of a human arm, albeit one obviously made of metal, gears, and other robotic things, and along the top of the ad in great, bold letters was the following message: “OUR STOCKS ARE CHOSEN BY PEOPLE, NOT ROBOTS.”

But Why?

My immediate thought in response was “Why?” I reasoned that it might have something to do with whether or not AI is truly ready to handle the nuances of portfolio management, but it struck me as exactly the sort of task a very capable AI (such as Watson) could handle: taking large piles of data on market patterns, product announcements, and public sentiment, and using them to construct performance estimates.
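To make that concrete, here is a toy sketch in Python of the kind of thing I mean. The signal names, ranges, and weights are entirely made up for illustration; this is not how any real firm builds its models, just a picture of how a few signals could be rolled up into a single rough estimate:

```python
# Purely illustrative sketch: blend a few hypothetical signals
# (recent price trend, product-announcement buzz, news sentiment)
# into a single rough performance estimate for a stock.
from dataclasses import dataclass


@dataclass
class StockSignals:
    price_trend: float        # e.g., 90-day trend, from -1.0 to 1.0
    announcement_buzz: float  # volume of recent product news, 0.0 to 1.0
    news_sentiment: float     # average sentiment score, from -1.0 to 1.0


def estimate_performance(signals: StockSignals) -> float:
    """Weighted blend of the signals into one score (weights are made up)."""
    return (
        0.5 * signals.price_trend
        + 0.2 * signals.announcement_buzz
        + 0.3 * signals.news_sentiment
    )


if __name__ == "__main__":
    example = StockSignals(price_trend=0.4, announcement_buzz=0.7, news_sentiment=0.1)
    print(f"Estimated performance score: {estimate_performance(example):.2f}")
```

A real system would be vastly more sophisticated, but the point stands: this is pattern-crunching, and pattern-crunching is exactly what AI does best.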

A human advisor can always take those conclusions and do whatever they wish with them, and the AI may even help them catch something they would otherwise have missed. But even this is an internal matter, a decision about how a financial advisor does their job, and not typically something their customers would need to worry about.

The question should not be “Why?” as in “Why would they still have people choosing their stocks?” It’s “Why?” as in “Why would they feel the need to advertise this point? What conditions exist in the investment industry, and in their target audience, to suggest that this would be a selling point?”

The fact of the matter is that we, as human beings, have a hard time trusting anything that is not human.

It is built into us on a genetic level; statistically, we are more likely to be afraid of creatures such as spiders and snakes because their bodily structures are so fundamentally different from ours. Something doesn’t even have to be drastically far removed from us visually to provoke unease; the “Uncanny Valley”, a phenomenon in which an object that comes close to resembling a real human without quite getting there triggers more revulsion than one that is clearly artificial, has plagued graphic artists for decades. We are programmed from the very building blocks of life to be skeptical.

Social convention dictates a number of reasons to be fearful as well.

Well-known and trusted figures such as Stephen Hawking and Elon Musk have publicly proclaimed that AI could, or will, be the cause of the fall of mankind. Films such as The Terminator and WarGames go out of their way to depict the catastrophe that would result from investment in AI and the “robot revolution”, not to mention the veritable flood of essays and novels produced over the course of a century and a half. (The first published warning of machine supremacy appeared in 1863!)

Countless blog posts and web articles talk of the risks, how they outweigh the rewards, and how relatively little we stand to gain. They speak of the implosion of the job market, the inherent flaws of bias, the dangers of that kind of power falling into the wrong hands, and the outright lack of humanity that could govern AI’s actions.

Despite all of these potential risks and issues, I remain bullish on AI, and I show no signs of stopping. Let me explain why, beginning by addressing the issues themselves:

The Robopocalypse: I’m actually extremely grateful that so many people have gone out of their way to point out this problem, because the very fact that they have done so has made us all appropriately paranoid about it. That mindset is prevalent among everyone working in the AI industry, and it will distill into the proper amount of precaution. And if we somehow miss a beat, perhaps that Church of Artificial Intelligence will at least put in a good word for us.

Job Market at Risk: While it would be foolish to claim that AI will not replace any jobs, we shouldn’t assume disaster either. Throughout history, workers have feared being replaced by new technology, yet every major shift has ultimately resulted in a net gain of jobs. Forbes puts it very well: “Just as our parents struggled to predict the emergence of fields like social media or blogging, so, too, are we incapable of comprehending the jobs AI will create.”

Creator and Operator Bias: Yes, AI picking up bias from the way it was built and configured is absolutely a thing, and it’s not something that will ever fully go away. However, researchers are actively developing ways to prevent it, and as our use of the technology matures, problematic bias will come to be seen as a “rookie mistake”. In the same way that a Salesforce admin regards a newly created custom object that was never granted access on any existing profiles, a biased AI will still be something that can be created, but only by the lazy or the freshly naïve. Anyone who commissions an AI will get what they pay for, and the likelihood is that “AI firms” will come into being: companies that focus on AI as their product and are therefore much more likely to care about craft and customer service. On a similar note, we will reach a point where AI without any kind of restraint, common sense, social conditioning, or other aspects of humanity will be seen as “dumb AI”, and therefore an inferior product.

Too Much Power: To my mind, this is the biggest risk inherent in AI: that malicious actors will create AI with intentionally malignant bias. (Imagine what Cambridge Analytica could have done with a powerful AI behind it.) It is also one of the least publicized risks: our culture produces innumerable stories about AI becoming dangerously self-aware, and a comparatively minuscule number of stories about bad people building AI to do bad things. We must become more aware of this risk and push toward proactive measures. If we do not, it will only take one major incident to create a public demand for overreaching legislation. Some legislation is inevitable, and perhaps even necessary, but overly restrictive law could prevent us from realizing the technology’s full potential. Thankfully, a few vendors have already begun work on counter-AI designed to detect and work against AI-based adversaries.

So how do we prevent this and any of the other potential negative side effects listed above?

It is not through prohibition, whether by rule of law or by willful ignorance. We must remember that AI is a tool, and one that has already been drawn from Pandora’s Toolbox. There’s no putting it back.

Nor should we. The potential gains from AI are immense, and potentially even transhumanist in nature. It is capable of doing things we could never accomplish on our own, such as ultra-precise surgery or operating in environments that are prohibitive to humans. It can identify things we as humans either neglect or forget. It can help people with disabilities and conditions such as ADHD, blindness, and autism function more easily and improve their quality of life by providing the guidance and interaction they need. Rather than forgoing these advances, which range from the convenient to the life-saving, we must recognize that there is a metaphorical baby in this bathwater.

The only way we can effectively prevent AI’s pitfalls and its abuse is to learn more about it. Even if it were somehow banned, it would continue to be developed in secret by those who would use it for illicit ends. We must embrace it, not doom-say it into the shadows. Current AI systems such as IBM’s Watson, Salesforce’s Einstein, and Google’s DeepMind are like Homo erectus: they are the predecessors upon which future advances will be built, and they must be allowed to thrive before they can reach their true potential. By supporting these tools, rather than disregarding or condemning them outright, we give modern governments and corporations the incentive to develop the cure, even as others work to create the disease (intentionally or otherwise).

The next time Salesforce unveils a new Einstein product or Google publicizes a new DeepMind ability, it behooves all of us to remain cautious but optimistic. Scholar Andy Salerno said it best when DeepMind’s AlphaGo program defeated Go World Champion Lee Sedol in 2016: “Lee should feel no shame in his losses. For AlphaGo could never demonstrate its abilities—our abilities—if Lee were not there to challenge it.”

Mike Walzl is our Chief Operating Officer at Clarus Group. Mike has over twenty years of experience in the staffing and consulting industry, beginning his career as a recruiter. His areas of focus throughout his career have been technology, accounting, and finance. As COO, Mike is responsible for corporate strategy, development, and the overall growth of Clarus Group.
