By now, society should have learned that machines tend to improve our quality of life. Over the past hundred-odd years, each new breakthrough technology, from electricity to the automobile to the internet, has arrived to trepidation and ultimately won acceptance. Despite that history, our society shows a peculiarly strong resistance to the latest technology radically transforming our lives: artificial intelligence (AI) and machine learning (ML).
The AI/ML Revolution
Perhaps that is not so unreasonable, as the AI/ML revolution differs from earlier ones in a fundamental way: it involves giving up, rather than increasing, our control over machines. The internet may be a disruptive force, even an anarchic one, but at the end of the day it remains under human control. A web page will always do what a human web designer or programmer coded it to do, bugs and glitches notwithstanding. ML models are a different thing altogether. Yes, we humans set the parameters within which they run, but ultimately they make decisions according to logic our minds can’t always interpret.
Even the way we interact with computers has changed. In the last 15 years alone, we have gone from controlling them with a keyboard and mouse to swiping screens with our fingertips. And thanks to advances in natural language processing, we can now use our voices to talk to computers as we would to another person. Not only can computers do more complex work than they could in 1998 or 2008, but we also interact with them in a way that puts them on a more or less equal footing with us. No wonder many find the AI/ML revolution a bit unnerving.
Then there is the lightning-fast pace of change. The field of AI/ML evolves so quickly that 2018’s models are significantly faster and more accurate than 2017’s. Innovations keep accelerating that pace. For example, the past year or so has seen wider application of transfer learning, a time-saving technique that lets data scientists adapt an existing pre-trained model to a new task. Transfer learning lets teams with fewer resources build on the work of major research institutions and big tech companies, vastly decreasing the time and resources required to build a highly sophisticated, accurate model.
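To make the idea concrete, here is a minimal sketch of the transfer-learning pattern, not any particular company's implementation: a frozen "pre-trained" feature extractor (simulated here with random placeholder weights, since the real weights would come from a large published model) paired with a small task-specific head, which is the only part we actually train.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained feature extractor. In real transfer learning
# these weights would be downloaded from a model trained at great expense
# by a research lab; here random placeholders illustrate the mechanics.
W_frozen = rng.normal(size=(20, 32))

def extract_features(x):
    # Frozen layer: W_frozen is never updated during fine-tuning.
    return np.tanh(x @ W_frozen)

# Small task-specific dataset (the "new task" we adapt the model to).
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only a lightweight logistic-regression head on top of the frozen
# features -- the cheap part that transfer learning leaves to us.
F = extract_features(X)
w = np.zeros(32)
b = 0.0
lr = 0.5
for _ in range(300):
    z = np.clip(F @ w + b, -30, 30)   # clip to avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))
    w -= lr * (F.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

accuracy = np.mean((p > 0.5) == (y == 1.0))
```

Because only the small head is optimized, the fine-tuning loop runs in a fraction of the time that training the full model from scratch would take, which is the economic point of the technique.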
Changing Hearts and Minds
It’s an understandable impulse to cling to the familiar by keeping humans in total control of computing processes. However, it would be incredibly wrongheaded. Society stands to gain immensely from the automation of rote work and the greater accuracy of ML-driven insight. So how can we move forward when the ultimate hurdle for technology is not the pace of possible advancement, but rather the mindset of its masters?
The answer may be that those of us who wish to advance AI/ML solutions must take care to address stakeholders’ concerns while also emphasizing the technology’s potential benefits. It is vital to offer a positive vision for the future, not just assurances against harm.
The healthcare industry is dealing with this challenge as it gradually adopts ML-driven tools that supplement human judgment. Researchers at Google recently trained deep learning algorithms to gauge a patient’s cardiovascular health from retinal images. In the future, doctors may not have to rely on blood tests to measure a patient’s cholesterol levels or their risk of heart disease.
In our experience, medical personnel often have credible concerns that hospital management should take care to address when implementing an AI/ML solution. Will the technology add a step to an already hectic workflow? If an ML model makes a mistake that affects a patient’s well-being, could a nurse’s license be on the line? Healthcare providers may be more willing to try AI/ML solutions once these issues are discussed openly, along with the time and labor savings automation can deliver.
The stakes are different, but no less critical, for CEOs tasked with managing financial, legal, and reputational risk for an entire organization. Here, transparency becomes a key concern, in two directions. Current regulatory frameworks depend on transparency of outcome: firms need to be able to explain to regulators why a certain decision was made or how a certain result came about. Many AI/ML solutions, however, are “black boxes” that can offer no such rationale. This can make conversations with regulators uncomfortable, an understandable concern for any business leader, but especially for those in the pharmaceutical, finance, and healthcare industries. For an AI/ML evangelist in these sectors, crafting a plan that addresses how a solution will interact with regulation will be key; so will painting a picture of the higher profit margins and greater efficiency that AI/ML integration makes possible.
Transparency Is the First Step
Greater transparency of outcomes with AI/ML may become possible with improvements in the transparency of data. Today, firms keep a tight grip on proprietary data, as well as on sensitive medical, insurance, and financial information. That can make it difficult to see what data has been fed into a given AI/ML system, which exacerbates the “black box” problem. It’s a bit of a Catch-22: firms want to know how a given model works before they feed it their precious data, but until firms loosen their grip on that data, it will be hard to understand how any model works. As AI/ML becomes more trusted, a second step will be for regulators, business leaders, and other stakeholders to open up access to protected data and increase transparency.
Technology makes things faster, easier, and more cost-effective. As a result, machines are likely to make more decisions for us, and in some cases to outpace us in how quickly they can grow and develop. Companies, despite the profit opportunities, may not fully adapt to this reality until society does as well. To unlock the full potential of machine learning as quickly as possible, we will need a broader shift in how we think about the technology’s place in human life as it progresses. Transparency about how an algorithm predicts an outcome and what data it uses is a first step toward that change.