Science-fiction books and movies have long proposed the idea of a world influenced or dominated by “thinking” machines. Today, artificial intelligence, or AI, can be found in many of the technologies that you use in your day-to-day life. Whether it’s in creating UX-enhancing chatbots or preventing the next cybersecurity data breach, it’s clear that this cutting-edge technology is poised to play an increasingly important role in society. But for many, the question remains: Will this role be one that replaces humanity or helps make it better?
While robots and thinking computers were once considered far-fetched technologies from some distant future, recent news stories about AI-backed applications have brought the topic out of the realm of theory and into reality. AI systems have recently defeated top human players at chess, Jeopardy! and poker.
These victories have prompted some, including tech mogul Elon Musk and renowned theoretical physicist Stephen Hawking, to sound the alarm bells for the future of humanity. Likewise, some people still associate artificial intelligence with doomsday scenarios featuring Terminators and cyborgs. However, popular applications like Siri, IBM’s Watson and Alexa are exposing more people to the softer side of this revolutionary technology, which is a key step in dispelling many of the negative perceptions that persist about AI.
Likewise, if you think that AI is just another fad like the dot-com craze or the much-hyped Segway, you are in for a bit of a shock. Gone are the days of AI being relegated to the confines of research labs or high-tech conferences. Machine learning, the process that enables machines to “learn,” is already widely used in the most popular e-commerce and video streaming sites to recommend products and movies you might enjoy. Voice-recognition software, like that found in most smartphones, also relies on this technology in the form of natural language processing, a branch of artificial intelligence.
Machine learning, the branch of artificial intelligence behind most of these applications, is the process of continuously training and testing a computer program on data (preferably in large quantities) to improve its ability to predict an outcome. The more you train and test the program, the “smarter” or more “intelligent” it becomes. When the training data includes known outcomes, or labels, and the program learns to predict them, this is called supervised learning. When the data is unlabeled and the program must discover structure on its own, for example, grouping similar items together and working out that everything with four sides belongs in the same category, the technique is referred to as unsupervised learning.
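To make the supervised case concrete, here is a minimal sketch in Python of the train-then-predict loop described above: a nearest-centroid classifier that learns from labeled examples and then predicts the label of new inputs. The data, labels and feature choices are invented purely for illustration.

```python
# A minimal sketch of supervised learning: a nearest-centroid classifier.
# All data below is hypothetical, chosen only to illustrate the idea.

def train(examples):
    """Training step: compute the mean (centroid) of each labeled class."""
    sums, counts = {}, {}
    for features, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    """Prediction step: assign the label whose centroid is closest."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], features))

# Labeled training data: (height_cm, weight_kg) -> species
training = [
    ((30, 4), "cat"), ((32, 5), "cat"), ((28, 3), "cat"),
    ((60, 25), "dog"), ((65, 30), "dog"), ((55, 22), "dog"),
]
model = train(training)
print(predict(model, (31, 4)))   # a small animal  -> "cat"
print(predict(model, (58, 24)))  # a larger animal -> "dog"
```

The “more data makes it smarter” point from the paragraph above shows up here directly: every additional labeled example nudges the class centroids closer to the true averages, so predictions on new inputs improve.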
Research into artificial intelligence first began in the 1950s. It didn’t take long for debates about the future of a master machine race competing with humans to surface. Over the past three decades, AI research has shown that machines can indeed surpass humans at specific, well-defined intellectual tasks. However, this narrow superiority does not seem to come with sentience, which speaks to the heart of the “man vs. machine” debate. Without sentience or autonomy, even the most learned of machines can’t compete with humans on a broad scale or engage in activities or “thoughts” beyond what they have been programmed to do.
However, AI does excel at automated tasks like those that involve calculations or fairly predictable interactions and outcomes. While jobs in the accounting, customer service and manufacturing sectors may become fully automated in the not-so-distant future, most positions are unlikely to be replaced by artificial intelligence. It’s much more probable that certain tasks within a position will become automated, making workers more efficient and particular jobs easier.
A study by the McKinsey Global Institute found that fewer than 5 percent of occupations could be fully automated with today’s AI technology, but that in roughly 60 percent of occupations, close to 30 percent of tasks could be automated or augmented by it. So while the fourth industrial revolution, or Industry 4.0, is indeed likely to happen, there’s a good chance that it will do far more good than harm. Similarly, even if widespread job disruption does occur, this new revolution, like those before it, will probably create more jobs than it takes.
The benefits of AI-driven technology go beyond making life more convenient for tech-savvy consumers. It also has the ability to improve the quality of life for millions of humans around the globe. With the world’s population growing rapidly, the farming sector faces the daunting task of producing enough food to support it. Real-time data analytics and machine learning technology can help farmers meet this increase in demand by helping them make the most of each acre, maximizing crop yields and profits.
Cancer research has made significant strides in the last couple of decades, with survival rates for many of the most common types of cancers at an all-time high. Artificial intelligence can further add to this success by improving the accuracy of diagnosis. By analyzing a large amount of patient data, AI can learn to better predict which inputs or symptoms correspond with a specific outcome or diagnosis. Best of all, the more data the machine is fed, the better its predictive capabilities become. As a result, many hospitals and healthcare providers are already using artificial intelligence to aid in their patient care.
No matter which camp you belong to in the AI debate, there’s little doubt that this amazing technology will change our world in some significant ways. Whether it’s self-driving vehicles, improved customer service via chatbots or the optimization of food resources, AI is a part of the present as well as the future.