Does anyone remember MapQuest? MapQuest was a platform that provided printable online directions. The directions might get you where you needed to go, but there was always a chance you would end up somewhere completely wrong. Today, when I need directions, I just ask my phone, which is connected to my navigation system, for the fastest route, and it's done. I have the route, how long it will take me, and whether I will need to pay a toll.
This convenience, along with countless others, is due to artificial intelligence. It can turn your thermostat on and vacuum your floors all while you’re not even home. And, given time, it will physically drive you anywhere. While self-driving cars may still be in the works, the artificial intelligence conversation is happening now. If you’re like me and don’t know how these technologies work, here are 10 Artificial Intelligence (AI) terms you need to know.
10 Artificial Intelligence Terms You Need to Know
1. Artificial Intelligence
Let’s start at the beginning. First coined by John McCarthy at the Dartmouth Summer Research Project on Artificial Intelligence in 1956, artificial intelligence (AI) is a subfield of computer science. McCarthy aimed to create “thinking machines” based on the idea that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
2. Machine Learning
In his TEDxMileHigh talk on AI, Assistant Professor Bradley Hayes describes machine learning as the way AI systems teach themselves. Instead of writing code for every single scenario, the AI system can educate itself based on real-world experience. Hayes says, “It’s better that we write code allowing these systems to learn from demonstrations and experiences—extracting patterns from the data to generate their own rules and logic.”
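If you’re curious what “learning from demonstrations” looks like in practice, here is a minimal sketch in Python using the popular scikit-learn library. The tiny data set is invented purely for illustration: instead of hand-writing a rule for when someone walks instead of drives, we hand the model a few examples and let it extract the pattern itself.

```python
# A minimal sketch of "learning from examples" with scikit-learn.
# The data below is made up for illustration: each row is
# [hours of daylight, temperature in F], and the label says whether
# someone walked (1) or drove (0). No rules are written by hand; the
# model derives its own rule from the examples.
from sklearn.tree import DecisionTreeClassifier

examples = [[14, 75], [13, 68], [9, 30], [8, 25], [12, 60], [7, 20]]
labels   = [1, 1, 0, 0, 1, 0]  # 1 = walked, 0 = drove

model = DecisionTreeClassifier()
model.fit(examples, labels)       # the "learning from experience" step

print(model.predict([[10, 40]]))  # the model applies whatever rule it learned
```

The point is not the prediction itself but where the logic came from: nobody typed “walk when it’s warm and bright” into the program.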
3. Black Box
A black box occurs when artificial intelligence crosses over to the dark side. In his talk, Hayes describes the importance of explainable AI, or artificial intelligence that follows clear, traceable pathways. Black box AI occurs when those pathways can’t be followed and the transparency is lost. Essentially, the AI system makes all of its decisions based on big data without adequately explaining why. This can be a problem because the system can pick up biases from its training data or algorithms, which can lead to wrong or unfair results.
For example, in his talk, Hayes mentions an AI system that was trained to differentiate between pictures of dogs and wolves. The system appeared to identify the correct animal, but it was actually relying on a learned bias.
The system had learned that the photos with snow in the background were the wolf photos. Given a picture of a wolf standing in grass, it would identify it as a dog. Eventually, the researchers figured out the bias the system had picked up. The black box in this example is the hidden shortcut the system learned on its own from the backgrounds of the photos.
4. Algorithm
Simply put, an algorithm is a recipe. Just as the back of the chocolate chip bag tells you how to make cookies, an algorithm gives an AI system everything it needs to achieve a result. Algorithms are becoming more intertwined with our lives than we realize.
For example, marketers use algorithms to learn what you are interested in as a consumer. They teach their AI systems to learn that if you visit a website for cozy sweaters, you are interested in buying a cozy sweater. (You are then bombarded with ads for cozy sweaters until you buy one.)
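To make the recipe idea concrete, here is a toy sketch in plain Python. The page names and the three-visit threshold are made up for illustration; a real ad system is far more elaborate, but the shape is the same: a fixed list of steps that turns an input into a result.

```python
# An algorithm is a written-down recipe: a fixed list of steps.
# This toy function sketches the cozy-sweater idea from above; the page
# names and the "3 visits" threshold are invented for illustration.
def should_show_sweater_ad(pages_visited):
    """Step 1: count sweater-related page views.
       Step 2: if the count passes a threshold, recommend the ad."""
    sweater_views = sum(1 for page in pages_visited if "sweater" in page)
    return sweater_views >= 3

history = ["home", "cozy-sweater", "cozy-sweater", "socks", "cozy-sweater"]
print(should_show_sweater_ad(history))  # True: time for sweater ads
```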
5. Supervised Machine Learning
Supervised machine learning is exactly what it sounds like. Someone supervises the learning process: the AI is given example inputs along with the correct outputs and is trained until it can reliably reach those answers on its own. Think of it as a math class. Your teacher already knew the answer to the problem on the board, but they taught you the process for getting there and then watched as you practiced it yourself.
This type of learning is most commonly used in AI systems today. Companies know the outcome they want to achieve, so they train their systems on how to get there.
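Here is a minimal supervised-learning sketch with scikit-learn, continuing the math-class idea: the answers are known up front and the model is trained toward them. The numbers are invented for illustration, standing in for simple features of an email.

```python
# A minimal supervised-learning sketch with scikit-learn.
# Invented data: each row is [number of links, number of exclamation marks]
# in an email, and the label says spam (1) or not spam (0). The labels are
# the "known answers" the supervisor provides.
from sklearn.neighbors import KNeighborsClassifier

emails = [[8, 5], [7, 9], [0, 1], [1, 0], [6, 7], [0, 0]]
labels = [1, 1, 0, 0, 1, 0]          # the answers are given in advance

model = KNeighborsClassifier(n_neighbors=3)
model.fit(emails, labels)            # train toward the known outputs

print(model.predict([[5, 6]]))       # likely flagged as spam
```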
6. Unsupervised Machine Learning
Unsupervised machine learning is slightly terrifying in reality. The AI system is given data and an algorithm but no labels, no training answers, and no known process to reach a particular outcome. It has inputs, but no one tells it what the result should be. The outcome is in the hypothetical hands of the machine: whatever structure it finds in the data is what it learns.
Unsupervised learning is the less common of the two approaches, yet it is gaining popularity. As researchers and programmers test the limits of AI, they are increasingly willing to let it come up with solutions on its own.
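To see the difference from the supervised example above, here is a sketch using scikit-learn’s KMeans clustering. Notice that no labels are supplied at all; the invented “customer” points are simply handed over, and the machine decides for itself what belongs together.

```python
# An unsupervised-learning sketch: no labels and no "right answer" given.
# KMeans groups similar points together; the two-dimensional "customer"
# data below is invented for illustration.
from sklearn.cluster import KMeans

customers = [[1, 2], [1, 4], [2, 3],      # one loose group
             [9, 10], [10, 9], [11, 11]]  # another loose group

model = KMeans(n_clusters=2, n_init=10, random_state=0)
model.fit(customers)                 # the machine decides what goes together

print(model.labels_)                 # e.g. [0 0 0 1 1 1], the cluster assignments
```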
7. Deep Learning
Deep learning is an advanced form of machine learning that can classify very large data sets. Google Translate is an excellent example of deep learning. When you enter the text you want translated, the system draws on all of the data it has learned about the specific language you have chosen. It can then differentiate between the proper words and grammar rules to choose the appropriate translated structure.
The potential uses of deep learning are exciting. Those elusive self-driving cars I mentioned use deep learning to recognize objects and avoid them. In medicine, deep learning can help design treatments tailored to a patient’s genome.
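Deep learning gets its name from neural networks with many stacked layers. The sketch below is only a shallow taste of that idea, using scikit-learn’s small neural network (MLPClassifier) on the classic XOR pattern, a problem a single straight-line rule cannot solve. Real deep-learning systems stack far more layers and train on millions of examples.

```python
# A (very shallow) taste of the neural networks behind deep learning.
# This toy network learns XOR, which is true only when the two inputs differ;
# a single linear rule cannot capture that, but stacked layers can.
from sklearn.neural_network import MLPClassifier

inputs  = [[0, 0], [0, 1], [1, 0], [1, 1]]
outputs = [0, 1, 1, 0]               # XOR labels

net = MLPClassifier(hidden_layer_sizes=(8, 8), solver="lbfgs",
                    max_iter=5000, random_state=1)
net.fit(inputs, outputs)

print(net.predict([[1, 0], [1, 1]]))  # ideally [1 0]
```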
8. Data Mining
Data mining is “the nontrivial extraction of implicit, previously unknown, and potentially useful information from data.” In other words, AI systems comb through massive sets of data and make connections between inputs.
A popular application is searching for pictures on Google Images. If you search for koala bears, the Google AI system scours the massive number of pictures on the internet and pulls together all of the ones it recognizes as containing koalas. Your results might include photos of a fuzzy grey bear in a tree, a sign with a koala on it, or a t-shirt embellished with one. The AI system recognizes all of these as matches for your search.
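At a much smaller scale, data mining can be as simple as combing through records and counting which things tend to appear together. The shopping “baskets” in this plain-Python sketch are invented for illustration; the connection it surfaces was never written down anywhere in the data.

```python
# A bare-bones data-mining sketch in plain Python: comb through records
# and surface connections that were never explicitly recorded.
from collections import Counter
from itertools import combinations

baskets = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"coffee", "bread", "butter"},
    {"coffee", "milk"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(1))   # ('bread', 'butter') appears together most often
```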
9. Natural Language Processing
Natural Language Processing (NLP) is a machine’s ability to understand human language. Essentially, a computer can capture written or spoken sentences of human language and respond to them appropriately. Common applications of NLP include Microsoft Word and Grammarly, where the system judges whether the grammar in a document is correct based on its knowledge of the language. Others include Google Translate and personal assistant applications like Apple’s Siri and Amazon’s Alexa.
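Real NLP systems rely on rich statistical models of language, but even a toy rule gives a feel for the kind of pattern a grammar checker flags. This plain-Python sketch only checks “a” versus “an” before vowels and ignores every exception, so it is purely illustrative.

```python
# A toy slice of natural language processing: the sort of shallow pattern
# a grammar checker might flag. This only checks "a" vs. "an" before words
# starting with a vowel, ignoring exceptions like "an hour".
def check_articles(sentence):
    words = sentence.lower().split()
    issues = []
    for current, following in zip(words, words[1:]):
        if current == "a" and following[0] in "aeiou":
            issues.append(f'consider "an {following}" instead of "a {following}"')
        if current == "an" and following[0] not in "aeiou":
            issues.append(f'consider "a {following}" instead of "an {following}"')
    return issues

print(check_articles("I saw a elephant and an dog"))
```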
10. The Turing Test
Named after its inventor, mathematician Alan M. Turing, the Turing test is the classic test of whether a system is truly artificially intelligent: can it think on its own? In this test, the computer’s goal is to be mistaken for a person. An interrogator asks a series of questions and has to guess whether each answer came from a computer or a human. If the interrogator mistakes the computer for the human, or cannot reliably tell the two apart, the computer is deemed “intelligent” enough and passes. If the interrogator can consistently pick out the machine, the system fails the test and goes back for redesign.
Just a Start
Technology is advancing quickly. New products and markets are introduced on what seems like a daily basis. These 10 artificial intelligence terms are the basics of artificial intelligence. To understand a technology that is so intricately intertwined with our lives, we have to start somewhere.
Some of these new technologies are exciting. The thought of self-driving cars promises huge convenience to many commuters. However, before we put our full trust in machines that make decisions for us, we need to understand the processes behind them.