Machine learning, deep learning and artificial intelligence (ML, DL and AI) are related technologies that are reshaping how many industries operate and make decisions. They are important, complex fields, and they are evolving quickly.
It is important to understand the differences between them. Unfortunately, you almost need to use one of them to do so. The map was laid out well earlier this week by Hope Reese at TechRepublic. AI is a general term that “can refer to anything from a computer playing a game of chess, to a voice-recognition system like Amazon’s Alexa interpreting and responding to speech.”
Three subgroups exist: narrow AI (aimed at specific tasks), artificial general intelligence and superintelligent AI.
ML refers to machines that use data to teach themselves, Reese writes. The highest-profile example is Google DeepMind's AlphaGo program, which beat the world champion at the ancient board game Go in a match held in Korea last March. Deep learning is a subset of ML, Reese writes, that solves problems in a way that simulates human decision-making.
It is important to track how these tools are developing. Al Gharakhanian, the managing director of Cogneefy, offers some very valuable, high-level perspective on trends in ML and DL at InformationWeek. He points to three high-level trends.
The first is the emergence of unsupervised learning. Today, he writes, the predominant method of training ML/DL tools is supervised learning, which depends on large amounts of labeled data. The nascent alternative is unsupervised learning, and its big benefit is that it doesn't require those huge labeled datasets.
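To make the distinction concrete: a supervised model needs every training example tagged with a correct answer, while an unsupervised method finds structure in raw data on its own. Here is a minimal sketch of one classic unsupervised technique, k-means clustering, on unlabeled 1-D readings (the data values are invented for illustration):

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Cluster 1-D points into k groups -- no labels required."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: each center moves to its cluster's mean.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two unlabeled clumps of readings, around 1.0 and around 10.0.
data = [0.9, 1.1, 1.0, 0.8, 1.2, 9.8, 10.1, 10.0, 9.9, 10.2]
print(kmeans_1d(data))  # centers settle near 1.0 and 10.0
```

Nothing in the input says which points belong together; the grouping emerges from the data itself, which is the point Gharakhanian is making about not needing labels.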
A second trend is the growth of generative adversarial networks (GANs). To understand GANs, it is necessary to understand discriminative models, which are trained on "labeled historical data and use their accumulated knowledge to infer, predict, or categorize." Generative models rely less on stored knowledge; they synthesize or generate ideas based on "insights gained during training." GANs refine this approach:
GANs are really not a new model category; they are simply an extremely clever and effective way of training a generative model. This strength reduces the need for large training datasets.
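The "clever way of training" is a contest: a generator produces fake samples, a discriminator tries to tell them from real ones, and each update makes the other's job harder. Below is a deliberately tiny sketch of that loop, with a one-parameter generator and a logistic discriminator on 1-D data; the numbers and setup are invented for illustration and bear no resemblance to a production GAN:

```python
import math
import random

rng = random.Random(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Real data: noisy samples around 4.0. The generator's single
# parameter `theta` is its current guess at that distribution.
real_mean = 4.0
theta = 0.0                  # generator parameter
w, b = 0.1, 0.0              # discriminator parameters
lr = 0.05

for step in range(2000):
    real = real_mean + rng.gauss(0, 0.1)
    fake = theta + rng.gauss(0, 0.1)

    # Discriminator ascends log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator ascends log D(fake): nudge theta the way that
    # makes the discriminator more likely to call its sample real.
    d_fake = sigmoid(w * fake + b)
    theta += lr * (1 - d_fake) * w

print(theta)  # theta drifts toward the real mean, near 4.0
```

The adversarial signal, not a large labeled corpus, is what pulls the generator toward the real data, which is the dataset-reduction point Gharakhanian highlights.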
The third trend is reinforcement learning (RL). This is learning through experimentation and exploration. It differs from supervised learning in that it doesn't start with preconceived notions of "how the world works" or "good training data," Gharakhanian writes.
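That trial-and-error idea can be shown in a few lines. The sketch below is a minimal tabular Q-learning agent in a made-up five-state corridor (this toy environment is my illustration, not an example from Gharakhanian's article): the agent starts knowing nothing about the world and learns which way to walk purely from the reward it stumbles into.

```python
import random

rng = random.Random(0)

# A toy corridor: states 0..4, start at 0, reward only at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(300):
    s = 0
    while s != GOAL:
        # Explore sometimes; otherwise exploit what has been learned.
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy should prefer moving right in every state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]
```

No one ever labels "right" as the correct answer; the agent infers it from experience, which is exactly the contrast with supervised learning that Gharakhanian draws.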
Of course, it’s impossible to understand these concepts from a single article. The important thing to grasp is that this cutting-edge material is growing and changing almost in real time.
Indeed, the time between the birth of the ideas and their use by commercial, military, governmental and other users is very short. In mid-February, for instance, Forbes highlighted two ways in which companies are using AI:
While these companies dominate the headlines—and the war for the relevant talent—other companies that have been analyzing data or providing tools for analysis for years are also capitalizing on recent AI advances. Cases in point are Equifax and SAS: the former is developing deep learning tools to improve credit scoring, and the latter is adding new deep learning functionality to its data mining tools and offering a deep learning API.
This is difficult material. It is important to understand at a high level, however, as the boundaries of research and development expand.
Carl Weinschenk covers telecom for IT Business Edge. He writes about wireless technology, disaster recovery/business continuity, cellular services, the Internet of Things, machine-to-machine communications and other emerging technologies and platforms. He also covers net neutrality and related regulatory issues. Weinschenk has written about the phone companies, cable operators and related companies for decades and is senior editor of Broadband Technology Report. He can be reached at cweinsch@optonline.net and via Twitter at @DailyMusicBrk.