
The Top Artificial Intelligence Glossary of Terms (A-Z)

With so much research in AI and so many evolving applications, it can be difficult to keep track of all the confusing terms in artificial intelligence. In this post, I attempt to pin down common terms and definitions that crop up when discussing artificial intelligence. You can use this as a handy reference tool in 2017 and beyond.

So, whether you’re still hung up on the difference between artificial intelligence, machine learning, and deep learning, check out the following roundup of artificial intelligence terms to keep yourself in the know.

Popular Artificial Intelligence Terms


  1. Advanced Driver Assistance Systems (ADAS) – Systems developed to help the driver in the driving process, through increased car-safety measures and process automation.
  2. Artificial Intelligence – Computer programs designed to solve difficult problems that humans (and animals) routinely solve. In a nutshell, the aim is to enable computers to think and learn on their own when fed good data. The goal of AI is to develop programs that can solve such problems independently, although the way they solve them often differs significantly from the way humans do.
  3. Bayesian Network – A type of probabilistic graphical model built from data and/or expert opinion. These graphs express how the probability of one event depends on the probability of another. They can be used for a wide range of tasks, including prediction, anomaly detection, diagnostics, automated insight, reasoning, time-series prediction, and decision making under uncertainty.
  4. Classifiers – Algorithms (e.g., k-nearest neighbors, support vector machines) used in machine learning to assign data points to categories.
  5. Computer Vision – A field of artificial intelligence and image processing that trains machines to see their surroundings, understand what they see, and make better decisions.
  6. Crowdsourcing – The practice of distributing tasks to a large audience to get work done quickly. The drawback is that managing the crowd and ensuring quality is difficult to do alone.
  7. Data Labeling – The task of annotating the object(s) found in given data, including images, audio, video, or any other file type.
  8. Data Mining – The process of combing through a data set to identify patterns and extract information. Often such patterns and information are only clear when a large enough dataset is analyzed. For this reason, AI and machine learning are extremely helpful in such a process.
  9. Data Science – A field that unifies statistics, data modeling, visualization, and analysis to extract information from data and classify it.
  10. Data Scientist – A practitioner who applies statistics, programming, and domain knowledge to extract insights from data.

  11. Decision Model – A model that uses prescriptive analytics to establish the best course of action for a given situation. The model assesses the relationships between the elements of a decision to recommend one or more possible courses of action. It may also predict what should happen if a certain action is taken.
  12. Deep Learning – A family of machine learning methods based on neural networks (models inspired by the human brain) that can be used for many applications. It gained popularity through recent successes in computer vision and speech recognition tasks.
  13. Facial Recognition – The recognition of faces and emotional states in images or video signals. This is commonly done through point annotations called landmarks.
  14. Ground Truth – Information gathered by direct observation on site (or from a gold standard) and used to measure the accuracy of a training dataset, or to prove or disprove a research hypothesis. For example, self-driving-car projects use ground truth collected from real road and street scenes to validate what the AI perceives.
  15. Human-in-the-Loop – A process that places humans in the middle of an automated workflow to achieve the expected output. It is used in machine learning to improve result accuracy.
  16. Image Recognition – Recognizing specific types of objects in given image or video datasets.
  17. Machine Learning – A subset of AI in which computer programs and algorithms are designed to “learn” how to complete a specified task, becoming more efficient and effective as they develop. Such programs can use past performance data to predict and improve future performance.
  18. Managed Crowdsourcing – A fully managed outsourcing solution in which a service provider handles the crowd on the client’s behalf.
  19. Natural language generation – A machine learning task in which an algorithm attempts to generate language that is comprehensible and human-sounding. The end goal is to produce computer-generated language that is indiscernible from language generated by humans.
  20. Natural language processing – A machine learning task concerned with improving the interaction between humans and computers. This field of study focuses on helping machines to better understand human language in order to improve human-computer interfaces.
  21. Optical Character Recognition (OCR) – A system that detects images of handwritten or printed text and converts them into machine-readable text.
  22. Perception – The process of acquiring, interpreting, selecting, and organizing sensory information. It is what you perceive, which may be true or false, as opposed to the ground truth, which is always true.
  23. Precision – Of the items a model selects (e.g., predicts as positive), the fraction that are actually relevant: true positives / (true positives + false positives).
  24. Recall – Of the items that are actually relevant, the fraction the model manages to select: true positives / (true positives + false negatives).
  25. Reinforcement learning algorithms – A type of machine learning in which machines are “taught” to achieve their target function through a process of experimentation and reward. In reinforcement learning, the machine receives positive reinforcement when its processes produce the desired result, and negative reinforcement when they do not.
  26. Semantic Segmentation – Understanding an image at the pixel level: partitioning the image into semantically meaningful parts and classifying each part into one of a set of predetermined classes.
  27. Speech Recognition – The recognition of words and/or emotional state in an audio signal.
  28. Supervised learning algorithms – A type of machine learning in which the machine learns from labeled examples: each training input is paired with the desired output. There is a clear target outcome, and the machine’s goal is to learn a function that maps inputs to that outcome, nothing more.
  29. Training Data – In machine learning, the training data set is the data given to the machine during the initial “learning” or “training” phase. From this data set, the machine is meant to gain insight into how to complete its assigned task efficiently by identifying relationships within the data.
  30. Unsupervised learning algorithms – A type of machine learning in which human input and supervision are extremely limited, or absent altogether, throughout the process. In unsupervised learning, the machine is left to identify patterns and draw its own conclusions from the data sets it is given.
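To make the precision and recall entries above concrete, here is a minimal sketch in plain Python that computes both metrics from lists of true and predicted labels. The function name and the toy label lists are illustrative, not from any particular library.

```python
def precision_recall(y_true, y_pred, positive=1):
    """Compute precision and recall for a binary classification task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Four items are truly positive; the model selects three, two of them correctly.
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0]
p, r = precision_recall(y_true, y_pred)
# p == 2/3 (2 of 3 selected items are relevant), r == 0.5 (2 of 4 relevant items found)
```

Note how the two numbers answer different questions: precision penalizes selecting too eagerly, recall penalizes missing relevant items.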
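The classifier, supervised-learning, and training-data entries can be tied together with one tiny example: a 1-nearest-neighbor classifier written from scratch in pure Python. The training points, labels, and function name are made-up illustrations of the idea, not a production implementation.

```python
import math

def nearest_neighbor_predict(train_points, train_labels, query):
    """Predict the label of `query` as the label of its closest training point."""
    distances = [math.dist(p, query) for p in train_points]
    best = distances.index(min(distances))
    return train_labels[best]

# Toy labeled training data: two well-separated clusters, "a" and "b".
train_points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
train_labels = ["a", "a", "b", "b"]

label = nearest_neighbor_predict(train_points, train_labels, (4.8, 5.1))
# label == "b": the query sits inside the second cluster
```

The labeled pairs here play the role of the training data entry above: the "learning" is simply memorizing them, and prediction is a lookup of the nearest memorized example.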
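For contrast with the supervised example, the unsupervised-learning entry can be sketched with a bare-bones k-means clustering loop: no labels are given, and the algorithm discovers the groups on its own. The data, seed, and iteration count are illustrative assumptions.

```python
import math
import random

def kmeans(points, k, iterations=10, seed=0):
    """Cluster 2-D points into k groups by alternating assignment and update steps."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[idx].append(p)
        # Update step: move each center to the mean of its assigned points.
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = (sum(x for x, _ in cluster) / len(cluster),
                              sum(y for _, y in cluster) / len(cluster))
    return centers

points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3), (8.0, 8.1), (8.2, 7.9), (7.9, 8.0)]
centers = kmeans(points, k=2)
# The centers settle near the means of the two obvious clusters.
```

No human ever told the algorithm which point belongs to which group; that absence of labeled supervision is exactly what "unsupervised" means in the entry above.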
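Finally, the reinforcement-learning entry's idea of learning from reward can be illustrated with an epsilon-greedy multi-armed bandit, a classic minimal reinforcement-learning setting. The arm probabilities, step count, and epsilon value below are arbitrary choices for the sketch.

```python
import random

def run_bandit(arm_means, steps=5000, epsilon=0.1, seed=42):
    """Epsilon-greedy bandit: estimate each arm's reward rate, mostly pull the best."""
    rng = random.Random(seed)
    counts = [0] * len(arm_means)
    estimates = [0.0] * len(arm_means)
    for _ in range(steps):
        if rng.random() < epsilon:            # explore: try a random arm
            arm = rng.randrange(len(arm_means))
        else:                                 # exploit: pull the best-looking arm
            arm = estimates.index(max(estimates))
        # Positive reinforcement arrives as reward 1, its absence as reward 0.
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        # Incremental average: each reward nudges the arm's estimate toward the truth.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

estimates = run_bandit([0.2, 0.5, 0.8])
# The highest estimate ends up on the last arm, the one with the best true reward.
```

This is the experimentation-and-reward loop from the entry in miniature: the agent is never told which arm is best, it only observes rewards and adjusts its behavior.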