Welcome to Tech Bee AI Glossary
You are currently visiting the Tech Bee AI Glossary. At Tech Bee, we know how confusing, and even a bit intimidating, the sheer number of unique AI terms and ideas can be. That’s why we have developed this all-encompassing, specially curated glossary to help you navigate this fascinating topic.
This glossary aims to serve as a reference for everyone, from total novices with a general, non-specialist interest in the field to students, professionals, and IT experts. Discover concise definitions, meaningful descriptions, and useful examples across major subject areas: Machine Learning, Neural Networks, Data Science, and AI Ethics.
Our AI glossary will be updated as the technology progresses—so be sure to bookmark the page and submit any terms you’d like to see included! Dive deeper into AI terminology through our Tutorials. You can also use the IBM AI Glossary for terms not covered here, and access other AI glossaries.
Now you can start discovering AI, with plenty of buzzing knowledge to accompany you! Refer to our Ebooks for expanded coverage. For more terms relating to tech and data, you can visit TechTarget. For more definitions related to machine learning, see Google’s AI guide.
General AI Terms
- Artificial Intelligence (AI): The ability of computer systems to replicate, mimic, and learn from behavior that humans consider intelligent. This involves acquiring information to form knowledge and rules, applying those rules to reach conclusions, and self-correcting when wrong.
- Machine Learning (ML): A subfield of AI that studies methods for training machines to learn from data and improve their performance without being explicitly programmed.
- Deep Learning (DL): A subfield of machine learning that passes data through many layers of artificial neural networks to learn representations, enabling tasks such as image recognition and natural language processing.
- Artificial General Intelligence (AGI): A hypothetical form of AI capable of any intellectual activity a human can perform, as opposed to being tailored to specific operations.
- Artificial Superintelligence (ASI): A hypothetical form of AI expected to surpass human capability in all areas.
- Narrow AI (NAI): AI designed to perform a specific, limited set of tasks effectively, without requiring general intelligence.
- Generative AI (GenAI): AI models that create new content based on patterns learned from their training data.
- Natural Language Processing (NLP): A branch of AI that enables computers to understand, interpret, and generate human language.
- Reinforcement Learning (RL): A learning approach in which an agent chooses actions in an environment and improves its behavior based on the rewards or penalties those actions produce.
- Computer Vision (CV): A branch of artificial intelligence that allows systems to understand information coming from the world visually.
- Multimodal AI: AI implementation of processes that are able to handle multiple forms of inputs at once, for example, text, image, and voice.
- Edge AI: AI that operates locally on a device instead of in a centralized data center, which reduces response time and bandwidth use.
- Explainable AI (XAI): A collection of approaches aimed at presenting reasons for the AI decision-making process in a form that is easily understandable for human interpretation.
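As a tiny illustration of what an NLP system does at its lowest level, here is a minimal, hedged sketch of tokenization and a bag-of-words count in plain Python; the function names are illustrative, not from any particular library:

```python
from collections import Counter

def tokenize(text):
    # Lowercase the text and split on whitespace, stripping punctuation.
    return [word.strip(".,!?").lower() for word in text.split()]

def bag_of_words(texts):
    # Count how often each token appears across a small corpus.
    counts = Counter()
    for text in texts:
        counts.update(tokenize(text))
    return counts

corpus = ["AI learns from data.", "Data drives AI."]
print(bag_of_words(corpus))  # e.g. "ai" and "data" each appear twice
```

Real NLP pipelines go far beyond this, but counting word occurrences like this underlies classic techniques such as bag-of-words text classification.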
Data & Training Terms
- Big Data: Extremely large collections of information that computers can process to uncover patterns and connections, often in human behavior.
- Dataset: A structured collection of data used for training, validating, or testing AI systems.
- Training Data: The portion of a dataset used to teach an AI model.
- Validation Data: A dataset used to tune a model’s parameters and confirm that it performs well on data not included in its training.
- Test Data: Data held out to assess the final performance of a trained AI model.
- Overfitting: When an AI model learns the training data too closely, including its noise, and therefore performs poorly on new data.
- Underfitting: When an AI model fails to capture the underlying patterns in the training data and hence performs poorly.
- Feature Engineering: The process of selecting, transforming, or creating features in a dataset to improve AI model performance.
- Data Augmentation: Techniques for generating additional training data from existing data rather than collecting more, such as rotating or flipping images.
- Hyperparameter Tuning: The optimization of the settings that govern how a machine learning model learns, such as the learning rate or batch size.
- Batch Size: The number of training examples passed through the network in a single forward/backward pass (iteration); an epoch typically comprises many such batches.
- Epoch: One complete pass through the entire training dataset during the learning process.
- Gradient Descent: An optimization method that reduces a model’s error by iteratively adjusting its weights in the direction opposite the gradient of the loss function.
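The training terms above fit together in one loop: gradient descent updates weights batch by batch, and one pass over all batches is an epoch. Here is a minimal, hedged sketch in plain Python, fitting a toy line y ≈ 2x + 1 with mini-batch gradient descent (the dataset and hyperparameter values are invented for illustration):

```python
import random

# Toy dataset: y = 2x + 1 plus a little noise.
random.seed(0)
data = [(x, 2 * x + 1 + random.gauss(0, 0.1))
        for x in [i / 50 for i in range(50)]]

w, b = 0.0, 0.0   # model weights, to be learned
lr = 0.1          # learning rate (a hyperparameter)
batch_size = 10   # examples per forward/backward pass
epochs = 200      # one epoch = one full pass over the training data

for epoch in range(epochs):
    random.shuffle(data)
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in batch) / len(batch)
        grad_b = sum(2 * (w * x + b - y) for x, y in batch) / len(batch)
        # Gradient descent step: move against the gradient.
        w -= lr * grad_w
        b -= lr * grad_b

print(f"w={w:.2f}, b={b:.2f}")  # should settle near w≈2, b≈1
```

Evaluating the fitted model on held-out test data, rather than the training points themselves, is how you would then check for overfitting.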
Neural Network Concepts
- Perceptron: The simplest artificial neuron, a linear classifier that computes a weighted sum of its inputs and applies a threshold; the building block of early neural networks.
- Convolutional Neural Network (CNN): A neural network architecture designed to efficiently handle image processing and image recognition tasks.
- Recurrent Neural Network (RNN): A neural network for sequential data, in which each output depends on previous inputs through an internal state.
- Long Short-Term Memory (LSTM): A recurrent architecture engineered to recall long-term relationships within sequential data.
- Transformer: A neural network architecture built on attention mechanisms, used in sequence-to-sequence tasks such as language translation; it underlies models like GPT and BERT.
- Attention Mechanism: A technique that lets a neural network focus on the most relevant parts of an input sequence when making predictions.
- Activation Function: A mathematical function that determines whether and how strongly a neuron in a neural network fires, allowing the network to capture non-linear patterns.
- Backpropagation: The algorithm used to train neural networks: it computes the gradient of the loss function with respect to each weight and propagates it backward through the network to update the weights.
- Dropout: A regularization technique that randomly deactivates some neurons during training in order to reduce overfitting.
- Weight Initialization: The setting of a neural network’s initial weights before training; poorly chosen initial weights can make training slow or unstable.
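To make the perceptron entry concrete, here is a minimal, hedged sketch in plain Python that trains a single perceptron on the logical AND function using the classic perceptron learning rule (the function names and hyperparameters are illustrative):

```python
def step(z):
    # Step activation: fire (1) if the weighted sum crosses the threshold.
    return 1 if z >= 0 else 0

def train_perceptron(samples, epochs=10, lr=1):
    # Start from zero weights and bias; the perceptron rule nudges them
    # toward each misclassified example.
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            error = target - pred
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Logical AND is linearly separable, so a single perceptron can learn it.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
for (x1, x2), target in and_gate:
    print(x1, x2, "->", step(w[0] * x1 + w[1] * x2 + b))
```

A single perceptron cannot learn non-linearly-separable functions like XOR, which is precisely why multi-layer networks with non-linear activation functions are needed.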
Ethical & Legal AI Terms
- Bias in AI: Systematic prejudices built into AI models, typically resulting from training on unrepresentative data, which yield unjust or inaccurate results.
- Fairness: The principle that AI systems should provide equitable outcomes for all users, without discrimination.
- Transparency: The practice of making an AI system’s behavior understandable to end users.
- Accountability: Holding humans or organizations responsible for the choices and actions carried out by AI systems.
- AI Regulation: The policies and laws put in place to prevent harmful or unsafe use of developed and deployed AI systems.
- Data Security: Protecting data from access or use by anyone not authorized to handle it, often under regulations such as GDPR or CCPA.
- Explainability: The ease with which an AI model’s decision-making can be understood and explained to humans.
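One way fairness is checked in practice is by comparing a model’s positive-decision rates across demographic groups, a check often called demographic parity. Here is a minimal, hedged sketch in plain Python; the function names and the decision data are hypothetical, invented for illustration:

```python
def selection_rate(outcomes):
    # Fraction of positive (e.g. "approved") decisions in a group.
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    # Difference between the highest and lowest selection rates across
    # groups; 0 means every group is approved at the same rate.
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan decisions (1 = approved) for two demographic groups.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 approved
}
gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.3f}")  # 0.625 - 0.250 = 0.375
```

Demographic parity is only one of several competing fairness definitions; which metric is appropriate depends on the application and its legal context.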
Understand related legal terms on our Licensing page.