Artificial Intelligence (AI) is a broad field of computer science focused on creating systems that perform tasks typically requiring human intelligence. At its core, AI enables machines to learn from experience, adapt to new inputs, and carry out human-like tasks.
Machine Learning (ML) is a subset of AI in which algorithms learn patterns from data without being explicitly programmed for each task. ML models improve their performance as they are exposed to more data.
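To make "learning patterns from data" concrete, here is a minimal sketch of supervised learning: fitting a line to example input/output pairs by closed-form least squares. The data is hypothetical, chosen so the underlying rule is y = 2x + 1; the algorithm recovers that rule from the examples alone, without it ever being programmed in.

```python
def fit_line(xs, ys):
    """Return slope w and intercept b minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance / variance form of the least-squares solution.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]            # hypothetical data following y = 2x + 1
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # the learned pattern: w = 2.0, b = 1.0
```

More data points would sharpen the fit in the presence of noise, which is the sense in which ML models improve with exposure to more data.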
Deep Learning is a further subset of ML inspired by the structure of the human brain. It uses neural networks with many layers (hence "deep") to process complex data like images, audio, and text.
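The "many layers" idea can be sketched as a forward pass through a small fully connected network: each layer multiplies its inputs by a weight matrix, adds biases, and applies a nonlinearity (ReLU here). The weights below are hand-picked for illustration, not trained values.

```python
def relu(v):
    """Elementwise nonlinearity: negative values become zero."""
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """One fully connected layer: out_j = sum_i inputs_i * W[i][j] + b_j."""
    return [
        sum(inputs[i] * weights[i][j] for i in range(len(inputs))) + biases[j]
        for j in range(len(biases))
    ]

def forward(x, layers):
    """Pass x through each (weights, biases) layer, ReLU between layers."""
    for weights, biases in layers[:-1]:
        x = relu(dense(x, weights, biases))
    weights, biases = layers[-1]
    return dense(x, weights, biases)   # linear output layer

layers = [
    ([[1.0, -1.0], [0.5, 2.0]], [0.0, 0.0]),   # hidden layer 1
    ([[1.0, 0.0], [-1.0, 1.0]], [0.1, -0.1]),  # hidden layer 2
    ([[2.0], [1.0]], [0.0]),                   # output layer
]
print(forward([1.0, 2.0], layers))
```

Stacking more such layers lets the network compose simple transformations into the complex feature hierarchies needed for images, audio, and text.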
Generative AI refers to models that can generate new content—text, images, music, or code—based on patterns learned from training data. These models include large language models (LLMs) like GPT, which power applications such as ChatGPT.
AI Models are mathematical representations trained on data to make predictions or decisions. The main training paradigms are supervised, unsupervised, and reinforcement learning.
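As a contrast to the supervised example above, here is a minimal sketch of unsupervised learning: 1-D k-means clustering, which groups numbers into clusters with no labels provided. The data points and starting centers are hypothetical.

```python
def kmeans_1d(points, centers, iters=10):
    """Alternate assignment and update steps of k-means on 1-D data."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
print(kmeans_1d(points, centers=[0.0, 5.0]))  # two cluster means emerge
```

No output labels appear anywhere: the structure (two groups, one near 1 and one near 9.5) is discovered from the data itself, which is what distinguishes unsupervised from supervised learning.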
Large Language Models (LLMs) are a type of neural network trained on massive text corpora. They excel at understanding and generating human language, enabling tasks like translation, summarization, and conversation.
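The core mechanic behind text generation can be sketched with a toy bigram model: count which word follows which in a corpus, then repeatedly emit the most frequent successor. Real LLMs replace the counts with a deep neural network trained on vastly more text, but the next-token idea is the same. The corpus below is a hypothetical toy example.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count, for each word, which words follow it and how often.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(start, length):
    """Greedily extend `start` by the most frequent next word."""
    words = [start]
    for _ in range(length):
        best = successors[words[-1]].most_common(1)
        if not best:
            break                    # no known successor: stop
        words.append(best[0][0])
    return " ".join(words)

print(generate("the", 4))
```

Greedy selection always picks the single most likely continuation; production systems instead sample from the predicted distribution to produce varied output.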
This short overview captures the essential hierarchy and relationships among these foundational AI concepts.