What is Artificial Intelligence (AI)?

Introduction to AI

Welcome to WhatIsAI.co.uk, your go-to source for understanding Artificial Intelligence (AI) without the technical jargon. Did you know that “What is AI?” is the most searched question about this topic online, with around 10,000 searches each day (source: Ahrefs.com, 2024)? It’s easy to see why: until recently, many people thought of AI only in terms of science fiction, and now it touches everyday life.

AI has become one of the most important technologies of the 21st century. It is reshaping industries and improving everyday life in far-reaching ways. But what exactly is AI? Our goal is to give you a clear and complete picture of artificial intelligence, explaining its basic ideas and why it is regarded as a general-purpose technology, one that can be applied across many different areas. For the latest news and ideas, check out our regularly updated AI blog.

So what exactly is AI?

Artificial Intelligence, or AI, is the science of building machines that can think and learn in ways that resemble human intelligence. There are two main types of AI: Narrow AI (also called Weak AI) and General AI (also known as Strong AI).

  • Narrow AI: This type of AI is built to do specific tasks. Examples include voice assistants such as Siri and Alexa, recommendation systems on Netflix, and self-driving cars. Narrow AI is highly specialised and works within set limits.

  • General AI: This refers to systems that can understand, learn, and apply intelligence to a wide range of tasks, similar to how humans think. Whilst true General AI doesn’t exist yet, more and more systems are getting better at mimicking human behaviour.

Key Artificial Intelligence Concepts

To understand AI, it’s helpful to know some basic ideas:

  • Machine Learning (ML): A part of AI where computers learn from large amounts of data to find patterns and make decisions without being specifically programmed for each task. ML can be supervised, unsupervised, or semi-supervised (a short worked example follows this list).

  • Deep Learning: A more advanced type of machine learning that uses neural networks with many layers to analyse and learn from large amounts of data. This is used in things like image and speech recognition.

  • Natural Language Processing (NLP): This area of AI helps machines understand, interpret, and generate human language. It’s used in chatbots, translation services, and tools that analyse sentiment and emotion in text.

  • Robotics: AI-powered robots can do tasks from simple factory work to complex jobs in healthcare, like performing surgeries. Robotics combines AI with physical machines to interact with the real world.

  • Computer Vision: This field of AI allows machines to understand and make decisions based on visual information from the world, like pictures and videos. It’s used in facial recognition, medical imaging, and self-driving cars.
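
To make the machine-learning idea concrete, here is a minimal sketch in Python using the open-source scikit-learn library. The dataset (scikit-learn’s built-in iris flowers) and the choice of model are arbitrary, illustrative assumptions rather than recommendations:

```python
# Minimal supervised machine learning with scikit-learn: the model learns
# patterns from labelled examples, then predicts labels for data it has
# never seen. No task-specific rules are written by hand.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)            # 150 flower measurements, 3 species
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)    # hold out unseen data for evaluation

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)                  # "learning" = fitting patterns in the data

predictions = model.predict(X_test)
print(f"Accuracy on unseen data: {accuracy_score(y_test, predictions):.2%}")
```

Nothing in this code describes how to tell the species apart; the model infers those rules from the labelled examples.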

The History and Evolution of AI

The idea of artificial intelligence goes back to ancient stories about artificial beings created by humans. However, modern AI started in the mid-20th century.

  • 1950s: The term “Artificial Intelligence” was first used in 1956 at the Dartmouth Conference, where researchers began exploring how to create intelligent machines. Early AI focused on symbolic reasoning and solving problems.

  • 1960s-70s: After early optimism, AI research hit tough times in the mid-1970s, the first of the so-called “AI winters,” because computers weren’t powerful enough and expectations had run too high. Still, progress was made in specific areas such as early expert systems.

  • 1980s-90s: AI research picked up again thanks to better computers and new algorithms, leading to practical uses like machine learning and neural networks.

  • 2000s-Present: With the rise of big data, powerful graphics processors (GPUs), and advanced algorithms, AI became widely used. Today, AI is part of many fields, including healthcare, finance, transportation, and entertainment.

What is AI as a General-Purpose Technology?

AI is considered a general-purpose technology because it affects many different industries and can drive economic growth and innovation. Like electricity or the internet, AI is versatile and can be used in various applications, changing how we work, live, and interact.

  • Economic Impact: AI can greatly increase productivity and efficiency. For example, AI-powered automation can make manufacturing processes smoother, lower costs, and improve supply chain management.

  • Healthcare: AI helps predict disease outbreaks, create personalised treatment plans based on patient data, and develop advanced diagnostic tools that use image recognition to identify medical issues.

  • Finance: In finance, AI is used for detecting fraud, algorithmic trading, personalised banking services, and managing risks. AI insights help banks make better investment choices and enhance customer service.

  • Transportation: AI is transforming transportation with self-driving cars, traffic management systems, and predictive maintenance for infrastructure.

  • Education: AI tools in education offer personalised learning experiences, automate administrative tasks, and provide insights into student performance and behaviour.

  • Entertainment: AI powers recommendation systems for streaming services, creates realistic virtual worlds in video games, and helps generate content for movies and music.

Ethical and Societal Implications of Artificial Intelligence

While AI has many benefits, it also brings important ethical and social challenges that need to be addressed to ensure it aligns with human values and societal goals. Proper AI governance is essential.

  • Bias and Fairness: AI systems can unintentionally keep the biases present in their training data, leading to unfair or discriminatory results. It’s important to make AI algorithms fair and transparent to reduce these risks.

  • Privacy: AI uses large amounts of data, raising concerns about privacy. Protecting personal information and ensuring data security are crucial, especially as AI becomes more common in personal devices like smartphones, where users might not fully understand the risks and how their data is used.

  • Job Displacement: AI-driven automation might replace some jobs, especially those with routine tasks. Governments and businesses need to develop plans for retraining workers and helping them transition to new roles. Human skills will increasingly be combined with technology, and managing this change positively requires careful planning.

  • Accountability: As AI systems become more independent, it becomes harder to determine who is responsible for their actions and decisions. Clear rules and regulations are needed to handle issues of responsibility and liability.

The Future of AI

The future of AI looks very promising. Ongoing research and development are expected to create even more advanced and capable AI systems. Key trends to watch include:

  • Explainable AI (XAI): Efforts to make AI models easier to understand and more transparent will build trust and help users see how AI systems make decisions.

  • AI in Edge Computing: Combining AI with edge computing allows data to be processed and decisions made in real time at the data source, reducing delays and improving efficiency.

  • AI and the Internet of Things (IoT): Integrating AI with IoT devices will create smart environments that can manage and optimise various functions on their own, from home automation to industrial operations.

  • AI for Sustainability: AI can help protect the environment by optimising energy use, reducing waste, and managing resources more efficiently.

  • Human-AI Collaboration: The future will likely see more teamwork between humans and AI, using the strengths of both to achieve better results in different areas.

Artificial Intelligence is a fast-growing field with the power to change every part of our lives. By understanding the basics of AI and how it’s used, we can better appreciate its role as a general-purpose technology. As we explore the opportunities and face the challenges of AI, it’s important to ensure that its development and use are guided by ethical principles and support the wider goals of society.

For more information and updates, explore our AI courses and read our latest articles on the AI blog.

81 Important AI Concepts

  • Artificial Intelligence (AI)
    Machines performing tasks that typically require human intelligence (language, perception, decision-making). AI scales via massive data, advanced neural architectures, and hybrid cloud-edge solutions.

  • Machine Learning (ML)
    Algorithms that learn patterns from data to make predictions or decisions. Modern ML spans simple statistical methods to advanced deep and reinforcement learning, with growing focus on ethical data use.

  • Deep Learning
    A subset of ML using multi-layered neural networks to learn from large datasets. Transformer-based models dominate across language, vision, and multi-modal tasks.

  • Neural Network
    A computational structure inspired by biological neurons, with interconnected nodes that transform input data into outputs. Current research emphasises new architectures (CNNs, GNNs, transformers).

  • Natural Language Processing (NLP)
    Enabling computers to interpret, generate, and work with human language. Large Language Models (LLMs) power chatbots, summarisation, translation, and more.

  • Generative AI
    AI systems that produce new content (text, images, etc.) often indistinguishable from human-made work. Techniques like GANs, VAEs, diffusion models, and transformer-based generators are widely used.

  • Reinforcement Learning (RL)
    Training agents via reward feedback to optimise actions. Modern RL excels in robotics, autonomous systems, and game-playing (e.g. AlphaZero, MuZero).

  • Supervised Learning
    Models learn from labelled data to map inputs to known outputs. Still crucial for tasks where abundant, high-quality labels exist (e.g. image classification).

  • Unsupervised Learning
    Discovers patterns in unlabelled data (clustering, dimensionality reduction). Used for anomaly detection, customer segmentation, and data exploration.

  • Semi-Supervised Learning
    Uses both labelled and unlabelled data to boost performance when labels are scarce. Popular techniques generate pseudo-labels automatically.

  • Transfer Learning
    Applying knowledge from one task to another. Fine-tuning large pre-trained models is now standard across NLP, vision, and beyond.

  • One-Shot / Few-Shot Learning
    Models learn with very few labelled examples. Prompt-based techniques have amplified this approach in large language models.

  • Zero-Shot Learning
    Performing tasks unseen during training by leveraging semantic or contextual understanding. Common in multilingual translation and classification.

  • Data Mining
    Extracting insights from large datasets using statistical and ML methods. Modern platforms integrate real-time data streams and complex analytics.

  • Data Visualisation
    Presenting data in graphical formats. AI-driven automation creates interactive dashboards and dynamic visual narratives.

  • Predictive Analytics
    Forecasting future trends using historical data. Used widely in finance, marketing, and supply chain to anticipate demand or risk.

  • Computer Vision
    Interpreting and analysing visual data (images, videos). Vision transformers, CNNs, and advanced detection models enhance everything from cameras to autonomous vehicles.

  • Speech Recognition
    Turning spoken language into text. Deep end-to-end networks now achieve near-human accuracy in multiple languages.

  • Speech Synthesis (TTS)
    Generating natural-sounding speech from text. Neural TTS allows expressive, lifelike voices for virtual assistants, accessibility, and brand personas.

  • Robotics
    Engineering autonomous or semi-autonomous machines. Recent advances combine RL, computer vision, and sensor fusion for delicate or complex tasks.

  • Autonomous Vehicles
    Self-driving systems that perceive, plan, and navigate without human input. Sensor fusion (LIDAR, cameras) and ML continue to refine safety and reliability.

  • Explainable AI (XAI)
    Methods that clarify AI decision-making. Tools like SHAP and LIME help explain individual predictions, and regulators increasingly require such explainability for sensitive applications.

  • Ethical AI
    Ensuring AI aligns with societal values, fairness, and accountability. Global regulations and ethics boards shape data practices and deployment.

  • Bias in AI
    Systemic errors from skewed data or flawed assumptions. Bias detection and mitigation frameworks are crucial for equitable outcomes.

  • AI Governance
    Policies and frameworks for responsible AI development and oversight. Organisations publish guidelines to ensure transparency, safety, and compliance.

  • Federated Learning
    Collaborative model training across decentralised data sources, preserving privacy by sharing only model updates rather than raw data.

  • Edge Computing
    Processing data close to where it is generated (e.g., local devices). Advances in hardware compression and lightweight models enable real-time AI on edge devices.

  • Cloud Computing
    On-demand access to computing resources via the internet. AI platforms in the cloud offer scalable training and deployment services with specialised hardware.

  • Quantum Computing (in AI)
    Exploring quantum effects to potentially accelerate complex computations. Still emerging, with promise for optimisation and cryptography tasks.

  • Transferable Adversarial Attacks
    Crafted inputs that fool multiple AI models across different architectures. Robustness remains a critical concern in security-sensitive domains.

  • Training Data
    The dataset used to teach models patterns. The “data-centric AI” trend emphasises quality, diversity, and traceability of training sets.

  • Big Data
    Extremely large or complex datasets. Modern frameworks handle real-time streaming, unstructured data, and advanced analytics.

  • Data Augmentation
    Techniques to expand a dataset by altering or generating new samples. Crucial to improve model robustness and reduce overfitting.

  • Data Cleaning
    Identifying and rectifying errors or outliers in datasets. Automated tools leverage ML to detect anomalies and maintain data integrity.

  • Hyperparameters
    Configurable parameters (e.g., learning rate, number of layers) set before training. Automated methods like Bayesian optimisation expedite tuning.

  • Activation Functions
    Non-linear transformations in neural networks (e.g., ReLU, sigmoid, GELU). Crucial for training stability and performance (see the first sketch after this list).

  • Loss Function
    Measures the difference between model predictions and actual targets. Custom loss designs guide models toward specific objectives (the gradient-descent sketch after this list shows a loss in action).

  • Optimiser
    Algorithms (e.g., Adam, SGD) that iteratively update model parameters to reduce loss. Innovations like AdamW improve stability in large-scale training (the gradient-descent sketch after this list shows the basic idea).

  • Overfitting
    When a model fits its training data too closely and fails to generalise to new data. Regularisation, dropout, and data augmentation help combat this.

  • Underfitting
    When a model fails to capture underlying data patterns, leading to poor performance. Addressed by increasing model complexity or refining data.

  • Regularisation
    Techniques that discourage overly complex models (e.g., L2 penalties, dropout). Helps generalise and prevents overfitting.

  • Batch Normalisation
    Normalises intermediate outputs in neural networks, stabilising training. Alternatives (LayerNorm, GroupNorm) are used in transformers.

  • Convolutional Neural Network (CNN)
    Specialised in extracting spatial features from grid-like data (images). Central to computer vision, though often combined with transformers.

  • Recurrent Neural Network (RNN)
    Processes sequential data (time series, text). LSTMs and GRUs remain valuable, though transformers have become more prevalent.

  • Transformer
    An architecture using self-attention to handle sequences efficiently. Forms the core of modern large language, vision, and multi-modal models.

  • Attention Mechanism
    Enables a model to weigh certain parts of the input more heavily than others. Key to capturing long-range dependencies in text, images, and beyond (see the attention sketch after this list).

  • GPT (Generative Pre-trained Transformer)
    A family of large language models known for coherent text generation. GPT-4 and successors excel at advanced NLP tasks, coding, and reasoning.

  • BERT (Bidirectional Encoder Representations from Transformers)
    Learns deep bidirectional context for words in a sentence. Although overshadowed by autoregressive LLMs, it remains a staple for many NLP tasks.

  • Fine-Tuning
    Tailoring a pretrained model to a specific task with specialised data. Significantly reduces training time and cost.

  • Embeddings
    Dense vector representations of entities (words, images, nodes) capturing semantic relationships. Power many search, recommendation, and classification systems (see the embeddings sketch after this list).

  • Word2Vec
    Early technique for learning word embeddings from large corpora. Historically significant, but now largely replaced by contextual models.

  • GloVe (Global Vectors)
    Embeddings learned from word co-occurrence statistics. Useful for simpler NLP tasks, though overshadowed by transformer-based embeddings.

  • Large Language Model (LLM)
    Massive transformer-based models trained on extensive text. Essential for conversational AI, text generation, translation, and more.

  • Chatbot
    Automated conversational agent. Modern bots use LLMs for more natural, context-aware interactions in customer service, healthcare, and beyond.

  • AI Ethics
    Moral principles ensuring AI benefits society, respects privacy, and avoids harm. Shaped by global regulations and public discourse.

  • Turing Test
    A classic benchmark for gauging machine “intelligence”: can a system’s outputs pass as human? Modern generative models challenge its relevance.

  • Singularity
    Hypothesised moment of AI surpassing human intelligence, triggering unpredictable changes. Remains speculative, fuelling discussions on safety and alignment.

  • AI Alignment
    Ensuring advanced AI systems behave in accordance with human values. Intensive research involves controlling powerful models’ goals and actions.

  • Federated Analytics
    Distributed analysis without centralising raw data. Particularly useful for privacy-sensitive industries like healthcare and finance.

  • Swarm Intelligence
    AI inspired by collective animal behaviours (ants, bees) for optimisation. Combined with RL for multi-agent coordination in logistics or traffic management.

  • Evolutionary Algorithms
    Bio-inspired optimisation (genetic algorithms, neuroevolution). Used to discover novel model architectures and hyperparameters.

  • Bayesian Networks
    Graphical models using probabilistic dependencies between variables. Help with transparent decision-making and uncertainty quantification.

  • Markov Decision Process (MDP)
    Framework for sequential decision-making with states, actions, rewards, and transitions. Underpins reinforcement learning designs.

  • Markov Chain
    A sequence of events where each depends only on its immediate predecessor. Common in stochastic modelling, though deep methods are now more widespread (a tiny simulation appears after this list).

  • AI Chipsets
    Specialised hardware (GPUs, TPUs) optimised for AI calculations. Enable faster training and inference of large models.

  • Neuromorphic Computing
    Circuits mimicking the brain’s parallel, event-driven architecture. Still emerging; may lead to ultra-efficient AI for low-power devices.

  • Blockchain for AI
    Decentralised ledgers for securely sharing data and models. Adoption remains modest, though used in secure federated learning and model governance projects.

  • Digital Twin
    Virtual replica of a physical system, updated in real-time. AI-driven simulation helps predict failures, optimise performance, and guide maintenance.

  • Internet of Things (IoT)
    Networks of smart devices collecting and exchanging data. AI/IoT integration supports predictive maintenance, real-time monitoring, and automation.

  • Smart Cities
    Urban environments leveraging AI and IoT to enhance infrastructure and public services (e.g., traffic, waste management). Emphasis on sustainability and efficiency.

  • Personalisation
    Tailoring content or experiences to individual users. Advanced models integrate user behaviour data for pinpoint recommendations.

  • Recommender Systems
    Predict user preferences using collaborative filtering, content-based approaches, or hybrid deep learning. Essential for e-commerce, streaming, and social platforms.

  • Virtual Assistants
    AI-driven tools that handle tasks like scheduling and information retrieval. Enhanced by speech recognition, TTS, and context-aware LLMs.

  • Cognitive Computing
    AI aiming to emulate human thought processes in reasoning and learning. Blends symbolic, statistical, and neural methods to solve complex problems.

  • Cognitive Search
    Intelligent search that understands query context, semantics, and intent. Transformers provide deeper insight for enterprise knowledge management and Q&A.

  • Graph Neural Networks (GNNs)
    Neural architectures tailored for graph data, capturing relationships among nodes. Used in social networks, fraud detection, and scientific discovery.

  • Time Series Analysis
    Forecasting and analysing data points over time. Deep transformer models now excel at capturing long-term dependencies.

  • Multi-Modal AI
    Systems integrating different data types (text, images, audio, video). Foundation models like CLIP and Flamingo fuse modalities for richer understanding.

  • Continual Learning
    Enabling models to learn new tasks over time without forgetting previous ones. Vital for dynamic settings such as robotics and real-time analytics.

  • Meta-Learning
    “Learning to learn” by discovering optimal training approaches. Drives AutoML improvements and adaptive algorithms.

  • AutoML (Automated Machine Learning)
    Automated pipelines that handle feature engineering, model selection, and tuning. Accelerates experimentation and lowers the barrier to AI adoption.
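
To make a few of these concepts concrete, the short Python sketches below are minimal illustrations, not production code.

Activation functions: each bends a neuron’s weighted-sum input in a non-linear way, which is what lets networks model patterns a straight line cannot. A NumPy sketch (the GELU here is the common tanh approximation):

```python
import numpy as np

# Three common activation functions. Each applies a non-linear "bend"
# to a neuron's weighted-sum input.
def relu(x):
    return np.maximum(0.0, x)                # zero out negative values

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))          # squash values into (0, 1)

def gelu(x):
    # Smooth ReLU-like curve used in transformers (tanh approximation).
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

x = np.linspace(-3.0, 3.0, 7)
print(relu(x), sigmoid(x), gelu(x), sep="\n")
```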
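
Loss function and optimiser: training is a loop that measures error with a loss and nudges parameters downhill. A minimal sketch fitting a one-parameter model y = w·x by gradient descent (the data and learning rate are invented for illustration):

```python
import numpy as np

# Fit y = w * x by minimising mean squared error (the loss function)
# with plain gradient descent (the optimiser).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                        # ground truth: the ideal weight is w = 2

w, lr = 0.0, 0.05                  # initial weight and learning rate (hyperparameters)
for step in range(100):
    pred = w * x
    loss = np.mean((pred - y) ** 2)        # loss: how wrong the model currently is
    grad = np.mean(2.0 * (pred - y) * x)   # gradient of the loss with respect to w
    w -= lr * grad                         # optimiser step: move w downhill

print(f"learned w = {w:.4f} (target 2.0), final loss = {loss:.6f}")
```

Real optimisers such as Adam refine this same update with per-parameter step sizes and momentum.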
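
Attention mechanism: at its core, attention computes a weighted average of value vectors, with weights given by how well each query matches each key. A NumPy sketch of scaled dot-product attention using toy random data:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted average of the rows of V; the weights
    come from a softmax over query-key similarity scores."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax -> attention weights
    return weights @ V

# Toy example: 3 "tokens", each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V))
```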
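
Embeddings: similar things end up with nearby vectors, so “how related are these?” reduces to vector arithmetic. The vectors below are invented for illustration; real embeddings are learned from data:

```python
import numpy as np

# Hypothetical word vectors (real ones are learned, e.g. by Word2Vec or a transformer).
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.78, 0.70, 0.12]),
    "apple": np.array([0.10, 0.20, 0.90]),
}

def cosine_similarity(a, b):
    # 1.0 means identical direction; values near 0 mean unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```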
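
Markov chain: each step depends only on the current state. A toy two-state weather model (the transition probabilities are made up):

```python
import random

# Transition probabilities: tomorrow's weather depends only on today's.
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

state, history = "sunny", []
for _ in range(10):
    options = transitions[state]
    state = random.choices(list(options), weights=list(options.values()))[0]
    history.append(state)

print(" -> ".join(history))
```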