Artificial Intelligence
Graphs · Graph Search Algorithms · Neural Networks · Machine Learning · Deep Learning
| Term | Meaning | Example |
|---|---|---|
| Node / Vertex | A point in the graph | City on a map |
| Edge | Connection between two nodes | Road between cities |
| Weight | Cost/distance on an edge | Distance in km |
| Directed graph | Edges have direction (one-way) | Twitter follows |
| Undirected graph | Edges go both ways | Facebook friendships |
| Path | Sequence of connected nodes | Route from A to B |
Graph: A→B(4), A→C(2), B→D(3), C→B(1), C→D(5)
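The graph above can be stored as an adjacency list and searched with Dijkstra's algorithm. A minimal sketch (the dictionary representation is one common choice, not the only one):

```python
import heapq

# The example graph: A→B(4), A→C(2), B→D(3), C→B(1), C→D(5)
graph = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 3)],
    "C": [("B", 1), ("D", 5)],
    "D": [],
}

def dijkstra(graph, start):
    """Return the shortest distance from start to every node."""
    dist = {node: float("inf") for node in graph}
    dist[start] = 0
    pq = [(0, start)]                      # priority queue of (distance so far, node)
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist[node]:                 # stale queue entry, skip it
            continue
        for neighbour, weight in graph[node]:
            new_d = d + weight
            if new_d < dist[neighbour]:    # found a shorter route: update table
                dist[neighbour] = new_d
                heapq.heappush(pq, (new_d, neighbour))
    return dist

print(dijkstra(graph, "A"))  # {'A': 0, 'B': 3, 'C': 2, 'D': 6}
```

Note how B ends up at 3, not 4: the route A→C→B (2 + 1) beats the direct edge A→B (4) when the distance table is updated.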
✅ A* advantages
- Faster than Dijkstra for point-to-point search
- Optimal if heuristic is admissible (never overestimates)
- Widely used in games, GPS, robotics
⚠️ A* limitations
- Requires a good heuristic function
- Memory-intensive — stores open/closed lists
- Heuristic quality affects performance
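A* can be sketched on a small grid, using Manhattan distance as the admissible heuristic (the grid and costs here are illustrative; f(n) = g(n) + h(n) orders the open list):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 2D grid; 0 = free cell, 1 = wall. Returns cheapest path cost."""
    def h(p):  # Manhattan-distance heuristic: never overestimates on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_list = [(h(start), 0, start)]        # entries are (f = g + h, g, node)
    g_cost = {start: 0}
    while open_list:
        f, g, node = heapq.heappop(open_list) # expand the node with lowest f
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                if g + 1 < g_cost.get((r, c), float("inf")):
                    g_cost[(r, c)] = g + 1
                    heapq.heappush(open_list, (g + 1 + h((r, c)), g + 1, (r, c)))
    return None                               # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # 6 — forced around the wall
```

The heuristic is what makes this faster than Dijkstra: nodes that point away from the goal get a larger f value and are expanded later (or never).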
| Component | Role |
|---|---|
| Input layer | Receives raw data (pixel values, sensor readings, etc.) |
| Hidden layer(s) | Learns intermediate features and patterns |
| Output layer | Produces final result (class label, prediction, etc.) |
| Weight | Strength of connection — adjusted during training |
| Bias | Offset that shifts the activation function |
| Activation function | Introduces non-linearity (e.g. ReLU, sigmoid) |
- Forward pass: Input flows through the network → prediction is made
- Error calculated: Compare prediction to correct answer (loss function)
- Backward pass: Error is propagated backwards through the network
- Weights updated: Each weight adjusted using gradient descent to reduce error
- Repeat thousands of times → network gradually improves
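The training loop above can be sketched in the simplest possible case: a single sigmoid neuron learning logical OR (with one neuron the "backward pass" collapses to a single gradient step; the data, learning rate, and epoch count are illustrative):

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Training data for logical OR: (inputs, target) pairs
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, b = random.random(), random.random(), random.random()
lr = 0.5

for epoch in range(5000):                      # repeat thousands of times
    for (x1, x2), target in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)     # forward pass -> prediction
        grad = (y - target) * y * (1 - y)      # error x sigmoid derivative
        w1 -= lr * grad * x1                   # weights updated by
        w2 -= lr * grad * x2                   # gradient descent
        b  -= lr * grad

for (x1, x2), target in data:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b)))
```

After training, the rounded predictions should match the OR truth table; in a multi-layer network the same error signal is propagated backwards through each hidden layer via the chain rule.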
Regression methods predict a continuous numerical value (unlike classification which predicts a category).
- Linear regression: Fits a straight line to data (y = mx + c). Minimises sum of squared errors.
- Logistic regression: Predicts probability (0–1) — used for binary classification
- Polynomial regression: Fits a curve to non-linear data
In neural networks, training is itself a regression problem: finding the best weights by minimising a cost function through gradient descent.
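Linear regression via gradient descent can be sketched directly (the points here are illustrative, generated from y = 2x + 1):

```python
# Fit y = m*x + c by gradient descent on the mean squared error
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]          # points on the line y = 2x + 1

m, c, lr = 0.0, 0.0, 0.01
for _ in range(10000):
    # Partial derivatives of mean squared error w.r.t. m and c
    dm = sum(2 * (m * x + c - y) * x for x, y in zip(xs, ys)) / len(xs)
    dc = sum(2 * (m * x + c - y) for x, y in zip(xs, ys)) / len(xs)
    m -= lr * dm              # step downhill on the cost surface
    c -= lr * dc

print(round(m, 2), round(c, 2))  # 2.0 1.0
```

The same loop (gradients, then a step against them) is exactly what happens to every weight in a neural network, just with far more parameters.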
✅ Supervised Learning
Trained on labelled data — input/output pairs. Model learns to map inputs to known outputs.
Examples: Image classification, spam detection, price prediction
🔍 Unsupervised Learning
Trained on unlabelled data. Model finds hidden patterns and structure on its own.
Examples: Customer segmentation, anomaly detection, recommendation systems
🎮 Reinforcement Learning
Agent learns by interacting with an environment. Receives rewards for good actions, penalties for bad ones.
Examples: Game-playing AI (chess, Go), robot navigation, trading bots
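The reward/penalty loop can be sketched with tabular Q-learning in a hypothetical toy environment: a 1-D corridor of 5 cells where the agent starts at cell 0 and earns +1 for reaching cell 4 (all constants here are illustrative):

```python
import random

random.seed(1)

N_STATES, ACTIONS = 5, (-1, +1)            # actions: move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2          # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != 4:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == 4 else 0.0        # reward only at the goal
        # Q-learning update: nudge Q(s,a) toward reward + discounted future value
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
        s = s2

# Greedy policy after training (should prefer +1, i.e. move right, in every state)
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(4)])
```

No labelled input/output pairs are given anywhere: the agent discovers the "always move right" policy purely from the reward signal.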
| | Machine Learning | Deep Learning |
|---|---|---|
| Architecture | Traditional algorithms (decision trees, SVMs) | Multi-layer neural networks (many hidden layers) |
| Feature engineering | Human must manually select features | Automatically learns features from raw data |
| Data requirements | Works with smaller datasets | Needs very large datasets |
| Compute needs | Modest CPU | High — requires GPUs/TPUs |
| Interpretability | Usually explainable | Often a “black box” |
| Best for | Structured/tabular data | Images, speech, text, video |
- Know graph terminology: node, edge, weight, directed/undirected
- Trace Dijkstra’s algorithm on a weighted graph — show distance table updating
- Explain f(n) = g(n) + h(n) in A* and why the heuristic makes it faster
- Describe structure of ANN: input, hidden, output layers + weights
- Explain backpropagation in plain English (forward pass, error, backwards update)
- Distinguish supervised, unsupervised, and reinforcement learning with examples
- Compare Deep Learning vs Machine Learning — when to use each
