📄️ Lectures Overview
The Schedule outlines the technical topics that we will be learning, but it is just a rough overview. Lectures for this course will broadly follow the same pattern:
📄️ W01. Introduction 01
Today, we introduce the fundamental question of this course and the principles behind it: how is mechanical intelligence like human intelligence? We will learn about emergence and computation, and see concrete demonstrations of these concepts.
📄️ W01. Introduction 02
Today, we inspect the structure and function of the purported atomic units of both computation and cognition. We see how digital signals are produced and used, and compare them to how neuronal signals are produced and used. With this, we pose the question of whether brains are "like" computers at the fundamental level.
📄️ W02. Movement 01
Welcome to the first lecture of the Movement module. When we think of movement, we often think of output in terms of muscles, motors, or other force-producing devices. However, movement by intelligent creatures (from viruses to humans) is usually purpose-driven in some way. If we think about what it means to be alive, there seems to be some kind of purpose-driven action involved (i.e., eating for the purpose of self-perpetuation, self-replication, etc.). To be purpose-driven in any way, some kind of sensation is required, since you need to not just act, but act in a way that moves you closer to some kind of eventual goal.
📄️ W02. Movement 02
We talked about sensation last time. Let's talk about the other half of the interactive loop: actuation.
📄️ W03. Movement 03
Last time we discussed basic mechanics and DC motors. Today, we will discuss the control of movement from a signal standpoint. The principle of movement control is a simple process with a complicated-sounding name: Pulse Width Modulation (PWM).
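The core idea can be sketched in a few lines: by switching a pin fully on and off very quickly, the duty cycle (the fraction of each period spent HIGH) sets the average voltage the motor effectively sees. This is a minimal sketch; the 5 V supply value is an illustrative assumption, not a figure from the lecture.

```python
def pwm_average(v_supply: float, duty_cycle: float) -> float:
    """Average output voltage of a PWM signal with the given duty cycle (0..1)."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be between 0 and 1")
    # The output is HIGH for duty_cycle of each period and LOW for the rest,
    # so the time-averaged voltage is simply the supply scaled by the duty cycle.
    return v_supply * duty_cycle

# A 40% duty cycle on a 5 V supply delivers 2 V on average:
print(pwm_average(5.0, 0.4))  # 2.0
```

Varying the duty cycle therefore varies motor speed without ever using an analog voltage.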
📄️ W03. Movement 04
To close off the Movement module, we are going to build up to the design of a servo motor. We have all of the necessary components: DC motors, drivers, potentiometers, and PWM control. We will see a real-life servo, look at the "guts," and then create a large-scale model servo to figure out how it works.
📄️ W04. Control 01
This is the beginning of the Control module. Last time, we modelled a servo and saw a complete but simple control loop. The potentiometer was used for rotational encoding. However, our robots travel too far for rotational encoding alone: we need to translate rotation into linear motion.
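The translation itself is one line of geometry: a full wheel rotation advances the robot by one wheel circumference. A minimal sketch, where the 3 cm wheel radius is an illustrative assumption:

```python
import math

def distance_travelled(rotations: float, wheel_radius_m: float) -> float:
    """Linear distance covered by a wheel of the given radius after `rotations` full turns."""
    # Each rotation moves the robot one circumference (2 * pi * r) forward.
    return rotations * 2 * math.pi * wheel_radius_m

# Example: a 3 cm wheel turning 10 times moves about 1.88 m.
print(round(distance_travelled(10, 0.03), 2))  # 1.88
```

Counting rotations with an encoder and multiplying by the circumference is the basis of wheel odometry.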
📄️ W04. Control 02
Last time, we talked about telemetry and localization. As you will find out in the labs, it's often hard to do localization with just the on-board sensors. It's still hard with external sensors, but they can give you another piece of the picture.
📄️ W05. Control 03
Maintaining distance from an object is both simple and hard. The basic problem is that you want to reduce error, i.e., get to the distance that you want to be at. But, because of the physics of the real world, you don't always get exactly there.
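The "reduce error" idea can be sketched as a proportional controller: command a movement proportional to how far off you are. The gain, starting distance, and target below are illustrative assumptions, not values from the lecture.

```python
def p_control_step(distance: float, target: float, kp: float = 0.5) -> float:
    """Return a movement command proportional to the distance error."""
    error = target - distance
    return kp * error

distance = 30.0  # cm, current reading
for _ in range(10):
    # Assume (idealistically) the robot moves exactly as commanded each step.
    distance += p_control_step(distance, target=10.0)
print(round(distance, 3))  # 10.02: close to the 10 cm target, but never exactly there
```

Each step halves the remaining error, so the robot approaches the target asymptotically; real-world noise and actuator limits make the residual error even more stubborn.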
📄️ W05. Control 04
Today is an introduction to probability and predictions that are useful in control. Robots live in a probabilistic world, i.e., there is never 100% certainty about the correspondence between their model of the world and the world itself. This is due to the presence of error: nothing is perfectly precise, so none of our models can be perfectly deterministic.
📄️ W06. State 01
Today we will learn about Bayes' Theorem, and how to apply it to detection in robotics. It is foundational to many techniques in robotics, from basic detection to building complex machine learning classifiers.
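As a taste of the detection application, a minimal sketch of Bayes' Theorem turning a noisy sensor reading into a posterior belief. All probabilities below (prior, true-positive and false-positive rates) are illustrative assumptions:

```python
def posterior(prior: float, p_hit_given_obstacle: float, p_hit_given_clear: float) -> float:
    """P(obstacle | positive reading) via Bayes' Theorem."""
    # Total probability of a positive reading, over both world states:
    evidence = p_hit_given_obstacle * prior + p_hit_given_clear * (1 - prior)
    return p_hit_given_obstacle * prior / evidence

# A noisy sensor (90% true-positive, 20% false-positive rate) with a 10% prior:
print(round(posterior(0.1, 0.9, 0.2), 3))  # 0.333
```

Note the counter-intuitive result: even a fairly good sensor only yields a one-in-three belief when the prior is low, which is exactly why the theorem matters for detection.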
📄️ W06. State 02
Today we will learn how to apply Bayes filters to object detection. The Bayes Filter is just an iterated version of Bayes' Theorem. By repeatedly applying Bayes' Theorem, we can converge to a particular measurement with a high likelihood of being correct. This should make sense intuitively: if we repeatedly measure that we're at 10cm, we're probably pretty close to being at 10cm.
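The "repeatedly measure 10 cm" intuition can be sketched as iterated Bayes updates over a tiny discrete belief. The three candidate positions and the noisy-sensor likelihoods are illustrative assumptions:

```python
positions = [9, 10, 11]        # candidate positions, in cm
belief = [1/3, 1/3, 1/3]       # uniform prior over the three positions

def likelihood(measured, true):
    # Simple noisy-sensor model: the correct position is much more likely
    # to produce the reading than either neighbour.
    return 0.8 if measured == true else 0.1

for _ in range(3):             # three identical readings of 10 cm
    belief = [likelihood(10, p) * b for p, b in zip(positions, belief)]
    total = sum(belief)
    belief = [b / total for b in belief]   # renormalize so beliefs sum to 1

print(round(belief[positions.index(10)], 3))  # 0.996
```

Each application of Bayes' Theorem multiplies in the same evidence, so the belief at 10 cm rapidly dominates, matching the intuition in the summary above.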
📄️ W07. State 03
Today we will learn how to apply Bayes' Theorem in a chain to create Bayes Networks.
📄️ W07. State 04
Today we will learn about classifiers, using line-tracking as an example. Our modern world uses classifiers as the main detection paradigm because creating a simple "rule" for detectors is difficult if not impossible for most tasks with complex and real-world data. We'll see an example of pre-built classifiers that you can use with your robots, and start to think about how to design a system that can use a classifier.
📄️ W08. Emergence 01
Today, we really start to dig into complexity science 101: automata. If you've taken a discrete math course, you've probably already seen finite state machines (FSMs) a.k.a. deterministic finite state automata (DFAs).
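For readers who have not met DFAs before, here is a minimal sketch of one: a machine that accepts binary strings containing an even number of 1s. The states and transitions are an illustrative example, not one from the lecture:

```python
# Transition table: (current state, input symbol) -> next state.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(s: str) -> bool:
    state = "even"                        # start state
    for ch in s:
        state = TRANSITIONS[(state, ch)]  # deterministic: one next state per input
    return state == "even"                # "even" is the sole accepting state

print(accepts("1100"), accepts("111"))  # True False
```

The whole machine is just a lookup table plus a current state, which is what makes automata such a clean starting point for complexity science.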
📄️ W08. Emergence 02
Today we will discuss systems theory and variations on Conway's Game of Life. These are generalizations of the discrete form of cellular automata that Conway developed. One form of generalization is in allowing for different rule sets to be explored. Another is to allow for the spatial resolution to increase, making the "squares" almost continuous. Another is to change the values that can be applied, e.g., allowing the squares to be "greyscale" and vary between 0 and 1. Finally, the transition functions can be changed, allowing for more complex relationships between pixels.
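As a baseline for those generalizations, a minimal sketch of one step of the original, discrete Game of Life. The "blinker" starting pattern is a standard illustrative example:

```python
from collections import Counter

def step(cells):
    """Apply Conway's standard rules to a set of live-cell (x, y) coordinates."""
    # Count how many live neighbours each candidate cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step if it has exactly 3 live neighbours (birth),
    # or 2 live neighbours and is already alive (survival).
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in cells)
    }

blinker = {(0, 1), (1, 1), (2, 1)}                # horizontal bar of three cells...
print(step(blinker) == {(1, 0), (1, 1), (1, 2)})  # ...flips to vertical: True
```

Each generalization in the summary above changes one piece of this sketch: the rule set (the birth/survival counts), the grid resolution, the cell values, or the transition function itself.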
📄️ W09. Emergence 03
Today we extend Conway's Game of Life into more dimensions and resolutions. The original GoL was simple enough to be hand-computed, and, amazingly, produced self-replicating patterns that seemed to go on "forever". However, the limitations of a large square grid and only two states are obvious, as are the possible extensions: what if we can make the grid so small that it becomes continuous spatially, and what if we can give the cells so many states that they become continuous state-wise? The further we push this, the more our cells produce things that look like real-world microscopic life forms. In this, we get a much clearer demonstration of emergence, where things that seem to have intentional behaviour and even "forces" are driven purely by unintelligent state transition rules. "Agents" emerge.
📄️ W09. Emergence 04
Today, we introduce simple swarms, which are multi-agent systems. The difference between a swarm and cellular automata is that a swarm comprises agents that act over a surface or environment, whereas cellular automata emerge out of the environment itself. The cellular automata are at the "atom" level, where we can see the emergence of higher-level structures like cell walls. The swarms are at the "cell" level, where we can see the emergence of cell bodies, specialized parts, and "multi-cellular" organisms. Again, although nothing is "intelligent" in any of the agents in these swarms, when allowed to interact, complex structures start to emerge, and something like "decisions" seem to start to happen.
📄️ W10. Learning 01
Today begins the Learning module, and we will start from the systemic perspective. We will learn about evolutionary algorithms and genetic programming. These kinds of algorithms both take inspiration from biology and the theory of evolution, where agents have deterministic programs once they are "born", but each new generation has a small chance of "mutation". To decide which agents and algorithms "live" to the next generation, we have to impose evaluation frameworks that simulate living, dying, gaining energy and losing energy.
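The live/die/mutate loop can be sketched in a few lines. This is a minimal illustrative example, not the course's actual framework: the population size, mutation rate, and bit-counting fitness function are all assumptions.

```python
import random

random.seed(0)

def fitness(genome):
    """Evaluation framework: "living well" here just means having many 1 bits."""
    return sum(genome)

def mutate(genome, rate=0.1):
    """Each new generation: every bit has a small chance of flipping."""
    return [b ^ 1 if random.random() < rate else b for b in genome]

# A random starting population of 20 ten-bit "genomes".
population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
initial_best = max(map(fitness, population))

for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                        # the fittest half "lives"
    population = survivors + [mutate(g) for g in survivors]  # offspring may mutate

final_best = max(map(fitness, population))
print(final_best >= initial_best)  # True: elitist selection never loses the best genome
```

Because the fittest survivors are carried over unmutated, the best fitness can only rise or hold steady, which is the essential ratchet behind evolutionary search.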
📄️ W10. Learning 02
Today, we introduce the concept of utility, or the ability to evaluate different actions based on current state. From a single-agent perspective, we can evaluate whether our actions are reasonable given feedback from the world, and a way to value our state as a result of our actions. By keeping a record of actions and value, we can start to model uncertainty in the world and make predictions.
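A minimal sketch of the "keep a record of actions and value" idea: maintain a running average of the reward each action has produced, and prefer the higher-valued one. The action names and reward values below are illustrative assumptions:

```python
from collections import defaultdict

totals = defaultdict(float)   # summed reward observed per action
counts = defaultdict(int)     # number of times each action was tried

def record(action, reward):
    """Log feedback from the world after taking an action."""
    totals[action] += reward
    counts[action] += 1

def utility(action):
    """Average reward for an action; 0 if it has never been tried."""
    return totals[action] / counts[action] if counts[action] else 0.0

# Feedback after trying two actions a few times:
for r in (1.0, 0.5, 1.5):
    record("forward", r)
for r in (-0.5, 0.0):
    record("spin", r)

best = max(("forward", "spin"), key=utility)
print(best, utility(best))  # forward 1.0
```

Averaging over a record of outcomes is also a first way to handle uncertainty: noisy individual rewards wash out as the count grows.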
📄️ W11. Learning 03
Utility allows us to think of scale-free laws of intelligence: what does it take for any system to be intelligent? Many agents can sense, act, control, and model (predict) the world, but we still would not think of them as intelligent. In psychological terms, habituation and classical conditioning are memory-based agentic state changes that allow for sophisticated strategic actions, but not true learning. However, being able to update a reward model gives us the ability to follow chains of actions, even if they present immediate punishment. We compare habituation and decision-making in plant intelligence to our robot models of intelligence using Q-learning.
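The "follow a chain through immediate punishment" point can be sketched with a single Q-learning update. The tiny two-state world and the parameter values are illustrative assumptions:

```python
ALPHA, GAMMA = 0.5, 0.9   # learning rate and discount factor (assumed values)

def q_update(q, state, action, reward, next_state):
    """Standard one-step Q-learning update rule."""
    best_next = max(q[next_state].values(), default=0.0)
    # Move the estimate toward the immediate reward plus discounted future value.
    q[state][action] += ALPHA * (reward + GAMMA * best_next - q[state][action])

# Passing "through" costs -1 immediately but reaches a valuable goal state:
q = {"start": {"through": 0.0}, "goal": {"stay": 10.0}}
q_update(q, "start", "through", reward=-1.0, next_state="goal")
print(q["start"]["through"] > 0)  # True: future reward outweighs the immediate punishment
```

The discounted future term is what lets a Q-learner value an action a habituating agent would simply avoid.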
📄️ W11. Learning 04
Today we will learn about communication strategies within and between systems. Different systems have different topological designs, but most advanced systems are distributed to some degree, i.e., computation takes place in more than one area. However, the further apart modules are, the more communication problems can occur.
📄️ W12. Intelligence 01
We'll explore how neural networks learn to recognize patterns by building functions from data. Starting from the challenge of digit recognition, we'll construct networks neuron by neuron, discovering why layers create hierarchical representations and how non-linearity enables networks to approximate any function. We'll train neural networks ourselves using TensorFlow Playground, where you'll see how gradient descent navigates high-dimensional parameter spaces to minimize error. We'll connect these ideas to course themes of emergence and distributed cognition, examining how simple computations combine to produce complex behavior.
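As a single-neuron preview of what TensorFlow Playground animates, here is a minimal sketch of gradient descent training one sigmoid neuron on a toy 1-D task. The data, learning rate, and iteration count are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy task: learn to output 1 when x > 0.5, from four labelled points.
data = [(0.0, 0), (0.25, 0), (0.75, 1), (1.0, 1)]
w, b, lr = 0.0, 0.0, 1.0

for _ in range(5000):
    for x, y in data:
        pred = sigmoid(w * x + b)
        grad = pred - y           # gradient of cross-entropy loss w.r.t. the pre-activation
        w -= lr * grad * x        # descend along each parameter
        b -= lr * grad

correct = all((sigmoid(w * x + b) > 0.5) == bool(y) for x, y in data)
print(correct)  # True: the neuron separates the two classes
```

A network is many of these neurons stacked in layers; the same gradient signal, propagated backwards, adjusts every parameter at once, which is the high-dimensional navigation the Playground visualizes.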