
🗒️ Lectures

📄️ W02. Movement 01

Welcome to the first lecture of the Movement module. When we think of movement, we often think of output in terms of muscles, motors, or other force-producing devices. However, movement by intelligent creatures (from viruses to humans) is usually purpose-driven in some way. If we think about what it means to be alive, there seems to be some kind of purpose-driven action involved (e.g., eating for the purpose of self-perpetuation or self-replication). To be purpose-driven in any way, some kind of sensation is required, since you need not just to act, but to act in a way that moves you closer to some eventual goal.

📄️ W08. Emergence 02

Today we will discuss systems theory and variations on Conway's Game of Life. These are generalizations of the discrete form of cellular automata that Conway developed. One form of generalization is in allowing for different rule sets to be explored. Another is to allow for the spatial resolution to increase, making the "squares" almost continuous. Another is to change the values that can be applied, e.g., allowing the squares to be "greyscale" and vary between 0 and 1. Finally, the transition functions can be changed, allowing for more complex relationships between pixels.
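These generalizations can be sketched in a few lines. Below is a minimal NumPy sketch (my own illustration, not code from the lecture): the first function parameterizes the rule set of a Life-like automaton, and the second shows a "greyscale" variant whose transition function is a smooth bump rather than a hard membership test; the particular smooth function is an arbitrary illustrative choice, not a standard rule.

```python
import numpy as np

def step(grid, birth={3}, survive={2, 3}):
    """One update of a Life-like cellular automaton.

    `birth` and `survive` are the rule set: a dead cell turns on if its
    live-neighbour count is in `birth`; a live cell stays on if the count
    is in `survive`. Conway's original rules are B3/S23 (the defaults).
    """
    # Sum of the 8 neighbours, with wrap-around (toroidal) edges.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    born = (grid == 0) & np.isin(n, list(birth))
    stays = (grid == 1) & np.isin(n, list(survive))
    return (born | stays).astype(grid.dtype)

def step_grey(grid):
    """A greyscale variant: states vary continuously in [0, 1], and the
    transition is a smooth bump centred on neighbour sums near the
    'alive' range (an illustrative choice of transition function).
    """
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    return np.clip(np.exp(-((n - 2.5) ** 2) / 2.0), 0.0, 1.0)

# A glider under the classic B3/S23 rules.
g = np.zeros((8, 8), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    g[y, x] = 1
g4 = g
for _ in range(4):
    g4 = step(g4)
# After 4 steps the glider reappears shifted one cell down and right.
```

Changing only the `birth`/`survive` sets (e.g., B36/S23, "HighLife") already produces qualitatively different dynamics, which is the first generalization described above.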

📄️ W09. Emergence 03

Today we extend Conway's Game of Life into more dimensions and resolutions. The original GoL was simple enough to be hand-computed, and, amazingly, produced self-replicating patterns that seemed to go on "forever". However, the limitations of a large square grid and only two states are obvious, as are the possible extensions: what if we can make the grid so small that it becomes continuous spatially, and what if we can give the cells so many states that they become continuous state-wise? The further we push this, the more our cells produce things that look like real-world microscopic life forms. In this, we get a much clearer demonstration of emergence, where things that seem to have intentional behaviour and even "forces" are driven purely by unintelligent state transition rules. "Agents" emerge.
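Pushing both the grid and the states toward continuity gives systems in the spirit of Lenia. Here is a minimal sketch of one such update (my own illustration: the ring-shaped kernel, Gaussian growth function, and all parameter values are illustrative assumptions, not canonical Lenia settings):

```python
import numpy as np

def continuous_step(world, dt=0.1, mu=0.15, sigma=0.015, R=5):
    """One update of a Lenia-style continuous cellular automaton.

    States are real numbers in [0, 1]; the neighbourhood is a smooth
    ring kernel of radius R instead of 8 discrete cells; the transition
    is a Gaussian 'growth' function of the local kernel average.
    """
    # Build a ring-shaped kernel peaking at half the radius.
    y, x = np.mgrid[-R:R + 1, -R:R + 1]
    r = np.sqrt(x ** 2 + y ** 2) / R
    kernel = np.exp(-((r - 0.5) ** 2) / 0.01) * (r <= 1)
    kernel /= kernel.sum()
    # Circular (wrap-around) convolution via FFT.
    K = np.zeros_like(world)
    K[:2 * R + 1, :2 * R + 1] = kernel
    K = np.roll(K, (-R, -R), axis=(0, 1))
    u = np.real(np.fft.ifft2(np.fft.fft2(world) * np.fft.fft2(K)))
    # Growth is positive where u is near mu, negative elsewhere.
    growth = 2 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1
    return np.clip(world + dt * growth, 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.random((64, 64)) * (rng.random((64, 64)) < 0.2)
for _ in range(10):
    w = continuous_step(w)
```

The smooth kernel and smooth growth function are what make the resulting patterns look organic rather than blocky: "forces" and "agents" appear, yet the update is still a fixed, unintelligent state-transition rule applied everywhere at once.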

📄️ W09. Emergence 04

Today, we introduce simple swarms, which are multi-agent systems. The difference between a swarm and a cellular automaton is that a swarm comprises agents that act over a surface or environment, whereas a cellular automaton emerges out of the environment itself. Cellular automata operate at the "atom" level, where we can see the emergence of higher-level structures like cell walls. Swarms operate at the "cell" level, where we can see the emergence of cell bodies, specialized parts, and "multi-cellular" organisms. Again, although none of the agents in these swarms is individually "intelligent", when they are allowed to interact, complex structures start to emerge, and something like "decisions" seems to start to happen.
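A classic example of such a swarm is Reynolds' boids, where each agent follows three purely local rules. Below is a minimal sketch (my own illustration; the rule weights, radius, and time step are arbitrary assumptions chosen for readability, not tuned values):

```python
import numpy as np

def boids_step(pos, vel, dt=0.1, r=2.0, w_coh=0.5, w_sep=1.0, w_ali=0.3):
    """One update of a minimal boids-style swarm.

    Each agent reacts only to neighbours within radius r, via three
    local rules: cohesion (steer toward the neighbours' centre),
    separation (steer away from very close neighbours), and alignment
    (match the neighbours' average velocity). No agent has a global
    plan; flocking emerges from the interactions.
    """
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        d = pos - pos[i]                       # offsets to all agents
        dist = np.linalg.norm(d, axis=1)
        nb = (dist < r) & (dist > 0)           # neighbours (not self)
        if not nb.any():
            continue
        coh = d[nb].mean(axis=0)                          # toward centre
        sep = -(d[nb] / dist[nb, None] ** 2).sum(axis=0)  # push apart
        ali = vel[nb].mean(axis=0) - vel[i]               # match heading
        new_vel[i] += dt * (w_coh * coh + w_sep * sep + w_ali * ali)
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(1)
pos = rng.random((30, 2)) * 10     # 30 agents on a 10x10 surface
vel = rng.standard_normal((30, 2))
for _ in range(50):
    pos, vel = boids_step(pos, vel)
```

Note how this differs from the cellular-automata code of earlier lectures: here the state lives in the agents (positions and velocities) moving over the environment, not in the environment's grid cells.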

📄️ W10. Learning 01

Today is the first lecture of the Learning module, and we will start from the systemic perspective. We will learn about evolutionary algorithms and genetic programming. Both kinds of algorithm take inspiration from biology and the theory of evolution, where agents have deterministic programs once they are "born", but each new generation has a small chance of "mutation". To decide which agents and algorithms "live" to the next generation, we have to impose evaluation frameworks that simulate living, dying, gaining energy, and losing energy.
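The loop of deterministic "birth", mutation, and selection can be sketched very compactly. Here is a minimal evolutionary algorithm (my own illustration: the bit-string genome, population size, mutation rate, and the "one-max" fitness function are all illustrative assumptions):

```python
import random

def evolve(fitness, genome_len=20, pop_size=40, generations=60,
           mutation_rate=0.02, seed=0):
    """A minimal evolutionary algorithm.

    Genomes are fixed bit strings: each agent's 'program' is
    deterministic once born. Each generation, the fitter half 'lives'
    and reproduces; offspring copy a parent with a small per-bit chance
    of mutation. The fitness function is the imposed evaluation
    framework that decides who survives.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]        # selection: fitter half
        children = []
        for parent in survivors:
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in parent]          # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy evaluation framework: fitness = number of 1s ("one-max").
best = evolve(fitness=sum)
```

Because the survivors are kept unchanged ("elitism"), the best fitness in the population never decreases; all improvement comes from lucky mutations in the children.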

📄️ W11. Learning 03

Utility allows us to think of scale-free laws of intelligence: what does it take for any system to be intelligent? Many agents can sense, act, control, and model (predict) the world, but we still would not think of them as intelligent. In psychological terms, habituation and classical conditioning are memory-based agentic state changes that allow for sophisticated strategic actions, but not true learning. However, being able to update a reward model gives us the ability to follow chains of actions, even when they present immediate punishment. We compare habituation and decision-making in plant intelligence to our robot models of intelligence using Q-learning.
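The point about following punishing chains of actions can be made concrete with tabular Q-learning. Below is a minimal sketch (my own illustration: the corridor environment, its rewards, and the hyperparameters are invented for demonstration): each step right is punished with -1, but the end of the corridor pays +10, so a purely reactive agent would quit while a value-updating agent learns to push through.

```python
import random

def q_learn(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
            epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy corridor.

    Action 0 = 'quit' (reward 0, episode ends); action 1 = 'step right'
    (immediate punishment -1 per step), but the final state pays +10.
    Updating the value (Q) model lets the agent follow the chain of
    punishing steps toward the larger delayed reward.
    """
    rng = random.Random(seed)
    # Optimistic initial values encourage trying each action.
    Q = [[5.0, 5.0] for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < epsilon:
                a = rng.randrange(2)                     # explore
            else:
                a = max((0, 1), key=lambda x: Q[s][x])   # exploit
            if a == 0:                       # quit: no reward, stop
                r, s2, done = 0.0, s, True
            elif s + 1 == n_states - 1:      # last step reaches the goal
                r, s2, done = -1.0 + 10.0, s + 1, True
            else:                            # painful step right
                r, s2, done = -1.0, s + 1, False
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

Q = q_learn()
# Despite the -1 per step, the learned value of stepping right from the
# start (Q[0][1]) ends up above the value of quitting (Q[0][0]).
```

Habituation or classical conditioning could only dampen or strengthen the reaction to the immediate -1; it is the discounted value update (`r + gamma * max(Q[s2])`) that propagates the delayed +10 backwards along the chain.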

📄️ W12. Intelligence 01

We'll explore how neural networks learn to recognize patterns by building functions from data. Starting from the challenge of digit recognition, we'll construct networks neuron by neuron, discovering why layers create hierarchical representations and how non-linearity enables networks to approximate any function. We'll train neural networks ourselves using TensorFlow Playground, where you'll see how gradient descent navigates high-dimensional parameter spaces to minimize error. We'll connect these ideas to course themes of emergence and distributed cognition, examining how simple computations combine to produce complex behavior.
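What TensorFlow Playground animates can be sketched in plain NumPy. Below is a minimal illustration (architecture, learning rate, and step count are my own illustrative choices): a two-layer network trained by gradient descent on XOR, a pattern no single-layer (linear) network can represent, which is exactly why the non-linear hidden layer matters.

```python
import numpy as np

def train_xor(hidden=4, lr=0.5, steps=5000, seed=0):
    """Train a tiny two-layer network on XOR by gradient descent.

    The tanh hidden layer supplies the non-linearity; gradient descent
    navigates the parameter space (here W1, b1, W2, b2) to minimize
    squared error on the four XOR examples.
    """
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)       # XOR targets
    W1 = rng.standard_normal((2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)); b2 = np.zeros(1)
    losses = []
    for _ in range(steps):
        # Forward pass: hidden tanh layer, then sigmoid output.
        h = np.tanh(X @ W1 + b1)
        out = 1 / (1 + np.exp(-(h @ W2 + b2)))
        losses.append(float(((out - y) ** 2).mean()))
        # Backward pass: gradients of the squared error
        # (constant factors folded into the learning rate).
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * (1 - h ** 2)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return out, losses

preds, losses = train_xor()
# Gradient descent drives the error down from its random-initial value.
```

Each individual neuron performs only a weighted sum and a squashing function; the ability to separate XOR emerges from their combination, which ties directly to the course themes of emergence and distributed cognition.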