Open-Endedness, Multi-Agent Learning and Existential Risk

Blake Elias
Oct 10, 2020

If we want to talk about general intelligence, with agents that undergo life-long, open-ended learning, then we have to talk about agents existing in a continuous feedback loop with a changing environment. It’s not enough to design a “perfectly intelligent” agent that we train just once and then put out into the world — the “train → test” paradigm of machine learning. An agent that’s truly alive and intelligent is never done learning: over its lifetime, it will constantly be “tested” in the real world, and as a result of those tests it will continue to “re-train” itself.

We have to address the ability of an agent to adapt to an ever-changing environment. Critically, one source of such changes is the actions of other agents. An agent must understand how its behavior will affect the behavior of other agents, and how their behavior, in turn, affects the agent’s own environment. As these behaviors change the ecology, an old niche may become harder to occupy while new, more favorable niches appear — requiring the agents to constantly adapt. This turns the “intelligence” problem into one of successful evolution, and co-evolution.

We thus realize that we can’t really talk about general intelligence without talking about multi-agent dynamics, evolution, and the game theory that describes it.

Describing Things in Different Ways

So we’ve gone very quickly from a new frontier of AI research (general AI and open-ended learning) back to a somewhat older field of math (game theory). But it’s “math on steroids” now: we’re doing it on supercomputers (compared to what was available in the 1950s), so we can represent far more complex games and environments, and build more sophisticated models of agents than the simplistic “homo economicus” paradigm. So it’s certainly time for a new iteration of this research.

Game theory, multi-agent behavior, evolution and open-ended learning are all connected in this very intimate way. But what’s the relationship?

(From DeepMind tutorial slides on Multi-Agent Learning.)

The quotation on the above slide is very telling:

Perhaps a thing is simple if you can describe it fully in several different ways, without immediately knowing that you are describing the same thing. — Richard Feynman

Feynman said this during his Nobel Prize acceptance speech, about how there ended up being many equivalent ways to describe a quantum theory of electricity and magnetism. In that same speech, he mentions that as an undergraduate he read a book on the subject by Dirac, which ended with, “It seems that some essentially new physical ideas are here needed.” That strikes me as quite similar to this summary slide from a recent DeepMind talk on multi-agent learning:

We don’t really know when or how to compose losses. There’s real thinking to be done.

On this issue of compositionality of loss functions, perhaps work on compositional game theory and compositional coordination languages can help us find an answer?

Moving Beyond Objective Functions

We cannot use objective functions to specify every type of behavior we might want to describe. Some behaviors may be so complex that writing down an objective function for them is impractical. For other behaviors, such an objective function simply does not exist.

To optimize an objective function, we’re used to finding a fixed point of that function’s gradient vector field: a place where the gradient is zero. We’re used to a certain well-behavedness, where if you start anywhere and follow the negative of the gradient, you will eventually find your way to some such (local) optimum.
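As a point of contrast with what follows, here is a minimal sketch of that well-behaved case (the quadratic objective and step size are arbitrary choices of mine, purely for illustration): start anywhere, follow the negative gradient, and you land on a point where the gradient vanishes.

```python
import numpy as np

# An arbitrary single-agent objective, chosen only for illustration:
# f(x, y) = (x - 1)^2 + (y + 2)^2, whose unique minimum sits at (1, -2).
def grad_f(v):
    x, y = v
    return np.array([2 * (x - 1), 2 * (y + 2)])

v = np.array([5.0, 5.0])      # start anywhere
for _ in range(1000):
    v -= 0.1 * grad_f(v)      # follow the negative of the gradient

print(v)  # ~ [1., -2.]: a zero of the gradient field, i.e. a (local) optimum
```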

However, when we look at multi-agent scenarios, we see cases where each agent optimizes its own individual outcome, and the agents’ combined decisions can be described as following the direction of a vector field — yet following that vector field does not optimize any overall objective function.

For example, consider the Matching Pennies game. Each player adopts a “mixed strategy”: some probability of playing Heads, some probability of playing Tails. For the two players, let their respective probabilities of picking Heads be p_1 and p_2. As soon as either player learns the other’s strategy, they will immediately want to change their own. There is a vector (dp_1, dp_2) describing how each player will change. The vector field describing this dynamic is shown below, with a Nash equilibrium at (0.5, 0.5), at which neither player has reason to change:

Image borrowed from figure 4 of the alpha-Rank paper.
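To make this concrete, here is a minimal numerical sketch of such a vector field, under two assumptions of mine: the standard Matching Pennies payoffs (player 1 gets +1 on a match and -1 on a mismatch, player 2 the opposite), and a simple learning rule in which each player follows the gradient of their own expected payoff. This is not necessarily the exact dynamic plotted above, but it shares the interior fixed point at (0.5, 0.5) and the rotation around it.

```python
# With p1 = P(player 1 plays Heads) and p2 = P(player 2 plays Heads),
# player 1's expected payoff is u1 = 4*p1*p2 - 2*p1 - 2*p2 + 1, and u2 = -u1.
def dynamics(p1, p2):
    """Each player moves in the direction of their own payoff gradient."""
    dp1 = 4 * p2 - 2      # d(u1)/d(p1)
    dp2 = 2 - 4 * p1      # d(u2)/d(p2)
    return dp1, dp2

# Sampling the field shows it vanishing at (0.5, 0.5) and circulating around it.
for p1, p2 in [(0.5, 0.5), (0.9, 0.5), (0.5, 0.9), (0.1, 0.5), (0.5, 0.1)]:
    print((p1, p2), dynamics(p1, p2))
```

Notice that dp_1 depends only on p_2, and dp_2 only on p_1: each player’s incentive to move is set entirely by the other’s strategy, which is exactly what produces the circulation.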

This is a dynamical system with a fixed point. But is that fixed point an “optimum” of some objective function, and could we have found it via gradient descent? The answer is “no”: this vector field has non-zero curl, and since the curl of a gradient is always zero, the vector field cannot be the gradient of anything.
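For the gradient dynamics in the sketch above (again assuming the standard ±1 payoffs), the curl computation takes one line:

```latex
% Planar curl of the Matching Pennies dynamics v(p_1, p_2) = (4p_2 - 2,\; 2 - 4p_1):
\begin{align*}
\operatorname{curl} v
  &= \frac{\partial}{\partial p_1}(2 - 4p_1) - \frac{\partial}{\partial p_2}(4p_2 - 2) \\
  &= -4 - 4 = -8 \neq 0.
\end{align*}
% Since \operatorname{curl}(\nabla f) = 0 for every scalar function f, there is no f
% with v = \nabla f: these dynamics are not gradient descent (or ascent) on anything.
```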

Hence, there are systems in the world which have equilibria — and which could be considered “intelligent” — but whose equilibria can’t be found by optimizing an objective function with gradient descent. Indeed, some of these intelligent systems may never reach their equilibrium in practice. Much of the behavior we see in financial markets, politics, pandemics, etc. comes from systems operating out of equilibrium — perhaps oscillating around some equilibrium, but remaining dynamic and never reaching it. And of course, there are games (with continuous strategy spaces, including some GAN formulations) that have no Nash equilibrium at all.

If we want to understand the full design space of intelligent behaviors, we will have to account for behaviors that don’t follow an optimization paradigm in a global sense. (There may be individual players who are optimizing for something — but the global behavior may be something more un-opinionated.)

Yoshua Bengio acknowledged the limitation of static objective functions in his 2012 paper, “Evolving Culture vs. Local Minima”:

One issue with an objective defined in terms of an animal survival and reproduction (presumably the kinds of objectives that make sense in evolution) is that it is not well defined: it depends on the behaviors of other animals and the whole ecology and niche occupied by the animal of interest. As these change due to the “improvements” made by other animals or species through evolution or learning, the individual animal or species’s objective of survival also changes. This feedback loop means there isn’t really a static objective, but a complicated dynamical system, and the discussions regarding the complications this brings are beyond the scope of this paper. [Emphasis mine.]

It seems about time to bring these questions into scope, and address them head-on. We will need to understand these questions if we hope to build AI systems that co-exist with humans in a positive, symbiotic manner. Even more urgently, we need to figure out how to co-exist with each other in a positive, symbiotic manner.

Societal Implications

There are many crucial questions we need to answer as a society. As our species evolves and advances, are all these adaptations ultimately positive and sustainable? Or are we pushing ourselves towards evolutionary collapse? Will capitalism end up with a few winners and the rest losers, who die off? Should we try to prevent climate catastrophe, or should we march right towards it — expecting that our economic and technological advancements will provide some solution (despite also being the cause!)?

I posit that instead of arguing endlessly over politics and social equity, a more productive starting point would be to build a deeper understanding of multi-agent dynamical systems. The questions raised above all come down to how to act optimally in a multi-agent system. Particular groups need to understand their dynamic relationship with the rest of society (e.g. the tech industry, wealthy elite, government, etc.). And humanity as a whole needs to understand its relationship with other species and our planet. There’s a tight set of feedback loops here, and if we can understand them, we might be able to shape them to our benefit.

These are complex scenarios that we currently struggle to comprehend, leading to much uncertainty and conflict. We need better ways to talk about these things — better mathematical theories, and better computational tools. As we develop these tools, we can expect a better understanding of how our individual and collective behaviors ultimately shape our shared destiny. And if we reach a good understanding of how these relationships work, we will hopefully be able to convince ourselves of the right path, reach agreement on the way forward, and solve some of society’s greatest challenges.
