Open-Endedness, Multi-Agent Learning and Existential Risk

Describing Things in Different Ways

(From DeepMind tutorial slides on Multi-Agent Learning.)

Perhaps a thing is simple if you can describe it fully in several different ways, without immediately knowing that you are describing the same thing. — Richard Feynman

We don’t really know when or how to compose losses. There’s real thinking to be done.

Moving Beyond Objective Functions

Image borrowed from figure 4 of the alpha-Rank paper.

One issue with an objective defined in terms of an animal's survival and reproduction (presumably the kinds of objectives that make sense in evolution) is that it is not well defined: it depends on the behaviors of other animals and the whole ecology and niche occupied by the animal of interest. As these change due to the "improvements" made by other animals or species through evolution or learning, the individual animal's or species' objective of survival also changes. This feedback loop means there isn't really a static objective, but a complicated dynamical system, and the discussions regarding the complications this brings are beyond the scope of this paper. [Emphasis mine.]
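The feedback loop described above can be sketched with a toy model (my own illustration, not from the quoted paper): replicator dynamics on rock-paper-scissors. Each strategy's "fitness" is computed against the current population mix, so as the mix shifts, the objective each strategy is implicitly optimizing shifts with it. The payoff matrix and step size here are hypothetical choices for illustration.

```python
# Toy illustration: in a multi-agent population, a strategy's fitness
# depends on what everyone else is doing, so there is no fixed objective.
import numpy as np

# Payoff matrix for rock-paper-scissors: A[i, j] is the payoff to
# strategy i when matched against strategy j (win = 1, lose = -1, tie = 0).
A = np.array([
    [ 0., -1.,  1.],   # rock
    [ 1.,  0., -1.],   # paper
    [-1.,  1.,  0.],   # scissors
])

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator dynamics dx_i/dt = x_i (f_i - f_bar)."""
    f = A @ x          # fitness of each strategy against the current mix
    f_bar = x @ f      # population-average fitness
    return x + dt * x * (f - f_bar)

x = np.array([0.5, 0.3, 0.2])   # initial population mix
initial_fitness = A @ x
for _ in range(1000):
    x = replicator_step(x)
final_fitness = A @ x

# The fitness of "rock" is not a static number: it moved as the shares
# of "paper" and "scissors" in the population moved.
print(initial_fitness[0], final_fitness[0])
```

Nothing in this sketch is an optimizer in the usual sense: every strategy's payoff landscape is continually reshaped by the others, which is exactly the point the quoted passage makes about evolution.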

Societal Implications
