

Discovering when an agent is present in a system

We want to build safe, aligned artificial general intelligence (AGI) systems that pursue the intended goals of their designers. Causal influence diagrams (CIDs) are a way to model decision-making situations that allows us to reason about agent incentives. By relating training setups to the incentives that shape agent behaviour, CIDs help illuminate potential risks before an agent is trained and can inspire better agent designs. But how do we know when a CID is an accurate model of a training setup?
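To make the idea concrete, a CID can be represented as a directed acyclic graph whose nodes are typed as chance, decision, or utility. The sketch below is a minimal illustration under assumed conventions; the node names (`X`, `D`, `U`) and the helper `influences_utility` are illustrative, not taken from the DeepMind work.

```python
# A toy causal influence diagram (CID) as a typed DAG.
# Each node maps to (kind, children); kinds are "chance",
# "decision", or "utility". All names here are assumptions
# for illustration only.
CID = {
    "X": ("chance", ["D", "U"]),   # context X is observed by D and affects U
    "D": ("decision", ["U"]),      # the agent's decision influences U
    "U": ("utility", []),          # the objective the agent optimises
}

def influences_utility(cid, node):
    """Return True if a directed path from `node` reaches any
    utility node -- a crude check that a decision node has any
    incentive lever at all."""
    stack, seen = list(cid[node][1]), set()
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        if cid[n][0] == "utility":
            return True
        stack.extend(cid[n][1])
    return False

print(influences_utility(CID, "D"))  # True: the path D -> U exists
```

Reasoning about which paths connect decisions to utilities is, roughly, how CID analyses surface incentives; the actual formal machinery in the literature is considerably richer than this path check.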

Source: https://www.deepmind.com/blog/discovering-when-an-agent-is-present-in-a-system