CausalCity: Introducing a high-fidelity simulation with agency for advancing causal reasoning in machine learning

Published

By members of the Autonomous Systems and Robotics Group

The ability to reason about causality and ask "what would happen if…?" is one property that sets human intelligence apart from artificial intelligence. Modern AI algorithms perform well on clearly defined pattern recognition tasks but fall short of generalizing in the ways that human intelligence can. This often leads to unsatisfactory results on tasks that require extrapolation from training examples, e.g., recognizing events or objects in contexts that differ from the training set. To address this problem, we have built a high-fidelity simulation environment, called CausalCity, designed for developing algorithms that improve causal discovery and counterfactual reasoning in AI.

To understand the problem better, imagine that we developed a self-driving car confined to the streets of a neighborhood in Arizona with few pedestrians; wide, flat roads; and street signs written in English. If we deployed the car on the narrow, busy streets of Delhi, where street signs are written in Hindi, pattern recognition would be insufficient to operate safely. The patterns in our "training set" would be very different from those in our deployment context. Yet humans adapt so quickly to situations they haven't previously encountered that someone with an Arizona state-issued driver's license is permitted to drive a car in India.

In our recent paper, "CausalCity: Complex Simulations with Agency for Causal Discovery and Reasoning," we take a closer look at this problem and propose a new high-fidelity simulation environment.


We designed a high-fidelity simulation with the ability to control causal structure, as illustrated in Figure 1.

A more robust AI model does more than simply learn patterns; it captures the causal relationships between events. Humans do this very well, which enables us to reason about the world and adapt more quickly and more generally from fewer examples. We often do so by taking a specific action (an intervention) in the environment, observing the result, building a mental model, and then repeating this process to refine our model.

Using interventions is one way to learn about systems (e.g., the behavior of traffic in a city) and their underlying causal structure (e.g., what affects what). The presence of confounders, factors that impact both the intervention and the outcomes, can complicate the task of causal learning. Imagine driving in a city and noticing an ambulance. Your natural reaction might be to pull to the side of the road and then try to determine whether another vehicle is following the ambulance. If so, you need to determine whether there is a causal relationship between the ambulance and the other vehicle before you continue your journey. In this context, the behavior of other drivers would be a confounder that might impact the paths of both the ambulance and a possible follower vehicle.
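To make this concrete, here is a minimal sketch, in plain Python/NumPy and entirely independent of CausalCity, of a toy structural causal model in which traffic confounds the paths of the ambulance and a follower vehicle. Naively regressing one path on the other overstates the causal effect; intervening on the ambulance's path recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural causal model (illustrative only; not part of CausalCity).
# `traffic` is a confounder: it influences both the ambulance's path and
# the other vehicle's path.
traffic = rng.normal(size=n)                    # confounder
ambulance = 0.8 * traffic + rng.normal(size=n)  # ambulance's path
follower = 0.5 * ambulance + 0.8 * traffic + rng.normal(size=n)

# Observational association overstates the true causal effect (0.5)
# because both variables share the traffic confounder.
c = np.cov(ambulance, follower)
obs_slope = c[0, 1] / c[0, 0]

# Intervention: set the ambulance's path by fiat, i.e. do(ambulance),
# cutting its dependence on traffic.
ambulance_do = rng.normal(size=n)
follower_do = 0.5 * ambulance_do + 0.8 * traffic + rng.normal(size=n)
c_do = np.cov(ambulance_do, follower_do)
int_slope = c_do[0, 1] / c_do[0, 0]

print(f"observational slope: {obs_slope:.2f}")   # ~0.89 (confounded)
print(f"interventional slope: {int_slope:.2f}")  # ~0.50 (true effect)
```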

Figure 2. We created a dataset, CausalCity, that demonstrates the potential for modeling causal relationships between vehicles in complex patterns.

Machine learning researchers are increasingly developing models that involve causal reasoning to increase robustness and generalizability. Computer graphics simulations have proven helpful for investigating problems involving causal and counterfactual reasoning, as they provide a way to model complex systems and test interventions safely. The parameters of synthetic environments can be systematically controlled, enabling causal relationships to be established and confounders to be introduced. However, much of the prior work has approached this via a relatively simplistic set of entities and environments. In some cases, these are purely "toy" examples of how simple objects such as balls and cubes move and interact in a physical simulation.

This leaves little room to explore, and control for, different causal relationships among entities. One challenge in creating more realistic systems is the complexity of dictating every state and action of every agent at every timestep. To help address this problem, we propose giving agency to each entity, creating simulation environments that reflect the nature and complexity of these types of temporal, real-world reasoning tasks. This includes scenarios in which entities make decisions on their own while interacting with one another, like pedestrians on a crowded street and cars on a busy road. Agency provides the ability to define scenarios at a higher level, rather than specifying every single low-level action. We can now more easily model scenarios such as the car following the ambulance described above.
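As an illustration of what agency buys us, a scenario can be written as a handful of high-level declarations, with each agent's own controller filling in the moment-to-moment behavior. The sketch below is hypothetical; every field name is invented for this post and is not CausalCity's actual configuration schema.

```python
# Hypothetical scenario definition, written at the level of agents and
# high-level behaviors rather than per-timestep actions. All names here
# are invented for this sketch, not CausalCity's actual schema.
scenario = {
    "agents": {
        "ambulance": {
            "type": "vehicle",
            "behavior": "emergency_run",
            "route": ["junction_A", "junction_C", "hospital"],
        },
        "car_42": {
            "type": "vehicle",
            "behavior": "follow",   # causal link: ambulance's path
            "target": "ambulance",  # drives car_42's path
        },
        "car_07": {"type": "vehicle", "behavior": "cruise"},  # independent
    },
    # A confounder can be introduced at the same level, e.g., a traffic
    # density that influences several agents' paths at once.
    "confounders": {"traffic_density": "high"},
}
```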

To this end, we have developed and are publicly releasing a high-fidelity simulation environment with AI agent controls to create scenarios for causal and counterfactual reasoning. This environment reflects the real-world, safety-critical scenario of driving. We sought to build a simulation environment that enables controllable scenario generation for temporal and causal reasoning. This environment allows us to create complex scenarios, including different types of confounders, with relatively little effort.

Figure 3. An example of how a vehicle's route is defined in CausalCity.
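In this spirit, a route can be reduced to the decision taken at each successive junction, with the agent's driving logic handling steering, speed, and spacing in between. The snippet below is a hypothetical sketch of that idea, not CausalCity's actual API.

```python
# Hypothetical sketch: a route expressed as high-level junction
# decisions. The agent's own controller handles the low-level driving
# between junctions; none of these names come from CausalCity.
route = ["straight", "left", "straight", "right"]

def next_decision(route, junction_index, default="straight"):
    """Return the high-level decision for the junction the vehicle is
    approaching, falling back to `default` once the route is exhausted."""
    return route[junction_index] if junction_index < len(route) else default

print(next_decision(route, 1))  # -> 'left'
```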

We are releasing our environment and a large example dataset created with it to help advance the state of the art in this domain. We hope it helps other researchers more easily experiment with causal modeling. Going forward, we plan to introduce more environments, people, and types of agents (e.g., drones) to the simulation package.
