Solving the Computational Challenge of City-Scale Microscopic Traffic Simulation
In recent years, traffic simulation has become one of the most widely used approaches to traffic analysis, supporting the design and evaluation of traffic systems. In microscopic traffic simulations, road traffic is composed of driver-vehicle units, each following a certain driving behaviour.
How can road traffic be simulated at a large scale, such as an entire city, at a high level of detail and in real time? Computing the movement of a large number of driver-vehicle units at reasonable speed requires substantial computational resources. Parallel and distributed computing can make such large simulations feasible. However, conventional parallel and distributed algorithms may not be well suited to traffic simulation; new algorithms that incorporate the characteristics of road traffic need to be designed.
Optimising Parallel Computing for Traffic Simulation
City-scale microscopic traffic simulation can require tremendous computing power. Normal desktop computers are often insufficient to deliver fast results for large-scale simulations. To alleviate this problem, the simulation must be parallelised so that more computing power can be utilised, e.g., that provided by supercomputers.
In parallel simulation, the workload is divided and assigned to multiple CPUs on the same computer or even on different computers. However, adding more and more CPUs does not always improve simulation speed. The two main challenges are load balancing among processors and minimising inter-process communication. To address both, the hardware assignment problem is formulated as a graph partitioning problem. Unfortunately, graph partitioning is NP-hard, and no methods are known that solve such problems optimally within a reasonable amount of time.
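To make the graph partitioning formulation concrete, the following minimal sketch models road segments as graph nodes weighted by vehicle count, with edges between segments that exchange vehicles (and therefore messages between CPUs). The tiny network, weights, and function names are illustrative assumptions, not real simulation data; the two metrics correspond to the two challenges above: edge cut for communication, imbalance for load balancing.

```python
# Sketch: mapping a road network to a weighted graph and measuring
# partition quality. All values below are illustrative assumptions.

def edge_cut(edges, part):
    """Number of road links whose endpoints lie on different CPUs."""
    return sum(1 for u, v in edges if part[u] != part[v])

def imbalance(weights, part, n_parts):
    """Ratio of the heaviest partition's load to the ideal average load."""
    loads = [0.0] * n_parts
    for node, w in weights.items():
        loads[part[node]] += w
    return max(loads) / (sum(loads) / n_parts)

# Toy road network: nodes are road segments, weights are vehicle counts.
weights = {"A": 10, "B": 12, "C": 9, "D": 11}
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")]

part = {"A": 0, "B": 0, "C": 1, "D": 1}        # assignment to 2 CPUs
print(edge_cut(edges, part))                   # → 2 cross-partition links
print(round(imbalance(weights, part, 2), 2))   # → 1.05 (near-perfect balance)
```

A partitioner seeks an assignment that keeps the imbalance close to 1.0 while making the edge cut as small as possible; these two goals usually conflict, which is what makes the problem hard.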
To still achieve high simulation performance, we propose efficient graph partitioning heuristics for the acceleration of city-scale traffic simulations. We are able to reduce a simulation study from days to hours.
For parallel simulations of the city of Singapore, the previously proposed Neighbour-Restricting Graph-Growing (NRGG) — an algorithm that reduces inter-process communication by minimising the number of neighbouring partitions — has not only outperformed graph partitioning methods such as METIS and Buffoon, but also stripe-based spatial partitioning, for the synchronisation protocol used. Currently, NRGG is being improved to achieve even higher efficiency.
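The graph-growing idea behind such heuristics can be sketched as follows. This is a simplified illustration only, not AIDA's actual NRGG implementation: each partition is grown by breadth-first expansion from a seed node until it reaches a target size, which tends to produce compact, contiguous regions; NRGG additionally restricts growth to cap how many other partitions each partition touches.

```python
# Simplified graph-growing partitioner (illustrative sketch, not NRGG):
# grow each partition outward from a seed until it reaches target_size.
from collections import deque

def grow_partitions(adj, seeds, target_size):
    part = {}
    for pid, seed in enumerate(seeds):
        frontier = deque([seed])
        size = 0
        while frontier and size < target_size:
            node = frontier.popleft()
            if node in part:
                continue
            part[node] = pid
            size += 1
            frontier.extend(n for n in adj[node] if n not in part)
    # Assign any leftover nodes to an already-partitioned neighbour.
    for node in adj:
        if node not in part:
            assigned = [part[n] for n in adj[node] if n in part]
            part[node] = assigned[0] if assigned else 0
    return part

# Toy ring of six road segments, split into two partitions of three.
adj = {1: [2, 4], 2: [1, 3], 3: [2, 6], 4: [1, 5], 5: [4, 6], 6: [3, 5]}
part = grow_partitions(adj, seeds=[1, 6], target_size=3)
print(part)  # nodes 1, 2, 4 on partition 0; nodes 3, 5, 6 on partition 1
```

Because each region grows outward from one point, the resulting partitions border few others, which is exactly the property that reduces inter-process synchronisation cost.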
State Fast-forwarding and Model Preemption
Once a parallelised simulation fully exploits the available hardware, it can be accelerated further by reducing the computational work. The objective is to avoid unnecessary simulation steps without affecting the simulation results. At AIDA, two methods were developed to achieve this goal.
Fast-forwarding Agent States
Microscopic traffic simulation usually updates each vehicle at fixed time-steps, which often makes large-scale simulations time-consuming. Generally, a vehicle’s movement through a road network is affected by nearby vehicles, but it becomes predictable if there are no vehicles around.
We exploit this observation to reduce the simulation time. First, we scan for currently isolated vehicles and determine the earliest possible time at which each could share a road segment with another vehicle. Then, we predict the vehicle's position at that time. The vehicle can then be moved to that position immediately ("fast-forwarded") without any updates needed in between.
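The two steps above can be sketched for the simple case of a single one-dimensional road with constant-speed vehicles; the interaction radius, function names, and example values are illustrative assumptions, not part of the actual simulation model.

```python
# Sketch of agent-state fast-forwarding on a 1-D road, assuming
# constant speeds. All names and values here are illustrative.

def earliest_meet(pos, speed, others, radius):
    """Earliest time the vehicle comes within `radius` of any other
    vehicle, or None if it stays isolated forever."""
    times = []
    for opos, ospeed in others:
        gap = opos - pos          # positive if the other vehicle is ahead
        closing = ospeed - speed  # rate of change of the gap
        if abs(gap) <= radius:
            return 0.0            # already interacting
        if gap * closing >= 0:
            continue              # gap is constant or growing: never meets
        boundary = radius if gap > 0 else -radius
        times.append((boundary - gap) / closing)
    return min(times) if times else None

def fast_forward(pos, speed, t):
    """Jump the vehicle to its predicted position, skipping all
    intermediate per-time-step updates."""
    return pos + speed * t

# Vehicle at 0 m doing 20 m/s; another at 500 m doing 10 m/s.
t = earliest_meet(0.0, 20.0, others=[(500.0, 10.0)], radius=50.0)
print(t)                          # → 45.0 s until they are 50 m apart
print(fast_forward(0.0, 20.0, t)) # → 900.0 m, reached in a single jump
```

Instead of 45 s of per-time-step updates, the vehicle's state is advanced once, which is where the speedup in sparse traffic comes from.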
Our experiments showed that a speedup of up to 2.8x can be achieved in sparse traffic conditions. However, on highly congested roads, vehicles are isolated only for short amounts of time. A more fine-grained scanning method currently under development focuses on fast-forwarding in dense traffic.
Accelerated Design Space Exploration
Due to the large parameter space, simulation-based optimisation, e.g. of traffic signal timings, may require thousands of long simulation runs to achieve a high-quality result.
We observed that it is often evident early in a simulation run that a certain configuration leads to undesirable results. Therefore, we propose model preemption, which predicts the simulation outcome from an intermediate simulation state and terminates low-quality runs early.
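The preemption loop can be sketched as follows. The stage-wise quality estimates and the two hypothetical signal-timing configurations are illustrative assumptions; the point is only the control flow: advance a run in stages, extrapolate its final quality, and abort once the estimate falls below a threshold.

```python
# Sketch of model preemption for simulation-based optimisation.
# `simulate_stage(stage)` stands in for advancing the simulation and
# returning an estimate of final quality (here: negated average delay).

def run_with_preemption(simulate_stage, n_stages, threshold):
    """Advance a run stage by stage; abort once the intermediate
    estimate of final quality falls below `threshold`."""
    estimate = None
    for stage in range(n_stages):
        estimate = simulate_stage(stage)
        if estimate < threshold:
            return estimate, stage + 1   # preempted early
    return estimate, n_stages            # ran to completion

# Two hypothetical signal-timing configurations (illustrative only):
good = lambda s: -30.0 + s   # delay estimate improves over the stages
bad = lambda s: -90.0 - s    # clearly poor right from the start

print(run_with_preemption(good, 10, threshold=-60.0))  # → (-21.0, 10)
print(run_with_preemption(bad, 10, threshold=-60.0))   # → (-90.0, 1)
```

The poor configuration is discarded after a single stage, so its remaining nine stages of simulation work are saved and can be spent on more promising candidates.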
As the examples above show, our algorithms can significantly reduce the time required to run a simulation study. This speeds up the whole modelling and simulation process.