

Poster in Workshop: MATH-AI: The 4th Workshop on Mathematical Reasoning and AI

Genetic Curriculum Learning for Distribution Generalization on the Travelling Salesman Problem

Michael Li · Christopher Haberland · Natasha Jaques

Keywords: [ Evolutionary Algorithms ] [ Mathematical Reasoning ] [ Deep Reinforcement Learning ] [ Travelling Salesman Problem ] [ Combinatorial Optimization ]


Abstract:

The Travelling Salesman Problem (TSP) is a classic NP-hard combinatorial optimization task with numerous practical applications. Classic heuristic solvers, and Large Language Models (LLMs) that use such solvers as a tool, can attain near-optimal performance on small problem instances, but become computationally intractable for larger problems. Real-world logistics problems such as dynamically re-routing last-mile deliveries demand a solver with fast inference time, which has led to specialized neural network solvers being favored in practice. However, neural networks struggle to generalize beyond the synthetic data they were trained on. In particular, we show that there exist realistic TSP distributions that consistently lead to poor worst-case performance for existing neural approaches. To address distributional robustness, we present Genetic Curriculum Learning (GCL), a novel, efficient approach that uses automatic curricula. We also present TSPLib50, a dataset of realistically distributed TSP samples, which tests real-world distribution generalization without conflating it with TSP instance size. We evaluate our method on various synthetic datasets as well as TSPLib50, and compare against state-of-the-art LLM results and neural baselines. We demonstrate that GCL improves distributional robustness, with most of its performance gains coming from worst-case scenarios.
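The sketch below illustrates the general idea of a genetic curriculum over TSP instance distributions: a population of distribution "genomes" (here, clustered-Gaussian point generators) is evolved toward instances that are hard for the current solver, and the solver would then be trained on the evolved distributions. Everything in this sketch is an assumption for illustration only; the genome encoding, mutation scheme, and the use of a greedy nearest-neighbor heuristic as a stand-in for the neural solver are not the authors' implementation.

```python
import math
import random


def tour_length(points, tour):
    """Total length of a closed tour over 2-D points."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )


def nearest_neighbor_tour(points):
    """Greedy nearest-neighbor heuristic; a stand-in for a trained neural solver."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour


def sample_instance(genome, n=50):
    """A genome is a list of (cx, cy, spread) clusters; sample n points from them."""
    points = []
    for _ in range(n):
        cx, cy, spread = random.choice(genome)
        points.append((random.gauss(cx, spread), random.gauss(cy, spread)))
    return points


def fitness(genome, trials=5):
    """Mean heuristic tour length: higher means the distribution is 'harder'."""
    return sum(
        tour_length(pts, nearest_neighbor_tour(pts))
        for pts in (sample_instance(genome) for _ in range(trials))
    ) / trials


def mutate(genome):
    """Perturb one cluster's center or spread."""
    child = [list(c) for c in genome]
    c = random.choice(child)
    c[random.randrange(3)] += random.gauss(0, 0.05)
    c[2] = max(0.01, c[2])  # keep the spread positive
    return [tuple(c) for c in child]


def evolve_curriculum(pop_size=8, generations=20):
    """Evolve instance-distribution genomes toward harder training distributions."""
    population = [
        [(random.random(), random.random(), 0.1) for _ in range(3)]
        for _ in range(pop_size)
    ]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]  # keep the hardest distributions
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return population


if __name__ == "__main__":
    hard_distributions = evolve_curriculum()
    # In a GCL-style loop, the solver would now be trained on instances sampled
    # from these evolved distributions, and the evolve/train cycle would repeat.
    print(fitness(hard_distributions[0]))
```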
