

Poster

Color-Oriented Redundancy Reduction in Dataset Distillation

Bowen Yuan · Zijian Wang · Mahsa Baktashmotlagh · Yadan Luo · Zi Huang

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Dataset Distillation (DD) generates condensed representations of large image datasets to improve training efficiency. Despite recent advances, considerable room for improvement remains, particularly in addressing the notable redundancy within the color space of distilled images. In this paper, we propose a two-fold optimization strategy that minimizes color redundancy at both the individual-image level and the overall dataset level. At the image level, we employ a palette network, a specialized neural network that dynamically allocates colors from a reduced color space to each pixel. The palette network identifies regions of the synthetic images that matter most for model training and assigns more unique colors to them. At the dataset level, we develop a color-guided initialization strategy to minimize redundancy across images: representative images with the least replicated color patterns are selected based on information gain. A comprehensive performance study across various datasets and evaluation scenarios demonstrates the superior performance of our color-aware DD compared to existing DD methods.
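To illustrate the dataset-level idea, below is a minimal sketch, not the authors' implementation, of greedy image selection by color information gain. It scores each candidate image by how much entropy its coarse RGB color histogram adds to the pooled color distribution of the already-selected set; all function names and the histogram granularity are assumptions for illustration.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Coarse RGB histogram (bins**3 cells), normalized to a probability
    distribution. img: H x W x 3 uint8 array."""
    idx = (img // (256 // bins)).reshape(-1, 3).astype(np.int64)
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    hist = np.bincount(flat, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def select_by_information_gain(images, k):
    """Greedily pick k images whose color patterns add the most entropy
    (information gain) to the pooled color distribution of the set."""
    hists = [color_histogram(im) for im in images]
    selected = []
    pooled = np.zeros_like(hists[0])
    for _ in range(k):
        base = entropy(pooled / pooled.sum()) if pooled.sum() > 0 else 0.0
        gains = []
        for i, h in enumerate(hists):
            if i in selected:
                gains.append(-np.inf)  # never re-select an image
                continue
            cand = pooled + h
            gains.append(entropy(cand / cand.sum()) - base)
        best = int(np.argmax(gains))
        selected.append(best)
        pooled += hists[best]
    return selected
```

A candidate whose colors already dominate the pooled histogram yields little entropy increase and is skipped in favor of images with less replicated color patterns, which matches the selection criterion described in the abstract.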
