M3D: Dataset Condensation by Minimizing Maximum Mean Discrepancy

Hansong Zhang1, 2, †    Shikun Li1, 2, †    Pengju Wang1, 2    Dan Zeng3    Shiming Ge1, 2, *
1Institute of Information Engineering, Chinese Academy of Sciences    2University of Chinese Academy of Sciences    3Shanghai University
† Co-First Authors    * Corresponding Author

Overview

Training state-of-the-art (SOTA) deep models often requires extensive data, resulting in substantial training and storage costs. To address these challenges, dataset condensation has been developed to learn a small synthetic set that preserves essential information from the original large-scale dataset. Optimization-oriented methods are currently the dominant approach to dataset condensation for achieving SOTA results. However, their bi-level optimization process hinders the practical application of such methods to realistic, larger datasets. To improve condensation efficiency, previous works proposed Distribution Matching (DM) as an alternative, which significantly reduces the condensation cost. Nonetheless, current DM-based methods still lag behind SOTA optimization-oriented methods. In this paper, we argue that existing DM-based methods overlook higher-order alignment of the distributions, which may lead to sub-optimal matching results. Motivated by this, we present a novel DM-based method named M3D for dataset condensation by Minimizing the Maximum Mean Discrepancy (MMD) between the feature representations of synthetic and real images. By embedding their distributions in a reproducing kernel Hilbert space, we align all orders of moments of the distributions of real and synthetic images, resulting in a more generalized condensed set. Notably, our method even surpasses the SOTA optimization-oriented method IDC on the high-resolution ImageNet dataset. Extensive analysis verifies the effectiveness of the proposed method.

Motivation and Findings

Previous DM-based methods in Dataset Condensation/Distillation align the first-order moment of the real and synthetic sets, i.e., they minimize the MSE distance between the means of the feature representations. But does aligning the first-order moments lead to perfectly aligned distributions? We illustrate the misalignment issue in higher-order moments below:
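As a reference point, the first-order matching used by prior DM-based methods can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the use of plain numpy arrays as stand-ins for network feature embeddings are our own.

```python
import numpy as np

def dm_first_order_loss(real_feats: np.ndarray, syn_feats: np.ndarray) -> float:
    """First-order DM objective: squared distance between the mean
    feature embeddings of the real and synthetic sets.

    real_feats, syn_feats: arrays of shape (num_samples, feat_dim).
    """
    mu_real = real_feats.mean(axis=0)
    mu_syn = syn_feats.mean(axis=0)
    return float(np.sum((mu_real - mu_syn) ** 2))
```

Note that this loss is exactly zero whenever the two sets share the same mean, even if their variances or higher-order moments differ arbitrarily, which is the misalignment issue illustrated above.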

To align higher-order moments, a naive approach is to add higher-order regularization terms to the original DM loss. As shown in the following table, adding 2nd- and 3rd-order regularization greatly improves the performance of the synthetic data.
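The naive extension above can be sketched by also matching central moments up to a chosen order. This is an illustrative sketch under our own naming and weighting assumptions (all terms weighted equally, moments computed per feature dimension), not the exact regularizer used in the experiments.

```python
import numpy as np

def moment_matching_loss(real_feats: np.ndarray,
                         syn_feats: np.ndarray,
                         max_order: int = 3) -> float:
    """DM loss plus naive higher-order regularization: match the mean
    (1st moment) and central moments up to `max_order`, per dimension."""
    mu_r = real_feats.mean(axis=0)
    mu_s = syn_feats.mean(axis=0)
    # 1st-order term: the original DM objective
    loss = np.sum((mu_r - mu_s) ** 2)
    for k in range(2, max_order + 1):
        # k-th central moment of each feature dimension
        m_r = ((real_feats - mu_r) ** k).mean(axis=0)
        m_s = ((syn_feats - mu_s) ** k).mean(axis=0)
        loss += np.sum((m_r - m_s) ** 2)
    return float(loss)
```

Each additional order adds one more term, which makes clear why this strategy cannot scale to aligning moments of every order.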

However, it is impractical to add infinitely many regularization terms to ensure the alignment of moments of every order. Fortunately, a reproducing kernel Hilbert space embeds moments of all orders into a finite kernel-function form, through which we can readily compute the distance between the distributions of the real and synthetic sets, leading to a more distribution-aligned synthetic set. The framework of our method is as follows:
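The kernel trick above can be sketched with the standard (biased) empirical estimator of squared MMD under a Gaussian RBF kernel. This is a minimal sketch assuming an RBF kernel with a fixed bandwidth `gamma`; the kernel choice and estimator details in the actual method may differ.

```python
import numpy as np

def rbf_kernel(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd_squared(real_feats: np.ndarray,
                syn_feats: np.ndarray,
                gamma: float = 1.0) -> float:
    """Biased empirical estimate of squared MMD between the feature
    distributions, computed entirely through kernel evaluations."""
    k_rr = rbf_kernel(real_feats, real_feats, gamma).mean()
    k_ss = rbf_kernel(syn_feats, syn_feats, gamma).mean()
    k_rs = rbf_kernel(real_feats, syn_feats, gamma).mean()
    return float(k_rr + k_ss - 2.0 * k_rs)
```

Because the RBF kernel is characteristic, driving this quantity to zero aligns moments of all orders at once, with cost that stays quadratic in the set sizes rather than growing with the moment order.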

Visualization

Poster

BibTeX

@inproceedings{zhang2024m3d,
  title     = {M3D: Dataset Condensation by Minimizing Maximum Mean Discrepancy},
  author    = {Hansong Zhang and Shikun Li and Pengju Wang and Dan Zeng and Shiming Ge},
  year      = {2024},
  booktitle = {The 38th Annual AAAI Conference on Artificial Intelligence (AAAI)}
}