Incremental learning of multi-domain image-to-image translations

Daniel Stanley Tan, Y.-X. Lin, K.-L. Hua*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Current multi-domain image-to-image translation models assume a fixed set of domains and that all the data are always available during training. However, over time, we may want to add new domains to our model. Existing methods either require re-training the whole model with data from all domains or require training several additional modules to accommodate new domains. To address these limitations, we present IncrementalGAN, a multi-domain image-to-image translation model that can incrementally learn new domains using only a single generator. Our approach first decouples the domain label representation from the generator, allowing the generator to be re-used for new domains without any architectural modification. Next, we introduce a distillation loss that prevents the model from forgetting previously learned domains. Our model compares favorably against several state-of-the-art baselines.
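The following is a minimal PyTorch sketch of the two ideas named in the abstract, not the authors' implementation: a generator conditioned on an externally supplied domain code (so the label representation is decoupled from the generator body) and a distillation term that keeps the updated generator close to a frozen pre-update copy on previously learned domains. All names (ToyGenerator, distillation_loss, code_dim) and the choice of an L1 distillation target are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyGenerator(nn.Module):
    """Toy conditional generator: it consumes an externally supplied domain
    code vector instead of a fixed-size one-hot label, so adding a domain
    only requires a new code, not a change to the generator architecture.
    (Hypothetical stand-in for the paper's generator.)"""

    def __init__(self, code_dim=8, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + code_dim, 16, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, x, domain_code):
        # Tile the domain code over the spatial grid and concatenate it
        # with the input image along the channel dimension.
        c = domain_code[:, :, None, None].expand(-1, -1, x.size(2), x.size(3))
        return self.net(torch.cat([x, c], dim=1))


def distillation_loss(new_gen, frozen_old_gen, images, old_domain_codes):
    """Distillation on previously learned domains: the updated generator
    should reproduce the frozen pre-update generator's translations."""
    with torch.no_grad():
        targets = frozen_old_gen(images, old_domain_codes)
    return F.l1_loss(new_gen(images, old_domain_codes), targets)


# Usage sketch: snapshot the generator before training on a new domain,
# then add the distillation term to the usual adversarial objective.
old_gen = ToyGenerator().eval()
new_gen = ToyGenerator()
new_gen.load_state_dict(old_gen.state_dict())

x = torch.randn(2, 3, 32, 32)        # images from previously learned domains
codes = torch.randn(2, 8)            # their (assumed) domain codes
loss = distillation_loss(new_gen, old_gen, x, codes)
loss.backward()
```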
Original language: English
Pages (from-to): 1526-1539
Number of pages: 14
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 31
Issue number: 4
DOIs
Publication status: Published - Apr 2021
Externally published: Yes

Keywords

  • generative adversarial networks
  • image-to-image translation
  • incremental learning
  • multi-domain
