Abstract
Current multi-domain image-to-image translation models assume a fixed set of domains and that all data are available throughout training. However, over time, we may want to add new domains to the model. Existing methods either require re-training the whole model with data from all domains or require training several additional modules to accommodate new domains. To address these limitations, we present IncrementalGAN, a multi-domain image-to-image translation model that can incrementally learn new domains using only a single generator. Our approach first decouples the domain label representation from the generator so that the generator can be re-used for new domains without any architectural modification. Next, we introduce a distillation loss that prevents the model from forgetting previously learned domains. Our model compares favorably against several state-of-the-art baselines.
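The distillation idea in the abstract can be illustrated with a minimal sketch: when training on a new domain, the current generator's outputs on old-domain inputs are pulled toward the outputs of a frozen copy of the generator from before the update. The function below is a hypothetical pixel-wise mean-squared formulation for illustration only; the paper's exact loss may differ.

```python
def distillation_loss(new_outputs, frozen_outputs):
    """Illustrative distillation loss: mean squared difference between the
    updated generator's outputs and those of a frozen pre-update copy,
    evaluated on inputs from previously learned domains. Minimizing this
    term discourages forgetting of old domains while the generator learns
    a new one. Inputs are flat lists of pixel values (a stand-in for
    image tensors)."""
    assert len(new_outputs) == len(frozen_outputs)
    total = 0.0
    for new_px, old_px in zip(new_outputs, frozen_outputs):
        total += (new_px - old_px) ** 2
    return total / len(new_outputs)


# Example: identical outputs incur zero penalty; drift is penalized.
unchanged = distillation_loss([0.2, 0.8], [0.2, 0.8])   # 0.0
drifted = distillation_loss([1.0, 3.0], [0.0, 1.0])     # (1 + 4) / 2 = 2.5
```

In practice this term would be weighted and added to the adversarial objective, with the frozen copy's parameters excluded from gradient updates.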
Original language | English |
---|---|
Pages (from-to) | 1526-1539 |
Number of pages | 14 |
Journal | IEEE Transactions on Circuits and Systems for Video Technology |
Volume | 31 |
Issue number | 4 |
DOIs | |
Publication status | Published - Apr 2021 |
Externally published | Yes |
Keywords
- generative adversarial networks
- image-to-image translation
- incremental learning
- multi-domain