
SigStyle: Signature Style Transfer via Personalized Text-to-Image Models

AAAI 2025

Ye Wang1, Tongyuan Bai1, Xuping Xie2, Zili Yi3, Yilin Wang4,*, Rui Ma1,5,*

1 School of Artificial Intelligence, Jilin University

2 College of Computer Science and Technology, Jilin University

3 School of Intelligence Science and Technology, Nanjing University

4 Adobe · 5 Engineering Research Center of Knowledge-Driven Human-Machine Intelligence, MOE, China

* Corresponding authors

📄 Paper 💻 Code 🧾 BibTeX
SigStyle teaser
Our method supports global style transfer, local style transfer, style-guided generation, and texture transfer.

Abstract

Style transfer integrates the artistic style of a reference image into a content image. Existing methods often miss the unique, recognizable visual signature of a style (e.g., structural motifs, color palettes, brush patterns). SigStyle learns this signature style through a personalized text-to-image model with a hypernetwork-based style capture module. We further introduce time-aware attention swapping to preserve content structure during transfer, and demonstrate superior quality across diverse style transfer settings.

Background

Signature style transfer remains underexplored: many existing methods transfer coarse color statistics but struggle to preserve the key artistic patterns and compositional identity of a style reference.

Complex signature style examples

Approach

Given a reference style image, SigStyle performs hypernetwork-powered, style-aware fine-tuning that distills the image's signature style into a special style token. At transfer time, we apply DDIM inversion to the content image and use time-aware attention swapping to preserve its structure while injecting the learned style traits.
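As a rough illustration of the style capture idea, the sketch below shows a hypernetwork that predicts a low-rank weight offset for one frozen UNet attention projection, conditioned on the embedding of a learned style token. This is a minimal sketch under stated assumptions: the layer sizes, the rank, the token name `<sig-style>`, and the choice of a single target projection are all illustrative placeholders, not the paper's exact design.

```python
import torch
import torch.nn as nn

class StyleHyperNetwork(nn.Module):
    """Predicts a low-rank weight offset (delta_W = A @ B) for one frozen
    attention projection, conditioned on a style embedding. Sizes are
    illustrative assumptions, not SigStyle's actual configuration."""
    def __init__(self, embed_dim=768, target_dim=320, rank=4):
        super().__init__()
        self.to_a = nn.Linear(embed_dim, target_dim * rank)
        self.to_b = nn.Linear(embed_dim, rank * target_dim)
        self.target_dim, self.rank = target_dim, rank

    def forward(self, style_emb):
        a = self.to_a(style_emb).view(self.target_dim, self.rank)
        b = self.to_b(style_emb).view(self.rank, self.target_dim)
        return a @ b  # (target_dim, target_dim) offset added to the frozen weight

# A learned embedding standing in for the special style token
# ("<sig-style>" is a hypothetical placeholder); in this sketch it would be
# optimized jointly with the hypernetwork during fine-tuning.
style_token_emb = nn.Parameter(torch.randn(768) * 0.01)

hypernet = StyleHyperNetwork()
frozen_w = torch.randn(320, 320)  # stands in for a frozen UNet projection weight
effective_w = frozen_w + hypernet(style_token_emb)
print(effective_w.shape)  # torch.Size([320, 320])
```

Predicting offsets rather than updating the UNet directly keeps the base model frozen; which UNet layers are actually worth targeting is the subject of the style learning preference analysis in Figure 2, whereas this toy patches a single projection for brevity.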

SigStyle pipeline
Figure 1: The framework of SigStyle.
Style learning preference analysis
Figure 2: Style learning preferences in UNet.
Hypernetwork architecture
Figure 3: Hypernetwork architecture.
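At inference, the pipeline pairs DDIM inversion of the content image with time-aware attention swapping. The sketch below shows both pieces under assumed conventions: the threshold `tau`, the cache format, which timesteps get swapped (here the high-noise ones, t > tau), and the toy tensor shapes are all assumptions; a real implementation would hook the swap into the UNet's self-attention layers.

```python
import torch

def ddim_invert_step(x_t, eps, alpha_t, alpha_next):
    """One deterministic DDIM inversion step (x_t -> x_{t+1}), used to
    recover the content image's noise trajectory. alpha_* are the
    cumulative noise schedule terms; eps is the model's noise prediction."""
    x0_pred = (x_t - (1 - alpha_t) ** 0.5 * eps) / alpha_t ** 0.5
    return alpha_next ** 0.5 * x0_pred + (1 - alpha_next) ** 0.5 * eps

def time_aware_attention(q, k, v, t, content_qk, tau=600):
    """Reuse the content image's (q, k) pair when t > tau so structure is
    preserved early on; later steps attend normally, letting the
    fine-tuned style model supply appearance."""
    d = q.shape[-1]
    if t > tau and t in content_qk:
        q, k = content_qk[t]  # recorded while inverting the content image
    attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
    return attn @ v

# Toy usage with random stand-ins for UNet self-attention activations.
q = k = v = torch.randn(1, 64, 40)
cache = {800: (torch.randn(1, 64, 40), torch.randn(1, 64, 40))}
swapped = time_aware_attention(q, k, v, t=800, content_qk=cache)  # content structure
regular = time_aware_attention(q, k, v, t=200, content_qk=cache)  # style appearance
```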

Global style transfer

Global style transfer comparison

Local style transfer

Local style transfer results

Cross-domain texture transfer

Texture transfer results

Style-guided text-to-image generation

Style-guided generation results

Acknowledgements

This work is supported in part by the National Natural Science Foundation of China (No. 62202199, No. 62406134) and the Science and Technology Development Plan of Jilin Province (No. 20230101071JC).