We achieved better feature alignment and representation through a straightforward framework design that optimizes both intra- and inter-sample features and recycles discarded patch tokens in ViT-based backbones via contrastive learning.
We introduced a system that facilitates the entire design process within a Human-AI collaborative framework, harnessing the powerful capabilities of cloud-based large models and the convenience of local small models through well-designed methodologies for each module.
CSCW’24
StyleWe: Towards Style Fusion in Generative Fashion Design with Efficient Federated AI
Di Wu*, Mingzhu Wu*, Yeye Li*, Jianan Jiang, Xinglin Li, and 3 more authors
Proceedings of the ACM on Human-Computer Interaction, 2024
We introduce StyleWe, which employs a lightweight GAN with model compression and a federated learning algorithm to generate unexpected sketch styles, facilitating sketch style fusion among multiple designers while ensuring privacy protection.
Ubicomp’24
CrossGAI: A Cross-Device Generative AI Framework for Collaborative Fashion Design
Hanhui Deng*, Jianan Jiang*, Zhiwang Yu, Jinhui Ouyang, and Di Wu
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2024
We introduce CrossGAI, which employs multiple AI modules and Lyapunov optimization with a DNN actor to dynamically allocate network bandwidth across clients, supporting collaboration among multiple designers on various devices across different regions.
2023
CHI’23
StyleMe: Towards Intelligent Fashion Generation with Designer Style
Di Wu*, Zhiwang Yu*, Nan Ma*, Jianan Jiang, Yuetian Wang, and 3 more authors
In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023
We introduce StyleMe, which employs Grad-CAM and AdaLIN to guide sketch generation, preserving the designer’s unique style in the generated sketches. Additionally, it utilizes a two-stage generation network, combining an AE and a GAN, to color the generated sketches in a personalized style.