LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation

Pengzhi Li, Pengfei Yu#, Zide Liu, Wei He, Xuhao Pan, Xudong Rao, Tao Wei, Wei Chen*
Li Auto Inc.
#Project leader. *Corresponding author.
Figure: Generated image samples from LDGen. We present a composed prompt with each language in a different color, along with the corresponding image, which exhibits high aesthetic quality and text-image alignment.

Overview

In this paper, we introduce LDGen, a novel method for integrating large language models (LLMs) into existing text-to-image diffusion models while minimizing computational demands. Traditional text encoders, such as CLIP and T5, exhibit limitations in multilingual processing, hindering image generation across diverse languages. We address these challenges by leveraging the advanced capabilities of LLMs. Our approach employs a language representation strategy that applies hierarchical caption optimization and human instruction techniques to derive precise semantic information. Subsequently, we incorporate a lightweight adapter and a cross-modal refiner to enable efficient feature alignment and interaction between LLM and image features. LDGen reduces training time and enables zero-shot multilingual image generation. Experimental results indicate that our method surpasses baseline models in both prompt adherence and image aesthetic quality, while seamlessly supporting multiple languages.
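To make the adapter idea concrete, the following PyTorch sketch projects LLM token features into a diffusion model's text-conditioning space. This is a minimal illustration, not the paper's implementation: the class name `LLMAdapter`, the dimensions, and the layer layout are all our assumptions.

```python
import torch
import torch.nn as nn

class LLMAdapter(nn.Module):
    """Hypothetical lightweight adapter: maps LLM hidden states into the
    feature space the diffusion model's cross-attention layers expect.
    Dimensions are illustrative, not the paper's actual configuration."""

    def __init__(self, llm_dim: int = 4096, cond_dim: int = 2048):
        super().__init__()
        self.proj = nn.Sequential(
            nn.LayerNorm(llm_dim),
            nn.Linear(llm_dim, cond_dim),
            nn.GELU(),
            nn.Linear(cond_dim, cond_dim),
        )

    def forward(self, llm_hidden: torch.Tensor) -> torch.Tensor:
        # llm_hidden: (batch, seq_len, llm_dim) last-layer LLM token features
        return self.proj(llm_hidden)  # (batch, seq_len, cond_dim)
```

One plausible way to train such an adapter with low cost, consistent with the reduced training time reported above, is to regress the adapted features toward the frozen original text encoder's outputs while keeping the diffusion backbone untouched; the paper's actual alignment objective may differ.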

Method

Overview of LDGen. The dashed box shows our language representation strategy; the bottom shows our LLM alignment and cross-modal refiner training process. The detailed design of the cross-modal refiner is shown in the green box on the right.
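As a rough illustration of the language representation strategy, the sketch below wraps a user prompt with a human instruction that enforces hierarchical, coarse-to-fine captioning before the LLM encodes it. The template wording and the function name `build_llm_input` are hypothetical; under our reading, the LLM's hidden states for this input (rather than its generated text) would serve as the conditioning features fed to the adapter.

```python
def build_llm_input(user_prompt: str) -> str:
    """Hypothetical template combining the two ingredients named in the
    paper: a human instruction framing the task, and hierarchical
    (coarse-to-fine) caption optimization. Wording is our assumption."""
    instruction = (
        "You are an assistant that rewrites image descriptions. "
        "First state the main subject, then the scene layout, then fine "
        "details such as style, lighting, and color."
    )
    return f"{instruction}\nDescription: {user_prompt}\nRefined caption:"

print(build_llm_input("a red fox sleeping under cherry blossoms"))
```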

1. We present LDGen, which efficiently integrates LLMs into existing text-encoder-based diffusion models and supports zero-shot multilingual text-to-image generation.

2. We propose a language representation strategy that leverages the capabilities of LLMs through hierarchical caption optimization and human instruction techniques.

3. We introduce LLM alignment and a cross-modal refiner to align LLM features and enhance interaction between LLM and image features, improving the semantic consistency of the conditioning signal (see the sketch after this list).
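Below is a minimal sketch of what a cross-modal refiner of this kind could look like: adapted LLM text features attend to image features through residual cross-attention, making the conditioning image-aware. The module name `CrossModalRefiner`, the dimensions, and the layer layout are assumptions based on the description above, not the paper's code.

```python
import torch
import torch.nn as nn

class CrossModalRefiner(nn.Module):
    """Hypothetical cross-modal refiner: text tokens (queries) attend to
    image tokens (keys/values), then pass through a feed-forward block."""

    def __init__(self, dim: int = 2048, num_heads: int = 8):
        super().__init__()
        self.norm_text = nn.LayerNorm(dim)
        self.norm_image = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.LayerNorm(dim),
            nn.Linear(dim, dim * 4),
            nn.GELU(),
            nn.Linear(dim * 4, dim),
        )

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # text_feats: (B, T, dim) adapted LLM features; image_feats: (B, I, dim)
        q = self.norm_text(text_feats)
        kv = self.norm_image(image_feats)
        attended, _ = self.cross_attn(q, kv, kv)
        text_feats = text_feats + attended              # residual cross-attention
        text_feats = text_feats + self.ffn(text_feats)  # residual feed-forward
        return text_feats
```

In such a design, the refined text features would then replace the plain adapted features as the diffusion model's conditioning input; whether LDGen stacks several of these blocks is not specified here.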

Comparisons

Comparison of our method with the recent enhancement generative model ELLA and the baseline models SDXL and PixArt-$\alpha$. Our method achieves the best results in terms of instruction adherence and visual appeal.


Multilingual qualitative results

Multilingual results. The eight images in each panel are generated from the same prompt in eight different languages; we display the prompt in only one of those languages. Note that LDGen is trained only on English prompts yet achieves zero-shot multilingual generation thanks to the capabilities of the LLM.

Refer to the PDF paper linked above for more details on the qualitative, quantitative, and ablation studies.

Citation