Leveraging Multimodal LLM for Inspirational User Interface Search
Seokhyeon Park, Yumin Song, Soohyun Lee, Jaeyoung Kim, and Jinwook Seo / 2025
PARTICIPANTS
- Seokhyeon Park, Seoul National University
- Yumin Song, Seoul National University
- Soohyun Lee, Seoul National University
- Jaeyoung Kim, Seoul National University
- Jinwook Seo, Seoul National University
ABSTRACT
Inspirational search, the process of exploring designs to inform and inspire new creative work, is pivotal in mobile user interface (UI) design. However, exploring the vast space of UI references remains a challenge. Existing AI-based UI search methods often miss crucial semantics such as target users or the mood of apps. Additionally, these models typically require metadata such as view hierarchies, limiting their practical use. We used multimodal large language models (MLLMs) to extract and interpret semantics from mobile UI images. We identified key UI semantics through a formative study and developed an MLLM-based retrieval system. Through an evaluation combining performance metrics and human assessments, we demonstrate that our approach significantly outperforms existing UI retrieval methods, offering UI designers a richer and more contextually relevant search experience. Our work enhances the understanding of mobile UI design semantics, highlights the potential of MLLMs in inspirational search, and provides a rich dataset of UI semantics for future studies.
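The abstract describes a two-stage idea: an MLLM interprets design semantics directly from UI screenshots, and those semantics then drive retrieval. The paper's actual pipeline is not specified here, so the following is only a minimal sketch of that idea, assuming an OpenAI-style MLLM API; the model names, the semantic prompt, and the cosine-similarity search are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: extract UI semantics with an MLLM, then retrieve by
# embedding similarity. Not the authors' system; model choices are assumed.
import base64

import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def extract_ui_semantics(image_path: str) -> str:
    """Ask a vision-capable MLLM to describe a UI screenshot's semantics."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this mobile UI's target users, mood, "
                         "visual style, and functionality in a few sentences."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


def embed(text: str) -> np.ndarray:
    """Embed a semantic description as a vector for similarity search."""
    result = client.embeddings.create(
        model="text-embedding-3-small", input=text
    )
    return np.array(result.data[0].embedding)


def search(query: str, corpus: dict[str, np.ndarray], k: int = 5):
    """Rank indexed screenshots by cosine similarity to a text query."""
    q = embed(query)
    q /= np.linalg.norm(q)
    scores = {
        path: float(vec @ q / np.linalg.norm(vec))
        for path, vec in corpus.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]


# Usage (illustrative): index a folder of screenshots, then query by intent.
# corpus = {p: embed(extract_ui_semantics(p)) for p in screenshot_paths}
# print(search("calm meditation app for first-time users", corpus))
```

Note that this sketch works from raw screenshots alone, which mirrors the abstract's point that the approach does not depend on metadata such as view hierarchies.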