Shared Imagination: LLMs Hallucinate Alike

Yilun Zhou, Caiming Xiong, Silvio Savarese, Chien-Sheng Wu
Salesforce Research
[Full Paper]   

TL;DR: We demonstrate a "shared imagination" phenomenon among many LLMs, which suggests fundamental commonality among them.

Abstract: Despite the recent proliferation of large language models (LLMs), their training recipes -- model architecture, pre-training data, and optimization algorithm -- are often very similar. This naturally raises the question of how similar the resulting models are. In this paper, we propose a novel setting, imaginary question answering (IQA), to better understand model similarity. In IQA, we ask one model to generate purely imaginary questions (e.g., on completely made-up physics concepts) and prompt another model to answer them. Surprisingly, despite these questions being entirely fictional, all models can answer each other's questions with remarkable success, suggesting a "shared imagination space" in which these models operate during such hallucinations. We conduct a series of investigations into this phenomenon and discuss implications for model homogeneity, hallucination, and computational creativity.
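
To make the IQA setup concrete, below is a minimal sketch of the generate-then-answer loop. It assumes an OpenAI-compatible chat API; the model names, prompts, and four-option multiple-choice format are illustrative assumptions for this sketch, not the paper's released code.

# Minimal sketch of the imaginary question answering (IQA) setup.
# Assumptions (not from the paper's code release): an OpenAI-compatible
# chat API, illustrative prompts, and a 4-option multiple-choice format.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GEN_PROMPT = (
    "Invent a completely fictional physics concept that does not exist. "
    "Write one multiple-choice question about it with options A-D, "
    "then state the intended answer on a final line as 'Answer: <letter>'."
)

def generate_question(generator_model: str) -> tuple[str, str]:
    """Ask one model to produce an imaginary question and its intended answer."""
    resp = client.chat.completions.create(
        model=generator_model,
        messages=[{"role": "user", "content": GEN_PROMPT}],
    )
    text = resp.choices[0].message.content
    # Assumes the generator follows the 'Answer: <letter>' format.
    question, answer_line = text.rsplit("Answer:", 1)
    return question.strip(), answer_line.strip()[:1]  # keep only the letter

def answer_question(answerer_model: str, question: str) -> str:
    """Prompt a different model to answer, replying with a single letter."""
    resp = client.chat.completions.create(
        model=answerer_model,
        messages=[{
            "role": "user",
            "content": question + "\n\nReply with a single letter (A, B, C, or D).",
        }],
    )
    return resp.choices[0].message.content.strip()[:1]

if __name__ == "__main__":
    q, intended = generate_question("gpt-4o")    # generator model (illustrative)
    guess = answer_question("gpt-4o-mini", q)    # answerer model (illustrative)
    # Agreement far above the 25% random-guessing baseline across many such
    # questions would hint at a "shared imagination space" between the models.
    print(f"intended={intended} guess={guess} match={guess == intended}")

In practice, one would repeat this loop over many generated questions and model pairs and compare the aggregate answer-agreement rate against the random baseline.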

(Additional materials will be available soon. For any questions, please contact Yilun Zhou at yilun.zhou@salesforce.com)

@article{zhou2024shared,
    title = {Shared Imagination: LLMs Hallucinate Alike},
    author = {Zhou, Yilun and Xiong, Caiming and Savarese, Silvio and Wu, Chien-Sheng},
    journal = {arXiv preprint arXiv:2407.16604},
    year = {2024},
}