LLM-as-a-judge

This initiative aims to employ LLMs as judges across a wide range of applications.

(Correspondence to: Dawei Li)

From Generation to Judgment: Opportunities and Challenges of LLM-as-a-judge

Arizona State University,
University of Illinois Chicago,
University of Maryland at Baltimore,
University of California, Berkeley,
Illinois Institute of Technology,
Emory University


Abstract

Assessment and evaluation have long been critical challenges in artificial intelligence (AI) and natural language processing (NLP). Traditional methods, whether matching-based or embedding-based, often fall short when judging subtle attributes and delivering satisfactory results. Recent advances in Large Language Models (LLMs) have inspired the "LLM-as-a-judge" paradigm, in which LLMs are leveraged to perform scoring, ranking, or selection across a wide range of tasks and applications. This paper provides a comprehensive survey of LLM-based judgment and assessment, offering an in-depth overview to advance this emerging field. We begin with detailed definitions from both input and output perspectives. We then introduce a comprehensive taxonomy that explores LLM-as-a-judge along three dimensions: what to judge, how to judge, and where to judge. Finally, we compile benchmarks for evaluating LLM-as-a-judge and highlight key challenges and promising directions, aiming to provide valuable insights and inspire future research in this emerging area.
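
To make the paradigm concrete, below is a minimal sketch of the pointwise "scoring" mode in Python, assuming the official openai client (v1+) and an OPENAI_API_KEY in the environment. The prompt wording, the 1-10 scale, and the gpt-4o-mini model name are illustrative assumptions, not the survey's prescribed setup.

    # Minimal sketch of pointwise LLM-as-a-judge scoring.
    # Assumptions: openai Python client v1+, OPENAI_API_KEY set in the
    # environment; prompt template and 1-10 scale are illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    JUDGE_PROMPT = """You are an impartial judge. Rate the response below for
    helpfulness and factual accuracy on a scale of 1 to 10.
    Reply with the integer score only.

    [Question]
    {question}

    [Response]
    {response}"""

    def judge_score(question: str, response: str,
                    model: str = "gpt-4o-mini") -> int:
        """Ask an LLM judge for a pointwise 1-10 score of one response."""
        completion = client.chat.completions.create(
            model=model,
            temperature=0,  # reduce judging variance
            messages=[{
                "role": "user",
                "content": JUDGE_PROMPT.format(question=question,
                                               response=response),
            }],
        )
        # Assumes the judge follows the "integer only" instruction;
        # production judges typically add parsing fallbacks.
        return int(completion.choices[0].message.content.strip())

    if __name__ == "__main__":
        q = "What causes the seasons on Earth?"
        a = "The tilt of Earth's axis relative to its orbital plane."
        print(judge_score(q, a))  # e.g. 9

The ranking and selection modes differ only in the prompt: the judge is shown two or more candidate responses and asked to order them or pick the best one, rather than to score a single response.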

BibTeX

@article{li2024llmasajudge,
      title   = {From Generation to Judgment: Opportunities and Challenges of LLM-as-a-judge},
      author  = {Dawei Li and Bohan Jiang and Liangjie Huang and Alimohammad Beigi and Chengshuai Zhao and Zhen Tan and Amrita Bhattacharjee and Yuxuan Jiang and Canyu Chen and Tianhao Wu and Kai Shu and Lu Cheng and Huan Liu},
      year    = {2024},
      journal = {arXiv preprint arXiv:2411.16594}
}