Overview of low-hallucination, trustworthy LLMs: alignment, verification, retrieval grounding, and robust evaluation.
Overview
This project investigates key technologies for building low-hallucination, trustworthy large language models (LLMs) and their applications. The work spans model alignment and guardrails, retrieval-grounded generation, uncertainty estimation, and evaluation protocols.
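As a minimal, self-contained sketch of the retrieval-grounding idea mentioned above, the toy example below ranks documents by term overlap and assembles a prompt that instructs a model to answer only from cited sources. All names, the corpus, and the prompt wording are hypothetical illustrations, not part of this project's code.

```python
# Hypothetical sketch of retrieval-grounded prompting (not this project's implementation).
from collections import Counter

CORPUS = [
    ("doc1", "The Eiffel Tower is located in Paris and was completed in 1889."),
    ("doc2", "Mount Everest is the highest mountain above sea level."),
    ("doc3", "The Great Wall of China stretches across northern China."),
]

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by simple term overlap with the query (toy retriever)."""
    q_terms = Counter(query.lower().split())
    def score(doc: tuple[str, str]) -> int:
        return sum(q_terms[t] for t in doc[1].lower().split() if t in q_terms)
    return sorted(CORPUS, key=score, reverse=True)[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that asks the model to answer only from cited sources."""
    sources = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in sources)
    return (
        "Answer using ONLY the sources below. Cite the source id for each claim.\n"
        "If the sources do not contain the answer, say you do not know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_grounded_prompt("When was the Eiffel Tower completed?"))
```

In practice the toy retriever would be replaced by a dense or hybrid retriever, and the "say you do not know" instruction, together with source citation, is one common way grounding is used to reduce unsupported claims.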