The Mixture-of-Experts (MoE) architecture has enabled the creation of massive yet efficient Large Language Models (LLMs). However, the standard deterministic routing mechanism presents a significant limitation: its inherent brittleness is a key contributor to model miscalibration and overconfidence, resulting in systems that often do not know what they don't know. This thesis confronts this challenge by proposing a structured Bayesian MoE routing framework. Instead of forcing a single, deterministic expert selection, our approach models a probability distribution over the routing decision itself. We systematically investigate three families of methods that introduce this principled uncertainty at different stages of the routing pipeline: in the weight-space, the logit-space, and the final selection-space. Through a series of controlled experiments on a 3-billion parameter MoE model, we demonstrate that this framework significantly improves routing stability, in-distribution calibration, and out-of-distribution (OoD) detection. The results show that by targeting this core architectural component, we can create a more reliable internal uncertainty signal. This work provides a practical and computationally tractable pathway towards building more robust and self-aware LLMs, taking a crucial step towards making them know what they don't know.
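To make the routing idea concrete, below is a minimal sketch of one of the three variants the abstract mentions (logit-space uncertainty), written in PyTorch. The class name `BayesianLogitRouter`, the layer sizes, and the Monte Carlo sample count are illustrative assumptions, not the thesis's actual implementation or its 3-billion-parameter setting.

```python
# Minimal sketch (illustrative, not the thesis code): a logit-space Bayesian
# router for a top-k MoE layer. The router predicts a Gaussian over expert
# logits and averages the routing distribution over Monte Carlo samples, so
# each token carries an explicit routing-uncertainty signal.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianLogitRouter(nn.Module):
    def __init__(self, d_model: int, n_experts: int, top_k: int = 2, n_samples: int = 8):
        super().__init__()
        self.mean = nn.Linear(d_model, n_experts)     # mean of routing logits
        self.log_var = nn.Linear(d_model, n_experts)  # per-expert logit log-variance
        self.top_k = top_k
        self.n_samples = n_samples

    def forward(self, x: torch.Tensor):
        mu, log_var = self.mean(x), self.log_var(x)
        std = torch.exp(0.5 * log_var)
        # Monte Carlo samples of the routing logits (reparameterisation trick).
        eps = torch.randn(self.n_samples, *mu.shape, device=x.device)
        probs = F.softmax(mu + std * eps, dim=-1)      # [S, batch, n_experts]
        mean_probs = probs.mean(dim=0)                 # marginal routing distribution
        # Predictive entropy of the marginal routing distribution is a cheap
        # per-token uncertainty signal (expected to rise on OoD inputs).
        entropy = -(mean_probs * mean_probs.clamp_min(1e-9).log()).sum(dim=-1)
        top_w, top_idx = mean_probs.topk(self.top_k, dim=-1)
        return top_idx, top_w / top_w.sum(dim=-1, keepdim=True), entropy


# Usage: route a small batch of token representations across 8 experts.
router = BayesianLogitRouter(d_model=64, n_experts=8)
tokens = torch.randn(4, 64)
experts, weights, uncertainty = router(tokens)
```

The same pattern adapts to the other two families the abstract names: weight-space uncertainty replaces the Gaussian on logits with a distribution over the router's weights, and selection-space uncertainty samples the discrete expert choice directly.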
Ang Li
DOI: https://doi.org/10.48550/arxiv.2509.23830