In #AI, #MeasuringMassiveMultitaskLanguageUnderstanding (#MMLU) is a benchmark for evaluating #LLMs. It consists of ~16,000 multiple-choice questions spanning 57 academic subjects, including math, philosophy, law, and medicine. It is one of the most commonly used benchmarks for LLMs (Morgan Stanley).
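A minimal sketch of the format: each MMLU item is a question with four answer choices, and a model is scored by simple accuracy over items. The example question below is illustrative, not an actual MMLU item.

```python
# Hedged sketch of MMLU-style evaluation: four-choice items scored by
# accuracy. The item content here is made up for illustration.

items = [
    {
        "question": "What is the derivative of x**2?",
        "choices": ["x", "2x", "x**2", "2"],
        "answer": 1,  # index of the correct choice ("2x")
    },
]

def accuracy(predictions, items):
    """Fraction of items where the predicted choice index is correct."""
    correct = sum(p == it["answer"] for p, it in zip(predictions, items))
    return correct / len(items)

print(accuracy([1], items))  # 1.0
```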