
Abstract

This chapter presents a modular framework for understanding how machine learning (ML), natural language processing (NLP), and large language models (LLMs) can be used to enhance simulation-based assessment. The chapter examines six applications: item creation, stimulus fidelity enhancement, AI role players, automated scoring, response process capture, and adaptive branching. Each application affects validity, assessee reactions, adverse impact risk, and development and administration costs differently. ML and NLP can validly score simulations while providing standardization. Generative AI enables scalable high-fidelity multimedia and interactive role-play scenarios. Challenges include algorithmic bias, LLM hallucinations, maintaining job-relevant AI role players, and negative assessee reactions. Future work should explore AI agents for scenario testing, simulations accommodating workplace LLM use, and AI role-play design informing human–AI teaming research.


