Abstract
<jats:p>The paper presents a substitution generation approach based on a dictionary and an LLM for simplifying Ukrainian texts, with the goal of improving cognitive accessibility. The proposed architecture is organized as a layered pipeline. A dictionary lookup provides fast and stable replacements for known lexical items, while large language models serve as a fallback for rare or complex cases. Candidate replacements are filtered, cached, and reused, and the final output is morphologically aligned with the source context. We describe practical usage scenarios in which the system is most useful, including educational materials, technical documentation, and step-by-step instructions. An experimental comparison is carried out across four models used or targeted in the project, namely MamayLLM, LapaLLM, Gemma 3 12B, and Gemini 1.5 Flash. Evaluation is structured around four metric groups: substitution quality, response speed, coverage (the share of cases with at least two valid alternatives), and resource cost. The results highlight typical trade-offs: larger models provide better semantic adequacy for complex categories, while smaller or faster models are better suited to lightweight lexical substitutions and real-time usage. The dictionary-first strategy stabilizes output quality and reduces latency, whereas the caching mechanism minimizes repeated LLM requests and supports scalable deployment. The study consolidates an operational design for hybrid simplification, documents the role allocation between dictionaries and LLMs, and provides recommendations for selecting model pairs depending on category complexity and deployment constraints. The findings are directly grounded in the current project infrastructure and offer a reproducible foundation for further evaluation with larger controlled datasets.</jats:p>
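The dictionary-first layering with an LLM fallback and caching described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the dictionary entries are hypothetical toy data, and `llm_candidates` is a stub standing in for a real model call (e.g. to MamayLLM or Gemini 1.5 Flash); the candidate filter shown here is only a self-replacement check.

```python
from functools import lru_cache

# Hypothetical toy dictionary of simple replacements (illustrative only).
SIMPLE_DICT = {
    "утилізувати": ["використати", "застосувати"],
}

def llm_candidates(word: str) -> list[str]:
    # Stub standing in for a real LLM request; returns no suggestions
    # so the sketch stays self-contained and runnable.
    return []

@lru_cache(maxsize=None)
def substitutions(word: str) -> tuple[str, ...]:
    # Layer 1: fast, stable dictionary lookup for known lexical items.
    if word in SIMPLE_DICT:
        return tuple(SIMPLE_DICT[word])
    # Layer 2: LLM fallback for rare or complex cases. lru_cache keeps
    # the result, so repeated requests never re-invoke the model.
    return tuple(c for c in llm_candidates(word) if c != word)
```

In a production pipeline, the cached candidates would additionally be morphologically aligned with the source context before insertion; that step is omitted here for brevity.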