Abstract
<jats:p>The integration of artificial intelligence into cybersecurity systems introduces significant ethical challenges that require systematic and measurable evaluation. Although prior studies propose lifecycle-based ethical AI frameworks, practical methodologies for benchmarking ethical compliance remain limited. This study presents an evidence-driven analytical framework for evaluating ethical integration in AI-driven cybersecurity research. A corpus of peer-reviewed publications was analyzed using a structured pipeline that employs large language models (LLMs) to extract machine-readable JSON evidence, followed by automated scoring across seven ethical dimensions: transparency, explainability, accountability, human oversight, privacy, data protection, and continuous ethical monitoring. The results reveal substantial variability in ethical compliance, with notable deficiencies in human oversight and post-deployment monitoring. The proposed framework is replicable, auditable, and supports evidence-based ethical governance of AI-enabled cybersecurity platforms.</jats:p>
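The scoring step summarized in the abstract (LLM-extracted, machine-readable JSON evidence scored across seven ethical dimensions) could be sketched as below. The evidence schema, the 0–2 rubric, and the function name `score_publication` are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch, assuming each publication yields a JSON object keyed by
# ethical dimension, with an "explicit" flag on each evidence entry.
# Dimension names follow the abstract; the rubric below is hypothetical:
# 0 = no evidence, 1 = partial evidence, 2 = explicit evidence.
import json

DIMENSIONS = [
    "transparency", "explainability", "accountability",
    "human_oversight", "privacy", "data_protection",
    "continuous_monitoring",
]

def score_publication(evidence_json: str) -> dict:
    """Map LLM-extracted JSON evidence to a per-dimension score."""
    evidence = json.loads(evidence_json)
    scores = {}
    for dim in DIMENSIONS:
        entry = evidence.get(dim)
        if not entry:
            scores[dim] = 0          # no evidence extracted
        elif entry.get("explicit"):
            scores[dim] = 2          # explicitly documented practice
        else:
            scores[dim] = 1          # partial or implied evidence
    return scores

# Example: a publication reporting explicit privacy measures and only
# partial transparency evidence (all field values are invented).
sample = json.dumps({
    "privacy": {"explicit": True, "quote": "differential privacy applied"},
    "transparency": {"explicit": False, "quote": "model card mentioned"},
})
print(score_publication(sample))
```

Aggregating such per-publication scores across a corpus would surface the kind of variability the abstract reports, e.g. systematically low scores on `human_oversight` and `continuous_monitoring`.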