Abstract
<jats:p>Organizations outside formal governance frameworks often lack cybersecurity audit tooling, making anomaly detection and risk evaluation difficult. This paper presents an AI-enhanced auditing framework for such non-governance IT environments. Using the UNSW-NB15 dataset, we evaluate four machine-learning models (Isolation Forest, Logistic Regression, Gradient Boosting, and XGBoost) and identify complementary strengths that motivate a two-stage filter for suspicious network flows. Flagged flows are aggregated into structured evidence and passed to a GPT-4-based large language model, which generates incident explanations, control mappings, and remediation suggestions. Results show that the hybrid ML-LLM approach reduces audit workload, improves anomaly interpretation, and supports remediation recommendations for private and small-scale IT systems. The study also highlights limitations, including prompt sensitivity, false positives, and third-party AI risks. Overall, the findings illustrate both the potential and the challenges of deploying AI-driven audit pipelines to strengthen cybersecurity in non-governance settings.</jats:p>