Abstract
<jats:p>With the proliferation of digital technologies in smart vehicles, in-vehicle interaction methods have evolved from traditional mechanical controls to touchscreens, in-air gestures, and voice assistants. However, this diversification introduces higher cognitive demands and potential safety risks. This study proposes a novel human-centered interaction framework that integrates artificial intelligence (AI), machine learning (ML), and computer-aided innovation (CAI) technologies to design and evaluate user-defined gestures on smart steering wheels. Leveraging a multi-modal sensing platform and a gesture recognition system driven by dynamic time warping and support vector machines, the system enables intuitive vehicle control that keeps the driver's hands on the wheel without diverting visual attention. Through a structured gesture elicitation study with 16 participants and a comparative evaluation with 24 users, the proposed system demonstrated significant improvements over traditional controls in task completion time (↓23.4%), error rate (↓46.7%), and cognitive workload (↓27.1%). The system design incorporates real-time data modeling, adaptive interface learning, and UX-driven feedback mechanisms. These findings highlight the potential of AI-powered intelligent interaction systems to enhance driving safety, user satisfaction, and in-vehicle UI/UX design.</jats:p>