Explainable AI Models for Enhanced Decision-Making in Cybersecurity
Abstract
As cybersecurity threats grow in complexity, explainable artificial intelligence (XAI) models are essential for effective threat identification and mitigation. This research investigates how XAI techniques can enhance decision-making in cybersecurity. The proposed framework combines three key methods: data cleaning and normalization for preprocessing, the chi-square test for feature selection, and decision trees for classification. Data cleaning and normalization transform raw cybersecurity data into a consistent format by resolving issues such as missing values and differing feature scales. The chi-square test identifies the most statistically significant features, reducing model complexity and improving interpretability. Finally, decision trees serve as the classification model because they are inherently transparent and yield decision paths that are easy to understand. This approach not only achieves high detection performance but also helps cybersecurity analysts understand model predictions, builds trust, and supports practical decision-making in real-time threat scenarios.
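
The sketch below illustrates the kind of pipeline the abstract describes: preprocessing (imputation and normalization), chi-square feature selection, and a decision tree classifier. The dataset, library choices (scikit-learn), and all parameter values are illustrative assumptions rather than the authors' exact implementation.

```python
# A minimal sketch of the framework described above, assuming a generic
# labelled cybersecurity dataset (feature matrix X, labels y).
# All names, parameters, and the synthetic data are hypothetical.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 500 samples, 20 non-negative features, binary labels,
# with ~5% of entries blanked out to simulate missing values.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(500, 20)))
y = rng.integers(0, 2, size=500)
X[rng.random(X.shape) < 0.05] = np.nan

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),    # data cleaning: fill missing values
    ("scale", MinMaxScaler()),                        # normalization: chi2 requires non-negative inputs
    ("select", SelectKBest(score_func=chi2, k=10)),   # chi-square feature selection
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),  # transparent classifier
])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
pipeline.fit(X_train, y_train)
print("Test accuracy:", pipeline.score(X_test, y_test))

# The fitted tree exposes human-readable decision paths (over the selected
# features), which is the interpretability property the framework relies on.
print(export_text(pipeline.named_steps["tree"]))
```

In this sketch, the printed decision rules are what an analyst would inspect to understand why a given sample was flagged, which is the explainability benefit the abstract attributes to decision trees.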