Changing cybersecurity use with AI models – By Aditya Abeysinghe

Use of AI and machine learning

Artificial Intelligence (AI) and machine learning (ML) have many uses in cybersecurity. Their main advantage is that AI and ML models can learn and then identify both known and unknown attacks in a system. Models based on AI and ML can detect anomalies when traffic deviates from its usual pattern, and in classification tasks, content is labelled as an attack based on the model's output.
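As a minimal sketch of the anomaly idea above (not from the article; the function names and traffic figures are illustrative), a model can learn the "usual traffic" profile from past readings and flag values that deviate too far from it:

```python
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn the usual-traffic profile: mean and standard deviation."""
    return mean(samples), stdev(samples)

def is_anomaly(value, baseline, threshold=3.0):
    """Flag a reading whose z-score exceeds the threshold."""
    mu, sigma = baseline
    return abs(value - mu) / sigma > threshold

# Typical requests-per-minute readings from a quiet network
normal_traffic = [100, 104, 98, 101, 99, 103, 97, 102]
baseline = fit_baseline(normal_traffic)

print(is_anomaly(100, baseline))  # usual volume -> False
print(is_anomaly(500, baseline))  # sudden spike -> True
```

Real systems use far richer features and models, but the shape is the same: learn what "usual" looks like, then score new traffic against it.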

Benefits of AI models to identify attacks

AI models can automate the detection of attacks more effectively than other methods. They perform anomaly identification and classification in networks in less time than manual analysis. Unlike other attack classification and anomaly detection methods, AI-based methods can identify unknown attacks more precisely, because the models can learn and improve their internal classification or detection properties.

AI models can also decrease the time taken to detect attacks by improving their own properties. Systems that do not use AI rely only on rules defined before deployment; such rules can hardly be updated once the system is in use, so these systems adapt poorly to new attacks. In AI and ML models, by contrast, manual analysis and changes to the identification components are usually unnecessary, which reduces the time taken for identification.
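The contrast above can be sketched in a few lines (an illustrative toy, not from the article): a rule-based detector only matches signatures fixed at deployment, while a learning detector keeps folding new observations into its baseline, so it adapts without manual rule changes:

```python
# Static rule-based detection: signatures are fixed at deployment time,
# so a brand-new attack name is never matched.
KNOWN_BAD_SIGNATURES = {"sql_injection", "xss"}

def rule_based_detect(event):
    return event in KNOWN_BAD_SIGNATURES

class AdaptiveThreshold:
    """A learning detector: updates its notion of normal traffic on the fly."""

    def __init__(self, alpha=0.1):
        self.avg = None
        self.alpha = alpha  # how quickly the baseline follows new data

    def observe(self, value):
        """Fold each new observation into an exponential moving average."""
        self.avg = value if self.avg is None else (
            (1 - self.alpha) * self.avg + self.alpha * value)

    def is_suspicious(self, value, factor=3.0):
        return self.avg is not None and value > factor * self.avg

detector = AdaptiveThreshold()
for reading in [100, 110, 105, 95, 108]:
    detector.observe(reading)

print(rule_based_detect("new_zero_day"))  # unknown signature -> False
print(detector.is_suspicious(600))        # traffic spike -> True
```

The rule set misses anything not listed in advance, whereas the adaptive detector keeps working as traffic drifts, which is the maintenance saving the paragraph describes.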

The extensibility and adaptability of AI models are higher than those of other methods of identifying cyberattacks. AI models can easily be retrained with new parameters and tested on new data. In contrast, testing non-AI systems requires significant time, effort, and cost because of the many changes and tools involved. Also, changes and tests made with one method might not suit all components of a system. AI models, however, are trained and tested with a standardized methodology based on known methods, which reduces the extra time and cost of applying separate changes across a system.
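That standardized train-and-test workflow can be illustrated with a deliberately tiny classifier (the split routine, threshold rule, and packet-rate data are all invented for this sketch): the same split, train, and evaluate steps apply unchanged whenever the model is retrained on new data:

```python
def train_test_split(data, labels, test_ratio=0.25):
    """Standard split: the same procedure works for any retraining run."""
    cut = int(len(data) * (1 - test_ratio))
    return data[:cut], labels[:cut], data[cut:], labels[cut:]

def train_threshold(x_train, y_train):
    """Toy 'model': a cutoff midway between the two classes' means."""
    attacks = [x for x, y in zip(x_train, y_train) if y == 1]
    benign = [x for x, y in zip(x_train, y_train) if y == 0]
    return (sum(attacks) / len(attacks) + sum(benign) / len(benign)) / 2

def accuracy(threshold, x_test, y_test):
    preds = [1 if x > threshold else 0 for x in x_test]
    return sum(p == y for p, y in zip(preds, y_test)) / len(y_test)

# Packet rates labelled benign (0) or attack (1)
rates = [100, 105, 500, 98, 520, 102, 510, 95]
labels = [0, 0, 1, 0, 1, 0, 1, 0]

x_tr, y_tr, x_te, y_te = train_test_split(rates, labels)
thr = train_threshold(x_tr, y_tr)
print(accuracy(thr, x_te, y_te))  # -> 1.0 on this toy data
```

Swapping in new data or a different model changes only the middle step; the split and evaluation stay the same, which is the cost saving the paragraph points to.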

Issues with AI

AI models are often biased in their output. Bias affects the accuracy of a model's classifications or predictions: in AI-based systems used on networks, cyberattacks could be classified as benign and benign traffic could be classified as attacks. Bias is an issue in most modern AI models because they learn from current data and improve without manual input.

Another issue is that complex AI-based models are often not explainable because of the hidden processes they use. The output of neural networks and deep learning models is often difficult to explain for this reason. By contrast, issues in non-AI components are easier to identify, as such systems use fewer hidden processes.

AI use in cybersecurity is rising

AI was rarely used when cybersecurity measures were first added to secure networks, and many complex models have come into use only recently. A lack of tools, technical expertise, and resources to deploy models likely explains why they were not used in that early period. With AI now used in many applications, and the tools required to build models downloadable at low cost, the use of AI models to secure applications is growing.

Image Courtesy: https://ciosea.economictimes.indiatimes.com/
