
Artificial Intelligence in Cyber Risk Assessment: Effective or Just Hype?

Increasing cyber-attacks and their evolving forms affect everyone, from businesses to government offices and individuals. Cybercriminals will stop at nothing to defeat security systems and inflict damage on property, finances, and reputations. Accenture’s 2019 Cost of Cybercrime study reports that security breaches have increased by 67% over the last five years, with the average cost of cybercrime hitting $13 million in 2018. That figure represents a 72% increase in the cost of an attack over the same five-year period.

Cyber threats are not limited to malware infections and web-based attacks. Other common cybercrimes include denial-of-service attacks, phishing and social engineering, malicious code, malicious insiders, ransomware, and botnets. These attacks result in business disruption along with losses of information, revenue, and equipment.

It’s imperative to have the right protection against these threats. Unfortunately, that is not as easy as it sounds, as the number of attacks continues to grow at an accelerating rate. Worse, these threats evolve or spawn new variants, which security systems may fail to detect even if the original malware is already registered in their databases of threat signatures.

Continuous Risk Assessment

One highly effective solution against the dangerous propagation and evolution of threats is continuous threat and cyber risk assessment. It is the practice of ceaselessly measuring, testing, and tweaking a security system to promptly discover security gaps or inadequacies as they appear. It seeks to evaluate the effectiveness of an organization’s security measures and help optimize them. The process is performed frequently and repeatedly, leaving as few opportunities as possible for threats to penetrate.

Continuous threat assessment entails the use of automated testing tools to identify security issues without interruption. These include breach and attack simulation tools, which security professionals widely use to examine their security controls. Automated reporting and alerting tools are also employed so that security teams obtain quick, actionable insights and can implement corrections or adjustments.
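As a rough illustration, the sketch below shows what such an automated check loop might look like. It is a minimal, hypothetical example — the check functions, control names, and alerting hook are invented for illustration and are not the interface of any actual breach and attack simulation product.

```python
# Hypothetical sketch of a continuous security-control check loop.
# The checks and the alert hook are invented for illustration only;
# real breach and attack simulation tools expose far richer APIs.
import time
from datetime import datetime, timezone

def simulate_phishing_payload_delivery():
    """Pretend to deliver a benign test payload; return True if it was blocked."""
    return True  # stub: replace with a real simulation step

def simulate_lateral_movement_probe():
    """Pretend to probe east-west traffic controls; return True if blocked."""
    return False  # stub: a failing control, to show how gaps surface

CHECKS = {
    "email-gateway": simulate_phishing_payload_delivery,
    "network-segmentation": simulate_lateral_movement_probe,
}

def alert(control: str, detail: str) -> None:
    """Stand-in for an alerting/reporting integration (ticketing, chat, SIEM)."""
    print(f"[ALERT] {control}: {detail}")

def run_assessment_cycle() -> None:
    """Run every simulated attack once and report any control that failed."""
    timestamp = datetime.now(timezone.utc).isoformat()
    for control, check in CHECKS.items():
        blocked = check()
        status = "blocked" if blocked else "NOT blocked"
        print(f"{timestamp} {control}: test attack {status}")
        if not blocked:
            alert(control, "simulated attack was not blocked; review this control")

if __name__ == "__main__":
    # A real deployment would schedule this continuously; here we run two cycles.
    for _ in range(2):
        run_assessment_cycle()
        time.sleep(1)
```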

Aided by artificial intelligence and automation, continuous testing marks a shift from traditional binary, point-in-time decision-making toward an adaptive IT security strategy. It suits contemporary IT environments, which are characterized by constant flux and the need for prompt detection and response.


The Importance of Artificial Intelligence

AI and automation are essential in continuous security effectiveness testing because it is not practical to go through everything manually. Many new threats, including variants of existing malware, emerge every minute. Manually tracking whether they have been correctly detected and blocked by the security system in place is not only time-consuming; it is also tedious and error-prone. Using AI and automation improves the ability to keep track of these gaps.

Additionally, threats can be expected to evolve to counter the detection capabilities of existing security systems. Because of this, it becomes necessary to have a system whose detection functions are not fully reliant on threat signatures. With the help of artificial intelligence, security teams can build behavior-based detection tools that identify threats by how they operate: AI can flag anomalies by learning patterns from observed activity and from simulations.
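To make the idea concrete, here is a minimal sketch of behavior-based anomaly detection using scikit-learn’s IsolationForest on synthetic "network flow" features (bytes sent, session length, failed logins). The feature choices and numbers are assumptions made purely for illustration, not a production detector.

```python
# Minimal sketch of behaviour-based anomaly detection on synthetic flow features.
# Feature choices and values are illustrative assumptions, not a real detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behaviour: [bytes_sent_kb, session_minutes, failed_logins]
normal = np.column_stack([
    rng.normal(500, 100, 1000),   # typical outbound volume
    rng.normal(30, 10, 1000),     # typical session length
    rng.poisson(0.2, 1000),       # occasional failed login
])

# A few suspicious sessions: exfiltration-like transfers, repeated failed logins
suspicious = np.array([
    [5000, 240, 0],
    [450, 25, 40],
    [8000, 10, 3],
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)  # learn the baseline from normal traffic only

labels = model.predict(suspicious)  # -1 = anomaly, 1 = normal
for row, label in zip(suspicious, labels):
    verdict = "anomalous" if label == -1 else "normal"
    print(f"flow {row.tolist()} -> {verdict}")
```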

AI-powered detection tools may have a relatively high rate of false positives initially, particularly when they are configured to be aggressive in hunting down threats. However, with the help of machine learning, they can eventually improve their accuracy in identifying new, unknown attacks.
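One common way teams rein in those early false positives is by tuning the decision threshold once labelled outcomes from analysts start to accumulate. The sketch below uses synthetic scores and labels, invented purely for illustration, to show how a precision/recall curve can guide that trade-off.

```python
# Illustrative sketch: choosing a detection threshold from labelled feedback.
# Scores and labels are synthetic; a real system would use analyst-verified alerts.
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)

# Higher scores should indicate "more likely malicious".
benign_scores = rng.normal(0.3, 0.1, 500)     # most benign events score low
malicious_scores = rng.normal(0.7, 0.15, 50)  # attacks tend to score high

scores = np.concatenate([benign_scores, malicious_scores])
labels = np.concatenate([np.zeros(500), np.ones(50)])

precision, recall, thresholds = precision_recall_curve(labels, scores)

# Pick the lowest score threshold at which alert precision reaches 90%,
# trading a little recall for far fewer false alarms.
target_precision = 0.90
ok = precision[:-1] >= target_precision  # last precision value has no threshold
chosen = thresholds[ok][0] if ok.any() else thresholds[-1]
print(f"chosen threshold: {chosen:.2f}")
```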

Artificial intelligence is also effective in dealing with sophisticated state-sponsored attacks. Several advanced persistent threat (APT) groups are known to work collaboratively, under the direction of governments, to pursue political, financial, and military goals. They are notorious for using advanced and aggressive techniques with high rates of success. Fortunately, these techniques have been documented by security researchers. The information compiled about them can be fed to AI systems to help train defenses that fare better when faced with concerted state-sanctioned attacks.
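As a rough, hypothetical sketch of that idea, the example below trains a simple text classifier to map observation snippets to documented technique labels. The snippets and labels here are invented; a real pipeline would draw on a curated knowledge base of researcher-documented techniques (such as MITRE ATT&CK) and far more data.

```python
# Hypothetical sketch: tagging observations with documented attack techniques.
# Snippets and labels are invented; real training data would come from a curated
# knowledge base of researcher-documented techniques and much larger corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_snippets = [
    "powershell encoded command spawned by office document macro",
    "macro-enabled attachment launched scripting host after email open",
    "scheduled task created to run binary at system startup",
    "new autorun registry key points to unsigned executable",
    "large outbound transfer to rare external host over port 443",
    "compressed archive staged and uploaded to unknown cloud storage",
]
technique_labels = [
    "spearphishing-attachment",
    "spearphishing-attachment",
    "persistence-scheduled-task",
    "persistence-registry-autorun",
    "exfiltration-over-web",
    "exfiltration-over-web",
]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(training_snippets, technique_labels)

new_observation = ["word macro launched powershell with encoded payload"]
print(model.predict(new_observation)[0])  # likely: spearphishing-attachment
```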

Moreover, AI helps cyber risk assessment stay flexible and adaptable in the midst of frequent change. IT environments rarely stay the same. They undergo many changes brought about by developments such as network policy modifications, the use of unvetted shadow IT, the adoption of new hardware and software, and the shifts that take place as employees join or leave a company.

AI-backed continuous security testing efficiently examines these changes to provide useful insights into how security is affected and what can be done to fix potential sources of problems.

Does AI Make Sense in Continuous Threat Assessment?

Unfortunately, there hasn’t been a comprehensive study quantifying the effectiveness of artificial intelligence in continuous risk assessment. However, a study by the Capgemini Research Institute, Reinventing Cybersecurity with Artificial Intelligence, offers a glimpse of the palpable benefits of AI in the field of cybersecurity.

According to the study, which surveyed 850 senior IT security executives from countries around the world, 69% of organizations believe that AI is essential for responding to cyber attacks. Meanwhile, 61% of enterprises say they are unable to detect breach attempts without using AI. In light of this reliance on artificial intelligence, some 48% of respondents said their companies plan to raise their allocations for AI in cybersecurity by an average of 29%.

Many have already started efforts to use AI-driven cybersecurity, with 73% of respondents saying they were already testing use cases across their organizations. Also, 64% of those surveyed said that artificial intelligence lowered the cost of detecting and responding to data breaches, and it decreased overall detection time by up to 12%.

Capgemini’s researchers analyzed the use cases across different companies and found that artificial intelligence was mostly employed in fraud detection, malware detection, intrusion detection, network risk scoring, and user/machine behavioral analysis.


Fighting AI with AI

It’s worth noting that AI is not just useful for enhancing security. As the cliché goes, it’s a double-edged sword: AI can also be a tool for committing cybercrime. For one, it can facilitate the rapid production of a multitude of variants of existing malware, which most security systems tend to perceive as new attacks. Dubbed adversarial AI, this malevolent application of artificial intelligence focuses on exploiting weaknesses in machine learning as deployed in regular settings in order to inflict harm on users. It attempts to make AI systems go haywire and produce results that favor the attacker.
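A toy sketch of that evasion idea: train a simple classifier on binary "malware indicator" vectors, then flip a handful of the features the model weighs most heavily on a malicious sample until it is mislabeled as benign. Everything here (features, data, the attack loop) is synthetic and deliberately simplified to illustrate the concept, not a real attack tool.

```python
# Toy sketch of an evasion-style adversarial attack on a malware classifier.
# Features, data, and the attack loop are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_features = 20  # imagine binary indicators: imported APIs, section names, strings

# Synthetic training data: malware tends to set the first few indicator features.
benign = (rng.random((200, n_features)) < 0.2).astype(float)
malware = (rng.random((200, n_features)) < 0.2).astype(float)
malware[:, :5] = 1.0  # strong "malicious" indicators

X = np.vstack([benign, malware])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Start from a malicious sample and greedily flip the features the model weighs
# most heavily toward "malicious", mimicking a variant crafted to evade detection.
sample = malware[0].copy()
weights = clf.coef_[0]
for idx in np.argsort(weights)[::-1]:   # most malicious-looking features first
    if clf.predict([sample])[0] == 0:
        break
    if sample[idx] == 1.0:
        sample[idx] = 0.0               # e.g. strip or rename an indicator
print("original verdict:", clf.predict([malware[0]])[0], "(1 = malware)")
print("evasive variant :", clf.predict([sample])[0], "(0 = benign)")
```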

With criminals putting AI to felonious use, security researchers are expected to work doubly hard to make sure that its productive and beneficial applications prevail. If AI becomes instrumental in creating more malicious software, efforts should be made to maximize its use in building more reliable defenses against cyber threats.

Conclusion

Is AI useful in cyber risk assessment? Considering the positive feedback of IT experts in different parts of the world, it’s safe to say that artificial intelligence is a boon for cybersecurity. It is an effective augmentation in establishing protection against cyber-attacks, not just hype or an attempt by security vendors to deceptively promote their products. While some software vendors do exploit the AI hype to advertise misleadingly, that does not erase the fact that AI serves an important purpose in cybersecurity.