
    Artificial Intelligence for Cybersecurity

    • The next generation of cybersecurity products is increasingly incorporating Artificial Intelligence (AI) and Machine Learning (ML) technologies. By training AI software on large datasets of cybersecurity, network, and even physical information, cybersecurity solutions providers aim to detect and block abnormal behavior, even if it does not exhibit a known “signature” or pattern. Experts anticipate that, over time, companies will incorporate ML into every category of cybersecurity product.

    • There are different approaches to using AI for cybersecurity, and it is important first to determine which is appropriate for the organization. Some software applications analyze raw network data to spot an irregularity, while others focus on user/asset/entity behavior to detect patterns that deviate from normal. The types of data streams, how they are collected, and the level of effort needed by analysts all vary by approach.

    • Cybersecurity solutions utilizing AI and ML can greatly reduce the amount of time needed for threat detection and incident response, often being able to alert IT staff of anomalous behavior in real time. These technologies also help reduce and prioritize traditional security alerts, increasing the efficacy of existing investments and human analysts.

    • Attackers are also using AI and ML to better understand their targets and launch attacks. AI increases the ability of defenders to identify attacks, but it may also help hackers learn about a target’s vulnerabilities.

    Cybersecurity product companies have turned to AI and ML to provide insights that humans could not achieve alone. These products use AI to identify anomalies, speed up detection, and increase the effectiveness of existing products. AI can aid analysts who are overwhelmed with security alerts by identifying patterns that conventional cybersecurity software would otherwise miss. Without this help, analysts can waste time chasing “false positive” alerts and dead ends while missing legitimate malicious activity. Organizations can waste as much as $1.3 million per year responding to “inaccurate and erroneous intelligence” or “chasing erroneous alerts,” according to one study by the SANS Institute.

    Better solutions use ML to analyze vast stores of human-labeled data so that they can find patterns within the noise. Training has always been the lengthiest and most cumbersome part of AI/ML implementation, but several AI solutions have now been developed that permit the software to train itself, at least in part. When properly trained, AI threat analysis can apply human-like intuition to every interaction on the network and pluck a single strange packet from millions of others for human review. Cutting-edge AI products allow companies to correlate attacks or events across time and geography to develop a better picture of what is happening within the network. When properly trained and monitored, solutions that detect threats using ML can reduce the time from breach to discovery, limiting the damage an attacker can cause. Shortening the time to discovery is critical for security, especially because the average breach still takes over 260 days to discover.
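    As a rough illustration of the anomaly detection described above, the sketch below trains an isolation forest on a baseline of “normal” network flows and flags outliers for analyst review. The feature set, data, and parameters are invented for illustration and do not represent any particular vendor's product.

```python
# Minimal sketch of ML-based network anomaly detection. All feature names,
# values, and thresholds here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Stand-in for historical flow records: [bytes_sent, packets, distinct_ports].
# In practice this baseline would come from curated network telemetry.
normal_flows = rng.normal(loc=[5_000, 40, 3], scale=[1_500, 10, 1],
                          size=(10_000, 3))

# Train on "known good" traffic so the model learns what normal looks like.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# Score new flows; a prediction of -1 marks an outlier worth escalating.
new_flows = np.array([
    [5_200, 42, 3],        # looks like routine traffic
    [900_000, 4_000, 60],  # bulk transfer touching many ports -> anomalous
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "ALERT" if label == -1 else "ok"
    print(status, flow)
```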


    Different AI/ML approaches suit different security objectives. In one approach, AI software examines raw network activity data to flag any unusual connection, e.g., a packet entering a SCADA network from an unknown IP address. This is basic pattern spotting and is fairly rudimentary. Defending against a threat actor who moves slowly through the network using compromised legitimate credentials, or against an insider threat, may instead require deep learning that analyzes a given user's behavior over a series of actions to determine whether the pattern is out of the ordinary. This approach is known as behavioral user analytics. Here, the AI operates at the user/asset/entity level to monitor an employee's or device's activities, e.g., an employee accesses a local server they connect to infrequently and then begins downloading all of the server's contents. In either case, the AI will recognize the anomaly and alert IT staff for further investigation. Behavioral user analytics is quickly becoming the gold standard for AI-based cybersecurity products.
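    A toy sketch of the behavioral approach: baseline each user against their own history and flag days that deviate sharply from it. The data, users, and z-score cutoff below are illustrative assumptions, not a real product's logic.

```python
# Per-user behavioral baseline: flag activity far outside a user's own norm.
from statistics import mean, stdev

# Hypothetical history of daily bytes each user downloads from internal servers.
history = {
    "alice": [120e6, 95e6, 110e6, 130e6, 105e6],
    "bob":   [10e6, 12e6, 9e6, 11e6, 8e6],
}

def zscore(user, today_bytes):
    """How many standard deviations today's activity is from the user's norm."""
    samples = history[user]
    return (today_bytes - mean(samples)) / stdev(samples)

# Bob suddenly pulls ~5 GB from a server he rarely touches.
for user, today in [("alice", 115e6), ("bob", 5e9)]:
    z = zscore(user, today)
    if z > 3:  # an assumed cutoff for "out of the ordinary"
        print(f"ALERT: {user} downloaded {today / 1e9:.1f} GB today (z={z:.1f})")
    else:
        print(f"ok: {user} within normal range (z={z:.1f})")
```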


    Of course, enterprises planning their cybersecurity technology roadmap or acquiring individual technologies must be cautious about solutions that boast AI or ML capabilities: many companies have noticed the trend toward AI and apply these labels to products without sophisticated AI/ML capabilities underneath.


    An important attribute of an AI platform is transparency in its decision-making process, which instills trust among those who use it or are subject to its calculations and decisions. For example, one email security platform not only discovers spearphishing attempts but also tells users why it determined that a given email is a threat. It is unclear whether the industry as a whole will move toward transparency, but it helps dispel the perception that AI threat detection software is a black box whose machinations cannot be understood. The software will be more accurate, and an organization more secure, if users know how to gauge whether the AI is right or wrong, making user feedback much more useful. AI threat detection will not produce perfect results and will likely generate a fair number of false positives and negatives if it has not been trained on a sufficient amount of recent data. The system cannot be entirely automated; it will require human oversight to determine the legitimacy of any alerts it produces. Still, for companies without large cybersecurity or IT budgets, an AI platform sifting through hundreds of gigabytes of network activity data every day can be as effective as several teams of IT staffers using conventional tools for threat detection.
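    One common way to provide the kind of explanation described above is a linear model whose per-feature contributions can be shown to the user. The sketch below is a minimal example of that idea; the features, weights, and bias are invented for illustration and do not reflect how any real email security platform works.

```python
# A transparent phishing score: rank feature contributions so the analyst
# can see *why* an email was flagged. Weights here are assumed; in practice
# they would be learned offline from labeled email data.
import numpy as np

FEATURES = ["sender_domain_mismatch", "urgent_language",
            "link_to_new_domain", "has_attachment"]
WEIGHTS = np.array([2.1, 1.4, 2.8, 0.3])  # assumed learned weights
BIAS = -3.0

def explain(email_features):
    contributions = WEIGHTS * email_features
    score = 1 / (1 + np.exp(-(contributions.sum() + BIAS)))  # logistic score
    print(f"phishing probability: {score:.2f}")
    # Sort features by contribution so the reason for the alert is visible.
    for name, c in sorted(zip(FEATURES, contributions), key=lambda t: -t[1]):
        if c > 0:
            print(f"  +{c:.1f}  {name}")

# Suspicious email: mismatched sender, urgent wording, link to a new domain.
explain(np.array([1, 1, 1, 0]))
```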


    Unfortunately, AI is already being used for nefarious ends by hackers and other cyber criminals. In 2016, an experimental AI was employed to send simulated spearphishing links to Twitter users to determine how effective it could be compared to a human performing the same task. Though the AI lured only 34.4 percent of its targets to the fake phishing websites, compared to 38.0 percent for the human, it churned out over 800 attempts to the human's 130 in the same amount of time. Even if an AI is less effective at tricking any one person than a human is, its speed advantage lets it “cover more ground” and, in the end, create more victims. The Twitter experiment illustrates AI's primary use case for cybercrime: threat actors exploiting the automation inherent in these systems. Whether the payload is a DDoS attack, ransomware, or some other kind of malware, cyber criminals are using AI to spread threats faster and target more vulnerable machines.
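    A quick back-of-the-envelope calculation, using only the figures reported above (and treating 800 as a lower bound on the AI's attempts), makes the speed advantage concrete:

```python
# Expected victims implied by the 2016 Twitter experiment's reported numbers:
# the AI's lower hit rate is more than offset by its volume of attempts.
ai_victims = 0.344 * 800     # ~275 targets lured
human_victims = 0.380 * 130  # ~49 targets lured
print(f"AI: ~{ai_victims:.0f} victims vs human: ~{human_victims:.0f} victims")
```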