enterprisesecuritymag

Making the Future Secure Through AI/ML Technology

By Paige H. Adams, Group Chief Information Security Officer, Zurich Insurance Company


If you are an IT or Information Security professional, or even if you are not, it’s difficult to go far these days without encountering the terms “AI” or “Machine Learning.” These terms, once restricted to computer science and science fiction, are now becoming ubiquitous in our daily lives and are prominent everywhere from information security conferences to our daily general news feeds. In fact, it seems that almost every vendor these days touts a product that uses AI or is “enhanced by ML.” It is so common that one wonders if these are useful features, next-generation technology, or hype and “snake oil.”

Artificial intelligence or AI is the branch of computer science that is concerned with the automation of intelligent behavior. However, how do we define intelligence, and what does it mean to automate it? Does intelligence imply creativity?

There are no clear answers, but the consensus is that true AI enables machines to conduct reasoning, pattern-matching, and inference to solve problems that do not respond to algorithmic solutions.

Machine learning or ML, then, is a subset of AI that is concerned with teaching a machine to improve its performance in some task through exposure to data in representative sets, rather than through giving it explicit instructions or programming.
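To make that distinction concrete, here is a minimal sketch (illustrative only, not drawn from any specific product) of a nearest-centroid classifier: the program is never given an explicit rule for telling two classes apart, it infers one from labeled examples.

```python
# "Learning from data" rather than explicit instructions: a tiny
# nearest-centroid classifier. The feature values and labels below are
# hypothetical, chosen only to illustrate the idea.

def train(samples):
    """samples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(features, center))
    return min(centroids, key=lambda label: dist(centroids[label]))

# The model is never told "benign events cluster near (1, 1) and malicious
# ones near (9, 9)"; it recovers that structure from representative examples.
training_data = [([1, 1], "benign"), ([2, 1], "benign"),
                 ([9, 9], "malicious"), ([8, 9], "malicious")]
model = train(training_data)
print(predict(model, [1.5, 1.2]))   # → benign
print(predict(model, [8.5, 8.8]))   # → malicious
```

Changing the training set changes the learned behavior with no change to the code, which is the essential contrast with explicit programming.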

"Having cyber defensive capabilities that take advantage of AI/ML technology will be crucial in making the future secure, even as it may lead to a new form of the arms race"

Even though it may seem like they recently exploded out of nowhere, AI and ML have been with us for a long time. The field of AI can trace its origin to a Dartmouth College workshop held in the mid-1950s. By the end of that decade, computers were able to accomplish such astounding feats as playing checkers better than the average human player, solving algebra word problems, and proving logical theorems.

These early successes led to over a decade of enthusiastic research and fueled an explosion of interest in AI, robotics, and related concepts in science fiction and other popular culture; however, progress eventually slowed due to underestimation of the difficulty of some of the more advanced AI problems. This led the US and British governments to cut funding for AI research in favor of projects that promised more tangible results. This decision, in 1974, marked the beginning of a funding drought that later came to be known as the “AI Winter.”

The current “AI Spring” has been brought about by several factors, but chief among them are the following:

1. Storage: Capacity to store the large data sets needed for AI has increased exponentially in the last few decades, while costs have decreased at much the same rate.

2. Parallel processing: The advent of machines that can do parallel processing—carry out many calculations simultaneously—made the complexity and time requirements for AI algorithms more achievable.

3. Better algorithms: Dedicated research into what types of algorithms work best with different data and access to more data, including extremely large data sets, has allowed for improvements in algorithms for classifying data, making predictions, and learning patterns.

4. More sensor data: One of the main reasons we have more data is the explosive growth in the number of sensors all around us. The average automobile now has nearly as many sensors as were on the Apollo spacecraft that first took us to the moon. Our mobile devices contain an impressive array of sensors to detect light, sound, motion, atmospheric pressure, etc. These sensors are generating millions of data points every day—rich information to feed ML algorithms.

5. More emerging applications: Last, but not least, the increased performance and accessibility of AI and ML have fueled growth in the number of practical applications. There are now AI applications that can help your doctor diagnose symptoms and come up with better treatment plans, AI-enabled stock market trading applications, and even AI-powered home vacuum robots.

In addition to these applications, the last several years have also seen an increase in the use of AI/ML in information security. Information security offers ripe grounds for AI and ML for several reasons:

  1. The “needle in a haystack” problem: Much of cyber and information security involves looking for something “bad” in enormous stacks of data. AI algorithms, through their ability to classify data, are particularly suited to helping pick out anomalous events and sort out the “bad” from the “good.”
  2. The “Three V” problem: A constant in information security is that the velocity, variety, and volume of threats will continue to increase for the foreseeable future. Dealing with these threats daily can result in alert fatigue in security analysts, and manually addressing these threats is simply not scalable in the long term. Teaching machines to deal with threats and take automated actions based on those threats is necessary for us to keep pace with the future threat environment.
  3. We should let humans do what humans do best: Far from replacing human analysts, we should allow machines to do the types of tasks in which they excel, e.g., sorting and classifying large volumes of data, and allow the humans to do what they do best – analyze big-picture data and look for the human motivations behind threat actions. There are many higher-level tasks that human analysts could perform if they had the time to do so. AI and ML applications can handle the more menial tasks and free up the humans’ time for those instead.
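The “needle in a haystack” point can be sketched in a few lines. The example below (a hedged illustration, not any vendor’s method; the feature, threshold, and data are assumed for demonstration) flags events whose numeric feature deviates strongly from the statistical norm of the whole data set:

```python
# A simple anomaly detector: flag events whose value lies more than
# z_threshold standard deviations from the mean. The "bytes transferred"
# framing and the 3-sigma threshold are illustrative assumptions.

import statistics

def find_anomalies(values, z_threshold=3.0):
    """Return indices of values more than z_threshold std-devs from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # all values identical; nothing stands out
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > z_threshold]

# Ten thousand routine transfer sizes with one outlier buried inside.
transfers = [500 + (i % 40) for i in range(10_000)]
transfers[4242] = 250_000   # the "needle": an exfiltration-like event
print(find_anomalies(transfers))   # → [4242]
```

Real security tooling uses far richer features and models, but the principle is the same: classification surfaces the handful of anomalous events so human analysts need not sift the haystack by hand.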

Perhaps the most compelling motivator in deciding whether we should use AI for information security is that, for all the good that AI promises to bring, it also brings its share of threats. From unintentional actions by an unconstrained AI algorithm to intentional, malicious attacks, AI can have its dark side. These intentional threats range from malware that mimics human-operated “advanced persistent threat” actors to evade detection and burrow deep inside computer networks, to physical threats like outfitting swarms of drones with weapons and AI technology to create new forms of high-tech cyber terrorism. Finding and countering these threats is best done with AI itself: fighting AI with AI.

In summary, although AI and ML can be (and probably are) liberally used as buzzwords in marketing material, it’s clear that this isn’t merely snake oil. AI, for better or worse, is a part of our society and will play a central role in our future. Having cyber defensive capabilities that take advantage of AI/ML technology will be crucial in making the future secure, even as it may lead to a new form of the arms race.
