If you are an IT or Information Security professional, or even if you are not, it’s difficult to go far these days without encountering the terms “AI” or “Machine Learning.” These terms, once restricted to computer science and science fiction, are now becoming ubiquitous in our daily lives and are prominent everywhere from information security conferences to our daily general news feeds. In fact, it seems that almost every vendor these days touts a product that uses AI or is “enhanced by ML.” It is so common that one wonders if these are useful features, next-generation technology, or hype and “snake oil.”
Artificial intelligence, or AI, is the branch of computer science concerned with the automation of intelligent behavior. But how do we define intelligence, and what does it mean to automate it? Does intelligence imply creativity?
There are no clear answers, but the consensus is that true AI enables machines to conduct reasoning, pattern-matching, and inference to solve problems that do not respond to algorithmic solutions.
Machine learning, or ML, is a subset of AI concerned with teaching a machine to improve its performance at some task through exposure to representative sets of data, rather than through explicit instructions or programming.
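This "learning from data" idea can be made concrete with a minimal sketch (purely illustrative, not from any particular product): a classic perceptron that learns the logical AND function from labeled examples. Nowhere is the rule "output 1 only when both inputs are 1" written down; the machine infers it by correcting its own mistakes on the data.

```python
# A minimal sketch of learning from data rather than explicit programming:
# a perceptron trained on labeled examples of the logical AND function.
# All names and parameters here are illustrative.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from (inputs, label) pairs by simple error correction."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred           # 0 if correct; +/-1 if wrong
            w[0] += lr * err * x1        # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Four labeled examples of AND; the rule itself is never programmed in.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Real ML systems differ mainly in scale and sophistication, not in kind: more data, more parameters, and better learning rules, but the same principle of improving through exposure to examples.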
"Having cyber defensive capabilities that take advantage of AI/ML technology will be crucial in making the future secure, even as it may lead to a new form of the arms race"
Even though it may seem like they recently exploded out of nowhere, AI and ML have been with us for a long time. The field of AI can trace its origin to a workshop held at Dartmouth College in the mid-1950s. By the end of that decade, computers could accomplish such astounding feats as playing checkers better than the average human player, solving algebra word problems, and proving logical theorems.
These early successes led to over a decade of enthusiastic research and fueled an explosion of interest in AI, robotics, and related concepts in science fiction and other popular culture. However, progress eventually slowed due to underestimation of the difficulty of the more advanced AI problems, which led the US and British governments to cut funding for AI research in favor of projects that promised more tangible results. These cuts, beginning in 1974, marked the start of a funding drought that later came to be known as the “AI Winter.”
The current “AI Spring” has been brought about by several factors, but chief among them are the following:
1. Storage: Capacity to store the large data sets needed for AI has increased exponentially in the last few decades, while costs have decreased at much the same rate.
2. Parallel processing: The advent of machines that can do parallel processing—carry out many calculations simultaneously—made the complexity and time requirements for AI algorithms more achievable.
3. Better algorithms: Dedicated research into what types of algorithms work best with different data and access to more data, including extremely large data sets, has allowed for improvements in algorithms for classifying data, making predictions, and learning patterns.
4. More sensor data: One of the main reasons we have more data is the explosive growth in the number of sensors all around us. The average automobile now has nearly as many sensors as were on the Apollo spacecraft that first took us to the moon. Our mobile devices contain an impressive array of sensors to detect light, sound, motion, atmospheric pressure, and more. These sensors generate millions of data points every day—rich information to feed ML algorithms.
5. More emerging applications: Last, but not least, the increased performance and accessibility of AI and ML have fueled growth in the number of practical applications. There are now AI applications that can help your doctor diagnose symptoms and come up with better treatment plans, AI-enabled stock market trading applications, and even AI-powered home vacuum robots.
In addition to these applications, the last several years have also seen increased use of AI/ML in information security, a field that offers ripe ground for both.
Perhaps the most compelling motivator for using AI in information security is that, for all the good AI promises to bring, it also brings its share of threats. From unintentional actions by an unconstrained AI algorithm to deliberate, malicious use, AI can have a dark side. Intentional threats range from malware that mimics human-powered “advanced persistent threat” actors to evade detection and burrow deep inside computer networks, to physical threats such as swarms of drones outfitted with weapons and AI technology to create new forms of high-tech cyber terrorism. Finding and countering these threats is best done with AI itself: fighting AI with AI.
In summary, although AI and ML can be (and probably are) liberally used as buzzwords in marketing material, it’s clear that this isn’t merely snake oil. AI, for better or worse, is a part of our society and will play a central role in our future. Having cyber defensive capabilities that take advantage of AI/ML technology will be crucial in making the future secure, even as it may lead to a new form of the arms race.