Between the frenetic pace at which artificial intelligence (AI) has advanced in the past few years and an increasingly interconnected world in which cyberattacks occur with alarming frequency and scale, it is no wonder that the field of cybersecurity has turned to AI and machine learning (ML) to detect and defend against adversaries.
The use of AI in cybersecurity not only expands the scope of what a single security expert can monitor; more importantly, it enables the discovery of attacks that would otherwise have been undetectable by a human. So, is it time to hand over the responsibility for security operations to the machines?
The current era of technological expansion has given birth to a variety of new tensions resulting from AI and machine learning. But the most pressing question is this: how should we organize our economy and society when large segments of the human workforce are beginning to be put out of work by automation?
Put more simply, how will people live if they can’t get jobs because they have been replaced by cost-effective, better-performing machines?
In recent years, numerous studies have been published and institutes founded to investigate which jobs will remain in human hands in the future, and which will be doled out to the machines.
The 2013 Oxford report on the future of employment attempted to describe which categories of jobs would be safe from automation and which were at greatest risk. It generally argued that creative jobs, like those of artists and musicians, are less likely to be automated. Yet the first AI-generated painting to go to auction sold at Christie's for nearly $500,000. And AI-generated music already exists.
So, what does AI mean for the cybersecurity industry? While there are no clear-cut rules for which types of cognitive and manual-labor jobs will be replaced, the recent application of advanced AI and machine learning techniques in the field of cybersecurity is unlikely to put security analysts out of work.
Understanding why requires an appreciation for the complexity of cybersecurity and the current state of AI. Advanced attackers constantly develop novel methods to attack networks and computer systems. Moreover, these networks and the devices connected to them are constantly evolving, running new and updated software and adding new types of hardware as technology progresses.
The current state of AI, on the other hand, while advanced, performs much like the human perceptual system. AI methods can process and recognize patterns in streams of incoming data, much as the human eye processes incoming visual input and the ear processes incoming acoustic input. However, they are not yet capable of representing the full breadth of knowledge an experienced system administrator has, whether about the networks they administer or about the complex web of laws, corporate guidelines, and best practices that govern how best to respond to an attack. Simply put, AI systems currently do not have a full understanding of the "context."
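To make the idea of pattern recognition over streams of incoming data concrete, here is a minimal sketch of statistical anomaly detection, one of the simplest techniques underlying ML-based intrusion detection. It uses only Python's standard library; the function name, the made-up login-failure counts, and the z-score threshold are all illustrative assumptions, not anything prescribed by the text:

```python
import statistics

def is_anomalous(history, new_value, threshold=3.0):
    """Flag new_value if it deviates from the historical mean
    by more than `threshold` standard deviations (a z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return new_value != mean
    return abs(new_value - mean) / stdev > threshold

# Hourly counts of failed logins observed on a host (illustrative data).
baseline = [3, 5, 4, 6, 5, 4, 5, 3, 4, 5]

print(is_anomalous(baseline, 5))    # a typical hour -> False
print(is_anomalous(baseline, 60))   # a sudden spike -> True
```

A detector like this can watch a stream tirelessly and at scale, but it has no notion of whether a spike in failed logins is an attack, a misconfigured script, or a forgotten password: supplying that context remains the analyst's job.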
So, for the foreseeable future, AI will remain a tool in a defender's pocket, making it possible to detect, and therefore respond to, ever-evolving advanced attacks at a speed and scale that were previously unattainable.