OUR ASSESSMENT OF GENERATIVE ARTIFICIAL INTELLIGENCE AND WHAT IT MEANS FOR CYBER
Frank Santucci, Chief Technology Officer
Adam McCarthy, Chief Executive Officer
April 5, 2023
In the ever-changing world of technology, few things capture the imagination like Artificial Intelligence (AI). The tech community has been awash with excitement since Microsoft invested a staggering ten billion dollars in OpenAI, in what industry experts agree is a meaningful shift in the application of AI to everyday problems.
Google, Amazon and Apple are all racing to get their products into the hands of consumers to capitalise on what many predict is the future of web search and information gathering. This month, Microsoft released its next weapon in the AI evolution, Microsoft Copilot, powered by OpenAI's GPT-4 generative AI.
The prospect of greater integration of AI into Microsoft 365, which includes Word, Excel, PowerPoint and Outlook, is exciting for all of us. Its ability to understand the entire business landscape and intelligently summarise, analyse and curate information will gift time back into the calendars of busy people, who can then apply their skills to more meaningful activities.
As the co-founders of a cyber security company, we're eagerly awaiting the chance to test Microsoft's next AI offering, Microsoft Security Copilot: the first security product that could enable cyber operators to move at the speed and scale of AI.
Microsoft Security Copilot combines an advanced large language model (LLM) with a security-specific model from Microsoft. In a typical cyber incident, Microsoft says, this will translate into gains in the quality of detection, the speed of response and the ability to strengthen security posture.
For many of us, AI is not new. ParaFlare has been using forms of AI and Machine Learning (ML) for years, deeply integrated across our platforms. We have found ML particularly useful for the 'value add' capability it provides in aggregating alerts from bespoke sources. What we see coming now is a shift towards AI becoming a necessity, helping humans work faster with big data sets.
What’s new is the turning of the dial on AI through the power of integrated LLMs. In other words, harnessing the power of natural language, including its context within an individual business, to unlock the true potential of AI.
There’s no doubt society is on the cusp of a transformation. The big-tech industry is set to change dramatically. There will be winners in the AI ‘arms race’, and there will be losers (i.e., those who don’t integrate generative AI into their platforms). Clear leaders have already emerged, and inevitably there will be a big shakeup in the security vendor landscape.
We need to consider this technology transformation with a degree of caution and pragmatism, and learn from the past. We offer this caution to any business that thinks cyber operations will be revolutionised by AI to the point that it becomes highly scalable, and therefore commoditised. What we are already seeing is an uptick in AI-related attack vectors, and our use of these technologies will be necessary to offset these new tactics.
The nature of cyber operations remains unchanged. It is, and always will be, a complex contest based on an essentially human activity to seek out and contain an adversary, enabled and empowered by technology. It is the character of cyber operations that is constantly changing as new tools and technologies – such as generative AI – become available.
The highly skilled human operator or analyst will always be responsible for making decisions based on complex data in a dynamic environment. What AI will do is simplify, streamline, and aggregate some of that complex information, and filter out some of the noise. It therefore becomes part of our armoury in the fight against cyber adversaries.
And as we come to terms with the potential of AI and begin integrating it across our businesses, so do our adversaries. At the very least, AI will enable cyber threat actors to construct more eloquent, convincing phishing emails, social engineering attacks and malicious content.
There are plenty of people who are nervous about where AI will take us. Elon Musk and some of the biggest names in tech have called for a six-month pause in developing more powerful AI systems, citing risks to society and security. As with all technology, there’s good reason to exercise caution.
ParaFlare was built on a deeply human capability, enabled and empowered by technology, and it will remain so. We’ll embrace AI as part of our armoury and continue to do what we do best: protect our customers from the cyber threats that wish them harm.
Have a comment? Join the conversation on LinkedIn