
Generative AI and Cybersecurity Risk: Cutting-Edge Defensive Strategies

ChatGPT, the popular chatbot created by the artificial intelligence (AI) research firm OpenAI, has been a hot conversation topic for months now. Similar tools, powered by large language models (LLMs) – deep learning models trained on enormous text datasets that can summarize, translate, and generate content – have evolved rapidly in recent months. This type of AI is known as generative AI.

While business leaders work out how best to harness these emerging technologies, ChatGPT has seen the fastest user-base growth of any consumer application in history. Just about everyone, from college students to seniors, is experimenting with ChatGPT, using it to write essays, do research, create summaries, and even craft jokes and poems.

AI: A Blessing or a Curse?

Unfortunately, cybercriminals are experimenting with generative AI as well. Tools like ChatGPT can write code as well as text, and there's no way to guarantee that the code they produce won't be put to malicious use. What's more, threat researchers warn that bad actors can use tools like ChatGPT to help them insert malicious code into open source libraries that then become part of software supply chains.

ChatGPT and similar tools can also be used to generate phishing emails at scale, or to create highly personalized messages that are far more convincing than yesterday's social engineering attacks, convincing enough to fool even users who've had extensive cybersecurity training. Deepfakes (AI-generated audio, video, or image content that shows someone doing or saying something they never actually did or said) can be almost impossible to detect, raising new questions about how to prepare employees for this emerging threat.

As generative AI moves further into the mainstream, it's all but inevitable that cybercriminals will increasingly take advantage of its capabilities. AI is enabling them to scale their operations, deceive their victims more effectively, and create far more sophisticated, highly targeted fraud schemes.

Enterprise security teams will need to take advantage of similarly sophisticated tools if they’re to keep up with the rapid evolution of the threat landscape.

How Defenders Can Respond to This Threat

Here at Netrix, we’ve already incorporated generative AI into our security operations program. We believe that this is the way of the future, but even today it’s allowing us to scale up our security analysts’ capabilities, optimize resource usage, and enable investigators to reach better conclusions about incidents more quickly.

Let’s take a closer look at how we’re using generative AI in our Security Operations Center (SOC). Like most security analyst teams, ours is structured into three tiers:

  • Tier 1 analysts spend most of their time monitoring security technology consoles and event logs for anomalies, and investigating alerts.
  • Tier 2 analysts are typically more experienced and perform more sophisticated work, digging deeper into incidents that Tier 1 analysts have escalated to them.
  • Tier 3 analysts, the most experienced of all, spend their time on strategic tasks like threat hunting, examining trends so they’re prepared to counter the latest exploits, and reviewing lower-tier analysts’ proactive and reactive work.

Right now, our Tier 1 and Tier 2 analysts are leveraging ChatGPT to recommend response actions when incidents occur. They're using OpenAI tools in conjunction with the Microsoft Azure OpenAI Service, which acts as a proxy or broker, protecting the privacy and confidentiality of the data routed through the platform. This guards against the kind of sensitive-data exposure through ChatGPT use that has been observed in a number of recent high-profile incidents.
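
To make the pattern concrete, here's a minimal sketch of what such an integration might look like, using the AzureOpenAI client from the openai Python SDK. The endpoint, deployment name, incident summary, and prompts are illustrative assumptions, not Netrix's actual configuration.

    # A minimal sketch: asking an Azure OpenAI deployment for recommended
    # response actions on a security incident. Endpoint, deployment name,
    # and incident details are illustrative placeholders.
    import os

    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    incident_summary = (
        "Multiple failed sign-ins for one account from two countries within "
        "ten minutes, followed by a successful sign-in and a new inbox rule."
    )

    response = client.chat.completions.create(
        model="soc-gpt4",  # the name of your Azure OpenAI deployment (an assumption)
        messages=[
            {
                "role": "system",
                "content": "You are a SOC assistant. Recommend concrete, "
                           "prioritized response actions for the incident below.",
            },
            {"role": "user", "content": incident_summary},
        ],
        temperature=0.2,  # keep recommendations focused and repeatable
    )

    print(response.choices[0].message.content)

Because the request goes to the organization's own Azure resource rather than the public ChatGPT service, incident details stay within a tenant the security team controls.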

These tools make it possible to apply expertise gained from examining very large data sets to every single incident investigation performed within our SOC. ChatGPT provides the interface that individual analysts interact with: rather than watching endless lines of telemetry scroll across a console, they can ask questions, request that ChatGPT explain how it arrived at its findings, and give the tool feedback so that it performs better in the future when its conclusions miss the mark today.
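
Continuing the sketch above, that conversational loop might look something like this; the follow-up prompt is, again, an illustrative assumption:

    # Continuing the sketch above: the analyst pushes back on the initial
    # triage and asks the model to show its reasoning.
    first_answer = response.choices[0].message.content

    follow_up = client.chat.completions.create(
        model="soc-gpt4",  # same illustrative deployment name as above
        messages=[
            {"role": "user", "content": incident_summary},
            {"role": "assistant", "content": first_answer},
            {
                "role": "user",
                "content": "Which signals led you to that conclusion? "
                           "List them, and rate your confidence in each.",
            },
        ],
    )

    print(follow_up.choices[0].message.content)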

This significantly reduces the workload of our Tier 1 and Tier 2 analyst teams, and it's also helping us decrease our overall mean time to resolution (MTTR). This is a critical metric for us: the faster we're able to resolve incidents, the less potential for business disruption our clients face, and the less time would-be attackers have to gain a foothold in their environments.
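
For readers unfamiliar with the metric, MTTR is simply total resolution time divided by incident count; a toy calculation with made-up timestamps:

    # Toy MTTR calculation over made-up (opened, resolved) incident pairs.
    from datetime import datetime

    incidents = [
        (datetime(2023, 7, 1, 9, 0), datetime(2023, 7, 1, 10, 30)),   # 90 min
        (datetime(2023, 7, 2, 14, 0), datetime(2023, 7, 2, 14, 45)),  # 45 min
        (datetime(2023, 7, 3, 8, 15), datetime(2023, 7, 3, 11, 15)),  # 180 min
    ]

    total_seconds = sum(
        (resolved - opened).total_seconds() for opened, resolved in incidents
    )
    mttr_minutes = total_seconds / len(incidents) / 60
    print(f"MTTR: {mttr_minutes:.0f} minutes")  # MTTR: 105 minutes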

Taking Advantage of Tomorrow’s Technologies Today

It’s likely that the majority of security operations programs will leverage similar kinds of capabilities in the not-too-distant future. Microsoft has already announced that it will be introducing Microsoft Security Copilot, an AI-driven engine that generates actionable responses to security analysts’ natural language prompts. Currently in beta release, this new solution will be able to triage alerts at machine speed, synthesize insights from multiple sources, and deliver guidance that saves time and mitigates risks.

These emerging technologies promise to make security analysts’ jobs easier, but they also promise to make these sought-after professionals (who remain in high demand and short supply) more effective at work. This is a win for everyone: the CISOs tasked with building and managing effective security operations programs, boots-on-the-ground security practitioners, and business leaders who need to protect the organization’s reputation and mitigate risk.

Interested in learning more about how we're putting generative AI to work to protect our clients? Netrix's Director of Security, Rich Lilly, drew on his 20+ years of Microsoft security expertise to talk about how industry leaders are taking advantage of solutions like ChatGPT, Microsoft Azure OpenAI, and Microsoft Sentinel in a webinar on July 26th. Click here to watch it on demand.

Or reach out to a member of Netrix’s team of experts to schedule a free, no-obligation consultation today.


MEET THE AUTHOR

Rich Lilly

FIELD CTO, CYBERSECURITY

Rich Lilly has worked in the IT consulting space for 20+ years in various positions and roles, including Architect, Director of Pre-Sales, Cloud Evangelist, and his current role as Director of Security for Netrix, LLC. Rich brings extensive hands-on, practical knowledge not only to strategy for Microsoft-centric security solutions, but also to developing and operating security programs.
