

Security Copilot: Microsoft's AI Journey from Hurdles to Triumphs

The rise of Security Copilot: Microsoft's AI journey in cybersecurity.

Intelligence Desk · 2 min read

Microsoft's Security Copilot, an AI assistant for cybersecurity teams, was developed on OpenAI's GPT-4 model after initial challenges with GPU resources. The company used cherry-picked examples and internal data to improve the system's accuracy and combat "hallucinations". Security Copilot is now a closed-loop learning system that evolves through user feedback, demonstrating the potential of AI in cybersecurity.

Microsoft's AI Security Adventure: The Tale of Security Copilot

Microsoft's Security Copilot stands as a testament to the potential and challenges of generative AI. Launched in early 2023, this AI assistant for security teams uses OpenAI's GPT-4 to address cyber threats, much like its cousin, ChatGPT. However, the journey to its creation was filled with obstacles and revelations.

Overcoming Initial Hurdles: GPU Shortages and GPT-4

Microsoft's initial focus was on its own security-specific machine learning models. However, limited GPU resources, stretched thin by company-wide GPT-3 usage, posed a significant challenge. The solution came in the form of early access to GPT-4, shifting the focus towards exploring its cybersecurity potential. This push to leverage advanced AI models is a common theme across the industry, as seen in how Claude models are being integrated into Microsoft Copilot.

The Art of Cherry-Picking and Combating Hallucinations

Microsoft faced the challenge of "hallucinations": instances where the model generated inaccurate information. To present a convincing picture in early demos, the team resorted to cherry-picking good examples. Microsoft then integrated its own data to combat these hallucinations, improving the system's accuracy and grounding it in relevant, up-to-date information. This iterative process of refining AI models with real-world data is crucial for their effectiveness, much like ongoing efforts to build AI with empathy for humans.
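The article does not describe Microsoft's grounding pipeline in detail, but the general technique it alludes to, anchoring a model's answer to retrieved internal records rather than letting it improvise, can be sketched in a few lines. Everything here (the toy corpus, the keyword-overlap scoring, the prompt format) is illustrative, not Microsoft's implementation:

```python
# Minimal sketch of "grounding": retrieve relevant internal security
# records and prepend them to the prompt, so the model answers from
# real data instead of hallucinating. All names and data are made up.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by simple keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query: str, corpus: list[str]) -> str:
    """Build a prompt that cites retrieved context before the question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        f"Answer using only the context above."
    )

corpus = [
    "Alert 4411: suspicious sign-in from unfamiliar IP on host FIN-SRV-02",
    "Patch KB500123 deployed to all finance servers on 12 May",
    "Quarterly phishing training completed by 94% of staff",
]
print(grounded_prompt("suspicious sign-in on host FIN-SRV-02", corpus))
```

Production systems replace the keyword overlap with vector search over embeddings, but the shape is the same: retrieval narrows the model's world to facts the organisation can verify.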

The Birth of Security Copilot: A Closed-Loop Learning System

Security Copilot emerged from these efforts as a "closed-loop learning system," constantly evolving through user feedback. Microsoft openly acknowledged the system's limitations during its initial rollout, emphasising the potential for errors and the need for continuous improvement. The development showcases a commitment to responsible AI, a topic increasingly debated across the region, for instance in how Taiwan's AI law is quietly redefining what "responsible innovation" means. For more insights into the technical challenges and advancements in large language models, the OpenAI blog offers valuable information.
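A "closed-loop learning system" in this sense means user feedback is routed back into the product's improvement cycle. The sketch below is a hypothetical illustration of that loop, not Microsoft's actual architecture: answers are served, thumbs-up/down feedback is logged, and the failures become regression cases for the next iteration.

```python
# Illustrative closed-loop feedback cycle (not Microsoft's design):
# serve an answer, record whether the user accepted it, and fold the
# rejected examples back into the next round of evaluation data.

class ClosedLoop:
    def __init__(self):
        self.feedback_log = []  # (prompt, answer, accepted) triples

    def record(self, prompt: str, answer: str, accepted: bool) -> None:
        """Log one interaction together with the user's verdict."""
        self.feedback_log.append((prompt, answer, accepted))

    def next_eval_set(self) -> list[tuple[str, str]]:
        """Rejected answers become regression cases for the next iteration."""
        return [(p, a) for p, a, ok in self.feedback_log if not ok]

loop = ClosedLoop()
loop.record("Summarise alert 4411",
            "Host FIN-SRV-02 saw a suspicious sign-in", True)
loop.record("Which patch fixed CVE-2023-1234?",
            "KB999999", False)  # user flagged a hallucinated patch ID
print(loop.next_eval_set())  # the flagged answer is queued for review
```

The key property is that every user correction shrinks the space of mistakes the next model version can repeat, which is why the article frames the system as "constantly evolving through user feedback."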

Comment and Share:

Have you had any experiences with AI in cybersecurity? How do you see AI and AGI shaping the future of cybersecurity? Share your thoughts below and don't forget to Subscribe to our newsletter for updates on AI and AGI developments in Asia and beyond!


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.

This article is part of the AI Safety for Everyone learning path.



Latest Comments (4)

Lakshmi Reddy (@lakshmi.r) · 8 April 2024

It's interesting how they mention cherry-picking. In our work with low-resource Indic languages, we often face similar data scarcity and biases, making thorough evaluation even more critical than just showing the "good" outputs.

Harry Wilson (@harryw) · 1 April 2024

It's interesting to hear about the GPU resource bottleneck Microsoft faced getting Security Copilot off the ground. We ran into similar issues last year trying to allocate enough for our own LLM fine-tuning projects here at Edinburgh.

Soo-yeon Park (@sooyeon) · 11 March 2024

It's cool how they used cherry-picking and their own data to fix the hallucination problem for Security Copilot. That's actually super smart. Reminds me a bit of how we have to fine-tune our AI for K-drama scripts, making sure the nuances are spot on. Shows how important real data integration is, not just the raw model.

Lisa Park (@lisapark) · 26 February 2024

It's interesting how they used cherry-picking to deal with hallucinations early on. I'm wondering what that means for user trust and how they're handling that now with real-world security teams in ANZ.
