Microsoft introduces an A.I. chatbot for cybersecurity experts

Satya Nadella, chief executive officer of Microsoft Corp., speaks during the Windows 10 Devices event in New York on Oct. 6, 2015.

Microsoft on Tuesday announced a chatbot designed to help cybersecurity professionals understand critical issues and find ways to fix them.

After OpenAI's ChatGPT bot captured the public's attention following its November debut, Microsoft has been hard at work bolstering its software with the startup's artificial intelligence models.

The resulting generative AI software can at times be “usefully wrong,” as Microsoft put it earlier this month when talking up new features in Word and other productivity apps. But Microsoft is proceeding nevertheless, as it seeks to keep growing a cybersecurity business that fetched more than $20 billion in 2022 revenue.

Microsoft Security Copilot draws on GPT-4, the large language model from OpenAI, in which Microsoft has invested billions, as well as a security-specific model that Microsoft developed using everyday activity data it collects. The system is also aware of a given customer's security environment, but models won't be trained on that information.

In response to a text prompt that a person types in, the chatbot can compose PowerPoint slides summarizing security incidents, describe exposure to an active vulnerability or identify the accounts involved in an exploit.

A user can hit a button to confirm an answer if it’s right or select an “off-target” button to signal a mistake. That sort of input will help the service learn, Vasu Jakkal, corporate vice president of security, compliance, identity, management and privacy at Microsoft, told CNBC in an interview.

Microsoft engineers have been using the Security Copilot to complete their own tasks. It can quickly distill 1,000 alerts down to the two incidents that matter, according to Jakkal. The tool also reverse-engineered a piece of malicious code for an analyst who didn't know how to do so, she said.

That type of assistance can make a difference for companies that run into trouble hiring experts and end up hiring employees who are inexperienced in some areas. “There’s a learning curve, and it takes time,” Jakkal said. “And now Security Copilot with the skills built in can augment you. So it is going to help you do more with less.”

Microsoft has not said how much Security Copilot will cost when it becomes more broadly available.

Jakkal said the hope is that many workers inside a given company will use it, rather than just a handful of executives. That means over time Microsoft wants to make the tool capable of holding discussions in a wider variety of domains.

The service will work with Microsoft security products such as Sentinel for tracking threats. In the next few months, Microsoft will determine whether to add support for third-party tools such as Splunk based on input from early users, Jakkal said.

If Microsoft were to require customers to use Sentinel or other Microsoft products to turn on the Security Copilot, that could very well influence purchasing decisions, said Frank Dickson, group vice president for security and trust at technology industry researcher IDC.

“For me, I was like, ‘Wow, this may be the single biggest announcement in security this calendar year,’” he said.

There’s nothing stopping Microsoft’s security rivals, such as Palo Alto Networks, from releasing chatbots of their own, but getting out first means Microsoft will have a head start, Dickson said.

Security Copilot will be available in private preview to a select group of Microsoft customers before its later general release.

-Chathil Yeran-
