With the release of GPT-4 this week, security teams have been left to speculate about the impact generative AI will have on the threat landscape. While many now know that GPT-3 can be used to generate malware and ransomware code, GPT-4 is reported to be substantially more powerful, creating the potential for a significant increase in threats.
While the long-term implications of generative artificial intelligence remain to be seen, new research published today by cybersecurity vendor Sophos suggests that security teams can use GPT-3 to defend against cyberattacks.
Sophos researchers – including Sophos AI Chief Data Scientist Younghoo Lee – used GPT-3's large language models to develop a natural language query interface for hunting malicious activity in the telemetry of XDR security tools, detecting spam emails, and analyzing potentially malicious "living off the land" binary (LOLBin) command lines.
More broadly, the Sophos research suggests that generative AI can play an important role in triaging security events in SOCs, helping defenders better manage their workloads and detect threats faster.
Identifying malicious activity
The announcement comes as security teams struggle to keep up with the volume of alerts generated by tools across the network, with 70% of SOC teams reporting that managing IT threat alerts emotionally affects their home lives.
"One of the growing challenges in security operations centers is the sheer volume of incoming 'noise,'" said Sean Gallagher, principal threat researcher at Sophos. “There are too many notifications and detections to sort through, and many companies are struggling with limited resources. We've proven that with something like GPT-3, we can simplify some labor-intensive proxies and give back valuable time to defenders.”
The Sophos pilot demonstrates that security teams can use "few-shot learning" to train the GPT-3 language model with just a handful of data samples, without the need to collect and process large amounts of pre-classified data.
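To illustrate the idea, few-shot prompting amounts to showing the model a handful of labeled samples and letting it continue the pattern. The sketch below builds such a prompt for spam detection in pure Python; the sample emails and the `build_prompt` helper are hypothetical illustrations, not Sophos's actual code or data.

```python
# Hypothetical few-shot prompt for spam classification, in the spirit of
# the Sophos pilot. A real deployment would send this prompt to a large
# language model; here we only show how the prompt is assembled.

EXAMPLES = [
    ("Your account has been locked. Click here to verify your password.", "spam"),
    ("Attached are the meeting notes from Tuesday's standup.", "ham"),
    ("You have won a $1,000 gift card! Reply with your bank details.", "spam"),
]

def build_prompt(message: str) -> str:
    """Assemble a few-shot prompt: a handful of labeled samples
    followed by the new message, ending where the model should answer."""
    lines = ["Classify each email as spam or ham.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Email: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Email: {message}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_prompt("Urgent: confirm your wire transfer now")
print(prompt.count("Label:"))  # prints 4: one per labeled sample plus the query
```

Because only a few samples are embedded in the prompt itself, no large pre-classified training corpus is needed, which is the practical appeal of the approach.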
Using GPT-3 as a cybersecurity co-pilot
In the study, researchers deployed a natural language query interface where a security analyst could filter data collected by security tools for malicious activity by entering plain-text queries in English.
For example, a user could enter a command like "show me all processes named powershell.exe that were run by root" and have it translated into a SQL query against the XDR data, without having to understand the underlying database structure.
This approach gives defenders the ability to filter data without having to use programming languages like SQL, while offering a "co-pilot" to help reduce the burden of manually searching for threat data.
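A natural-language-to-SQL co-pilot of this kind can also be driven by few-shot prompting: pairs of English questions and their SQL translations teach the model the pattern, and it completes the query for a new question. The sketch below assembles such a prompt; the table and column names are assumptions for illustration, not the actual XDR schema.

```python
# Hypothetical NL-to-SQL prompt a query "co-pilot" might send to GPT-3.
# The schema (tables `processes`, `connections` and their columns) is an
# assumption made for this sketch.

FEW_SHOT = [
    ("show me all processes named powershell.exe that were run by root",
     "SELECT * FROM processes WHERE name = 'powershell.exe' AND username = 'root';"),
    ("list connections to port 4444 from the last hour",
     "SELECT * FROM connections WHERE dest_port = 4444 AND timestamp > NOW() - INTERVAL 1 HOUR;"),
]

def nl_to_sql_prompt(question: str) -> str:
    """Build a few-shot prompt pairing plain-English questions with SQL,
    ending at 'SQL:' so the model continues with the translated query."""
    parts = ["Translate each question into a SQL query over the XDR telemetry tables.", ""]
    for q, sql in FEW_SHOT:
        parts += [f"Question: {q}", f"SQL: {sql}", ""]
    parts += [f"Question: {question}", "SQL:"]
    return "\n".join(parts)
```

The analyst never writes SQL directly: the model's completion of the prompt is the query that gets run against the telemetry store.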
"We are already working to incorporate some prototypes into our products and have made the results of our efforts available on our GitHub for those interested in testing GPT-3 in their own analytical environments," said Gallagher. "We believe that in the future, the GPT-3 may very well become the standard co-pilot for security experts."
It's worth noting that the researchers also found that using GPT-3 to filter threat data was considerably more effective than using alternative machine learning models. Given GPT-4's superior processing capabilities, the approach is likely to become faster and more effective still with the next iteration of generative AI.
While these pilots remain in their infancy, Sophos has published the results of its spam filtering and command-line analysis tests on the SophosAI GitHub page for other organizations to follow suit.
