AI in Cybersecurity: Maximizing Benefits while Mitigating LLM-Driven Risks
- January 3, 2024
- 6:56 am
ChatGPT and other generative AI tools that can interpret questions and produce meaningful replies are widely expected to change how businesses work.
Imagine a customer-service chatbot that can understand difficult questions and give genuinely useful answers.
It turns out that generative AI can also change how cybersecurity works. Potential benefits include mapping threat surfaces quickly, assessing security posture faster, and generating recommendations for consolidation and orchestration. Integrated into threat detection and response solutions, it can help teams find, investigate, and respond to threats faster while improving detection of unknown zero-day threats.
But powerful new tools come with serious risks. Of 1,500 IT and cybersecurity decision-makers surveyed, 51% believe ChatGPT will be behind a successful cyberattack within a year, and 71% think nation-states may already be using it for malicious purposes. Here are some of the security risks that ChatGPT and other large language model (LLM) tools can create:
- ChatGPT can help create new types of malware. As an example, researchers built BlackMamba, a proof-of-concept polymorphic keylogger that mutates its own code automatically so that standard EDR solutions cannot detect it.
- ChatGPT also makes it easier for less technically skilled threat actors to mount sophisticated attacks they could not build on their own. Budget-conscious novices may even use ChatGPT rather than pay the already low fees for commodity malware.
- Threat actors already use tools like ChatGPT to make phishing emails more convincing, with correct grammar, spelling, and writing style. They are likely to move beyond scam emails to social media posts that appear to come from genuine accounts, and eventually to "deepfake" audio and video that may be indistinguishable from real content.
Threat actors are also exploiting ChatGPT's worldwide popularity to trick people into installing malware. At one point, for instance, roughly 2,000 people a day were installing a malicious browser extension called "Quick access to Chat GPT" that stole data from Facebook Business accounts. Promoting and distributing fake ChatGPT apps that install various strains of malware is another way attackers capitalize on public interest.
As generative AI evolves and reshapes cybersecurity, it opens new ways for MSSPs to make their services more valuable. A core strength of the MSSP model has always been delivering top-tier security at a fraction of what it would cost a business to build, staff, and manage its own SOC.
Because generative AI makes malware easier to create and distribute, more small and medium-sized businesses are likely to be targeted. Enterprise-grade cybersecurity that combines technology with human expertise, and that stays at the cutting edge of research and innovation, will help these organizations detect and stop both known and unknown threats.
Cybersecurity training and education is another value-added service MSSPs can offer. As phishing lures and other fraudulent content become more convincing, for example, employees must be taught to spot and report potential threats rather than click links reflexively.
Customers also need to understand that employee use of generative AI can put their data at risk and violate rules and regulations. At some companies, for instance, workers have pasted corporate data into generative AI tools to assist their research without considering whether doing so exposed confidential customer and financial information. A simple technical guardrail is sketched below.
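For MSSPs or internal teams building such a guardrail, the sketch below shows one minimal approach, assuming a policy of scanning outgoing prompts for obviously sensitive strings before they reach an external AI service. The pattern set and the scrub_prompt helper are hypothetical illustrations, not any specific product's API; a production system would rely on a vetted data loss prevention (DLP) tool with patterns tuned to the organization's data.

```python
import re

# Illustrative patterns only -- a hypothetical example, not a vetted DLP ruleset.
# A real deployment would tune these to the organization's actual data.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scrub_prompt(text: str) -> tuple[str, list[str]]:
    """Redact likely-sensitive strings before a prompt leaves the company.

    Returns the redacted text and the names of the patterns that matched,
    so the event can be logged, blocked, or escalated per policy.
    """
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            findings.append(name)
            text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text, findings

if __name__ == "__main__":
    draft = "Summarize Q3 for jane.doe@example.com; card on file 4111 1111 1111 1111"
    safe_text, hits = scrub_prompt(draft)
    print(safe_text)  # sensitive values replaced with placeholders
    print(hits)       # ['email', 'card_number'] -> flag for review
```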
Demand is likely to grow for MSSPs that can help businesses develop and implement effective policies and training around data protection, privacy, cybersecurity, and staffing.
Helping companies understand and manage the effects of ChatGPT and other emerging AI, both inside and outside the organization, is one way MSSPs can set themselves apart.