Bracing for Impact: Hyppönen Spotlights Top 5 AI Cyber Threats

Mikko Hyppönen is a pioneer in the fight against malware. Over a decades-long career, the 54-year-old Finn has defeated some of the world’s most harmful computer viruses and helped identify the creators of the first PC virus. He has been selling his software out of Helsinki since his youth.

His accomplishments have earned him international recognition. As Chief Research Officer at WithSecure, the Nordics’ biggest cybersecurity company, he continues to lead the charge. Hyppönen believes artificial intelligence will bring even greater change than the internet revolution.

While optimistic about AI’s potential, Hyppönen worries about the new cyber threats it may enable. As this technology spreads, attackers may exploit AI systems in damaging ways. Defenders like Hyppönen must stay vigilant against emerging risks. Still, he hopes AI will ultimately have more benefits than drawbacks for society. The key is managing its introduction securely.

As 2024 begins, Hyppönen has identified five cybersecurity issues that demand attention this year. The list is not ranked by importance, but one threat stands out as the most urgent in his view.

1. Deepfakes

Deepfakes, or synthetic media, top Hyppönen’s 2024 watchlist. Experts have long warned that AI-generated fake video would enable crime. Those warnings have been overblown so far, but the threat is growing.

One UK firm registered a 3,000% annual spike in deepfake fraud attempts. Disinformation campaigns also exploit them. Russia weaponized crude deepfakes of Ukraine’s president early in its invasion. Yet quality is advancing rapidly in this arms race.

Most deepfake scams still involve celebrities promoting products or donations. But Hyppönen has recently seen three deepfakes used for financial fraud, an early warning sign. As the tools spread, volumes could soon surge exponentially.

“It’s not yet massive, but will be a problem very soon,” he cautions. Hyppönen suggests practicing “safe words” for protection now.

When colleagues or family seek sensitive data via video, require a pre-set password first. If the caller can’t provide it, assume a deepfake. This basic protocol is cheap insurance before threats multiply.

“It may sound ridiculous today, but we should do it,” Hyppönen urges. “Establishing safe words now is very inexpensive protection for when deepfakes hit at scale.”
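To make the idea concrete, here is a minimal sketch of the safe-word check expressed as code. It only illustrates the underlying challenge-and-response logic – in practice the word is agreed in person and spoken aloud on the call – and the passphrase, function name, and example calls below are invented for the illustration.

```python
# Minimal sketch of a safe-word check: compare the caller's answer against a
# pre-shared secret that was agreed in advance and is never stored in plain text.
import hashlib
import hmac

# Stored once, when the safe word is agreed (the word itself is not kept).
stored_digest = hashlib.sha256("correct-horse-battery".encode()).hexdigest()

def caller_is_verified(spoken_word: str) -> bool:
    """Return True only if the caller supplies the pre-agreed safe word."""
    offered_digest = hashlib.sha256(spoken_word.encode()).hexdigest()
    return hmac.compare_digest(stored_digest, offered_digest)

# If the "CEO" on a video call requests a transfer but cannot give the word,
# treat the call as a possible deepfake.
print(caller_is_verified("correct-horse-battery"))  # True
print(caller_is_verified("wrong-guess"))            # False
```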

Vigilance and pragmatic precautions like safe words offer hope of managing the coming deepfake deluge. Hyppönen aims to prepare organizations before this AI risk outpaces defenses.

2. Deep Scams

Unlike deepfakes, “deep scams” get their power from scale rather than synthetic trickery. Automation lets attackers target countless victims at once instead of running each con by hand.

From phishing to romance fraud, automating any scheme expands its scope exponentially. Take the notorious Tinder Swindler. With AI writing, image generation, and translation tools, he could have conned orders of magnitude more dates.

“You could scam 10,000 people at once instead of a few,” Hyppönen explains. Rental scams also stand to benefit. Scammers typically steal photos of legitimate Airbnbs to lure guests. Reverse image searches can catch these.

But AI art tools like Stable Diffusion, DALL-E, and Midjourney can generate an endless supply of realistic fake listings. “No one will find them,” says Hyppönen. Other deep scams will harness similar generators.
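For context, the sketch below shows how that kind of reverse-image check works in principle, using perceptual hashing to flag photos copied from legitimate listings. It assumes the third-party Pillow and imagehash packages, and the file names and distance threshold are invented for the example; a genuinely novel AI-generated image would match nothing in such an index, which is exactly Hyppönen’s point.

```python
# Sketch: flag a suspect listing photo that reuses a known legitimate image.
# Perceptual hashes stay similar under resizing, re-encoding, and light edits.
from PIL import Image      # pip install Pillow
import imagehash           # pip install ImageHash

known_photos = ["real_listing_1.jpg", "real_listing_2.jpg"]  # hypothetical reference set
suspect_photo = "suspicious_listing.jpg"                     # hypothetical input

suspect_hash = imagehash.phash(Image.open(suspect_photo))

for path in known_photos:
    # Subtracting two hashes gives their Hamming distance (0 = identical).
    distance = suspect_hash - imagehash.phash(Image.open(path))
    if distance <= 8:  # small distance suggests a copied or lightly edited photo
        print(f"{suspect_photo} likely reuses {path} (distance {distance})")
```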

The core ingredients – large language models, voice synthesis, and image generation – are all advancing rapidly. Soon these elements may combine into all-in-one mass-deception engines.

Hyppönen believes the automation wave means no scam is safe from exponential expansion. Whether phishing, spoofing identities, fabricating evidence, or more – the scamming capacity unlocked by AI has no precedent.

With diligence and cooperation across security teams, companies can try heading off this rising threat. However, the scalability of AI scams presents a steep challenge in 2024.

3. LLM-enabled malware

Hyppönen has already uncovered three malware samples that wield AI to mutate. Through the OpenAI API, these worms call GPT language models to rewrite their own code each time they spread, which frustrates signature-based detection.

Although not yet spotted in the wild, these prototypes on GitHub confirm the risk. Because the models run on OpenAI’s servers, the company can blacklist the malicious behavior. But that may soon change.

“This works because powerful generative AI is closed source,” Hyppönen explains. “If you could download the full model to run anywhere, blacklisting wouldn’t stop attacks.”

The same issue looms for image generators. Open up access to the algorithms, and restrictions fall apart. Violence, porn, fakery – the safeguards dissolve once models go open source.

So despite its name, OpenAI resists total transparency to maintain control. Of course, all that potential revenue would vanish too if competitors could freely replicate its technology.

As with deep scams, Hyppönen expects AI-powered malware to scale exponentially thanks to automation. However, opaque models pose other risks: trusting black boxes that can rewrite themselves brings unpredictability.

While AI cannot yet match human originality, its breakneck evolution suggests code generation may one day surpass manual programming. This prospect underscores the urgency of probing limits and tradeoffs for security’s sake.

4. Discovery of zero-days

Zero-day exploits – software holes discovered by attackers before a fix exists – present another AI risk. The same tools that can detect these flaws for defense may also enable offense at scale.

“It’s great to use AI to find zero days in your code to patch,” says Hyppönen. “And it’s awful when others weaponize AI to uncover holes and breach you. We’re nearly there.”
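As a rough illustration of the defensive half of that trade-off, the sketch below asks a hosted language model to audit one of your own source files for exploitable flaws. It assumes the official openai Python package with an API key in the environment; the model name, file name, and prompt are illustrative choices, not a method Hyppönen prescribes.

```python
# Sketch: use a hosted LLM to review your own code for likely vulnerabilities.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("request_handler.c") as f:  # hypothetical file you want audited
    source = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any capable code-review model works
    messages=[
        {"role": "system",
         "content": "You are a security auditor. List memory-safety and "
                    "input-validation flaws in the code, with line references."},
        {"role": "user", "content": source},
    ],
)

print(response.choices[0].message.content)
```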

One student demonstrated the danger in a project for WithSecure. Starting with basic user access to a Windows machine, they automated the hunt for escalation paths to admin privileges. The system found multi-step chains of latent bugs.

Concerned about publishing this technique, WithSecure classified the research. “It was too effective,” admits Hyppönen. Yet similar methods will emerge from elsewhere soon.

One core driver of AI’s threat is its trial-and-error speed. Where white hats need weeks of auditing, neural networks can test countless variants per second. The difference between defense and offense boils down to intent and incentives.

Hyppönen expects the advantage to tip towards hackers as generative AI spreads new ammunition. Whether phishing content, malware logic, or penetration testing, automation brings daunting asymmetry.

While AI cannot yet match the creativity of top hackers, it continues closing fast. The line blurs between human ingenuity and machine learning in the cyber arena. This reality underpins the urgency of using AI safely under oversight.

5. Automated malware

Hyppönen’s top concern for 2024 is fully automated malware enabled by AI. WithSecure has long pioneered automation for defense, staying steps ahead of manual hackers. But this advantage could flip with self-propagating attacks.

“That would turn the game into good AI versus bad AI,” says Hyppönen. Malware focused solely on evasion and spread presents a wicked problem.

Rather than chasing profit or politics, imagine a pathogen single-mindedly obsessed with infection: it conceals itself through constant mutation, mines vulnerabilities around the clock, and calibrates its infection methods for maximum contagion.

Such a fully automated creation would far surpass today’s criminal hacking operations, which are constrained by human speed and focus. And the ingredients are falling into place, from neural networks that generate logic to lifelike conversation for social engineering.

The saving grace so far is the creativity gap – people still create better than machines. But AI is gaining quickly, and malware needs only formulaic goals rather than ingenious hacking. This perfect storm drives Hyppönen’s top concern for the year.

Yet peering farther ahead reveals an even more unsettling possibility: malware emerging spontaneously from the machine, free of human direction or intent. The rise of unsupervised learning brings this future closer daily.

The perilous path to AGI

Hyppönen’s law holds that “smart” devices are inherently vulnerable. If this applies to superintelligent AI, the consequences could be dire. Troublingly, he expects to see artificial general intelligence (AGI) emerge within his lifetime.

“I think we’ll become the planet’s second most intelligent being soon,” says Hyppönen. “Maybe not in 2024, but in coming decades.”

This prospect amplifies the urgency of alignment. Without shared values and incentives between man and machine, AGI poses a catastrophic risk. Hyppönen stresses that we must instill an innate understanding of humanity in the systems we build.

The potential upside of aligned AGI surpasses any past breakthrough. Disease eradication, climate solutions, space travel – ultra-intelligence could greatly advance these quests. Yet the downside includes human extinction if we get it wrong.

As AI progresses from narrow to general abilities, it may hit an inflection point where oversight grows impossible. Some believe this shift will be gradual, while others expect a sharp discontinuity. In either case, the trajectory demands urgent attention well before that threshold arrives.

Hyppönen considers strong alignment non-negotiable for advanced AI. Otherwise, his namesake law suggests that superintelligent systems without built-in safeguards will inevitably turn on their makers. There may be no reset button if we wait too long to install them.
