Artificial intelligence is nothing new for the cybersecurity industry.
For years, companies have promised that their latest solutions would stop malicious hackers in their tracks before they could do any damage.
Government officials see the need to prepare for that day, too.
AI heavyweights including ChatGPT creator OpenAI, Google and Microsoft have signed on to take part.
First place earns a prize of $4 million.
Experts worry that AI will enable massively scaled phishing operations that are highly customized and highly convincing.
The company started a decade ago in Cambridge, England, as an AI-research organization.
It now uses the technology in its cybersecurity operations.
“Think about the CGI in a video game 10 years ago,” he said.
“Ten years from now, who knows how good AI will be?”
The company also can unleash the offensive AI on its clients in simulations.
The idea isn’t to fool companies, just to show them where they need to get better.
In one instance, they used AI software to get their fictitious bank to approve a fraudulent loan program.
“All of the sudden these things fall apart quite quickly,” he said.
“Either they leak sensitive information, say something embarrassing, or potentially tell you malware is safe.
There are lots of potential consequences.”
But it’s unclear how that actually could be enforced.
The internal algorithms of AI systems, like ChatGPT, are effectively black boxes, he said.
There just aren’t enough qualified professionals to fill all of the open cybersecurity jobs.
“There are genuine uses for AI in cybersecurity that are amazing,” he said.
“They’ve just gotten buried because we’re so focused on the nonsense.”