HackerOne: 48% of Security Professionals Believe AI Is Risky


A recent HackerOne survey shed light on the growing concerns AI brings to the cybersecurity landscape. The report drew insights from 500 security experts, a community survey of 2,000 members, feedback from 50 customers, and anonymized platform data.

Their most significant concerns related to AI were:

  • Leaked training data (35%).
  • Unauthorized usage (33%).
  • The hacking of AI models by outsiders (32%)

The survey also found that 48% believe AI poses the most significant security risk to their organization. These fears highlight the urgent need for companies to reassess their AI security strategies before vulnerabilities become real threats.

How the security research community changed in the age of AI

The HackerOne report indicated that AI can pose a threat — and the security community has been aiming to counter that threat. Among those surveyed, 10% of security researchers specialize in AI. In fact, 45% of security leaders consider AI among their organizations’ greatest risks. Data integrity, in particular, was a concern.

“AI is even hacking other AI models,” said Jasmin Landry, a security researcher, and HackerOne pentester, also known as @jr0ch17, in the report.

Of those surveyed, 51% say basic security practices are being overlooked as companies hurry to include generative AI. Only 38% of HackerOne customers felt confident in defending against AI threats.

Most commonly reported AI vulnerabilities include logic errors and LLM prompt injection

As a security platform, HackerOne has seen the number of AI assets included in its programs grow by 171% over the past year.

The most commonly reported vulnerabilities in AI assets are:

  • General AI safety (such as preventing AI from generating harmful content) (55%).
  • Business logic errors (30%).
  • LLM prompt injection (11%).
  • LLM training data poisoning (3%).
  • LLM sensitive information disclosure (3%).

HackerOne emphasized the importance of the human element in protecting systems from AI and keeping these tools safe.

“Even the most sophisticated automation can’t match the ingenuity of human intelligence,” said Chris Evans, HackerOne CISO and chief hacking officer, in a press release. “The 2024 Hacker-Powered Security Report proves how essential human expertise is in addressing the unique challenges posed by AI and other emerging technologies.”

SEE: For the third quarter in a row, executives are more concerned about AI-assisted attacks than any other threat, Gartner reported.

Outside AI, cross-site scripting problems occur the most

Some things haven’t changed: Cross-site scripting (XSS) and misconfigurations are the weaknesses most reported by the HackerOne community. The respondents consider penetration tests and bug bounties the best ways to identify issues.

AI tends to generate false positives for security teams

Further research from a HackerOne-sponsored SANS Institute report in September revealed that 58% of security professionals believe that security teams and threat actors could find themselves in an “arms race” to leverage generative AI tactics and techniques in their work.

Security professionals in the SANS survey said they have successfully used AI to automate tedious tasks (71%). However, the same participants acknowledged that threat actors could exploit AI to make their operations more efficient. In particular, respondents “were most concerned with AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%).”

SEE: Security leaders are getting frustrated with AI-generated code.

“Security teams must find the best applications for AI to keep up with adversaries while also considering its existing limitations — or risk creating more work for themselves,” Matt Bromiley, an analyst at the SANS Institute, said in a press release.

The solution? AI implementations should undergo an external review. Over two-thirds of those surveyed (68%) chose “external review” as the most effective way to identify AI safety and security issues.

“Teams are now more realistic about AI’s current limitations” than they were last year, said HackerOne Senior Solutions Architect Dane Sherrets in an email to TechRepublic. “Humans bring a lot of important context to both defensive and offensive security that AI can’t replicate quite yet. Problems like hallucinations have also made teams hesitant to deploy the technology in critical systems. However, AI is still great for increasing productivity and performing tasks that don’t require deep context.”

Further findings from the SANS 2024 AI Survey, released this month, include:

  • 38% plan to adopt AI within their security strategy in the future.
  • 38.6% of respondents said they have faced shortcomings when using AI to detect or respond to cyber threats.
  • 40% cite legal and ethical implications as a challenge to AI adoption.
  • 41.8% of companies have faced pushback from employees who do not trust AI decisions, which SANS speculates is “due to lack of transparency.”
  • 43% of organizations currently use AI within their security strategy.
  • AI technology within security operations is most often used in anomaly detection systems (56.9%), malware detection (50.5%), and automated incident response (48.9%).
  • 58% of respondents said AI systems struggle to detect new threats or respond to outlier indicators, which SANS attributes to a lack of training data.
  • Of those who reported shortcomings with using AI to detect or respond to cyber threats, 71% said AI generated false positives.

HackerOne’s tips for improving AI security

HackerOne recommends:

  • Regularly testing, validation, verification, and evaluation throughout an AI model’s life cycle — from training to deployment and use.
  • Researching whether government or industry-specific AI compliance requirements are relevant to your organization and establishing an AI governance framework.

HackerOne also strongly recommended that organizations communicate about generative AI openly and provide training on relevant security and ethical issues.

HackerOne released some survey data in September and the full report in November. This updated article considers both.



Source link

What do you think?

Your email address will not be published. Required fields are marked *

No Comments Yet.