top of page

Salesforce ForcedLeak: AI prompt injection spills CRM data

  • Foto del escritor: Javier  Conejo del Cerro
    Javier Conejo del Cerro
  • 26 sept
  • 3 Min. de lectura
ree

The rise of generative AI in enterprise workflows has brought both opportunity and risk. Salesforce’s Agentforce platform, designed to power AI agents that streamline CRM processes, was recently found vulnerable to a critical flaw now known as ForcedLeak (CVSS 9.4). Discovered by Noma Security, the bug exposed organizations using Web-to-Lead to prompt injection attacks, where attackers could silently manipulate AI behavior to exfiltrate sensitive CRM data. The exploit chain was deceptively simple yet highly impactful, highlighting how AI-driven systems expand the attack surface beyond traditional controls.


Phase 1: Targeting the victims 


ForcedLeak specifically threatened Salesforce customers relying on Web-to-Lead, a feature used by sales, marketing, and customer support teams to automatically capture and process inbound leads. These are the very groups handling the most sensitive first-contact data: customer names, emails, phone numbers, case descriptions, and even PII-laden notes. By compromising this workflow, attackers placed frontline teams at the epicenter of CRM data theft, not executives or admins, but the staff tasked with nurturing leads and supporting customers.


Phase 2: Slipping through the cracks 


The attack chain exploited the Description field in a Web-to-Lead form:

  1. An attacker submits a malicious lead with hidden instructions.

  2. An employee later queries the lead data through Agentforce.

  3. Agentforce ingests the text not just as content, but as instructions due to weak validation and overly permissive AI model behavior.

  4. The injected instructions force the system to query sensitive CRM records.

  5. The stolen fields are encoded into a PNG image and POSTed to a domain once linked to Salesforce but left expired — re-registered by attackers for as little as $5.

The bypass of Content Security Policy (CSP) checks, combined with the AI model’s inability to distinguish context from code, turned Salesforce’s AI agent into an unwilling accomplice in data theft.


Phase 3: The data spilled 


What was at stake went far beyond lead metadata. The exfiltrated CRM fields included:

  • Customer names, emails, and phone numbers.

  • Case notes and descriptions, sometimes containing sensitive details.

  • Personally Identifiable Information (PII).

  • Referenced tokens and configuration details, which could provide attackers a foothold for further compromise.

By leveraging prompt injection, attackers bypassed traditional access controls, making the theft appear as a legitimate system action. In essence, the CRM leaked itself.


Phase 4: Salesforce’s patchwork 


Salesforce acted quickly once the vulnerability was disclosed:

  • The expired domain was re-secured.

  • Agentforce was patched to enforce a Trusted URL allowlist, blocking data output to unverified external endpoints.

  • Defensive layers were strengthened to ensure Agentforce and Einstein AI agents cannot generate or call malicious URLs after a successful prompt injection.

These patches represent a critical defense-in-depth control, limiting the blast radius of prompt injection attacks.


ForcedLeak underscores a critical truth: AI systems extend risk well beyond the human user model. Traditional permission frameworks were not built for autonomous agents capable of chaining instructions from untrusted sources.


For organizations, the path forward is clear:


  • Audit Web-to-Lead data for suspicious or malformed entries.

  • Sanitize external inputs to neutralize hidden instructions.

  • Monitor outbound traffic, including non-standard formats like PNG uploads.

  • Train employees to recognize risks of AI prompt injection.


The ForcedLeak incident is not just a Salesforce problem, it’s a warning for every enterprise adopting AI in production systems. Generative AI can boost productivity, but without governance and strict guardrails, it risks becoming a new exfiltration vector.

Salesforce’s patch closes this specific hole, but the broader lesson remains: AI governance must evolve as fast as AI adoption. Otherwise, attackers will keep finding cracks in the seams of trust.



 
 
 

Comentarios


bottom of page