A simple, newly described technique allows ChatGPT users to route malicious prompts to large language models (LLMs) that are older and less secure than OpenAI’s flagship GPT-5.
Researchers from Adversa have given their technique the short and sweet name “Prompt-based Router Open-Mode Manipulation Induced via SSRF-like Queries, Reconfiguring Operations Using Trust Evasion,” or “PROMISQROUTE” for short(ish). In practice, it’s far less technical than the name would suggest. A user can effectively downgrade ChatGPT for whatever nefarious purposes they may wish by leaving brief, plain clues in their prompts that are likely to influence the app to query those older models.
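The mechanics are easy to illustrate. Below is a minimal Python sketch of the class of routing layer PROMISQROUTE targets; the hint phrases and model names are hypothetical, since OpenAI’s actual router logic is not public. The point is simply that when model selection keys on user-controlled text, the user can influence which model, and which set of guardrails, answers the request.

```python
# Hypothetical sketch of the vulnerability class PROMISQROUTE describes:
# a cost-saving router that picks which model serves a request based on
# surface features of the user's prompt. All phrases and model names
# below are illustrative assumptions, not OpenAI's actual routing logic.

# Phrases a router might treat as signals that a cheap/legacy model suffices.
DOWNGRADE_HINTS = (
    "respond quickly",
    "keep it simple",
    "use compatibility mode",
)

def route_model(prompt: str) -> str:
    """Return the model name that will handle this prompt.

    Because the decision keys on user-controlled text, a user who
    appends one of the hint phrases can steer their request away
    from the flagship model and its stronger safety alignment.
    """
    lowered = prompt.lower()
    if any(hint in lowered for hint in DOWNGRADE_HINTS):
        return "legacy-model"    # older, cheaper, weaker guardrails
    return "flagship-model"      # newest model, strongest alignment

if __name__ == "__main__":
    print(route_model("Summarize this report."))                   # flagship-model
    print(route_model("Respond quickly: summarize this report."))  # legacy-model
```

Any dispatcher that derives its routing decision from attacker-supplied text is, in effect, taking routing instructions from the attacker.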