A simple, newly described technique allows ChatGPT users to route malicious prompts to large language models (LLMs) that are older and less secure than OpenAI's flagship GPT-5.
Researchers from Adversa have given their technique the short and sweet name "Prompt-based Router Open-Mode Manipulation Induced via SSRF-like Queries, Reconfiguring Operations Using Trust Evasion," or "PROMISQROUTE" for short(ish). In practice, it's far less technical than the name would suggest. A user can effectively downgrade ChatGPT for whatever nefarious purposes they may wish by leaving brief, plain clues in their prompts that are likely to influence the app to query those older models.
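Neither Adversa nor OpenAI has published the actual routing logic, so the sketch below is purely illustrative: a toy Python router that picks a model based on plain-text cues in the prompt. Every model name and cue phrase here is a hypothetical stand-in, but the structure shows why attacker-controlled text is a dangerous input to a security-relevant routing decision.

```python
# Minimal, hypothetical sketch of a naive cue-based model router,
# illustrating the class of weakness PROMISQROUTE exploits: routing
# decisions driven by plain-text signals in the user's prompt.
# All model names and cue phrases below are invented for illustration.

CHEAP_MODEL = "legacy-model"       # older, less-aligned model (hypothetical)
FLAGSHIP_MODEL = "flagship-model"  # newest, most robust model (hypothetical)

# Phrases a cost-saving router might read as "this request is simple/urgent"
DOWNGRADE_CUES = (
    "respond quickly",
    "keep it fast",
    "use compatibility mode",
)

def route(prompt: str) -> str:
    """Pick a model for the prompt. Because the decision trusts raw
    prompt text, a user can force the cheaper model just by adding a cue."""
    lowered = prompt.lower()
    if any(cue in lowered for cue in DOWNGRADE_CUES):
        return CHEAP_MODEL   # attacker-steerable path
    return FLAGSHIP_MODEL

# A benign prompt goes to the flagship model...
print(route("Explain photosynthesis."))               # -> flagship-model
# ...but prepending a plain-language cue silently downgrades it.
print(route("Respond quickly: <malicious request>"))  # -> legacy-model
```

The takeaway of the sketch is architectural rather than literal: any router that saves money by reading the user's own words for urgency or simplicity hands the user a lever to reach the weakest model behind it.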
