A simple, newly described technique allows ChatGPT users to route malicious prompts to large language models (LLMs) older and less secure than OpenAI’s flagship GPT-5.
Researchers from Adversa have given their technique the short and sweet name ”Prompt-based Router Open-Mode Manipulation Induced via SSRF-like Queries, Reconfiguring Operations Using Trust Evasion,” or ”PROMISQROUTE” for short(ish). In practice, it’s far less technical than the name would suggest. A user can effectively downgrade ChatGPT for whatever nefarious purposes they may wish by leaving brief, plain clues in their prompts that are likely to influence the app to query those older models.
TV4 lämnar marknätet – blir betalkanal vid årsskiftet
Vid årsskiftet sker en historisk förändring för svensk tv. TV4 lämnar marknätet och upphör därmed att vara en fri tv-kanal som hushåll kan ta emot med vanlig antenn. I stället samlar kanalen sina resurser kring streamingtjänsten TV4 Play. Där går…