Andrea Vallone, a senior safety researcher at OpenAI, has moved to Anthropic. She'll be working on the alignment team, which focuses on risks posed by AI models. Vallone spent three years at OpenAI, where she founded the "Model Policy" research team and contributed to major projects including GPT-4, GPT-5, and the company's reasoning models.
Source: OpenAI safety researcher joins Anthropic's alignment team
