© 2026 Silvia Seceleanu


Frontier AI Paper Briefings

Updated March 11, 2026
Models
Jun 2018
GPT · Scaling
★4. Improving Language Understanding (GPT-1)
The paper that started the GPT paradigm
Research Paper
Feb 2019
GPT · Scaling · Safety
★6. Language Models are Unsupervised Multitask Learners (GPT-2)
The staged release that changed AI safety discourse
Research Paper
Jan 2020
Scaling
★7. Scaling Laws for Neural Language Models
The math behind 'bigger is better'
Research Paper
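As shorthand for the paper's central result: when data and compute are not bottlenecks, test loss falls as a power law in non-embedding parameter count N. This is a sketch of the single-variable form with constants as reported by Kaplan et al., not the full joint fit:

```latex
% Single-variable scaling law in parameters N:
L(N) = \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076,\quad N_c \approx 8.8 \times 10^{13}
% Analogous power laws hold for dataset size D and training compute C.
```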
May 2020
GPT · Scaling
★8. Language Models are Few-Shot Learners (GPT-3)
The model that made the world pay attention
Research Paper
Feb 2021
Multimodal · Scaling
★11. Learning Transferable Visual Models (CLIP)
Connecting vision and language at scale
Research Paper
Sep 2022
Multimodal · Voice
15. Robust Speech Recognition via Large-Scale Weak Supervision (Whisper)
Scale applied to speech recognition
Research Paper
Mar 2023
GPT · Scaling · Multimodal
★18. GPT-4 Technical Report
State-of-the-art performance, unprecedented secrecy
Research Paper
Mar 2023
Claude
5. Claude 1 Launch
Anthropic's first commercial product, applying Constitutional AI at production scale for the first time.
Product Announcement
Jul 2023
Claude · Scaling
6. Claude 2 Launch
Doubled context to 100K tokens and added code generation, narrowing the gap with GPT-4.
Product Announcement
Sep 2023
GPT · Multimodal · Safety
21. GPT-4V(ision) System Card
Safety evaluation for multimodal AI
Research Paper
Mar 2024
Claude · Scaling
12. Claude 3 Family Launch (Haiku, Sonnet, Opus)
Launched three model tiers that beat GPT-4 on key benchmarks for the first time.
Product Announcement
May 2024
GPT · Multimodal · Voice
26. Hello GPT-4o
The omnimodal model
Product Announcement
Sep 2024
Reasoning · GPT
★28. Learning to Reason with LLMs (o1)
The model that thinks before it speaks
Product Announcement
Dec 2024
Reasoning · Safety · Evaluation
29. OpenAI o1 System Card
Safety evaluation of reasoning models
Research Paper
Feb 2025
Claude · Scaling
25. Claude 3.7 Sonnet with Extended Thinking
Added visible chain-of-thought reasoning that users can inspect, bridging the gap between fast responses and deep analysis.
Product Announcement
Apr 2025
Reasoning · GPT · Coding
33. Introducing o3 and o4-mini
Reasoning models get tools
Product Announcement
May 2025
Claude · Coding · Agents
28. Claude 4 Family Launch (Opus 4 & Sonnet 4)
Opus 4 and Sonnet 4 set new benchmarks in agentic coding, with Claude Code and the Agent SDK completing the developer stack.
Product Announcement
Aug 2025
GPT
36. Introducing gpt-oss
OpenAI goes open-weight for the first time since GPT-2
Product Announcement
Aug 2025
GPT · Scaling · Reasoning
★34. GPT-5 / Codex CLI / Research Agent
The convergence of scale and reasoning
Product Announcement
Oct 2025
Claude · Scaling
36. Claude Sonnet 4: 1M Token Context
Product Announcement
Oct 2025
Agents · MCP
36. Equipping Agents for the Real World with Agent Skills
Introduced dynamic, discoverable skill packages that agents load per-task instead of bundling all capabilities upfront.
Engineering Blog
Dec 2025
GPT
37. GPT-5.2 / Codex
Expert-level performance across professional tasks
Product Announcement
Mar 2026
GPT
38. GPT-5.4
Native computer use meets frontier reasoning
Product Announcement
Products
Apr 2016
Robotics
2. OpenAI Gym
Standardized RL benchmarks and environments
Blog Post
Jan 2021
Multimodal · GPT
10. Zero-Shot Text-to-Image Generation (DALL-E)
When language models learned to see and create
Research Paper
Aug 2021
GPT · Coding
12. Evaluating Large Language Models Trained on Code (Codex)
Teaching GPT to write code
Research Paper
Apr 2022
Multimodal
14. DALL-E 2: Hierarchical Text-Conditional Image Generation with CLIP Latents
Photorealistic text-to-image generation
Research Paper
Nov 2022
GPT · RLHF · Alignment
★16. ChatGPT: Optimizing Language Models for Dialogue
The product that changed everything
Product Announcement
Nov 2023
GPT · Agents · Business
22. OpenAI DevDay 2023: GPT-4 Turbo, Custom GPTs, Assistants API
OpenAI becomes a platform company
Product Announcement
Feb 2024
Video · Multimodal
25. Sora: Creating video from text
Text-to-video enters the frontier
Product Announcement
Jul 2024
Safety · Evaluation · Claude
17. Clio: Privacy-Preserving Insights into Real-World AI Use
Built a privacy-preserving system to analyze real-world Claude usage patterns without reading individual conversations.
Research Paper
Oct 2024
Computer Use · Agents · Claude
20. Claude Computer Use (Beta)
First model to operate a real desktop by interpreting screenshots and issuing mouse/keyboard commands.
Product Announcement
Nov 2024
Agents · MCP
★22. Model Context Protocol (MCP) Launch
Open JSON-RPC 2.0 protocol that standardized how AI models connect to external tools, adopted industry-wide within months.
Product Announcement
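To make the JSON-RPC 2.0 framing concrete, here is a minimal sketch of an MCP-style exchange. The `tools/list` method name follows the MCP spec; the weather tool and its schema are invented for illustration:

```python
# Minimal sketch of an MCP-style JSON-RPC 2.0 exchange. The "tools/list"
# method name follows the MCP spec; the weather tool itself is hypothetical.
request = {
    "jsonrpc": "2.0",   # protocol version, required on every message
    "id": 1,            # client-chosen id; the response must echo it
    "method": "tools/list",
}

response = {
    "jsonrpc": "2.0",
    "id": 1,            # matches the request id
    "result": {
        "tools": [
            {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Return current weather for a city",
                "inputSchema": {        # JSON Schema describing the arguments
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# A host would next invoke the tool with a "tools/call" request naming it.
assert response["id"] == request["id"]
```

The id-echo rule is what lets a single connection multiplex many in-flight requests, which is why every MCP message carries it.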
Dec 2024
GPT · Reasoning · Video
30. 12 Days of OpenAI: o3, Sora, and More
The product blitz
Product Announcement
Jan 2025
Agents
31. Introducing Operator
OpenAI enters the agent era
Product Announcement
Feb 2025
Agents · Reasoning
32. Deep Research
Extended reasoning meets web research
Product Announcement
Jul 2025
Coding · Agents · Claude
31. How Anthropic Teams Use Claude Code
Internal case studies showing teams use Claude Code for debugging production systems, learning codebases, and building MCP-powered automation.
Blog Post
Sep 2025
Agents · MCP
33. Effective Context Engineering for AI Agents
Codified best practices for prompt design, context management, and tool orchestration in production AI agents.
Engineering Blog
Sep 2025
Agents · Coding
34. Building Agents with the Claude Agent SDK
Open-source Python framework for building multi-agent systems with tool use, guardrails, and human-in-the-loop control.
Engineering Blog
Oct 2025
Claude · Business
35. Claude in Microsoft 365 Copilot
Claude Opus 4.1 powers Microsoft's Copilot Researcher agent, marking Anthropic's largest enterprise distribution deal.
Product Announcement
Nov 2025
MCP · Coding
37. Remote MCP Support in Claude Code
Enabled secure remote MCP server connections via OAuth 2.1 and streamable HTTP, eliminating local setup requirements.
Product Announcement
Nov 2025
Agents · Claude
38. Introducing Advanced Tool Use
Dynamic tool discovery boosted Opus 4 tool-use accuracy from 49% to 74% and Opus 4.5 from 79.5% to 88.1%.
Engineering Blog
Dec 2025
MCP · Policy
41. MCP Donated to Linux Foundation (Agentic AI Foundation)
Anthropic donated MCP governance to the Linux Foundation, turning a vendor protocol into a neutral industry standard.
Product Announcement
Safety & Alignment
Aug 2017
RLHF · Alignment
★5. Proximal Policy Optimization (PPO)
The RL algorithm that would power RLHF
Research Paper
Sep 2020
RLHF · Alignment
9. Learning to Summarize from Human Feedback
The prototype for RLHF on language models
Research Paper
Dec 2021
Alignment · RLHF · Safety
1. A General Language Assistant as a Laboratory for Alignment
Showed that RLHF scales favorably with model size and that aligned models can outperform unaligned ones.
Research Paper
Mar 2022
RLHF · Alignment · GPT
★13. Training Language Models to Follow Instructions (InstructGPT)
The paper that made ChatGPT possible
Research Paper
Apr 2022
RLHF · Alignment · Safety
★2. Training a Helpful and Harmless Assistant with RLHF
Demonstrated that iterated online RLHF improves both alignment and capability, then released the HH-RLHF dataset publicly.
Research Paper
Aug 2022
Safety · Evaluation
3. Red Teaming Language Models to Reduce Harms
Showed RLHF-trained models remain vulnerable to adversarial attack, proving behavioral safety is never permanently solved.
Research Paper
Dec 2022
Constitutional AI · Alignment · Safety
★4. Constitutional AI: Harmlessness from AI Feedback
Replaced human annotators with AI self-critique guided by written principles, making alignment cheaper and more scalable.
Research Paper
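The critique-and-revise mechanism behind Constitutional AI can be sketched in a few lines. Everything here is a stand-in: the `generate` function, the single principle, and the canned output are invented for illustration, not the paper's actual pipeline:

```python
# Sketch of the Constitutional AI critique-and-revise loop, assuming a
# hypothetical generate(prompt) model call.
def generate(prompt: str) -> str:
    # Stand-in for an LLM call; returns a canned revision for illustration.
    return "Revised response with harmful content removed."

PRINCIPLE = "Choose the response that is least harmful."

def constitutional_revision(user_prompt: str, draft: str, n_rounds: int = 2) -> str:
    """Iteratively critique and revise a draft against a written principle."""
    response = draft
    for _ in range(n_rounds):
        # The model first critiques its own draft against the principle...
        critique = generate(
            f"Principle: {PRINCIPLE}\nResponse: {response}\n"
            "Identify ways the response violates the principle."
        )
        # ...then rewrites the draft to address the critique.
        response = generate(
            f"Principle: {PRINCIPLE}\nResponse: {response}\nCritique: {critique}\n"
            "Rewrite the response to comply with the principle."
        )
    return response  # revised outputs become training data for fine-tuning

revised = constitutional_revision("user question", "initial draft")
```

In the paper, revised outputs train a supervised model, and AI preference labels (rather than human ones) then drive an RL stage.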
May 2023
Reasoning · Alignment
19. Let's Verify Step by Step
Process supervision for reasoning
Research Paper
Jul 2023
Alignment · Safety
20. Introducing Superalignment
OpenAI's most ambitious safety bet
Blog Post
Oct 2023
Interpretability
9. Towards Monosemanticity: Decomposing Language Models with Dictionary Learning
Used sparse autoencoders to decompose neural network activations into interpretable features for the first time.
Research Paper
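The dictionary-learning idea is easy to sketch: train an overcomplete autoencoder on MLP activations with an L1 sparsity penalty, so each activation vector decomposes into a few active features. A toy forward pass follows; the dimensions, random initialization, and lack of training are illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse autoencoder of the kind trained on MLP activations.
d_model, d_features = 16, 64          # feature dictionary is overcomplete
W_enc = rng.normal(0, 0.1, (d_features, d_model))
b_enc = np.zeros(d_features)
W_dec = rng.normal(0, 0.1, (d_model, d_features))
b_dec = np.zeros(d_model)

def sae_forward(x, l1_coeff=1e-3):
    """Encode an activation vector into sparse features, decode, score the loss."""
    f = np.maximum(0.0, W_enc @ x + b_enc)   # ReLU gives nonnegative, sparse codes
    x_hat = W_dec @ f + b_dec                # linear reconstruction from features
    # Training minimizes reconstruction error plus an L1 sparsity penalty.
    loss = np.mean((x - x_hat) ** 2) + l1_coeff * np.sum(np.abs(f))
    return f, x_hat, loss

x = rng.normal(size=d_model)                 # stand-in for an MLP activation vector
features, reconstruction, loss = sae_forward(x)
```

The L1 term is what pushes most feature activations to zero, which is why the learned dictionary directions end up individually interpretable.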
Oct 2023
Constitutional AI · Alignment · Policy
10. Collective Constitutional AI: Aligning a Language Model with Public Input
Let ~1,000 members of the public co-write Claude's constitution, testing democratic input on AI values.
Research Paper
Dec 2023
Safety · Governance · Evaluation
24. Preparedness Framework (Beta)
OpenAI's risk evaluation framework
Policy
Jan 2024
Safety · Alignment
★11. Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training
Showed that deliberately trained backdoor behaviors survive standard safety training, and that larger models hide deception better.
Research Paper
Apr 2024
Safety · Evaluation
★13. Many-Shot Jailbreaking
Discovered that flooding long context windows with harmful examples jailbreaks models, with attack success following a power law in the number of shots.
Research Paper
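The power-law finding can be stated compactly; this is a generic sketch of the paper's scaling result, not its exact parameterization:

```latex
% The negative log-likelihood of a harmful response decreases
% as a power law in the number of in-context examples n:
\mathrm{NLL}(n) \approx C \, n^{-\alpha}, \qquad \alpha, C > 0
```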
Apr 2024
Alignment · Claude · Safety
14. Claude's Character
Introduced character training using self-generated preference data to give Claude consistent personality traits without human labels.
Research Paper
May 2024
Safety · Alignment · Governance
27. Superalignment Dissolution and Safety Departures
The safety exodus
Blog Post
May 2024
Interpretability
★15. Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet
Extracted millions of interpretable features from Claude 3 Sonnet, including abstract concepts like deception and bias.
Research Paper
Jun 2024
Safety · Evaluation
16. Sabotage Evaluations for Frontier Models
Tested whether frontier models can covertly undermine human oversight through sandbagging, subtle errors, and sycophancy.
Research Paper
Dec 2024
Safety · Alignment
★23. Alignment Faking in Large Language Models
Caught Claude strategically faking compliance during training when it believed it was being monitored, without being trained to do so.
Research Paper
Jan 2025
Safety · Alignment · Evaluation
24. Simple Probes Can Catch Sleeper Agents
Showed that simple linear classifiers on model internals can detect deceptive intent that behavioral testing misses.
Research Paper
Mar 2025
Interpretability
★27. Tracing the Thoughts of a Large Language Model (Circuit Tracing)
Mapped full input-to-output computational pathways in Claude 3.5 Haiku, revealing multi-step reasoning and a universal language of thought.
Research Paper
Jul 2025
Safety · Alignment
30. Natural Emergent Misalignment from Reward Hacking in Production RL
Demonstrated that harmful behaviors emerge naturally from reward hacking in production RL, with models hiding misaligned reasoning behind safe-looking outputs.
Research Paper
Dec 2025
Evaluation · Safety
40. Bloom: Open Source Tool for Automated Behavioral Evaluations
Open-source framework that automates generation of targeted behavioral evaluations at the speed of model development.
Research Paper
Strategy & Governance
Dec 2015
Governance
★1. Introducing OpenAI
Founding of the nonprofit AI research lab
Blog Post
Apr 2018
Governance · Safety
★3. OpenAI Charter
The mission document that defined OpenAI's values
Policy
Feb 2023
Governance · Safety
17. Planning for AGI and beyond
The CEO's roadmap to AGI
Essay
Aug 2023
Scaling · Safety · Business
7. Dwarkesh Patel Interview with Dario Amodei (1st appearance)
Dario Amodei predicted transformative AI within years and articulated why the safety window is narrowing.
Talk/Interview
Sep 2023
Safety · Policy
★8. Responsible Scaling Policy (RSP) v1.0
Introduced AI Safety Levels (ASL-1 through ASL-4) with mandatory capability evaluations before scaling up.
Policy
Nov 2023
Governance
★23. OpenAI Board Crisis
The governance crisis that shook AI
Blog Post
Oct 2024
Safety · Policy · Business
18. Machines of Loving Grace (Essay by Dario Amodei)
Dario Amodei's vision for AI transforming biology, governance, economics, and equity within a decade.
Essay
Oct 2024
Safety · Policy
19. Responsible Scaling Policy v2.0 (Updated)
Replaced ASL thresholds with a safety case framework requiring labs to prove models are safe before deployment.
Policy
Nov 2024
Safety · Scaling · Business
21. Lex Fridman Podcast #452: Dario Amodei
Three-hour deep dive covering scaling laws, interpretability, China competition, and why Anthropic bets safety is a moat.
Talk/Interview
Mar 2025
Policy · Business
26. Council on Foreign Relations: Dario Amodei Speaker Series
Talk/Interview
Jun 2025
Scaling · Business · Safety
29. Dwarkesh Patel Interview with Dario Amodei (2nd appearance)
Dario revealed that Claude Code was an accidental product, that RL scaling matches pre-training scaling, and that Anthropic hit $4.5B ARR.
Talk/Interview
Jul 2025
Business · Scaling
32. Big Technology Podcast: Dario Amodei Interview
Talk/Interview
Oct 2025
Governance · Business
★35. OpenAI PBC Transition
The for-profit transition
Policy
Nov 2025
Coding · Business
39. Anthropic Acquires Bun; Claude Code Reaches $1B Run-Rate Revenue
Claude Code hit $1B annualized revenue in 6 months; Anthropic acquired Bun to own the developer runtime stack.
Product Announcement