Superalignment
Definition
OpenAI's research initiative focused on ensuring that superintelligent AI systems remain aligned with human values. The superalignment team aims to solve the core technical problems of alignment before superintelligence arrives, developing techniques that let humans reliably oversee AI systems that may become more capable than their overseers at any given task.