OpenAI is hiring a Head of Preparedness. In other words, someone whose main job is to think about all the ways AI could go horribly, horribly wrong. In a post on X, Sam Altman announced the position, acknowledging that the rapid improvement of AI models poses "some real challenges." The post goes on to specifically call out the potential impact on people's mental health and the dangers of AI-powered cybersecurity weapons.

The job listing says the person in the role would be responsible for:

"Monitoring and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline."

Altman also says that, looking ahead, this person would be responsible for executing the company's "preparedness framework," securing AI models ahead of the release of "biological capabilities," and even setting guardrails for self-improving systems. He also states that it will be a "stressful job," which seems like an understatement.
