
OSC Announces Artificial Intelligence Policy to Promote Safety, Enhance Efficiency, and Protect Rights

10/16/2024
General

The U.S. Office of Special Counsel (OSC) today unveiled a new policy governing the use of artificial intelligence (AI). 

The policy ensures that OSC is equipped to safely and responsibly manage artificial intelligence within the agency. OSC has created an inventory of current and future covered AI “use cases” at the agency, adopted review procedures for the safe deployment of new covered AI tools, and designated a Chief AI Officer (CAIO) to oversee agency practices. These steps also ensure OSC has met its obligations under President Biden's Executive Order (EO) 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, OMB Memorandum M-24-10, and the Advancing American AI Act of 2022.

“OSC will lead by example in the smart and responsible use of artificial intelligence in government,” said Special Counsel Hampton Dellinger.

OSC's new policy recognizes that AI will affect agencies and programs across government. Accordingly, the policy reflects OSC's commitment to executing its mission against the backdrop of a rapidly evolving technological landscape. “We stand ready to protect the fundamental principles of fairness, transparency, and accountability across the federal workforce in this new age of artificial intelligence,” said Special Counsel Dellinger.

The policy itself has three core components. First, OSC is committed to rigorous enforcement when a claim within its jurisdiction involves AI. “AI must be used lawfully, and it cannot be permitted to jeopardize the rights or safety of government employees or the federal merit system,” said Special Counsel Dellinger. In the employment context alone, AI will affect hiring practices, job responsibilities, employee evaluations, and program management. “OSC is committed to addressing allegations of unlawful algorithmic bias and safeguarding against novel threats to data security and individual privacy. Doing so is core to our mission of protecting the federal service,” said Special Counsel Dellinger. “There is a risk of harm when substituting computer decision-making for human judgment.”

OSC has convened an official AI task force.  The task force will focus on the intersection of artificial intelligence and OSC's mission, ensuring the agency is prepared to identify and assess unlawful AI use—and to pursue corrective action and remedies where appropriate. 

Second, OSC's AI policy places transparency at its core. OSC recognizes that transparency builds trust in government and creates important safeguards. Transparency starts with meeting reporting deadlines and maintaining inventories of covered AI uses for review by oversight entities, but it does not end there. OSC has also created its own webpage with information on OSC's AI Policy and its submission to OMB regarding AI use cases, available to watchdogs and the American public at https://osc.gov/ai.

Third, OSC's AI policy ensures the agency can harness AI to increase efficiency. For example, AI can accelerate basic and applied research across the agency. “Where AI can be used responsibly to expand our capacity, we will explore how to take full advantage of the technology,” said Special Counsel Dellinger. “And where AI has the potential to affect basic rights and safety, we will proceed responsibly and transparently.”

For additional information about OSC's use of AI, please visit https://osc.gov/ai.

***