Biden administration unveils new guidelines for the federal government’s use of AI

The Biden administration unveiled three new policies on Thursday to steer the federal government’s adoption of artificial intelligence (AI), presenting them as a benchmark for global action in the swiftly evolving technology landscape.

These policies, stemming from an executive order signed by President Joe Biden in October, respond to mounting concerns regarding AI’s implications for the U.S. workforce, privacy, national security, and the potential for discriminatory decision-making.

The White House Office of Management and Budget will mandate that federal agencies ensure their use of AI does not jeopardize the “rights and safety” of Americans. Additionally, agencies must enhance transparency by publicly disclosing the AI systems they employ, along with an evaluation of associated risks and risk management strategies.

Furthermore, all federal agencies are directed to appoint a chief AI officer with expertise in the field to oversee AI implementation within their respective organizations.

Vice President Kamala Harris, who has been pivotal in shaping the administration’s AI agenda, communicated these directives during a press briefing. She emphasized that the policies were crafted with input from various stakeholders, including the public and private sectors, computer scientists, civil rights advocates, legal scholars, and business leaders.

Harris underscored the administration’s commitment to positioning these domestic policies as a global model for AI governance, echoing similar sentiments expressed during a global summit in London last November.

The federal government has disclosed over 700 instances of existing and planned AI use across agencies, with the Department of Defense alone undertaking more than 685 unclassified AI projects, as reported by the nonpartisan Congressional Research Service.

Examples of AI applications in various agencies range from documenting suspected war crimes to detecting COVID-19 through smartphone cough analysis and preventing illegal activities like fentanyl smuggling and child exploitation.

To address safety concerns surrounding AI, federal agencies must implement measures by December to reliably assess, test, and monitor AI impacts, mitigate algorithmic discrimination risks, and provide public disclosure on AI utilization. Harris illustrated this with an example: ensuring that AI systems used for diagnosis in Veterans Affairs hospitals don’t yield racially biased results.
