
Governing the Future: How U.S. Federal AI Regulations are Shaping Accountability in Public and Private Sectors

By: Nevaeh Hicks




The rapid development of artificial intelligence (AI) is transforming industries and raising complex ethical and regulatory challenges. As AI systems become integral to both the public and private sectors, questions around accountability and risk management demand attention. To address these concerns, the U.S. government has introduced initial policies aimed at guiding responsible AI use, such as Executive Order 14110 and directives from the Office of Management and Budget (OMB). These measures signal a commitment to oversight, with an emphasis on individual rights and public safety. However, the diversity of applications across sectors has highlighted governance gaps that require a cohesive approach. To close these gaps, I propose establishing an Interagency AI Governance Council tasked with enforcing ethical standards and best practices across federal AI deployments. This council would centralize risk assessment and transparency protocols, thereby reinforcing public trust in AI while supporting responsible innovation.
At the core of AI governance is the need for accountability. AI systems often function as opaque “black boxes,” where decision-making processes are difficult for humans to interpret. This lack of transparency poses significant challenges, particularly when AI is used in critical areas such as law enforcement, healthcare, or employment, where the consequences can impact individuals’ rights and well-being.  In the absence of clear guidelines, it is difficult to pinpoint accountability among developers, operators, and deploying entities. Scholars emphasize the need for accountability frameworks to ensure AI systems do not perpetuate biases or cause harm (Cheong).  For instance, model auditing techniques like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) offer insights into how AI systems process information and make decisions (Salih et al.). SHAP assigns importance values to each input feature, showing its contribution to a model's prediction, while LIME approximates a model’s decision-making by creating simpler, interpretable models for individual predictions. These tools could enable stakeholders to evaluate fairness, detect biases, and better understand how specific inputs influence AI decisions, fostering greater transparency and accountability (Nguyen et al.). Additionally, requiring developers and deploying entities to maintain detailed records of training datasets, model parameters, and decision rationale could help trace accountability when errors or harms occur and support regulatory oversight. By incorporating these measures, it becomes possible to penetrate the “black box” of AI and provide stakeholders with a clearer understanding of its decision-making processes.
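As a concrete illustration of how these auditing techniques are applied in practice, the minimal sketch below uses the open-source shap and lime Python packages with a generic scikit-learn classifier standing in for a deployed system; the model and dataset are placeholders, not any agency's actual deployment.

```python
# Minimal sketch of model auditing with SHAP and LIME (illustrative only).
# Assumes the open-source `shap`, `lime`, and `scikit-learn` packages; the
# classifier and dataset below are generic stand-ins for a deployed system.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, list(data.feature_names)
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: assigns each input feature an importance value for each prediction,
# quantifying how much that feature contributed to the model's output.
shap_values = shap.TreeExplainer(model).shap_values(X[:100])

# LIME: fits a simple, interpretable surrogate model around one individual
# prediction to approximate the full model's local decision-making.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["malignant", "benign"], mode="classification"
)
explanation = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top features driving this single decision
```

In an oversight setting, outputs like these would feed fairness reviews and audit documentation rather than a console printout, but the workflow of explaining both feature importance and individual decisions is the same.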
Transparency is equally essential to ethical AI. It allows stakeholders, including the public, to understand and scrutinize AI systems (Bhattacharjea). A lack of transparency undermines public trust and obstructs individuals’ ability to contest AI-driven decisions, particularly when these systems impact rights and livelihoods. For U.S. federal agencies, which must comply with privacy and civil rights standards, transparent AI systems are crucial for democratic oversight and accountability (Floridi and Cowls). However, achieving effective transparency remains a challenge. A recent study by Stanford University researchers Lawrence, Cui, and Ho found that despite the existence of binding laws designed to mandate transparency, less than 40% of the legal requirements related to AI governance had been implemented across federal agencies. Specifically, nearly half of these agencies failed to publicly disclose inventories of AI use cases, even when such use cases existed (Lawrence et al.). This highlights significant bureaucratic obstacles in enforcing transparency, revealing the gap between policy intentions and actual implementation and underlining the need for stronger governance mechanisms to ensure transparency and accountability in AI systems.
However, achieving transparency in AI governance involves navigating the tension between openness and protecting intellectual property. Transparency in this context does not necessarily mean the full publication of proprietary source code, which could expose companies to unfair competition or security risks. Instead, it could entail structured disclosure requirements tailored to specific stakeholders. For example, federal agencies and regulators could mandate the submission of “algorithmic impact assessments” (AIAs) and technical documentation, including summaries of how models function, the data used for training, and the steps taken to mitigate bias and ensure compliance with legal standards (Metcalf et al.). An AIA is a detailed evaluation process that examines the potential risks and societal impacts of an AI system before deployment. It includes a comprehensive analysis of the system’s functionality, data sources, and any bias mitigation strategies (Kelly-Lyth and Thomas). Additionally, AIAs assess the potential impact on different stakeholders, ensuring that the system adheres to ethical principles, protects privacy, and complies with legal regulations. These disclosures would be reviewed confidentially by designated oversight bodies to maintain proprietary protections while ensuring accountability.
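To make the idea of structured disclosure concrete, the sketch below models a hypothetical AIA submission record as a simple Python data structure. The field names and example values are illustrative assumptions only; no federal schema of this kind currently exists.

```python
# Hypothetical sketch of a structured AIA disclosure record; field names are
# illustrative assumptions, not an official federal or industry schema.
from dataclasses import dataclass
from typing import List

@dataclass
class AlgorithmicImpactAssessment:
    system_name: str                  # plain-language identifier for the AI system
    purpose_summary: str              # how the model functions and what decisions it informs
    training_data_sources: List[str]  # provenance of the data used for training
    bias_mitigation_steps: List[str]  # measures taken to detect and reduce bias
    affected_stakeholders: List[str]  # groups whose rights or services the system touches
    legal_compliance_notes: str       # privacy, civil rights, and other legal considerations

# Example of the kind of record an agency or vendor might submit confidentially
# to a designated oversight body (entirely fictional values).
example_aia = AlgorithmicImpactAssessment(
    system_name="Benefits eligibility screening model",
    purpose_summary="Ranks applications for manual review; does not issue final denials.",
    training_data_sources=["De-identified historical application records, 2015-2022"],
    bias_mitigation_steps=["Disparate-impact testing across protected classes"],
    affected_stakeholders=["Benefit applicants", "Agency caseworkers"],
    legal_compliance_notes="Reviewed for Privacy Act and civil rights compliance.",
)
```

Because a record like this summarizes behavior and safeguards rather than exposing source code, it illustrates how disclosure can be meaningful to regulators without revealing proprietary implementation details.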
In response to these challenges, the U.S. government has taken foundational steps. Executive Order 14110, for instance, underscores the importance of safe, secure, and trustworthy AI deployment within federal agencies (Exec. Order No. 14110). This order establishes standards for managing risks to public safety and individual rights, promoting alignment between federal AI systems and ethical principles. Similarly, OMB Memorandum M-24-18 prioritizes fairness, accountability, and privacy in AI systems used for government programs (OMB Memo M-24-18). These policies, however, apply primarily to government uses of AI rather than the private sector; while they set an important precedent, their scope is limited, leaving private-sector AI largely unregulated.
These steps mark progress, but they focus on individual agency-level risks rather than a unified federal approach. Current governance is fragmented, with each agency applying policies based on specific missions, leading to inconsistencies in regulation that could erode public trust (Fazlioglu). The current framework is largely reactive, focusing on risk management after issues arise rather than proactive governance to prevent ethical dilemmas. Effective AI governance requires forward-thinking policies that address accountability and transparency before deployment (Bryson and Theodorou). The absence of a centralized governing body exacerbates these issues, leaving inconsistencies in standards across federal AI systems.
Moreover, the current framework rests entirely on executive action rather than congressional authorization, meaning it lacks the permanence of legislation. Executive actions like Executive Order 14110 and OMB directives can be modified or eliminated by future administrations, creating uncertainty in AI governance. This reliance on executive authority underscores the need for more robust legislative action to establish durable and comprehensive regulatory frameworks.
The formation of an Interagency AI Governance Council (IAIGC) would address these governance gaps by standardizing protocols for risk assessment, data privacy, and algorithmic transparency across federal agencies. Currently, AI systems in law enforcement, public safety, or healthcare may face varying levels of scrutiny, which a unified council could harmonize. This council would ensure all federal AI systems adhere to rigorous, consistent standards, regardless of agency or field. For example, the council could require agencies to submit periodic AIAs detailing system functionality, data sources, and mitigation strategies for bias. These assessments would be reviewed against a unified federal standard, reducing discrepancies and enhancing accountability. Additionally, the IAIGC would facilitate interagency collaboration through the development of centralized guidelines and repositories for best practices, such as frameworks for data governance or models for integrating ethical considerations into system design. The council could also provide a platform for agencies to report on the implementation of AI systems, fostering alignment with public safety and civil rights priorities.
To monitor AI's societal impact, the IAIGC would employ a multi-pronged approach, including the collection of stakeholder feedback through public surveys and consultations, regular audits of deployed systems, and mandatory reporting from AI service providers on system usage, outcomes, and identified risks. These findings would be consolidated into annual public reports, providing transparency and identifying emerging trends or issues. This proactive oversight would address challenges such as algorithmic bias and accountability, ensuring policies are adapted to evolving risks. By shifting the focus from post-issue remediation to preventive governance, the council would build public trust while supporting responsible innovation.
One obstacle to establishing an Interagency AI Governance Council is potential agency resistance. Unlike a traditional pro/con policy debate, this resistance could determine whether such a council can feasibly be implemented at all. Agencies may view centralized oversight as a challenge to their operational autonomy. Addressing this resistance will require demonstrating the long-term benefits of unified governance, such as increased public trust, reduced regulatory ambiguity, and streamlined compliance. For example, a centralized framework could simplify interagency coordination and reduce duplicative oversight efforts, ultimately improving operational efficiency. Another key challenge is potential opposition from industry groups and AI developers. Companies may resist additional regulations out of concern that stricter oversight could stifle innovation or increase operational costs. However, engaging industry stakeholders in the council’s design and governance process could help mitigate these concerns. Offering clear benefits, such as predictable regulatory environments, reduced liability risks, and the opportunity to shape ethical AI standards, could incentivize industry participation.
Despite these challenges, a coordinated approach could provide substantial benefits for both agencies and industry stakeholders. By reducing ambiguity, enhancing accountability, and fostering collaboration, a centralized council could address ethical concerns preemptively and build public trust in AI systems. These shared incentives highlight the potential for alignment between government agencies and industry in creating a sustainable and responsible AI governance model.
Ultimately, as artificial intelligence continues to redefine the public and private sectors, a cohesive governance framework is not just beneficial but necessary. While recent federal efforts, such as Executive Order 14110 and OMB Memorandum M-24-18, mark important steps forward, they also reveal the limitations of a fragmented approach. The establishment of an Interagency AI Governance Council would fill these gaps by harmonizing regulations, promoting transparency, and ensuring accountability across federal applications. Such a council would go beyond addressing immediate risks, providing a foundation for long-term trust in AI systems by embedding ethical principles into governance. By standardizing oversight and fostering collaboration, it would support innovation that aligns with democratic values, protects individual rights, and prioritizes public safety. This proactive and unified approach not only prepares the United States to manage the complexities of AI but also positions it as a global leader in shaping responsible and equitable technological progress.


The views expressed in this publication are the authors' own and do not necessarily reflect the position of The Rice Journal of Public Policy, its staff, or its Editorial Board.
 
References

Almeida, Virgilio, et al. “On the development of AI Governance Frameworks.” IEEE Internet Computing, vol. 27, no. 1, 1 Jan. 2023, pp. 70–74, https://doi.org/10.1109/mic.2022.3186030.

Balasubramaniam, Nagadivya, et al. “Transparency and explainability of AI systems: From ethical guidelines to requirements.” Information and Software Technology, vol. 159, July 2023, p. 107197, https://doi.org/10.1016/j.infsof.2023.107197.

Bryson, Joanna J., and Andreas Theodorou. “How society can maintain human-centric artificial intelligence.” Translational Systems Sciences, 2019, pp. 305–323, https://doi.org/10.1007/978-981-13-7725-9_16.

Cath, Corinne. “Governing Artificial Intelligence: Ethical, legal and technical opportunities and challenges.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, vol. 376, no. 2133, 15 Oct. 2018, p. 20180080, https://doi.org/10.1098/rsta.2018.0080.

Cheong, Ben Chester. “Transparency and accountability in AI systems: Safeguarding Wellbeing in the age of algorithmic decision-making.” Frontiers in Human Dynamics, vol. 6, 3 July 2024, https://doi.org/10.3389/fhumd.2024.1421273.

Fazlioglu, Müge. “US Federal AI Governance: Laws, Policies and Strategies.” IAPP, Nov. 2023, iapp.org/resources/article/us-federal-ai-governance/.

Floridi, Luciano, and Josh Cowls. “A unified framework of five principles for AI in society.” Harvard Data Science Review, 23 June 2019, https://doi.org/10.1162/99608f92.8cd550d1.

Kelly-Lyth, Aislinn, and Anna Thomas. “Algorithmic management: Assessing the impacts of AI at work.” European Labour Law Journal, vol. 14, no. 2, 10 May 2023, pp. 230–252, https://doi.org/10.1177/20319525231167478.

Lawrence, Christie, et al. “The bureaucratic challenge to AI Governance: An empirical assessment of implementation at U.S. Federal Agencies.” Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 8 Aug. 2023, pp. 606–652, https://doi.org/10.1145/3600211.3604701.

Metcalf, Jacob, et al. “Algorithmic impact assessments and accountability.” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Mar. 2021, pp. 735–746, https://doi.org/10.1145/3442188.3445935.

Nguyen, Hung, et al. “Evaluation of Explainable Artificial Intelligence: SHAP, LIME, and CAM.” 2021.

Salih, Ahmed M., et al. “A perspective on explainable artificial intelligence methods: SHAP and LIME.” Advanced Intelligent Systems, 27 June 2024, https://doi.org/10.1002/aisy.202400304.

United States, Executive Office of the President. Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. 30 Oct. 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.

United States, Office of Management and Budget. Memorandum M-24-18: Promoting the Responsible Use of Artificial Intelligence in Government Programs and Procurement. 2024, https://www.whitehouse.gov/wp-content/uploads/2024/10/M-24-18-AI-Acquisition-Memorandum.pdf.
