
Automated Research: Mitigating the Risks of the Genesis Mission


By: Ridhi Dondeti

Edited by: Ingrid Sizer


Photo by Joyce N. Boghosian / Tech Policy Press

The global rise of AI has sparked debates over innovation, over expanding research on the topic, and over whether AI regulation is a state or federal issue. Largely missing from these debates, however, are the ethical concerns associated with AI innovation. More recently, the executive branch has increased its involvement in AI policy through executive orders, most notably by launching the Genesis Mission this past year.

On November 24, 2025, the White House issued an executive order launching the Genesis Mission, which will create an artificial intelligence database for the Department of Energy to use in building an AI model. That model would then conduct research on behalf of the U.S. federal government, drawing on federal scientific databases (The White House, 2025). Because the AI framework would be built with U.S.-based companies such as NVIDIA, an AI semiconductor company, the Trump administration argued that the Genesis Mission would also reduce the environmental harm associated with AI framework manufacturing (Kang, 2025). While this may be true in the long run, the Genesis Mission sets a dangerous precedent: automating the research process, mooting generations of human-developed research, hollowing out the environmental research field, and increasing the likelihood of unreliable findings. To mitigate these risks, the Department of Energy should hire researchers as consultants to develop ethical AI frameworks that embed consistent checks throughout the Genesis Mission's development.

Ultimately, the primary goal of the Genesis Mission is to accelerate research, advance clean energy innovation, mitigate waste buildup, modernize the grid, and support environmental cleanup. It would do so by drawing on existing or newly created federal databases that survey current environmental issues and emerging grid data (Office of Environmental Management, 2025). Under Genesis, the government would rely only on its own data to compile environmental research. Private research groups, especially those at universities and private companies, would thus lose government funding and could ultimately collapse, as their work is taken over by the Genesis Mission (Office of Environmental Management, 2025). This stands to harm a multitude of researchers worldwide who focus on mitigating environmental harms and present their research to the federal government (Girishankar & Borges, 2025). The decimation of the environmental research field could have long-lasting consequences: because AI models tend to follow conventional algorithms and processes, AI-driven research would largely resemble preexisting work, limiting the production of truly innovative findings (Desdevises, 2025). Moreover, this contributes to, and may even worsen, the recent prediction that AI will take 6 percent of U.S. jobs, or 10.4 million jobs, by 2030 (Gownder, 2026).

Another issue is the increased likelihood of AI hallucinations: situations in which generative AI models produce convincing but incorrect or fabricated information (Sun et al., 2024). For example, in a simulation conducted by OpenAI, the company found that its o3 and o4-mini models had hallucination rates of 51 to 79 percent (Metz & Weise, 2025). It is possible that the Genesis Mission will be subject to AI hallucinations once the final product is developed; notably, the executive order does not outline methods to address them. The Genesis Mission's AI model could therefore produce false approaches to solving climate change, leading to a misallocation of government resources.

The Genesis Mission must make several key adjustments if it is to become a leading example of ethical AI innovation. First, the Department of Energy should hire researchers as consultants through an impartial hiring process led by the Office of Critical and Emerging Technology, which coordinates government policy and activity in AI. These researchers can help collect the data used to build federal scientific databases. Doing so would encourage the Genesis Mission to build and use peer-reviewed databases, which undergo independent and internal review, unlike government databases (Louisiana State University, 2023). Outsourcing database construction to researchers also reduces the likelihood that the research field will be taken over by AI.

The Department of Energy should also establish consistent testing of the Genesis Mission by running simulations, evaluating potential research prompts, and consulting with the hired researchers to assess the quality and accuracy of the Genesis Mission's research. This framework is modeled after S.2938, the Artificial Intelligence Risk Evaluation Act of 2025, a bipartisan bill introduced in the Senate in September 2025 that mandates risk testing of AI platforms to mitigate bias and hallucinations. While that bill authorizes congressional oversight of an AI model's compliance with ethical standards, the proposal above differs in that it involves researchers in testing the model instead.

While Congress is subject to political polarization, researchers hired through an impartial panel are not. The Artificial Intelligence Risk Evaluation Act has not left the Senate Committee on Commerce, Science, and Transportation and has a low likelihood of passing Congress due to potential concerns about states' rights under the 10th Amendment (Mollenkamp, 2025). However, because the bill establishes a national AI framework for U.S.-based research, it provides a solid basis for the ethical building and testing of AI models.

The rise of AI innovation has led to calls to develop ethical AI frameworks and methods to mitigate the risk of job loss in fields such as research. These concerns have intensified since the introduction of the Genesis Mission. At its core, the primary, and perhaps only, benefit of the mission is to help mitigate environmental impacts by automating the research process. That benefit can be strengthened by introducing more ethical AI practices, which would also allow the U.S. to lead AI innovation and set the stage for developing AI safety practices.


The views expressed in this publication are the authors' own and do not necessarily reflect the position of The Rice Journal of Public Policy, its staff, or its Editorial Board.
References

Brennan Center for Justice. (2025, September 26). Artificial Intelligence Legislation Tracker. https://www.brennancenter.org/our-work/research-reports/artificial-intelligence-legislation-tracker

Desdevises, J. (2025, August 7). The paradox of creativity in generative AI: High performance, human-like bias, and limited differential evaluation. Frontiers in Psychology, 16, 1628486. https://doi.org/10.3389/fpsyg.2025.1628486

Girishankar, N., & Borges, C. (2025, December 4). The Genesis Mission: Can the United States’ Bet on AI Revitalize U.S. Science? Center for Strategic and International Studies. https://www.csis.org/analysis/genesis-mission-can-united-states-bet-ai-revitalize-us-science

Gownder, J. P. (2026, January 13). AI And Automation Will Take 6% Of US Jobs By 2030. Forrester. https://www.forrester.com/blogs/ai-and-automation-will-take-6-of-us-jobs-by-2030/

Kang, C. (2025, November 24). Trump Orders Construction of A.I. Platform to Use Troves of Government Data for Research. The New York Times. https://www.nytimes.com/2025/11/24/us/politics/trump-ai-executive-order.html

Library of Congress. (2025, September 29). S.2938 - Artificial Intelligence Risk Evaluation Act of 2025. Congress.gov. https://www.congress.gov/bill/119th-congress/senate-bill/2938/text

Louisiana State University. (2023, November 16). AskUs!: Government Documents. LSU Library. https://askus.lib.lsu.edu/govdocs/faq/399649

Metz, C., & Weise, K. (2025, May 6). A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse. The New York Times. https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html

Mollenkamp, A. (2025, November 21). White House isn't giving up on AI regulations ban. Roll Call. https://rollcall.com/2025/11/21/white-house-isnt-giving-up-on-ai-regulations-ban

Sun, Y., Sheng, D., Zhou, Z., & Wu, Y. (2024). AI hallucination: Towards a comprehensive classification of distorted information in artificial intelligence-generated content. Humanities and Social Sciences Communications, 11, Article 1278. https://doi.org/10.1057/s41599-024-03811-x

White House. (2025, December 12). Ensuring a National Policy Framework for Artificial Intelligence. The White House. https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/

U.S. Department of Energy, Office of Environmental Management. (2025, December 23). SRNL Contributes Key Expertise to DOE’s New Genesis Mission. https://www.energy.gov/em/articles/srnl-contributes-key-expertise-does-new-genesis-mission

