October 10, 2022

By Susan Benesch, Brandi Geurkink, David Karpf, David Lazer, Nathalie Maréchal, J. Nathan Matias & Rebekah Tromble

The National AI Initiative Act is designed to ensure that the United States remains a leader in AI research and development. It sets out to leverage the opportunities of AI to increase the well-being of Americans while safeguarding against potential and actual harms to our communities. As Lynne Parker noted in her February blog post announcing an update of the National AI Research & Development Strategic Plan, achieving this goal requires “consistent and sustained Federal investments in cutting-edge AI R&D, particularly for those areas in which industry has few incentives to invest.”

One of those areas is research on how artificial intelligence is shaping the information infrastructure of America and the world. Right now, a handful of companies control artificial intelligence systems that already shape public health, education, stock markets, public services, and political discourse, to name just a few. How these AI systems work, and what impacts they have, are poorly understood, in large part because the companies control access to the systems and data needed to answer these questions. The primary motivation of these companies is to make money, not to produce research in the public interest. Industry has insufficient incentives to open these systems to people outside of companies—to independent researchers at news organizations, academia, and civil society groups—who could study their impacts on the well-being of the public. [1: Whittaker, M. (2021). The steep cost of capture. Interactions, 28(6), 50–55; Zuckerman, E. (2021). Demand five precepts to aid social-media watchdogs. Nature; Matias, J. N. (2020). Why we need industry-independent research on tech & society. Citizens & Technology Lab; Haibe-Kains, B., et al. (2020). Transparency and reproducibility in artificial intelligence. Nature.]

Tech companies have repeatedly demonstrated that they will not permit oversight of these systems on their own. Independent researchers have attempted for years to collaborate with tech companies on methods to make their AI systems safer, to no avail. Many companies have made grand promises and then failed to share essential information with public-interest researchers. [2: Seetharaman, D. (2020). Jack Dorsey’s push to clean up Twitter stalls. Wall Street Journal; Hegelich, S. (2020). Facebook needs to share more with researchers. Nature.] Those who have tried to study these systems using their own tools and accounts have had their access revoked. [3: Edelson, L. (2021). How Facebook Hinders Misinformation Research. Scientific American.]

Through the National AI Initiative Act, the US government can make a positive intervention to change this. The implementation of this Act should support a new system of governance and external oversight of AI systems that shape America’s communication infrastructure. It should do this by:

  • Increasing support for industry-independent research to spur innovation, protect the public, advance science, and contribute to AI governance.
  • Facilitating the availability of and equal access to people, systems, and data in a way that upholds the highest standards of ethics and privacy.
  • Ensuring that industry-independent researchers from civil society, news organizations, and academia are included in the implementation of the National AI Initiative Act.

While the current strategic aims of the National AI R&D Strategic Plan are laudable, they cannot be achieved without the participation of independent researchers, including journalists and those representing civil society organizations. Much of the most influential and most-cited research on the impacts of artificial intelligence on society has been conducted by journalists, citizen scientists, and civil society, including research on pre-trial risk assessment, [4: Angwin, J., & Larson, J. (2016). Machine Bias. ProPublica.] predatory targeted advertising, [5: Rieke, A., & Koepke, L. Led Astray. Upturn.] flawed predictive policing, [6: Main, F., & Dumke, M. (2017). A look inside the watch list Chicago police fought to keep secret. Chicago Sun-Times; Saunders, J., Hunt, P., & Hollywood, J. S. (2016). Predictions put into practice: A quasi-experimental evaluation of Chicago’s predictive policing pilot. Journal of Experimental Criminology, 12(3), 347–371.] content moderation systems, [7: Matias, J. N., Johnson, A., Boesel, W. E., Keegan, B., Friedman, J., & DeTar, C. (2015). Reporting, reviewing, and responding to harassment on Twitter. Available at SSRN 2602018; Matias, J. N., Hounsel, A., & Feamster, N. (2022). Software-Supported Audits of Decision-Making Systems: Testing Google and Facebook’s Political Advertising Policies. Computer-Supported Cooperative Work.] market algorithm discrimination, [8: Cox, M. (2017). The Face of Airbnb, New York City.] and harmful content from search engines. [9: Kayser-Bril, N. (2020). Ten years on, search auto-complete still suggests slander and disinformation. AlgorithmWatch.]
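
To make concrete the kind of method behind such work, the sketch below outlines a paired-input audit of an automated decision system: matched test cases that differ in only one attribute are submitted and approval rates are compared across groups. This is a minimal illustration, not the procedure of any study cited above; the names and fields are hypothetical, and the platform-specific submission step is deliberately left as a stub.

```python
# Illustrative sketch of a paired-input audit of an automated decision
# system. Names and fields are hypothetical; real audits define their
# own platform-specific submission and measurement pipelines.
import random
from dataclasses import dataclass


@dataclass
class TestCase:
    content: str   # identical content across a matched pair
    group: str     # the one attribute varied between the pair


def submit_to_system(case: TestCase) -> bool:
    """Stand-in for submitting a test input (e.g., an ad) to the
    platform under audit and recording whether it was approved."""
    raise NotImplementedError("platform-specific submission goes here")


def audit(pairs: list[tuple[TestCase, TestCase]]) -> dict[str, float]:
    """Submit matched pairs that differ in only one attribute, in
    randomized order, and compare approval rates across groups."""
    outcomes: dict[str, list[bool]] = {}
    for a, b in pairs:
        for case in random.sample([a, b], 2):  # randomize submission order
            outcomes.setdefault(case.group, []).append(submit_to_system(case))
    return {g: sum(v) / len(v) for g, v in outcomes.items()}
```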

The most impactful and immediate way to support research that evaluates and addresses social concerns related to the use of AI is to empower people to do this work outside of industry. Expanding the participation of journalists and civil society researchers in particular will create more inclusive pathways for Americans to participate in AI R&D.

The Coalition for Independent Technology Research is a new group of academics, journalists, civil society researchers, and community scientists who work independently from the technology industry. Our mission is to advance, defend, and sustain the right to ethically study the impact of technology on society.

We recommend that the National AI R&D Strategic Plan:

    1. Increase support for industry-independent research to understand societal issues related to artificial intelligence.

      Strategies 3 and 4 of the National AI R&D Strategic Plan aim to understand and address the ethical, legal, and societal implications of AI and ensure the safety and security of AI systems. This cannot be done within the current paradigm where tech company employees exclusively control access to systems, data, and affected communities. These companies have amassed large teams of talented researchers, but their research studies are aimed at supporting the corporate rather than the public interest. This is a perilous paradigm, akin to asking the public to trust automakers to be the only ones to perform safety tests on the cars that they manufacture. Research from companies should not be dismissed, but it must be part of a system of oversight that also includes researchers who are independent of the companies’ corporate interests.

      Industry-independent research can play an important role in developing a trustworthy American artificial intelligence industry. According to research by Pew, Americans do not believe leaders of tech companies admit to mistakes, do not believe that tech leaders care about people like them, and trust technology leaders less than any other group. [10: Pew Research Center. (2019). Why Americans Don’t Fully Trust Many Who Hold Positions of Power and Responsibility.] As has been the case in other industries, independent research can provide trustworthy evidence about risks and safety in ways that enhance the common good. [11: Carpenter, D. (2014). Reputation and power. Princeton University Press; Silber, N. (1983). Test and Protest: The Influence of Consumers Union. Holmes & Meier; Dietz, T., Ostrom, E., & Stern, P. C. (2003). The struggle to govern the commons. Science, 302(5652), 1907–1912.]

      Industry-independent research is essential for making progress on these strategies; yet many of the programs implemented under the Strategic Plan so far are being carried out with industry. For example, the Strategic Plan 2019 Update cites a project between NSF and Amazon to “jointly support research focused on AI fairness with the goal of contributing to trustworthy AI systems that are readily accepted and deployed to tackle grand challenges facing society.” The updated National AI R&D Strategic Plan should include specific goals for supporting industry-independent research within the implementation of Strategies 3 and 4. This could include prioritizing industry-independent actors for funding, redirecting funding away from industry-dominated projects, and developing partnerships with funders outside of the AI industry.

      While the addition of Strategy 8, “Expand Public-Private Partnerships to Accelerate Advances in AI,” to the Strategic Plan is a promising step in this direction, the implementation of that strategy has relied on industry-dominated groups like the Partnership on AI, which has been criticized by civil society organizations [12: Access Now. (2020). Access Now Resigns from the Partnership on AI.] for doing little to change the attitudes of member companies or to foster genuine dialogue with civil society on a systematic basis. This dynamic is common in such partnerships; it stems from a fundamental mismatch in incentives and power between different members of the group and is rarely solved by governance protocols. The agencies responsible for implementing Strategy 8 should consider how these dynamics can prevent progress toward the aims of the Strategic Plan and prioritize partnerships that are truly independent of the tech industry.

    2. Facilitate the creation of curated, standardized, secure, representative, aggregate, and privacy-protected data sets to enable independent research.

      Agencies implementing the National AI R&D Strategic Plan have made significant progress on Strategy 5, which aims to release public datasets to advance the field of AI research. Much of this work has focused on areas such as computer vision, natural language processing, and speech recognition, as well as on getting public agencies to contribute data to such initiatives. These efforts are necessary for the development of inclusive AI systems.

      There should be a parallel effort to develop procedures that facilitate independent audits of active AI systems, to achieve the objectives outlined in Strategies 3 and 4 of the Plan. For example, researchers have called for the development of a universal digital ad archive, [13: Edelson, L., Chuang, J., Fowler, E. F., Franz, M. M., & Ridout, T. N. (2021). A Standard for Universal Digital Ad Transparency. Knight First Amendment Institute, Columbia University.] which would bring transparency to online ad content, targeting, and delivery, thus allowing independent research into the harms caused or amplified by digital ads (see the illustrative sketch below). Several legislative proposals also focus on researcher access to social media data as an important intervention in AI governance. [14: Social Media DATA Act, H.R. 3451, 117th Cong. (2021); Algorithmic Justice and Online Platform Transparency Act, S. 1896, 117th Cong. (2021); Algorithmic Accountability Act of 2019, H.R. 2231, 116th Cong. (2019); Platform Accountability and Consumer Transparency Act, S. 4066, 116th Cong. (2020); Filter Bubble Transparency Act, S. 2024, 117th Cong. (2021).]

      Beyond facilitating access to social media data, agencies implementing the Plan could also invest in the development of observatories and citizen science programs that contribute to the public’s understanding of artificial intelligence, its safety, and its impacts. Given a consensus among technology firms and scientists that the societal harms of AI are hard to predict and prevent, [15: Bak-Coleman, J. B., Alfano, M., Barfuss, W., Bergstrom, C. T., Centeno, M. A., Couzin, I. D., … & Weber, E. U. (2021). Stewardship of global collective behavior. Proceedings of the National Academy of Sciences, 118(27); Clegg, N. (2021). You and the Algorithm: It Takes Two to Tango. Facebook, March 31, 2021.] the Plan should also support work to develop new methods for studying the impact of AI systems on society.
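
      To illustrate what a standardized, privacy-protected ad-archive record might look like in practice, the sketch below defines a hypothetical record with content, targeting, and delivery fields, along with an aggregate query that adds Laplace noise (a standard differential-privacy mechanism). The field names and the noise step are our illustrative assumptions, not the schema of the transparency standard cited above.

```python
# Hypothetical sketch of a universal ad-archive record and a
# privacy-protected aggregate query. Field names are illustrative
# assumptions, not the schema of the cited transparency standard.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class AdRecord:
    ad_id: str
    advertiser: str
    content: str                                             # the creative shown to users
    targeting: dict[str, str] = field(default_factory=dict)  # advertiser-chosen criteria
    impressions: int = 0                                     # delivery: how often it ran


def noisy_impression_total(records: list[AdRecord], advertiser: str,
                           epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Release an aggregate impression count with Laplace noise added,
    so researchers can study delivery patterns without the archive
    exposing exact person-level counts. The sensitivity should reflect
    how much any single person can change the total."""
    total = sum(r.impressions for r in records if r.advertiser == advertiser)
    return total + float(np.random.laplace(0.0, sensitivity / epsilon))
```

      A real archive would, of course, need far richer fields and vetted access controls; the point is only that “aggregate and privacy-protected” can be made concrete.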

    3. Ensure industry-independent researchers from civil society, news organizations, and academia are included in the implementation of the National AI Initiative Act.

      Independent researchers in civil society groups have an important role to play in AI R&D—particularly in evaluating and addressing bias, equity, and other concerns related to the development, use, and impact of AI. It has often been through painstaking research done by poorly resourced civil society representatives, frequently in the face of an adversarial posture from platforms, that we have learned about significant harms arising from AI.

      Similarly, some of the most impactful research and investigations into AI systems have come from journalists working within news organizations. Journalists have produced several of the most-cited sources on safety and fairness in AI, spurring the creation of whole subfields in computer science through their data analyses and investigative reporting. Moreover, journalists know how to relate their research findings to broader conversations among the public, which leads to accountability and sparks change.

      Innovation happens when people from a variety of disciplines apply different skills and perspectives to solve collective problems. Yet so far, the implementation of the National AI R&D Strategic Plan has not included many of these important actors. This exclusion significantly limits the scope of important research, including research into harms that disproportionately affect marginalized and underrepresented groups, and into how those harms might be addressed.

      Researchers of all kinds should be assessed on the basis of their expertise, their ability to implement necessary privacy, data protection, and ethics safeguards and protocols, and their independence from commercial interests.

      The implementation of the National AI R&D Strategic Plan should prioritize researchers from a broad range of organizations and backgrounds who meet these qualifications for funding, partnerships, and other forms of participation. Researchers from these groups should be consulted in the development of research as outlined above, and affiliation with an academic institution should not, per se, be required for access to these data sets. Instead, the agencies responsible for implementing the Strategic Plan should design inclusive standards for the necessary privacy, security, and data protection protocols that must underpin research projects, and build capacity for vetting whether these standards are met.

We thank OSTP for the opportunity to provide this input as it develops the National AI R&D Strategic Plan. Please contact Susan Benesch <susan@dangerousspeech.org> to discuss our comment further.

Signatories (institutions included for identification purposes):

Susan Benesch, Dangerous Speech Project
Brandi Geurkink, Mozilla Foundation
David Karpf, The George Washington University
David Lazer, Northeastern University
Nathalie Maréchal, Ranking Digital Rights
J. Nathan Matias, Citizens and Technology Lab, Cornell University
Rebekah Tromble, The George Washington University