At the ACLU's Civil Rights in the Digital Age AI Summit, leaders convened to evaluate the civil rights landscape of artificial intelligence and tech, and how we can call for policies that center privacy, fairness, and equity.
Ijeoma Mbamalu,
Chief Technology Officer,
ACLU
July 23, 2025

The ACLU's first Civil Rights in the Digital Age (CRiDA) AI Summit in July brought together leaders from civil society, nonprofits, academia, and industry to thoughtfully consider how to center civil rights and liberties in a constantly changing digital age.

Leaders and experts from the ACLU, Patrick J. McGovern Foundation, Hugging Face, Amnesty International, Future of Life Institute, Kapor Foundation, Mozilla Foundation, and other major organizations discussed how organizations can create an equitable and just future of AI.

Organizations Must Collaborate to Shape the Future of AI

One way organizations can develop AI responsibly, experts shared, is through partnership and collaboration. This includes working with civil rights organizations, inviting community participation, and drawing on other diverse perspectives to inform AI policies being considered for adoption.

"We need conversations about how AI supports more than just profit, but purpose. And here at the ACLU this morning, we really dug into that question," Vilas Dhar, president and trustee of the Patrick J. McGovern Foundation, shared. "What are the institutions we have to build and support that protect all of our interests in an AI-enabled age?"

To lead by example, the ACLU has a cross-functional working group of experts representing many business functions within the organization, which carries out a holistic review process when adopting generative AI tools to determine whether a tool aligns with ACLU values. This approach helps ensure that innovation does not leave groups of people behind.

CRiDA experts, like Dhar, also explored why innovation should not be rushed. Technology developers and leaders must take the time to be curious, learn, and collaborate on AI systems' design, deployment, and evaluation, and to carefully weigh critical issues such as AI's impact on the environment and privacy. While AI is an exciting frontier, organizations can employ tools, such as vendor questionnaires, to identify and understand the risks of specific AI tools and their alignment with an organization's values related to privacy, fairness, transparency, and accountability.

We Must Protect Our Privacy and Data in the Age of AI

Panelists also discussed a crucial question: What laws regulate facial recognition?

"Not enough," Nathan Freed Wessler, deputy director of the ACLU Speech, Privacy, and Technology Project, replied. "In a lot of parts of the country, there's actually no law, nothing from Congress, and most states haven't acted. But there are places that are real leaders...cities have actually banned police from using face recognition technology because it's so dangerous."

CRiDA experts also explored how AI systems today are trained on vast amounts of data, including personal, academic, and behavioral information, often without the consent of the individuals behind that data. The threat doesn't stop at passive data usage. AI-powered surveillance systems, whether facial recognition or predictive policing, are trained on prejudiced data and used in ways that disproportionately target communities of color, further embedding discrimination into our social reality.

AI risks not only reflecting but also intensifying the structural racism and discrimination woven into the systems around us.

ACLU CRiDA Summit panel members Ijeoma Mbamalu, Vilas Dhar, and Deborah Archer.

Credit: ACLU

This only amplifies the need for transparency and accountability when adopting AI systems. In practice, transparency for organizations considering AI can include requiring vendors to provide details on the sources of the data used to develop their systems, their measures for assessing risks of bias and discrimination, and any guardrails they have implemented to measure and address those risks. These questions are not only rooted in the need for transparency; they are also critical for maintaining equity and fairness.
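As a minimal illustration, here is one way an organization might encode those vendor transparency questions as a reviewable checklist in Python. The question wording and the VendorQuestion and flag_gaps names are hypothetical, a sketch of the practice described above rather than the ACLU's actual review instrument.

```python
from dataclasses import dataclass

@dataclass
class VendorQuestion:
    """One transparency question posed to an AI vendor."""
    topic: str        # e.g. "data sources", "bias assessment", "guardrails"
    question: str     # the question sent to the vendor
    answer: str = ""  # vendor's response, empty until received

# Hypothetical checklist mirroring the transparency areas described above.
QUESTIONNAIRE = [
    VendorQuestion("data sources",
                   "What data sources were used to develop the system?"),
    VendorQuestion("bias assessment",
                   "How do you assess risks of bias and discrimination?"),
    VendorQuestion("guardrails",
                   "What guardrails measure and address those risks?"),
]

def flag_gaps(questionnaire: list[VendorQuestion]) -> list[str]:
    """Return the topics a vendor has not yet answered."""
    return [q.topic for q in questionnaire if not q.answer.strip()]

if __name__ == "__main__":
    # Simulate receiving one answer, then flag what remains unanswered.
    QUESTIONNAIRE[0].answer = "Licensed and public corpora, documented in a datasheet."
    print("Unanswered topics:", flag_gaps(QUESTIONNAIRE))
    # -> Unanswered topics: ['bias assessment', 'guardrails']
```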

When developing CRiDA, one of the ACLU's goals was to highlight the privacy implications of modern AI systems and the vast amounts of personal data used to power them. Together, experts and leaders across the board agreed that we all need transparency about how and when our data is used.

Ensuring AI Does Not Deepen the Digital Divide

AI has entered everyday life in a variety of ways, from the classroom to the hiring process. We understand that AI can be an asset to an organization's work if implemented thoughtfully and responsibly. For example, if designed and governed carefully, with appropriate guardrails, AI systems could support critical educational and economic opportunities. But at the same time, AI systems can have the opposite impact: when they are not designed and deployed carefully, we risk exacerbating the racial wealth gap and harming rather than helping marginalized communities.

To build a future of tech based on fairness and equity, CRiDA experts such as Deborah Archer, president of the ACLU, called on AI developers not only to include civil society leaders in the room when developing AI, but also to expand diversity, equity, and inclusion; invest in causes that seek to close the digital divide; and fund training that equips marginalized communities to build technology, ensuring the next generation has an equal opportunity to succeed.

"It's not just enough to say, 'We want diverse people in the room,' and 'We want diverse people to do this work' if we're not also doing the work to make sure that all of those people have access to the education, the resources, the opportunities, and the networks that equip them to do the work and then put them in the spaces to take advantage of the opportunities," said Archer. "So, it is connected to all the other work the ACLU is doing, and other people are doing, around diversity, equity, and inclusion."

The ACLU has fought both in courts and in communities to address and remedy AI's systemic harms. We must meet the moment and urge tech, political, and civil society leaders to keep civil rights at the center of AI innovation.

Policies Centering Civil Liberties and Civil Rights in the Digital Age

The future of AI depends on a commitment from our leaders to ensure that AI aligns with the core principles and liberties that our Constitution envisions. The House of Representatives passed H.R. 1, the so-called One Big Beautiful Bill Act, in May. Earlier versions of the bill included a moratorium on state and local laws regulating AI. Luckily, with the support of everyone who called on Congress, the provision was taken out.

On the heels of this vote, experts and leaders at the ACLU's CRiDA Summit highlighted how, as AI gains power, now is the time to push for guardrails protecting civil liberties.

"Congress is trying to stop states from passing legislation and other regulations to protect us from AI through a tool called preemption," Cody Venzke, senior policy counsel with the ACLU's National Political Advocacy Division, shared with leaders. "You can keep it out of any future legislation by reaching out to your representatives and senators and telling them: No AI moratoriums and no preemption of state laws."

The ACLU will continue to fight for responsible AI design and deployment, in the courtroom and in Congress. Together with peer organizations, innovators, and advocates, we will continue to use our collective power to protect the digital rights of everyone, especially people from marginalized communities. That work must be a priority for all of us, including the institutions and developers building these technologies.
