Bloomberg: Facebook’s Frustrated Critics Take Their Fight to Washington
Civil rights groups shift strategy toward legislation to clean up social media
After years of directly pressuring Facebook Inc. and other social media companies to rid their platforms of hate speech—with limited success—civil rights groups are shifting tack: They’re taking their fight to Washington.
Organizations such as Color of Change, the Anti-Defamation League and Common Sense Media are increasingly pushing Congress and the Biden administration to force tech companies to take more aggressive steps to moderate their sites for bigotry, misinformation, voter suppression and discrimination. Now that Democrats control the White House and a majority in the House and Senate, the groups say, they aim to force change through legislation, rather than just pleading with the companies.
Rashad Robinson, president of racial justice group Color of Change, and other civil rights leaders had regularly emailed and talked with Facebook Chief Executive Officer Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg over the past two years about the way the company handled misinformation, hate speech and Trump’s efforts to question the validity of mail-in ballots.
Zuckerberg even hosted civil rights advocates for a dinner at his Palo Alto, California, home in November 2019. But Robinson and others said their conversations haven’t led Facebook to make significant changes to its business because the company’s leadership failed to understand the civil rights issues at stake.
There is “nothing worse than to go and beg a billionaire to stop hurting us,” Robinson said.
Now, civil rights groups and other advocates are backing a flurry of proposed laws covering a range of issues. One bill would weaken tech companies’ legal protections if their platforms interfere with civil rights cases or facilitate harassment, and one would force companies to maintain public advertising databases. Another bill would block companies from selling users’ personal information to law enforcement without court oversight.
Since January, when a mob of Trump supporters stormed the U.S. Capitol after using social media to organize, the groups’ work has taken on heightened urgency. But as Robinson’s staff watched footage of the troubling riot on Jan. 6, he told them the shifting balance of power in Washington gave them more powerful options for holding the company accountable beyond just sending another email to Zuckerberg and Sandberg.
A special election in Georgia a day earlier had given Democrats the Senate majority, making Robinson and other groups suddenly more optimistic that meaningful legislation and regulation were possible, and that they could play a role in shaping it.
“The most predictable terror attack in recent history, in the United States maybe ever, was planned out in the open on Facebook,” said Lauren Krapf, counsel for the Anti-Defamation League, which is backing a number of bills. “I think that there is consensus that platforms cannot self-regulate. Self-regulation is not working.”
Her group was one of more than 130 that organized an effort to get Facebook to do more on its own—an advertiser boycott last year in the wake of widespread protests of racial injustice spurred by the shocking video of a police officer killing George Floyd. More than 1,000 companies paused advertising on Facebook, including Starbucks Corp., Levi Strauss & Co. and PepsiCo Inc., to protest the hate speech proliferating on the social media giant’s networks, according to the civil rights groups. Even celebrities lent their public support to the so-called Stop Hate for Profit campaign. The boycott took on a high profile but had scant impact on the company’s advertising revenue.
In response to the boycott, Facebook said it invests in technology and employees to remove the vast majority of hateful content before it’s reported by users. The boycott capped years of complaints from civil rights groups that Facebook executives would lend a sympathetic ear to their critiques—but resist any real changes that would help protect disadvantaged communities.
Now, a year later, there’s little evidence that the companies have significantly cleaned up their content, and advocacy groups say their cause is even more exigent. Media reports illustrated how far-right communities and Trump supporters were circulating election conspiracy theories, planning the attack at the Capitol and openly discussing violence on sites such as Parler, 4chan, Gab and Facebook.
Brenda Castillo, head of the National Hispanic Media Coalition, said Facebook has an accountability problem in part because Zuckerberg is the president, CEO, chairman of the board and owner of a controlling share of the company’s stock.
Castillo said the problem is urgent and needs to be addressed with some structural changes like a digital agency to oversee internet companies. “Not only are people dying, and people of color are dying, but our democracy is at stake.”
The biggest social media companies have rules against discrimination in advertisements, white supremacist content and falsehoods on sensitive topics such as elections and Covid-19. Facebook, YouTube and Twitter often rely on algorithms and human reviewers to detect posts that may break their rules. They also try to apply labels to misleading posts, reduce the spread of conspiracy theories and penalize repeat offenders.
Facebook said in a statement that it has a team that engages with civil rights groups to review its policies and processes. Over the last year, the company said, it has banned Holocaust denial and militarized social movements, and taken down tens of thousands of QAnon pages, groups and accounts.
“We hold ourselves accountable through regular reports to the public on the progress we’re making and areas where we can improve,” the company said.
Google, which owns YouTube, said that after updating its hate speech policy in 2019, the company has been increasingly removing channels and comments on the video-sharing platform that violate its rules. Late last year, Google created a Human Rights Executive Council made up of senior leaders to oversee the company’s approach to civil rights.
Twitter said it plans to coordinate closely with civil rights advocacy organizations and government leaders in improving its platform. “We’re committed to examining and iterating on our own policies, building trust, and doing this work out in the open,” the company said in a statement.
Despite the companies’ rules, problematic content often still ripples across the platforms quickly. Sindy Benavides, head of the League of United Latin American Citizens, said Facebook’s business practices are truly a matter of life and death, citing incidents like the one in 2019 when a shooter killed 23 people, targeting Latinos in El Paso, Texas. The gunman’s online manifesto referenced another mass shooting targeting Muslims that had been live-streamed on Facebook.
Benavides’s group is supporting three bills, including the Protecting Americans from Dangerous Algorithms Act, the Social Media Data Act and the SAFE TECH Act. LULAC has also signed on to a coalition advocating for the end of what they call surveillance advertising—the practice of tracking all kinds of user activity to sell micro-targeted ads.
Representative Tony Cardenas, a Democrat from California, said he has met several times with Sandberg and spoken with her by phone since he first joined Congress eight years ago.
“I’m very pleased with what they’ve been telling me,” Cardenas said, “but I’m very disappointed that it’s been years now and the progress that they’ve made and the promises— I’m not going to call them the commitments yet—the promises they’ve made have not been consistent with their actions.”
Cardenas and others said the content moderation that Facebook does in English far outpaces the nearly nonexistent protections from misinformation in Spanish. Benavides said the first step to addressing this problem is having more diversity, including Spanish-speaking Latinos, in the company’s highest corporate ranks and on its board.
To press their case, advocates have been contacting key Democratic offices including members of the Tri-Caucuses, a group of lawmakers from minority groups; Rhode Island Representative David Cicilline, who conducted a probe of the tech industry; New Jersey Senator Cory Booker; and Virginia Senator Mark Warner, among others.
Legislative fixes have to stay within the parameters that are set by current free speech and legal liability rules, experts say. Under the First Amendment, the government can’t compel tech companies to remove or leave up user-generated posts that are offensive to many but ultimately legal. Section 230 of the 1996 Communications Decency Act protects social media platforms from lawsuits over user-generated content.
Tech companies have largely opposed changes to the law because they fear the proliferation of lawsuits will force them to shut down user-generated content and stifle innovation in the space, though Zuckerberg and Twitter CEO Jack Dorsey have expressed openness to Section 230 reforms.
There are several measures already introduced in Congress that civil rights advocates are supporting. Some of these include:
- The SAFE TECH Act, S.299, proposed by Democratic Senators Warner, Mazie Hirono of Hawaii and Amy Klobuchar of Minnesota, seeks to ensure the Section 230 legal liability shield doesn’t interfere with harassment laws, civil rights laws, international human rights laws, and victims’ legal claims in wrongful death cases.
- Senator Edward Markey and Representative Doris Matsui, both Democrats, have proposed the Algorithmic Justice and Online Platform Transparency Act of 2021, S.1896, which would prohibit online platforms from using algorithms that discriminate on the basis of race, age, gender or other sensitive characteristics. The bill would also require tech companies to explain their algorithms’ privacy practices to the Federal Trade Commission.
- New York Representative Yvette Clarke has introduced the Civil Rights Modernization Act of 2021, H.R. 3184, which would clarify that Section 230 doesn’t excuse platforms from enforcement of civil rights laws on targeted advertising.
Even with Democrats narrowly controlling both chambers of Congress, the bills face an uphill battle. They would need to attract at least 10 Republicans to get around the Senate’s filibuster rules, and there is little bipartisan agreement about the best way to regulate big technology companies. Republicans are more likely to be concerned that Democrats’ efforts to force companies to embrace tougher content moderation might lead to biased outcomes against conservatives on social media platforms. On Friday, a handful of House Republicans formed a new “Freedom from Big Tech Caucus” to fight what they see as social media companies’ censorship of right-wing users, among other ills.
Still, tech accountability activists are cheered by the heightened attention in Washington on the industry’s business practices, from both Democrats and Republicans. And Jim Steyer, head of Common Sense Media, a nonprofit focused on kids and families, said he has personally spoken with President Joe Biden about needing stronger tech regulation.
“We do not believe they’re going to change voluntarily,” Steyer said of Facebook. “The only way to force change is through regulatory pressure and through public shaming.”
Robinson said he and other racial justice advocates understand the political realities. His group plans to continue calling on the tech companies to change while pushing for more concrete action from Congress.
“Now there is a path,” Robinson said. “It’s a tough path. It’s a path cluttered with tree stumps and briar patches—but it’s a path.”
By: Anna Edgerton
Source: Bloomberg