Silicon Valley’s influence is most apparent among computer science professors at such top-tier schools as Berkeley, University of Toronto, Stanford and MIT. According to a 2021 paper by University of Toronto and Harvard researchers, most tenure-track professors in computer science at those schools whose funding sources could be determined had taken money from the technology industry, including nearly 6 of 10 scholars of AI.
The proportion rose further in certain controversial subjects, the study found. Of the 33 professors with traceable funding who wrote on AI ethics for the top journals Nature and Science, for example, all but one had taken grant money from the tech giants or had worked as their employees or contractors.
Academics say they are increasingly dependent on tech companies to access the large amounts of data required to study social behavior, including the spread of disinformation and hate speech. Both Meta and X, formerly Twitter, have reduced the flow of that data to researchers, requiring them to pay far more or negotiate special deals to obtain access.
This shifting power dynamic was thrust into the spotlight Monday with news that renowned disinformation researcher Joan Donovan had filed complaints with state and federal officials against Harvard University. Donovan claims that the personal connections of Meta executives — along with a mammoth $500 million grant for AI research — were behind her ouster this year from the Harvard Kennedy School. Harvard has denied that it was improperly influenced.
“Big Tech has played this game really successfully in the past decade,” said Lawrence Lessig, a Harvard Law School professor who previously founded Stanford’s Center for Internet and Society without raising money outside the university. “The number of academics who have been paid by Facebook alone is extraordinary.”
Most tech-focused academics say their work is not influenced by the companies, and the journals that publish their studies have ethics rules designed to ward off egregious interference. But in interviews, two dozen professors said that by controlling funding and access to data, tech companies wield “soft power,” slowing down research, sparking tension between academics and their institutions, and shifting the fields’ targets in small — but potentially transformative — ways.
“It’s subtle. It just sort of creeps in,” McGill University professor Taylor Owen said.
Owen had brushes with corporate power when Meta’s Canadian public policy head Kevin Chan joined the advisory board of McGill’s public policy school. Chan complained about the school publicizing Owen’s research, which was critical of the company, and suggested that Meta could fund intensive classes proposed for educating journalists. After Owen objected, the school turned that offer down. Chan did not reply to a request for comment.
While Meta didn’t dispute the Tech Transparency Project’s accounting of its grants, spokesman David Arnold said gifts to academic institutions are designed to “better understand our platforms’ impact” and that the “anti-tech organizations” supporting the Transparency Project also fund academic research.
“We of course want this research to be rigorous and unbiased,” Arnold said. “It would be wrong for us not to support external academic research and, in fact, the calls for us to do more of this are only growing.”
Many academics say the explosion of AI is accelerating ties between the industry and universities — normalizing a system where some stars of academia draw salaries from companies like Meta and Google while continuing to teach on campus.
“They pay for the research of the very people in a position to criticize them,” said Hany Farid, a University of California at Berkeley professor in computer science and the School of Information. “It’s what the oil and gas industry has done with climate change, and it’s what the tobacco companies did with cigarette research.”
Farid, who says he has taken money from most of the major companies, got $2 million from Meta in 2019 to study deepfakes and integrity in news posts on Facebook. But the following year, after he was critical of Meta in a media interview, he says that a company employee told him the social media giant was upset. Though Farid doesn’t think the conversation was intended to be menacing, it was an unwelcome reminder of who was paying the bills.
He walked away from the remainder of the funds, telling The Post there was “a disconnect between the research and the policy decisions.”
Farid said it was “highly unlikely” he would partner with the organization again. Meta said it disputed the characterization but declined to discuss the previously unreported rift.
‘This is why science exists’
Scholarship on the impact of technology ballooned after the 2018 Cambridge Analytica scandal and revelations that Russian operatives used social media to attempt to influence the U.S. presidential election. As public scrutiny of Facebook and other companies intensified, policymakers began to rely on academics for unvarnished information about the dangers of social media.
Laura Edelson, an assistant professor of computer science at Northeastern University, likened this scholarship to efforts to understand the automobile. “The early cars were wildly unsafe, and we needed to study them and figure out how to make them safer,” she said. “This is why science exists, so we can both have these important things but also make sure that society’s interests are well represented.”
However, academics, universities and government agencies have been overhauling, cutting back or ending disinformation research programs amid lawsuits and investigations by Republican regulators and conservative activists, who accuse them of colluding with tech companies to censor right-wing views.
The rapid growth of AI has triggered ever-closer relationships between companies and academics. This fall, the University of Cambridge used Google money to expand the work of the Centre for Human-Inspired Artificial Intelligence to advance AI research “for the benefit of humanity.”
Two leaders of Meta’s Fundamental AI Research team, Yann LeCun and Joelle Pineau, also hold positions at New York University and McGill, respectively. Geoffrey Hinton, often called the “godfather of AI,” taught at the University of Toronto while serving as Google’s top AI expert. Hinton said that he worked for Google only half-time for 10 years and that his university appointment “was mainly advising graduate students on theses they had already started.” LeCun and Pineau did not respond to requests for comment.
“We are proud to openly fund research on a range of important topics such as responsible AI,” said Google spokesperson José Castañeda. “We value the independence and integrity of researchers and their work, and we expect and require them to properly disclose their funding.”
Google was one of the first modern tech giants to fund university research, paying for 331 research papers on subjects relevant to its business interests between 2005 and 2017, according to the Tech Transparency Project. Often that funding was not disclosed, and the papers were circulated to policymakers and the media.
The most popular topic of those papers was antitrust law, peaking during the Federal Trade Commission’s antitrust investigation of Google.
Even grants from tech giants that come without restrictive requirements leave researchers worrying their funds might dry up. Before the 2016 election, Google began pouring millions of dollars into a coalition of academics and nonprofits called First Draft. The collective became one of the earliest voices on disinformation, publishing early research that sometimes impugned its biggest funder.
After contributing $4.5 million one year, Google cut its funding more than 90 percent the next, according to a person familiar with the effort. The group shut down in 2022.
“They never told us what we could or couldn’t publish, but I did wonder, if I come out with a terrible exposé, is that going to prevent us from getting money later?” said a coalition member who spoke on the condition of anonymity to discuss politically sensitive issues.
For scholars, tech company money is often hard to turn down. Funding can be hard to come by and is often limited to a narrow set of research interests.
“For most of the past 25 years the federal government has underfunded social-science research into the effects of digital technology,” University of Virginia professor Siva Vaidhyanathan said. “Foundations … have historically tended to avoid directly funding basic research. So for many years the tech companies were the only major source of research funding.”
Though he said he saw no evidence of bias in company-funded research, Vaidhyanathan added that the industry shapes “what gets promoted and emphasized.”
The American Association of University Professors has acknowledged scandals including economics professors paid by sellers of mortgage-backed securities who downplayed risks of such products before the 2008 economic collapse. In a 2014 book, the association noted that pressure was “mounting, even in the humanities and other traditional nonmarket disciplines, to become more commercially ‘relevant’ and to generate private revenue.”
It welcomed outside funding but urged faculty bodies to pay close attention, draft detailed rules and enforce them.
Tech companies are also curtailing access to the internal data many researchers have used for their work. Elon Musk has begun charging researchers thousands of dollars for previously free access to large volumes of posts on X, limiting the pool of academics who can study the platform effectively. In 2021, Meta, citing privacy concerns about the use of the data, disabled accounts associated with NYU’s Ad Observatory project, crippling the widely heralded initiative to research how political ads target users.
Meta bought and then reduced support for the social media tracking tool CrowdTangle, which academics use to analyze how specific ideas spread.
Joining up with tech companies to gain access to data comes with its own set of struggles. In 2018, Meta, then known as Facebook, announced Social Science One, a partnership with researchers to study the social network’s impact on elections. Facebook agreed to give scholars a set of web addresses shared by users to measure the flow of misinformation.
But Social Science One’s work was delayed when the company didn’t release the promised data, citing privacy concerns, and some funders pulled out. The researchers finally got the full data set in October 2021, three years after the project’s start.
In 2020, Meta tried again. Though Meta did not pay the academics, 10 of the 17 researchers selected had previously received research grants from the company or worked for it as consultants, the group disclosed. To protect against surprises, Meta employees pushed the academics to define in advance what would qualify as a major impact, said Michael W. Wagner, a University of Wisconsin journalism professor who served as an observer of the collaboration. The researchers agreed.
Yet researchers and Meta still clashed over how to interpret results.
The studies suggested that small experimental interventions, like making the Facebook news feed chronological, didn’t impact political polarization. Meta President of Global Affairs Nick Clegg touted the findings as part of “a growing body of research showing there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization or have meaningful effects on these outcomes.” In interviews, researchers said the results were a far cry from saying Meta did not increase divisions.
Samuel Woolley, a University of Texas misinformation expert, noted a clear trend in such conflicts. “There’s all of this momentum toward creating a systematic effort to study things,” Woolley said. “Promises get made, and then suddenly things seem to stop.”
Woolley, who studies how groups use propaganda, decided not to obtain company data for his 2018 book, “Computational Propaganda.” He described the process of cobbling together data from other sources as excruciating — “like assembling a patchwork quilt” — but necessary. “I had a realization early on that doing quantitative research in this space was always going to be a heartbreaking endeavor,” he said.
Harvard’s Lessig, who spent years heading a university center on ethics in society, is developing a system for academics to verify that their research is truly independent. He hopes to present the initiative, the Academic Integrity Project, to the American Academy of Arts and Sciences.
He is still looking for funding.