Meta oversight board tells company to clean up rules on AI-generated pornography

Meta’s Oversight Board on Thursday said the company’s rules were “not sufficiently clear” in barring sexually explicit AI-generated depictions of real people and called for changes to stop such imagery from circulating on its platforms.

The board, which is funded by the social media giant but operates independently, issued its ruling after reviewing two pornographic fakes of famous women created using artificial intelligence and posted on Meta’s Facebook and Instagram.

Meta said it would review the board’s recommendations and provide an update on any changes adopted.

In its report, the board identified the two women only as female public figures from India and the United States, citing privacy concerns.

The board found both images violated Meta’s rule barring “derogatory sexualized photoshop,” which the company classifies as a form of bullying and harassment, and said Meta should have removed them promptly.

In the case involving the Indian woman, Meta failed to review a user report of the image within 48 hours, prompting the ticket to be closed automatically with no action taken.

The user appealed, but the company again declined to act and reversed course only after the board took up the case, the report said.

In the American celebrity’s case, Meta’s systems automatically removed the image.

“Restrictions on this content are legitimate,” the board said. “Given the severity of harms, removing the content is the only effective way to protect the people impacted.”

The board recommended Meta update its rule to clarify its scope, saying, for example, that use of the word “photoshop” is “too narrow” and the prohibition should cover a broad range of editing techniques, including generative AI.

The board also slammed Meta for declining to add the Indian woman’s image to a database that enables automatic removals like the one that occurred in the American woman’s case.

According to the report, Meta told the board it relies on media coverage to determine when to add images to the database, a practice the board called “worrying.”

“Many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance,” the board said.

(Reuters)
