Meta Shuts Down Misinformation Monitoring Tool In Poll Year
Artifex.News, Mon, 01 Apr 2024


Washington:

CrowdTangle, a digital tool considered vital in tracking viral falsehoods, will be decommissioned by Facebook owner Meta in a major election year, a move researchers fear will disrupt efforts to detect an expected firehose of political misinformation.

The tech giant says CrowdTangle will be unavailable after August 14, less than three months before the US election. The Menlo Park company plans to replace it with a new tool that researchers say lacks the same functionality, and to which news organizations will largely not have access.

For years, CrowdTangle has been a game-changer, offering researchers and journalists crucial real-time transparency into the spread of conspiracy theories and hate speech on influential Meta-owned platforms, including Facebook and Instagram.

Killing off the monitoring tool, a move experts say is in line with a tech industry trend of rolling back transparency and security measures, is a major blow as dozens of countries hold elections this year — a period when bad actors typically spread false narratives more than ever.

“In a year where almost half of the global population is expected to vote in elections, cutting off access to CrowdTangle will severely limit independent oversight of harms,” Melanie Smith, director of research at the Institute for Strategic Dialogue, told AFP.

“It represents a grave step backwards for social media platform transparency.”

Meta is set to replace CrowdTangle with a new Content Library, a technology still under development.

It’s a tool that some in the tech industry, including former CrowdTangle chief executive Brandon Silverman, said is currently not an effective replacement, especially in elections likely to see a proliferation of AI-enabled falsehoods.

“It’s an entire new muscle” that Meta is yet to build to protect the integrity of elections, Silverman told AFP, calling for “openness and transparency.”

‘Direct threat’

In recent election cycles, researchers say CrowdTangle alerted them to harmful activities including foreign interference, online harassment and incitements to violence.

Meta, which bought CrowdTangle in 2016, has itself acknowledged that during Louisiana's 2019 elections the tool helped state officials identify misinformation, such as inaccurate poll hours that had been posted online.

In the 2020 presidential vote, the company offered the tool to US election officials across all states to help them “quickly identify misinformation, voter interference and suppression.”

The tool also made dashboards available to the public to track what major candidates were posting on their official and campaign pages.

Lamenting the risk of losing these functions forever, global nonprofit Mozilla Foundation demanded in an open letter to Meta that CrowdTangle be retained at least until January 2025.

“Abandoning CrowdTangle while the Content Library lacks so much of CrowdTangle’s core functionality undermines the fundamental principle of transparency,” said the letter signed by dozens of tech watchdogs and researchers.

The new tool lacks CrowdTangle features including robust search flexibility and decommissioning it would be a “direct threat” to the integrity of elections, it added.

Meta spokesperson Andy Stone said the letter’s claims are “just wrong,” insisting the Content Library will contain “more comprehensive data than CrowdTangle” and be made available to academics and non-profit election integrity experts.

‘Lot of concerns’

Meta, which has been moving away from news across its platforms, will not make the new tool accessible to for-profit media.

Journalists have used CrowdTangle in the past to investigate public health crises as well as human rights abuses and natural disasters.

Meta’s decision to cut off journalists comes after many used CrowdTangle to report unflattering stories, including its flailing moderation efforts and how its gaming app was overrun with pirated content.

CrowdTangle has been a crucial source of data that helped “hold Meta accountable for enforcing its policies,” Tim Harper, a senior policy analyst at the Center for Democracy & Technology, told AFP.

Organizations that debunk misinformation as part of Meta’s third-party fact-checking program, including AFP, will have access to the Content Library.

But other researchers and nonprofits will have to apply for access or look for expensive alternatives. Two researchers, speaking to AFP on condition of anonymity, said they had pressed Meta officials for firm commitments in one-on-one meetings.

“While most fact-checkers already working with Meta will have access to the new tool, it’s not super clear if many independent researchers — already worried about losing CrowdTangle’s functionality — will,” Carlos Hernandez-Echevarria, head of the Spanish nonprofit Maldita, told AFP.

“It has generated a lot of concerns.”

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)


Big tech told to identify AI deepfakes ahead of EU vote
Artifex.News, Tue, 26 Mar 2024


The European Commission has issued a set of guidelines for digital giants to tackle risks to elections including disinformation. File | Photo Credit: AFP

The EU called on Facebook, TikTok and other tech titans on March 26 to crack down on deepfakes and other AI-generated content by using clear labels ahead of Europe-wide polls in June.

The recommendation is part of a raft of guidelines published under a landmark content law by the European Commission for digital giants to tackle risks to elections including disinformation. The EU executive body has unleashed a string of measures to clamp down on big tech, especially regarding content moderation.

Its biggest tool is the Digital Services Act (DSA) under which the bloc has designated 22 digital platforms as “very large” including Instagram, Snapchat, YouTube and X.

There has been feverish excitement over artificial intelligence since OpenAI’s ChatGPT arrived on the scene in late 2022, but the EU’s concerns over the technology’s harms have grown in parallel.

Brussels especially fears the impact of Russian “manipulation” and “disinformation” on elections taking place in the bloc’s 27 member states on June 6-9.

In the new guidelines, the Commission said the largest platforms “should assess and mitigate specific risks linked to AI, for example by clearly labelling content generated by AI (such as deepfakes)”.

It recommended that big platforms promote official information on elections and “reduce the monetisation and virality of content that threatens the integrity of electoral processes” to diminish any risks.

“With today’s guidelines we are making full use of all the tools offered by the DSA to ensure platforms comply with their obligations and are not misused to manipulate our elections, while safeguarding freedom of expression,” said the EU’s top tech enforcer, Thierry Breton.

While the guidelines are not legally binding, platforms must explain what other “equally effective” measures they are taking to limit the risks if they do not adhere to them.

The EU can ask for more information and if regulators do not believe there is full compliance, they can hit the firms with probes that could lead to hefty fines.

‘Trusted’ information

Under the new guidelines, the Commission also said political advertising “should be clearly labelled as such” before a tougher law on the issue comes into force in 2025. It also urges platforms to have mechanisms “to reduce the impact of incidents that could have a significant effect on the election outcome or turnout”. The EU will conduct “stress-tests” with relevant platforms in late April, it said.

X has already been under investigation since December over content moderation.

The Commission pressed Facebook, Instagram, TikTok and four other platforms on March 14 to provide more information on how they are countering AI risks to polls.

In the past few weeks, several of the companies including Meta have outlined their plans.

TikTok has announced further measures, including push notifications from April that will direct users to "trusted and authoritative" information about the June vote.

TikTok has around 142 million monthly active users in the EU — and is increasingly used as a source of political information among young people.



