Deepfake – Artifex.News

Australia To Outlaw Sharing Deepfake Pornography Without Consent
Sun, 02 Jun 2024 00:13:35 +0000


The law will be introduced to parliament in the coming week. (Representational)

Sydney:

Australia’s government has announced new legislation making it a criminal offence to share deepfake pornographic images of people without their consent.

The law, to be introduced to parliament in the coming week, would bring in jail sentences of up to six years for sharing non-consensual deepfake pornography.

The penalty rises to seven years if the offender also created the material.

“Digitally created and altered sexually explicit material that is shared without consent is a damaging and deeply distressing form of abuse”, Attorney General Mark Dreyfus said in a statement late Saturday.

“We know it overwhelmingly affects women and girls who are the target of this kind of deeply offensive and harmful behaviour. It can inflict deep, long-lasting harm on victims.”

The new criminal offence would only apply to adults since children are already protected under separate child abuse legislation.

Countries around the world are grappling with the spread of deepfake pornography — digitally created sexually explicit material, usually generated with artificial intelligence.

In April, Britain said it would criminalise the creation of sexually explicit deepfake images without consent, with plans for unlimited fines and even jail if the image is widely shared.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

Mumbai Police Registers Case Against Unnamed Person
Wed, 17 Apr 2024 19:53:18 +0000


In the purported 27-second clip, Aamir Khan could be seen talking about staying away from rhetoric.

Mumbai:

The Mumbai Police on Wednesday registered an FIR against an unnamed person in connection with a deepfake video of actor Aamir Khan in which he was seen promoting a political party, officials here said.

The FIR was filed at the Khar Police station by Mr Khan's office under relevant sections of the Indian Penal Code (IPC), including 419 (impersonation) and 420 (cheating), as well as sections of the Information Technology Act.

In the purported 27-second clip, which seems to have been edited using artificial intelligence (AI), Mr Khan could be seen talking about staying away from rhetoric (jumla).

A spokesperson for the actor had said on Tuesday that while Mr Khan has raised awareness through Election Commission campaigns over the years, he has never promoted any political party.

The disputed deepfake video inserts Mr Khan into a scene from a decade-old episode of his television show, ‘Satyamev Jayate’.

“We want to clarify that Mr Aamir Khan has never endorsed any political party throughout his 35-year career. He has dedicated his efforts to raising awareness through Election Commission public awareness campaigns for many past elections,” Mr Khan’s spokesperson had said in a statement.

“We are alarmed by the recent viral video alleging that Aamir Khan is promoting a particular political party. He would like to clarify that this is a fake video and totally untrue. He has reported the matter to various authorities related to this issue, including filing an FIR with the Cyber Crime Cell of the Mumbai Police,” Mr Khan’s spokesperson said in a statement.




Woman Finds Her Deepfake Pics On Porn Site
Fri, 05 Apr 2024 07:48:30 +0000


The woman said she felt her “whole world fall away”.

A woman in the UK has described the moment she found out who had been creating deepfake pornographic photos of her. She spoke to the BBC, which identified her only as Jodie to protect her identity. The woman said she was sent a link to a porn website from an anonymous email account. When she clicked on the link, she saw mocked-up images and a video of her appearing to have sex with various men. They had been fabricated using artificial intelligence (AI)-driven technology.

“I was screaming and crying and violently scrolling through my phone to work out what I was reading and what I was looking at. I knew that this could genuinely ruin my life,” the woman told the BBC.

Deepfaking is a process that involves projecting a person's face onto someone else's using computer editing software. It can often produce convincing, life-like clips that are then used to spread disinformation or malicious content.

The person who had posted the deepfake images on the website asked other users to make fake pornography of her. In exchange for the fakes, the user offered to share more photos of the woman and details about her.

Scrolling through the website, she felt her “whole world fall away”.

The incident happened in 2021, and the woman faced years of online harassment at the hands of strangers. Her picture was used on several social media platforms, including Reddit.

Along with a friend, the woman decided to compile a list of men who could have been responsible for spreading her deepfake pictures. Then one particular picture caught her attention, and she made a horrible realisation.

The photo had an image of King’s College, Cambridge. She clearly remembered it being taken and sharing it with only one person – her best friend Alex Woolf.

Woolf went on to get a double first in music from Cambridge University and won BBC Young Composer of the Year 2012, as well as appearing on Mastermind in 2021.

They had bonded over their love of music in their teens, and she said she had always known him as a person who was sympathetic to struggles faced by women online.

It was Woolf who had been offering to share more original pictures of the woman in exchange for them being turned into deepfakes.

“He knew the impact that it was having on my life so profoundly. And yet he still did it,” she said while speaking to the BBC.

In August 2021, Woolf was convicted of taking photos of 15 women from social media and uploading them to pornographic websites. He was given a 20-week prison sentence, and ordered to pay each of his victims 100 pounds as compensation.

Woolf told the BBC he is “utterly ashamed” of the behaviour which led to his conviction and he is “deeply sorry” for his actions.

“There are no excuses for what I did, nor can I adequately explain why I acted on these impulses so despicably at that time,” he said.

For Jodie, finding out what her friend had done was the “ultimate betrayal and humiliation”.

2024 US Presidential Contest Could Be First “AI Election”, Warns CEO
Thu, 04 Apr 2024 12:05:52 +0000


The AI boss predicted foreign powers are likely to get involved.

In an era where technology blurs the lines between reality and fiction, the spectre of deepfakes created by artificial intelligence (AI) tools casts a long shadow over the democratic process. With the upcoming US presidential election looming, the spread of AI deepfakes poses a grave threat to the integrity of the electoral system. And the CEO of a prominent AI company has suggested that the technology could “threaten democracy” in the US unless it is controlled. Many prominent voices, including Elon Musk, have raised concerns about the rapid spread of AI, and Simona Vasyte is the latest to join them.

“We’ve seen an AI-generated video of Ukrainian President Volodymyr Zelensky ‘asking’ Ukrainian soldiers to lay down their weapons. It’s entirely possible to see similar generated videos of presidential candidates right before the election,” Ms Vasyte, the head of Perfection42, told Newsweek.

“Another tactic might be to encourage youth voters not to vote in the election with fake videos on TikTok… If the election is close, it might become a determining factor, and that is definitely concerning,” she added.

The AI boss predicted foreign powers are likely to get involved.

The incident Ms Vasyte talked about happened in March 2022, when an unknown group – believed to be hackers – uploaded a deepfake video onto a Ukrainian news website of Mr Zelensky telling his soldiers to stop fighting. It was debunked, but the news renewed concerns about how deepfakes could be used to influence politics.

Ms Vasyte pointed towards Russia, hinting it might interfere with the election using AI.

“It’s important to note that AI can be used not only for malicious but also for positive content generation, so in that case, it might be the first major ‘AI election’ where both parties will try to keep up with the changing game of campaigning,” she said while speaking to Newsweek.

“The closer the race, the bigger the impact of AI will be during the election period,” Ms Vasyte added.

The tech entrepreneur called for strong legislation to check the rapid spread of deepfakes.

S Jaishankar Warns Of AI, Deepfake Threat
Sat, 02 Mar 2024 18:07:53 +0000


New Delhi:

External Affairs Minister S Jaishankar on Saturday warned against risks that new technologies like artificial intelligence and deepfakes pose for national security, and said attempts at foreign interference through the cyber domain are growing.

In an interactive session at a think-tank, Jaishankar said there is a need to guard against threats emanating from the cyber domain.

“When we think of security, it is not just the defence of the borders, it is not countering terrorism alone…. But there is the daily routine which is so susceptible today to manipulation and this is growing,” he said.

“I would say frankly, in many ways, today foreign interference in this country is growing. It is important for the average person to understand how the world is changing because it is an era of AI (artificial intelligence) and deepfakes,” Jaishankar said.

The external affairs minister was speaking at the Ananta Aspen Centre.

“They will not come out of thin air. They are today at a certain level. There was a whole culture and a process which has allowed it to happen,” he said.

Jaishankar was asked whether India is becoming a surveillance state, given that the range of security threats facing the ordinary Indian has expanded exponentially.

“It is not a question of being paranoid. I mean, there are real problems out there. It is not again a question of surveillance. There is a certain responsibility that the state has. Let us not confuse anarchy and irresponsibility with freedom,” he said.



