artificial inteligence – Artifex.News https://artifex.news Stay Connected. Stay Informed. Wed, 17 Apr 2024 19:53:18 +0000 en-US hourly 1 https://wordpress.org/?v=6.5.5 https://artifex.news/wp-content/uploads/2023/08/cropped-Artifex-Round-32x32.png artificial inteligence – Artifex.News https://artifex.news 32 32 Mumbai Police Registers Case Against Unnamed Person https://artifex.news/aamir-khan-deepfake-video-mumbai-police-registers-case-against-unnamed-person-5465128rand29/ Wed, 17 Apr 2024 19:53:18 +0000 https://artifex.news/aamir-khan-deepfake-video-mumbai-police-registers-case-against-unnamed-person-5465128rand29/ Read More “Mumbai Police Registers Case Against Unnamed Person” »

]]>

In the purported 27-second clip, Amir Khan could be seen talking about staying away from rhetoric.

Mumbai:

 The Mumbai Police Wednesday registered an FIR against an unnamed person in connection with a deepfake video of actor Aamir Khan in which he was seen promoting a political party, official here said.

The FIR was filed at the Khar Police station by Mr Khan’s office under relevant sections of the Indian Penal Code (IPC), including 419 (impersonation), 420 (cheating) and other sections of the Information Technology Act.

In the purported 27-second clip, which seems to have been edited using artificial intelligence (AI), Mr Khan could be seen talking about staying away from rhetoric (jumla).

A spokesperson for the actor had said on Tuesday that while Mr Khan has in the past raised awareness through Election Commission campaigns through the years, he never promoted any political party.

The disputed deepfake video inserts Mr Khan into a scene from a decade-old episode of his television show, ‘Satyamev Jayate’.

“We want to clarify that Mr Aamir Khan has never endorsed any political party throughout his 35-year career. He has dedicated his efforts to raising awareness through Election Commission public awareness campaigns for many past elections,” Mr Khan’s spokesperson had said in a statement.

“We are alarmed by the recent viral video alleging that Aamir Khan is promoting a particular political party. He would like to clarify that this is a fake video and totally untrue. He has reported the matter to various authorities related to this issue, including filing an FIR with the Cyber Crime Cell of the Mumbai Police,” Mr Khan’s spokesperson said in a statement.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)



Source link

]]>
China Plans To Use AI To Disrupt Elections In India, US: Microsoft https://artifex.news/lok-sabha-elections-2024-china-plans-to-use-ai-to-disrupt-elections-in-india-us-microsoft-5385407rand29/ Sat, 06 Apr 2024 04:15:39 +0000 https://artifex.news/lok-sabha-elections-2024-china-plans-to-use-ai-to-disrupt-elections-in-india-us-microsoft-5385407rand29/ Read More “China Plans To Use AI To Disrupt Elections In India, US: Microsoft” »

]]>

During the Taiwanese election, a Beijing-backed group was notably active, Microsoft said.

New Delhi:

Microsoft has warned that China is gearing up to disrupt the upcoming elections in India, the United States and South Korea by using artificial intelligence-generated content. The warning comes after China conducted a trial run during Taiwan’s presidential election, employing AI to influence the outcome.

Across the world, at least 64 countries, in addition to the European Union, are expected to hold national elections. These countries collectively account for approximately 49 per cent of the global population.

According to Microsoft’s threat intelligence team, Chinese state-backed cyber groups, along with involvement from North Korea, are expected to target several elections scheduled for 2024. Microsoft said that China will likely deploy AI-generated content via social media to sway public opinion in favour of their interests during these elections.

“With major elections taking place around the world this year, particularly in India, South Korea and the United States, we assess that China will, at a minimum, create and amplify AI-generated content to benefit its interests,” Microsoft said in its statement.

Threat Of AI In Elections

The threat posed by political advertisements employing AI technology to produce deceptive and false content, including “deepfakes” or fabricating events that never took place, is significant in a crucial poll year. Such tactics aim to mislead the public regarding candidates’ statements, stances on various issues, and even the authenticity of certain events. If allowed to go unchecked, these manipulative attempts have the potential to undermine voters’ ability to make well-informed decisions.

While the immediate impact of AI-generated content remains relatively low, Microsoft warned that China’s increasing experimentation with this technology could potentially become more effective over time. The tech giant noted that China’s previous attempt to influence Taiwan’s election involved the dissemination of AI-generated disinformation, marking the first instance of a state-backed entity utilising such tactics in a foreign election.

Latest and Breaking News on NDTV

During the Taiwanese election, a Beijing-backed group known as Storm 1376, or Spamouflage, was notably active, Microsoft said. This group circulated AI-generated content, including fake audio endorsements and memes, aimed at discrediting certain candidates and influencing voter perceptions. The use of AI-generated TV news anchors, is a tactic also employed by Iran.

“Storm-1376 has promoted a series of AI-generated memes of Taiwan’s then-Democratic Progressive Party (DPP) presidential candidate William Lai, and other Taiwanese officials as well as Chinese dissidents around the world. These have included an increasing use of AI-generated TV news anchors that Storm-1376 has deployed since at least February 2023,” Microsoft said.

AI Influence In US Affairs

Microsoft pointed out that Chinese groups continue to conduct influence campaigns in the United States, leveraging social media platforms to pose divisive questions and gather intelligence on key voting demographics.

“There has been an increased use of Chinese AI-generated content in recent months, attempting to influence and sow division in the US and elsewhere on a range of topics including the train derailment in Kentucky in November 2023, the Maui wildfires in August 2023, the disposal of Japanese nuclear wastewater, drug use in the US as well as immigration policies and racial tensions in the country. There is little evidence these efforts have been successful in swaying opinion,” Microsoft stated.

Latest and Breaking News on NDTV

The use of AI in US election campaigns is not new. In the lead-up to the 2024 New Hampshire Democratic primaries, an AI-generated phone call mimicked President Joe Biden’s voice, advising voters against participating in the polling.

The call falsely insinuated that voters should withhold their votes for the general election in November instead. Upon hearing this message, the average voter could easily have been misled into believing that President Biden himself had endorsed this directive, potentially leading to their disenfranchisement.

Although there is no evidence of Chinese involvement in the New Hampshire episode, the incident marks one of many such instances where AI posed a direct threat to democratic practices.

Road Ahead For India

India’s general elections are scheduled to begin on April 19, with the results set to be declared on June 4. The electoral process will unfold across seven phases, with the first phase commencing on April 19, followed by the second phase on April 26, the third phase on May 7, the fourth phase on May 13, the fifth phase on May 20, the sixth phase on May 25, and culminating with the seventh phase on June 1.

The current term of the 17th Lok Sabha Assembly is set to conclude on June 16.

Latest and Breaking News on NDTV

The Election Commission of India (ECI) has already provided guidelines and protocols for promptly identifying and responding to false information and misinformation.

Last month, representatives from OpenAI, the developer of ChatGPT, met with members of the ICI and delivered a presentation to the commission members outlining the measures being undertaken to prevent the misuse of AI in the upcoming elections.



Source link

]]>
Microsoft Reveals How China Plans To Disrupt Indian Elections Using AI https://artifex.news/lok-sabha-elections-2024-china-plans-to-use-ai-to-disrupt-elections-in-india-us-microsoft-5385407/ Sat, 06 Apr 2024 04:15:39 +0000 https://artifex.news/lok-sabha-elections-2024-china-plans-to-use-ai-to-disrupt-elections-in-india-us-microsoft-5385407/ Read More “Microsoft Reveals How China Plans To Disrupt Indian Elections Using AI” »

]]>

During the Taiwanese election, a Beijing-backed group was notably active, Microsoft said.

New Delhi:

Microsoft has warned that China is gearing up to disrupt the upcoming elections in India, the United States and South Korea by using artificial intelligence-generated content. The warning comes after China conducted a trial run during Taiwan’s presidential election, employing AI to influence the outcome.

Across the world, at least 64 countries, in addition to the European Union, are expected to hold national elections. These countries collectively account for approximately 49 per cent of the global population.

According to Microsoft’s threat intelligence team, Chinese state-backed cyber groups, along with involvement from North Korea, are expected to target several elections scheduled for 2024. Microsoft said that China will likely deploy AI-generated content via social media to sway public opinion in favour of their interests during these elections.

“With major elections taking place around the world this year, particularly in India, South Korea and the United States, we assess that China will, at a minimum, create and amplify AI-generated content to benefit its interests,” Microsoft said in its statement.

Threat Of AI In Elections

The threat posed by political advertisements employing AI technology to produce deceptive and false content, including “deepfakes” or fabricating events that never took place, is significant in a crucial poll year. Such tactics aim to mislead the public regarding candidates’ statements, stances on various issues, and even the authenticity of certain events. If allowed to go unchecked, these manipulative attempts have the potential to undermine voters’ ability to make well-informed decisions.

While the immediate impact of AI-generated content remains relatively low, Microsoft warned that China’s increasing experimentation with this technology could potentially become more effective over time. The tech giant noted that China’s previous attempt to influence Taiwan’s election involved the dissemination of AI-generated disinformation, marking the first instance of a state-backed entity utilising such tactics in a foreign election.

Latest and Breaking News on NDTV

During the Taiwanese election, a Beijing-backed group known as Storm 1376, or Spamouflage, was notably active, Microsoft said. This group circulated AI-generated content, including fake audio endorsements and memes, aimed at discrediting certain candidates and influencing voter perceptions. The use of AI-generated TV news anchors, is a tactic also employed by Iran.

“Storm-1376 has promoted a series of AI-generated memes of Taiwan’s then-Democratic Progressive Party (DPP) presidential candidate William Lai, and other Taiwanese officials as well as Chinese dissidents around the world. These have included an increasing use of AI-generated TV news anchors that Storm-1376 has deployed since at least February 2023,” Microsoft said.

AI Influence In US Affairs

Microsoft pointed out that Chinese groups continue to conduct influence campaigns in the United States, leveraging social media platforms to pose divisive questions and gather intelligence on key voting demographics.

“There has been an increased use of Chinese AI-generated content in recent months, attempting to influence and sow division in the US and elsewhere on a range of topics including the train derailment in Kentucky in November 2023, the Maui wildfires in August 2023, the disposal of Japanese nuclear wastewater, drug use in the US as well as immigration policies and racial tensions in the country. There is little evidence these efforts have been successful in swaying opinion,” Microsoft stated.

Latest and Breaking News on NDTV

The use of AI in US election campaigns is not new. In the lead-up to the 2024 New Hampshire Democratic primaries, an AI-generated phone call mimicked President Joe Biden’s voice, advising voters against participating in the polling.

The call falsely insinuated that voters should withhold their votes for the general election in November instead. Upon hearing this message, the average voter could easily have been misled into believing that President Biden himself had endorsed this directive, potentially leading to their disenfranchisement.

Although there is no evidence of Chinese involvement in the New Hampshire episode, the incident marks one of many such instances where AI posed a direct threat to democratic practices.

Road Ahead For India

India’s general elections are scheduled to begin on April 19, with the results set to be declared on June 4. The electoral process will unfold across seven phases, with the first phase commencing on April 19, followed by the second phase on April 26, the third phase on May 7, the fourth phase on May 13, the fifth phase on May 20, the sixth phase on May 25, and culminating with the seventh phase on June 1.

The current term of the 17th Lok Sabha Assembly is set to conclude on June 16.

Latest and Breaking News on NDTV

The Election Commission of India (ECI) has already provided guidelines and protocols for promptly identifying and responding to false information and misinformation.

Last month, representatives from OpenAI, the developer of ChatGPT, met with members of the ICI and delivered a presentation to the commission members outlining the measures being undertaken to prevent the misuse of AI in the upcoming elections.

Waiting for response to load…



Source link

]]>
This AI Worm Can Steal Data, Break Security Of ChatGPT And Gemini https://artifex.news/this-ai-worm-can-steal-data-break-security-of-chatgpt-and-gemini-5173985/ Mon, 04 Mar 2024 10:29:56 +0000 https://artifex.news/this-ai-worm-can-steal-data-break-security-of-chatgpt-and-gemini-5173985/ Read More “This AI Worm Can Steal Data, Break Security Of ChatGPT And Gemini” »

]]>

The researchers also warned about “bad architecture design” within the AI system.

As generative AI systems like OpenAI’s ChatGPT and Google’s Gemini become more advanced, researchers are now developing AI worms which can steal your confidential data and break security measures of the generative AI systems, as per a report in Wired.

Researchers from Cornell University, Technion-Israel Institute of Technology, and Intuit created the first generative AI worm called ‘Morris II’ which can steal data or deploy malware and spread from one system to another. It has been named after the first worm which was launched on the internet in 1988. Ben Nassi, a Cornell Tech researcher, said, “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,”

The AI worm can breach some security measures in ChatGPT and Gemini by attacking a generative AI email assistant with the intent of stealing email data and sending spam, as per the outlet.

The researchers used an “adversarial self-replicating prompt” to develop the generative AI worm. According to them, this prompt causes the generative AI model to generate a different prompt in response. To execute it, the researchers then created an email system that could send and receive messages using generative AI, adding into ChatGPT, Gemini, and open-source LLM. Further, they discovered two ways to utilise the system- by using a self-replicating prompt that was text-based and by embedding the question within an image file.

In one case, the researchers took on the role of attackers and sent an email with an adversarial text prompt. This “poisons” the email assistant’s database by utilising retrieval-augmented generation, which allows LLMs to get more data from outside their system. According to Mr Nassi, the retrieval-augmented generation “jailbreaks the GenAI service” when it retrieves an email in response to a user inquiry and sends it to GPT-4 or Gemini Pro to generate a response. This eventually results in the theft of data from the emails.

“The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client,” he added.

For the second method, the researcher mentioned, “By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent.”

A video showcasing the findings shows the email system repeatedly forwarding a message. The researchers claim that they could also obtain email data.”It can be names, it can be telephone numbers, credit card numbers, SSN, anything that is considered confidential,” Mr Nassi said.

The researchers also warned about “bad architecture design” within the AI system. They also reported their observations to Google and OpenAI. “They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn’t been checked or filtered,” a spokesperson for OpenAI told the outlet. Further, they mentioned that they are working to make systems “more resilient” and developers should “use methods that ensure they are not working with harmful input.” 

Google declined to comment on the subject.

Waiting for response to load…



Source link

]]>
There Is Above-Zero Chance That AI Will Kill Us All, Says Elon Musk https://artifex.news/there-is-above-zero-chance-that-ai-will-kill-us-all-says-elon-musk-4535903/ Wed, 01 Nov 2023 14:01:07 +0000 https://artifex.news/there-is-above-zero-chance-that-ai-will-kill-us-all-says-elon-musk-4535903/ Read More “There Is Above-Zero Chance That AI Will Kill Us All, Says Elon Musk” »

]]>

London:

Tesla CEO and ‘X’ owner, Elon Musk said on Wednesday that Artificial Intelligence (AI) could endanger the existence of human civilisation.

“There is some chance, above zero, that AI will kill us all. I think it’s slow but there is some chance. I think this also concerns the fragility of human civilization. If you study history, you will realise that every civilisation has a sort of lifespan,” he said.

His remarks came during a media interaction as he arrived to attend the United Kingdom hosted world’s first global Artificial Intelligence (AI) Safety Summit.

Moreover, Musk was also seen having conversations with some of the participants at the summit.

Earlier, a day before attending the AI summit, Musk appeared on comedian Joe Rogan’s podcast, in which he that artificial intelligence (AI), if programmed by people in the “environmental movement”, may lead to the extinction of humanity.

In the podcast, he said, “Actually, what I think the biggest danger is for AI is that if AI is implicitly programmed, I don’t think they explicitly but implicitly programmed with values that led to the destruction of downtown San Francisco. And a bunch of these AI companies are in San Francisco or in the San Francisco Bay Area. Then you could implicitly program an AI to believe that extinction of humanity is what it should try to do.”

He added, “I think the most likely outcome to be specific about it is a good outcome, most likely a good outcome. But it’s not for sure. So i think we’re to be careful how we program the eye and make sure that it is not accidentally antihuman.”

Moreover, the social media company Meta President said, “I’m looking forward to attending the AI Safety Summit in the UK this week. I hope we spend as much time as possible developing much-needed solutions to current problems – for example, on the transparency and detectability of AI-generated content.”

He added, “not just debating speculative future risks about AI models that currently do not exist, and may never possess the autonomy and agency that some people fear.”

The summit underway in UK will see a convergence of governments, academia and companies working in artificial intelligence to debate and identify risks, opportunities and the “need for international collaboration, before highlighting consensus on the scale, importance and urgency for AI opportunities” a statement by the British High Commission read.

Moreover, British Prime Minister Rishi Sunak has said that he will do a live conversation with Elon Musk on X after the AI summit.

PM Sunak posted on X, “In conversation with @elonmusk. After the AI Safety Summit, Thursday night on @x.”

The summit aims to put light on the transformative benefits that AI technology can offer, putting a predominant focus on “education and areas for international research collaborations”.

Representatives from The Alan Turing Institute, The Organisation for Economic Cooperation and Development (OECD) and the Ada Lovelace Institute are also among the groups confirmed to attend.

Prime Minister Sunak had last week stated that the summit will focus on understanding the risks such as potential threats to national security including the dangers a loss of control the technology could bring.

On the agenda are discussions around issues likely to impact society, such as election disruption and erosion of social trust.

According to government estimates, the UK already employs over 50,000 people in the AI sector and contributes 3.7 billion pounds to its economy annually. Michelle Donelan will be joined by members of the UK’s Frontier AI Taskforce – including its Chair, Ian Hogarth. The task force was launched earlier this year with an aim to evaluate the risks of frontier AI models (generative language models of AI).

Additionally, on the first day of the Summit, Union Minister of State Rajeev Chandrasekhar participated the AI summit and conveyed India’s thoughts on AI.

On the second day of the summit, Chandrasekhar will contribute to discussions regarding the establishment of a collaborative framework for AI among like-minded nations. He will shed light on India’s perspective concerning AI risks in areas such as disinformation and electoral security.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

Waiting for response to load…



Source link

]]>