artificial intelligence news – Artifex.News https://artifex.news Stay Connected. Stay Informed. Thu, 09 May 2024 02:29:26 +0000 en-US hourly 1 https://wordpress.org/?v=6.5.5 https://artifex.news/wp-content/uploads/2023/08/cropped-Artifex-Round-32x32.png artificial intelligence news – Artifex.News https://artifex.news 32 32 Google DeepMind unveils next generation of drug discovery AI model https://artifex.news/article68156080-ece/ Thu, 09 May 2024 02:29:26 +0000 https://artifex.news/article68156080-ece/ Read More “Google DeepMind unveils next generation of drug discovery AI model” »

]]>

Google DeepMind unveils next generation of drug discovery AI model.
| Photo Credit: AP

Google Deepmind has unveiled the third major version of its “AlphaFold” artificial intelligence model, designed to help scientists design drugs and target disease more effectively.

In 2020, the company made a significant advance in molecular biology by using AI to successfully predict the behaviour of microscopic proteins.

With the latest incarnation of AlphaFold, researchers at DeepMind and sister company Isomorphic Labs – both overseen by cofounder Demis Hassabis – have mapped the behaviour for all of life’s molecules, including human DNA.

The interactions of proteins – from enzymes crucial to the human metabolism, to the antibodies that fight infectious diseases – with other molecules is key to drug discovery and development.

(For top technology news of the day, subscribe to our tech newsletter Today’s Cache)

DeepMind said the findings, published in research journal Nature on Wednesday, would reduce the time and money needed to develop potentially life-changing treatments.

“With these new capabilities, we can design a molecule that will bind to a specific place on a protein, and we can predict how strongly it will bind,” Hassabis said in a press briefing on Tuesday.

“It’s a critical step if you want to design drugs and compounds that will help with disease.”

The company also announced the release of the “AlphaFold server”, a free online tool that scientists can use to test their hypotheses before running real-world tests.

Since 2021, AlphaFold’s predictions have been freely accessible to non-commercial researchers, as part of a database containing more than 200 million protein structures, and has been cited thousands of times in others’ work.

DeepMind said the new server required less computing knowledge, allowing researchers to run tests with just a few clicks of a button.

John Jumper, a senior research scientist at DeepMind, said: “It’s going to be really important how much easier the AlphaFold server makes it for biologists – who are experts in biology, not computer science – to test larger, more complex cases.”

Dr Nicole Wheeler, an expert in microbiology at the University of Birmingham, said AlphaFold 3 could significantly speed up the drug discovery pipeline, as “physically producing and testing biological designs is a big bottleneck in biotechnology at the moment”.



Source link

]]>
This AI Worm Can Steal Data, Break Security Of ChatGPT And Gemini https://artifex.news/this-ai-worm-can-steal-data-break-security-of-chatgpt-and-gemini-5173985/ Mon, 04 Mar 2024 10:29:56 +0000 https://artifex.news/this-ai-worm-can-steal-data-break-security-of-chatgpt-and-gemini-5173985/ Read More “This AI Worm Can Steal Data, Break Security Of ChatGPT And Gemini” »

]]>

The researchers also warned about “bad architecture design” within the AI system.

As generative AI systems like OpenAI’s ChatGPT and Google’s Gemini become more advanced, researchers are now developing AI worms which can steal your confidential data and break security measures of the generative AI systems, as per a report in Wired.

Researchers from Cornell University, Technion-Israel Institute of Technology, and Intuit created the first generative AI worm called ‘Morris II’ which can steal data or deploy malware and spread from one system to another. It has been named after the first worm which was launched on the internet in 1988. Ben Nassi, a Cornell Tech researcher, said, “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,”

The AI worm can breach some security measures in ChatGPT and Gemini by attacking a generative AI email assistant with the intent of stealing email data and sending spam, as per the outlet.

The researchers used an “adversarial self-replicating prompt” to develop the generative AI worm. According to them, this prompt causes the generative AI model to generate a different prompt in response. To execute it, the researchers then created an email system that could send and receive messages using generative AI, adding into ChatGPT, Gemini, and open-source LLM. Further, they discovered two ways to utilise the system- by using a self-replicating prompt that was text-based and by embedding the question within an image file.

In one case, the researchers took on the role of attackers and sent an email with an adversarial text prompt. This “poisons” the email assistant’s database by utilising retrieval-augmented generation, which allows LLMs to get more data from outside their system. According to Mr Nassi, the retrieval-augmented generation “jailbreaks the GenAI service” when it retrieves an email in response to a user inquiry and sends it to GPT-4 or Gemini Pro to generate a response. This eventually results in the theft of data from the emails.

“The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client,” he added.

For the second method, the researcher mentioned, “By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent.”

A video showcasing the findings shows the email system repeatedly forwarding a message. The researchers claim that they could also obtain email data.”It can be names, it can be telephone numbers, credit card numbers, SSN, anything that is considered confidential,” Mr Nassi said.

The researchers also warned about “bad architecture design” within the AI system. They also reported their observations to Google and OpenAI. “They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn’t been checked or filtered,” a spokesperson for OpenAI told the outlet. Further, they mentioned that they are working to make systems “more resilient” and developers should “use methods that ensure they are not working with harmful input.” 

Google declined to comment on the subject.

Waiting for response to load…



Source link

]]>
Britain uses UN speech to show that it wants to be a leader on how the world handles AI https://artifex.news/article67337255-ece/ Sat, 23 Sep 2023 04:32:30 +0000 https://artifex.news/article67337255-ece/ Read More “Britain uses UN speech to show that it wants to be a leader on how the world handles AI” »

]]>

Major U.S. tech companies have acknowledged a need for AI regulations.
| Photo Credit: REUTERS

Britain pitched itself to the world Friday as a ready leader in shaping an international response to the rise of artificial intelligence, with Deputy Prime Minister Oliver Dowden telling the UN General Assembly his country was “determined to be in the vanguard.”

Touting the United Kingdom’s tech companies, its universities and even Industrial Revolution-era innovations, he said the nation has “the grounding to make AI a success and make it safe.” He went on to suggest that a British AI task force, which is working on methods for assessing AI systems’ vulnerability, could develop expertise to offer internationally.

His remarks at the assembly’s annual meeting of world leaders previewed an AI safety summit that British Prime Minister Rishi Sunak is convening in November. Dowden’s speech also came as other countries and multinational groups — including the European Union, the bloc that Britain left in 2020 — are making moves on artificial intelligence.

The EU this year passed pioneering regulations that set requirements and controls based on the level of risk that any given AI system poses, from low (such as spam filters) to unacceptable (for example, an interactive, children’s toy that talks up dangerous activities).

(For top technology news of the day, subscribe to our tech newsletter Today’s Cache)

The U.N., meanwhile, is pulling together an advisory board to make recommendations on structuring international rules for artificial intelligence. Members will be appointed this month, Secretary-General António Guterres told the General Assembly on Tuesday; the group’s first take on a report is due by the end of the year.

Major U.S. tech companies have acknowledged a need for AI regulations, though their ideas on the particulars vary. And in Europe, a roster of big companies ranging from French jetmaker Airbus to to Dutch beer giant Heineken signed an open letter to urging the EU to reconsider its rules, saying it would put European companies at a disadvantage.

“The starting gun has been fired on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible,” Dowden said. He argued that “the most important actions we will take will be international.”

Listing hoped-for benefits — such improving disease detection and productivity — alongside artificial intelligence’s potential to wreak havoc with deepfakes, cyberattacks and more, Dowden urged leaders not to get “trapped in debates about whether AI is a tool for good or a tool for ill.”

“It will be a tool for both,” he said.

It’s “exciting. Daunting. Inexorable,” Dowden said, and the technology will test the international community “to show that it can work together on a question that will help to define the fate of humanity.”



Source link

]]>
UN calls for age limits for AI tools in schools https://artifex.news/article67279840-ece/ Thu, 07 Sep 2023 05:02:35 +0000 https://artifex.news/article67279840-ece/ Read More “UN calls for age limits for AI tools in schools” »

]]>

Generative AI programs burst into the spotlight late last year. (File)
| Photo Credit: REUTERS

The United Nations called on Thursday for strict rules on the use of AI tools such as viral chatbot ChatGPT in classrooms, including limiting their use to older children.

In new guidance for governments, the UN’s education body UNESCO warned that public authorities were not ready to deal with the ethical issues of rolling out “generative” Artificial Intelligence programs in schools.

The Paris-based body said relying on such programs rather than human teachers could affect a child’s emotional wellbeing and leave them vulnerable to manipulation.

“Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” said Audrey Azoulay of UNESCO.

(For top technology news of the day, subscribe to our tech newsletter Today’s Cache)

“It cannot be integrated into education without public engagement, and the necessary safeguards and regulations from governments.”

Generative AI programs burst into the spotlight late last year, with ChatGPT demonstrating an ability to generate essays, poems and conversations from the briefest prompts.

It sparked fears of plagiarism and cheating in schools and universities.

But investors poured money into the field and boosters targeted education as a possible lucrative market.

The UNESCO guidance said AI tools have the potential to help children with special needs, act as an opponent in “Socratic dialogues” or as a research assistant.

But the tools would only be safe and effective if teachers, learners and researchers helped to design them and governments regulated their use.

The guidance stopped short of recommending a minimum age for schoolchildren but pointed out that ChatGPT had a lower age limit of 13.

“Many commentators understand this threshold to be too young and have advocated for legislation to raise the age to 16,” said the guidance.



Source link

]]>