ai news – Artifex.News | https://artifex.news

Humanity In “Race Against Time” To Harness Emerging Power Of AI, Says UN | Thu, 30 May 2024 | https://artifex.news/humanity-in-race-against-time-to-harness-emerging-power-of-ai-says-un-5781912/


European Union recently announced the creation of an AI Office. (Representational)

Geneva:

Humanity is in a race against time to harness the colossal emerging power of artificial intelligence for the good of all, while averting dire risks, a top UN official said Thursday.

“We’ve let the genie out of the bottle,” said Doreen Bogdan-Martin, head of the United Nations’ International Telecommunication Union (ITU).

“We are in a race against time,” she told the opening of a two-day AI for Good Global Summit in Geneva.

“Recent developments in AI have been nothing short of extraordinary.”

The thousands gathered at the conference heard how advances in generative AI are already speeding up efforts to solve some of the world’s most pressing problems, such as climate change, hunger and social care.

“I believe we have a once-in-a-generation opportunity to guide AI to benefit all the world’s people,” Bogdan-Martin told AFP in an email ahead of the summit.

But she lamented Thursday that one-third of humanity remains completely offline and is “excluded from the AI revolution without a voice”.

“This digital and technological divide is no longer acceptable.”

Bogdan-Martin highlighted that AI holds “immense potential for both good and bad”, stressing that it was vital to “make AI systems safe”.

She said that was especially important now, given that “2024 is the biggest election year in history”, with votes in dozens of countries, including in the United States.

And “with the rise of sophisticated deepfake disinformation campaigns, it’s also the most contentious one,” she said.

“Not only does this misuse of AI threaten democracy, it also endangers young people’s mental health and compromises cyber-security.”

In an address to a separate event focused on AI governance this week, the ITU chief said that “the power of AI is concentrated in the hands of too few”.

Bogdan-Martin hailed that governments and others had become more focused on regulation and protections around the use of AI.

For instance, on Wednesday the European Union announced the creation of an AI Office to regulate artificial intelligence under a sweeping new law.

“It’s our responsibility to write the next chapter in the great story of humanity, and technology, and to make it safe, to make it inclusive and to make it sustainable,” Bogdan-Martin said.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

Google DeepMind unveils next generation of drug discovery AI model | Thu, 09 May 2024 | https://artifex.news/article68156080-ece/


Google DeepMind unveils next generation of drug discovery AI model.
| Photo Credit: AP

Google DeepMind has unveiled the third major version of its “AlphaFold” artificial intelligence model, designed to help scientists design drugs and target disease more effectively.

In 2020, the company made a significant advance in molecular biology by using AI to successfully predict the behaviour of microscopic proteins.

With the latest incarnation of AlphaFold, researchers at DeepMind and sister company Isomorphic Labs – both overseen by co-founder Demis Hassabis – have mapped the behaviour of all of life’s molecules, including human DNA.

The interactions of proteins – from enzymes crucial to the human metabolism, to the antibodies that fight infectious diseases – with other molecules are key to drug discovery and development.

(For top technology news of the day, subscribe to our tech newsletter Today’s Cache)

DeepMind said the findings, published in research journal Nature on Wednesday, would reduce the time and money needed to develop potentially life-changing treatments.

“With these new capabilities, we can design a molecule that will bind to a specific place on a protein, and we can predict how strongly it will bind,” Hassabis said in a press briefing on Tuesday.

“It’s a critical step if you want to design drugs and compounds that will help with disease.”

The company also announced the release of the “AlphaFold server”, a free online tool that scientists can use to test their hypotheses before running real-world tests.

Since 2021, AlphaFold’s predictions have been freely accessible to non-commercial researchers, as part of a database containing more than 200 million protein structures, and have been cited thousands of times in others’ work.

DeepMind said the new server required less computing knowledge, allowing researchers to run tests with just a few clicks of a button.

John Jumper, a senior research scientist at DeepMind, said: “It’s going to be really important how much easier the AlphaFold server makes it for biologists – who are experts in biology, not computer science – to test larger, more complex cases.”

Dr Nicole Wheeler, an expert in microbiology at the University of Birmingham, said AlphaFold 3 could significantly speed up the drug discovery pipeline, as “physically producing and testing biological designs is a big bottleneck in biotechnology at the moment”.



AI has a large and growing carbon footprint | Fri, 08 Mar 2024 | https://artifex.news/article67928221-ece/


A Copilot page showing the incorporation of AI technology is shown in London, Tuesday, February 13, 2024. Image for Representation.
| Photo Credit: AP

Given the huge problem-solving potential of artificial intelligence (AI), it wouldn’t be far-fetched to think that AI could also help us in tackling the climate crisis. However, when we consider the energy needs of AI models, it becomes clear that the technology is as much a part of the climate problem as a solution.

The emissions come from the infrastructure associated with AI, such as building and running the data centres that handle the large amounts of information required to sustain these systems.

But different technological approaches to how we build AI systems could help reduce its carbon footprint. Two technologies in particular hold promise for doing this: spiking neural networks and lifelong learning.

The lifetime of an AI system can be split into two phases: training and inference. During training, a relevant dataset is used to build and tune – improve – the system. In inference, the trained system generates predictions on previously unseen data.

For example, training an AI that’s to be used in self-driving cars would require a dataset of many different driving scenarios and decisions taken by human drivers.

After the training phase, the AI system will predict effective manoeuvres for a self-driving car. Artificial neural networks (ANNs) are the underlying technology used in most current AI systems.
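To make the two phases concrete, here is a toy sketch (illustrative only: a single-parameter model fitted by gradient descent, not a real driving system):

```python
# Minimal illustration of the two phases of an AI system's lifetime.
# Training adjusts parameters against a dataset; inference applies the
# frozen parameters to previously unseen data.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w = 0.0  # the model's single adjustable parameter

# Training phase: repeatedly nudge w to reduce prediction error.
for _ in range(200):
    for x, y in data:
        error = w * x - y
        w -= 0.05 * error * x  # gradient step

# Inference phase: w is now fixed; predict on an unseen input.
prediction = w * 4.0
print(f"learned w = {w:.2f}, prediction for x=4: {prediction:.2f}")
# → learned w = 2.00, prediction for x=4: 8.00
```

Training is the expensive part here (hundreds of passes over the data); inference is a single multiplication – the same asymmetry, at vastly larger scale, that drives the energy figures below.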

They have many different elements to them, called parameters, whose values are adjusted during the training phase of the AI system. These parameters can run to more than 100 billion in total.

While large numbers of parameters improve the capabilities of ANNs, they also make training and inference resource-intensive processes. To put things in perspective, training GPT-3 (the precursor AI system to the current ChatGPT) generated 502 metric tonnes of carbon, which is equivalent to driving 112 petrol-powered cars for a year.

GPT-3 further emits 8.4 tonnes of CO₂ annually due to inference. Since the AI boom started in the early 2010s, the energy requirements of AI systems known as large language models (LLMs) – the type of technology that’s behind ChatGPT – have gone up by a factor of 300,000.
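A quick check of the arithmetic behind those figures (the per-car value is derived from the article’s own numbers; the comparison to commonly cited per-vehicle estimates is an aside, not from the article):

```python
# The article's figures: training GPT-3 produced ~502 tonnes of CO2,
# said to equal a year of driving for 112 petrol cars.
training_emissions_t = 502   # metric tonnes of CO2
equivalent_cars = 112

per_car_per_year = training_emissions_t / equivalent_cars
print(f"Implied emissions per petrol car: {per_car_per_year:.1f} t CO2/year")
# ~4.5 t CO2/year, broadly in line with commonly cited per-vehicle estimates

# Inference adds 8.4 t CO2 per year, so after n years of operation:
def total_emissions(years: int) -> float:
    return training_emissions_t + 8.4 * years

print(f"Training + 5 years of inference: {total_emissions(5):.1f} t CO2")
# → Training + 5 years of inference: 544.0 t CO2
```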

With the increasing ubiquity and complexity of AI models, this trend is going to continue, potentially making AI a significant contributor of CO₂ emissions. In fact, our current estimates could be lower than AI’s actual carbon footprint due to a lack of standard and accurate techniques for measuring AI-related emissions.

Spiking neural networks

The previously mentioned new technologies, spiking neural networks (SNNs) and lifelong learning (L2), have the potential to lower AI’s ever-increasing carbon footprint, with SNNs acting as an energy-efficient alternative to ANNs.

ANNs work by processing and learning patterns from data, enabling them to make predictions. They operate on decimal (floating-point) numbers, and to calculate accurately – especially when multiplying such numbers together – the computer needs to be very precise. It is because of these decimal numbers that ANNs require lots of computing power, memory and time.

This means ANNs become more energy-intensive as the networks get larger and more complex. Both ANNs and SNNs are inspired by the brain, which contains billions of neurons (nerve cells) connected to each other via synapses.

Like the brain, ANNs and SNNs also have components which researchers call neurons, although these are artificial, not biological ones. The key difference between the two types of neural networks is in the way individual neurons transmit information to each other.

Neurons in the human brain communicate with each other by transmitting intermittent electrical signals called spikes. The spikes themselves do not contain information. Instead, the information lies in the timing of these spikes. This binary, all-or-none characteristic of spikes (usually represented as 0 or 1) implies that neurons are active when they spike and inactive otherwise.

This is one of the reasons for energy efficient processing in the brain.

Just as Morse code uses specific sequences of dots and dashes to convey messages, SNNs use patterns or timings of spikes to process and transmit information. So, while the artificial neurons in ANNs are always active, SNNs consume energy only when a spike occurs.

Otherwise, their energy requirements are close to zero. SNNs can be up to 280 times more energy-efficient than ANNs.
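As a sketch of the mechanism, the textbook leaky integrate-and-fire neuron (a standard illustrative model, not the specific learning algorithms discussed in this article) emits a binary spike only when its accumulated potential crosses a threshold:

```python
# A leaky integrate-and-fire (LIF) neuron: input current accumulates in a
# membrane potential that leaks over time; a binary spike (1) is emitted
# only when the potential crosses a threshold, otherwise the output is 0.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = leak * potential + current  # integrate with leak
        if potential >= threshold:
            spikes.append(1)      # spike: the only "active" (costly) event
            potential = 0.0       # reset after spiking
        else:
            spikes.append(0)      # silent: near-zero energy cost
    return spikes

print(lif_neuron([0.3, 0.3, 0.3, 0.9, 0.0, 0.0]))
# → [0, 0, 0, 1, 0, 0]
```

Information is carried by *when* the single spike occurs, and the neuron does costly work only at that moment – the property that makes SNN hardware energy-efficient.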

My colleagues and I are developing learning algorithms for SNNs that may bring them even closer to the energy efficiency exhibited by the brain. The lower computational requirements also imply that SNNs might be able to make decisions more quickly.

These properties render SNNs useful for a broad range of applications – including space exploration, defence and self-driving cars – because of the limited energy sources available in these scenarios.

Lifelong learning

L2 is another strategy for reducing the overall energy requirements of ANNs over the course of their lifetime that we are also working on.

Training ANNs sequentially (where the systems learn from sequences of data) on new problems causes them to forget their previous knowledge while learning new tasks. ANNs require retraining from scratch when their operating environment changes, further increasing AI-related emissions.

L2 is a collection of algorithms that enable AI models to be trained sequentially on multiple tasks with little or no forgetting. L2 enables models to learn throughout their lifetime by building on their existing knowledge without having to retrain them from scratch.
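One common family of L2 algorithms is rehearsal: keep a small buffer of examples from earlier tasks and mix them into training on each new task, so the model is updated in place rather than retrained from scratch. The sketch below is a toy illustration of that mechanism (a single shared parameter, with tasks that follow the same underlying rule so the structure is visible); it is not the authors’ specific method:

```python
import random

# Rehearsal-based lifelong learning: a small replay buffer of past
# examples is mixed into training on each new task, so one shared model
# keeps building on existing knowledge instead of starting over.

def train_task(w, task_data, replay_buffer, lr=0.02, epochs=300):
    combined = task_data + replay_buffer      # new task + rehearsed examples
    for _ in range(epochs):
        for x, y in combined:
            w -= lr * (w * x - y) * x         # gradient step on shared w
    # remember a couple of this task's examples for future rehearsal
    replay_buffer.extend(random.sample(task_data, k=min(2, len(task_data))))
    return w

w, buffer = 0.0, []
task_a = [(1.0, 3.0), (2.0, 6.0)]   # both tasks follow y = 3x, so the
task_b = [(3.0, 9.0), (4.0, 12.0)]  # sequential updates should not conflict
w = train_task(w, task_a, buffer)
w = train_task(w, task_b, buffer)
print(f"w after both tasks: {w:.2f}")  # → w after both tasks: 3.00
```

The key design point is that the second call to `train_task` starts from the already-trained `w` and rehearses task A’s examples, instead of discarding the model and retraining on everything – which is where the energy savings come from.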

The field of AI is growing fast and other potential advancements are emerging that can mitigate its energy demands – for instance, building smaller AI models that exhibit the same predictive capabilities as larger ones.

Advances in quantum computing – a different approach to building computers that harnesses phenomena from the world of quantum physics – would also enable faster training and inference using ANNs and SNNs. The superior computing capabilities offered by quantum computing could allow us to find energy-efficient solutions for AI at a much larger scale.

The climate change challenge requires that we try to find solutions for rapidly advancing areas such as AI before their carbon footprint becomes too large.

The Conversation

Shirin Dora, Lecturer, Computer Science, Loughborough University

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Tech industry leaders endorse regulating artificial intelligence at rare summit | Thu, 14 Sep 2023 | https://artifex.news/article67305893-ece/


The nation’s biggest technology executives on Wednesday loosely endorsed the idea of government regulations for artificial intelligence at an unusual closed-door meeting in the US Senate. But there is little consensus on what regulation would look like, and the political path for legislation is difficult.

Senate Majority Leader Chuck Schumer, who organized the private forum on Capitol Hill as part of a push to legislate artificial intelligence, said he asked everyone in the room — including almost two dozen tech executives, advocates and skeptics — whether government should have a role in the oversight of artificial intelligence, and “every single person raised their hands, even though they had diverse views,” he said.

Among the ideas discussed was whether there should be an independent agency to oversee certain aspects of the rapidly developing technology, how companies could be more transparent and how the United States can stay ahead of China and other countries.

“The key point was really that it’s important for us to have a referee,” said Elon Musk, CEO of Tesla and X, during a break in the daylong forum. “It was a very civilized discussion, actually, among some of the smartest people in the world.”


Schumer will not necessarily take the tech executives’ advice as he works with colleagues on the politically difficult task of ensuring some oversight of the burgeoning sector. But he invited them to the meeting in hopes that they would give senators some realistic direction for meaningful regulation.

Congress should do what it can to maximize AI’s benefits and minimize the negatives, Schumer said, “whether that’s enshrining bias, or the loss of jobs, or even the kind of doomsday scenarios that were mentioned in the room. And only government can be there to put in guardrails.”

Other executives attending the meeting were Meta’s Mark Zuckerberg, former Microsoft CEO Bill Gates and Google CEO Sundar Pichai. Musk said the meeting “might go down in history as being very important for the future of civilization.” First, though, lawmakers have to agree on whether to regulate, and how.

Congress has a lackluster track record when it comes to regulating new technology, and the industry has grown mostly unchecked by government in the past several decades. Many lawmakers point to the failure to pass any legislation surrounding social media, such as for stricter privacy standards.

Schumer, who has made AI one of his top issues as leader, said regulation of artificial intelligence will be “one of the most difficult issues we can ever take on,” and he listed some of the reasons why: It’s technically complicated, it keeps changing and it “has such a wide, broad effect across the whole world,” he said.

Sparked by the release of ChatGPT less than a year ago, businesses have been clamoring to apply new generative AI tools that can compose human-like passages of text, program computer code and create novel images, audio and video. The hype over such tools has accelerated worries over their potential societal harms and prompted calls for more transparency in how the data behind the new products is collected and used.

Republican Sen. Mike Rounds of South Dakota, who led the meeting with Schumer, said Congress needs to get ahead of fast-moving AI by making sure it continues to develop “on the positive side” while also taking care of potential issues surrounding data transparency and privacy.

“AI is not going away, and it can do some really good things or it can be a real challenge,” Rounds said.

The tech leaders and others outlined their views at the meeting, with each participant getting three minutes to speak on a topic of their choosing. Schumer and Rounds then led a group discussion.

During the discussion, according to attendees who spoke about it, Musk and former Google CEO Eric Schmidt raised existential risks posed by AI, and Zuckerberg brought up the question of closed vs. “open source” AI models. Gates talked about feeding the hungry. IBM CEO Arvind Krishna expressed opposition to proposals favored by other companies that would require licenses.

In terms of a potential new agency for regulation, “that is one of the biggest questions we have to answer and that we will continue to discuss,” Schumer said. Musk said afterward he thinks the creation of a regulatory agency is likely.

Outside the meeting, Google CEO Pichai declined to give details about specifics but generally endorsed the idea of Washington involvement.

“I think it’s important that government plays a role, both on the innovation side and building the right safeguards, and I thought it was a productive discussion,” he said.

Some senators were critical that the public was shut out of the meeting, arguing that the tech executives should testify in public.

Sen. Josh Hawley, R-Mo., said he would not attend what he said was a “giant cocktail party for big tech.” Hawley has introduced legislation with Sen. Richard Blumenthal, D-Conn., to require tech companies to seek licenses for high-risk AI systems.

“I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it to the public,” Hawley said.

While civil rights and labor groups were also represented at the meeting, some experts worried that Schumer’s event risked emphasizing the concerns of big firms over everyone else.

Sarah Myers West, managing director of the nonprofit AI Now Institute, estimated that the combined net worth of the room Wednesday was $550 billion and it was “hard to envision a room like that in any way meaningfully representing the interests of the broader public.” She did not attend.

In the United States, major tech companies have expressed support for AI regulations, though they don’t necessarily agree on what that means. Similarly, members of Congress agree that legislation is needed, but there is little consensus on what to do.

There is also division, with some members of Congress worrying more about overregulation of the industry while others are concerned more about the potential risks. Those differences often fall along party lines.

“I am involved in this process in large measure to ensure that we act, but we don’t act more boldly or over-broadly than the circumstances require,” said Sen. Todd Young, R-Ind. “We should be skeptical of government, which is why I think it’s important that you got Republicans at the table.”

Some concrete proposals have already been introduced, including legislation by Sen. Amy Klobuchar, D-Minn., that would require disclaimers for AI-generated election ads with deceptive imagery and sounds. Schumer said they discussed “the need to do something fairly immediate” before next year’s presidential election.

Hawley and Blumenthal’s broader approach would create a government oversight authority with the power to audit certain AI systems for harms before granting a license.

Some of those invited to Capitol Hill, such as Musk, have voiced dire concerns evoking popular science fiction about the possibility of humanity losing control to advanced AI systems if the right safeguards are not in place. But the only academic invited to the forum, Deborah Raji, a University of California, Berkeley researcher who has studied algorithmic bias, said she tried to emphasize real-world harms already occurring.

“There was a lot of care to make sure the room was a balanced conversation, or as balanced as it could be,” Raji said. What remains to be seen, she said, is which voices senators will listen to and what priorities they elevate as they work to pass new laws.

Some Republicans have been wary of following the path of the European Union, which signed off in June on the world’s first set of comprehensive rules for artificial intelligence. The EU’s AI Act will govern any product or service that uses an AI system and classify them according to four levels of risk, from minimal to unacceptable.

A group of European corporations has called on EU leaders to rethink the rules, arguing that they could make it harder for companies in the 27-nation bloc to compete with rivals overseas in the use of generative AI.



UN calls for age limits for AI tools in schools | Thu, 07 Sep 2023 | https://artifex.news/article67279840-ece/


Generative AI programs burst into the spotlight late last year. (File)
| Photo Credit: REUTERS

The United Nations called on Thursday for strict rules on the use of AI tools such as viral chatbot ChatGPT in classrooms, including limiting their use to older children.

In new guidance for governments, the UN’s education body UNESCO warned that public authorities were not ready to deal with the ethical issues of rolling out “generative” Artificial Intelligence programs in schools.

The Paris-based body said relying on such programs rather than human teachers could affect a child’s emotional wellbeing and leave them vulnerable to manipulation.

“Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” said UNESCO Director-General Audrey Azoulay.


“It cannot be integrated into education without public engagement, and the necessary safeguards and regulations from governments.”

Generative AI programs burst into the spotlight late last year, with ChatGPT demonstrating an ability to generate essays, poems and conversations from the briefest prompts.

It sparked fears of plagiarism and cheating in schools and universities.

But investors poured money into the field and boosters targeted education as a possible lucrative market.

The UNESCO guidance said AI tools have the potential to help children with special needs, act as an opponent in “Socratic dialogues” or as a research assistant.

But the tools would only be safe and effective if teachers, learners and researchers helped to design them and governments regulated their use.

The guidance stopped short of recommending a minimum age for schoolchildren but pointed out that ChatGPT had a lower age limit of 13.

“Many commentators understand this threshold to be too young and have advocated for legislation to raise the age to 16,” said the guidance.


