Artificial Intelligence AI – Artifex.News https://artifex.news Stay Connected. Stay Informed. Wed, 17 Jul 2024 07:54:22 +0000 en-US hourly 1 https://wordpress.org/?v=6.6 https://artifex.news/wp-content/uploads/2023/08/cropped-Artifex-Round-32x32.png Artificial Intelligence AI – Artifex.News https://artifex.news 32 32 Google To Empower 10,000 Indian Startups In AI, Unveils New Tools https://artifex.news/google-to-empower-10-000-indian-startups-in-ai-unveils-new-tools-6124088rand29/ Wed, 17 Jul 2024 07:54:22 +0000 https://artifex.news/google-to-empower-10-000-indian-startups-in-ai-unveils-new-tools-6124088rand29/ Read More “Google To Empower 10,000 Indian Startups In AI, Unveils New Tools” »

]]>

Google said it is working with India’s MeitY ‘Startup Hub’ to train 10,000 startups in AI.

Bengaluru:

Google said today that it is working with MeitY ‘Startup Hub’ to train 10,000 startups in artificial intelligence (AI), as the tech giant expanded access to its AI models and introduced new language tools for the developers in the country.

At its ‘I/O Connect’ event here, the company unveiled a range of tools, programmes and partnerships to empower Indian developers and startups to be at the forefront of the global AI revolution.

The company said that developers in India now have expanded access to Google’s powerful AI models with the two million token context window in Gemini 1.5 Pro and Gemma 2, the next generation of open models.

“We’re committed to empowering Indian innovators to harness AI’s full potential, creating solutions that not only address India’s unique needs but also shape the future of AI globally,” said Ambharish Kenghe, Vice President, Google.

The opportunities with multimodal, mobile, and multilingual AI are immense, and we’re thrilled to be a part of India’s AI journey, he added.

“The fastest way to build with Gemini is through its developer platform Google AI Studio, and India has one of the largest developer bases on Google AI Studio today,” said the company.

The Google DeepMind India team has expanded Project Vaani, in collaboration with the Indian Institute of Science (IISc), which provides developers with over 14,000 hours of speech data across 58 languages, collected from 80,000 speakers in 80 districts.

The team also introduced IndicGenBench, a comprehensive benchmark to evaluate the generation capabilities of LLMs on Indic languages, and open-sourced CALM (Composition of Language Models).

The company said it is introducing Google Wallet APIs to simplify the integration of loyalty programs, tickets, and gift cards.

For developers using the Google Maps Platform, India-specific pricing is being introduced with up to 70 per cent lower costs on most APIs.

Google is also collaborating with the Open Network for Digital Commerce (ONDC), offering developers building for ONDC up to 90 per cent off on select Google Maps Platform APIs.

“From consumer experiences to agriculture, to social enterprises, AI has the power to address some of the biggest challenges of our time across many sectors and industries,” said Seshu Ajjarapu, Senior Director, Google DeepMind.

The company will also soon launch the Agricultural Landscape Understanding (ALU) Research API, a limited availability tool designed to make agricultural practices more data-driven and efficient.
 

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)



Source link

]]>
RBI Is Using AI For Real-Time Data Analysis, Says Governor Shaktikanta Das https://artifex.news/rbi-is-using-ai-for-real-time-data-analysis-says-governor-shaktikanta-das-5991129rand29/ Fri, 28 Jun 2024 14:30:27 +0000 https://artifex.news/rbi-is-using-ai-for-real-time-data-analysis-says-governor-shaktikanta-das-5991129rand29/ Read More “RBI Is Using AI For Real-Time Data Analysis, Says Governor Shaktikanta Das” »

]]>

The Reserve Bank has ventured into Artificial Intelligence (AI) and Machine Learning (ML) analytics

The Reserve Bank has ventured into Artificial Intelligence (AI) and Machine Learning (ML) analytics in multiple areas in order to develop cutting-edge systems for high frequency and real-time data monitoring and analysis, RBI Governor Shaktikanta Das said today.

In his address at the inauguration of the RBI’s 18th Statistical Day Conference, Governor Das said: “The focus now is naturally on enhancing capacity in AI and ML techniques and analysing unstructured textual data. While doing so, ethical considerations need to be addressed and biases in algorithms need to be eliminated.”

He said that this annual event provides an opportunity to reflect on the current and evolving state of the statistical system. It also helps to take stock of the refinements in the application of statistical methods and technologies in the realm of public policy.

“Looking ahead, the year 2025 has a special significance for the compilation of official statistics the world over. Global efforts are expected to culminate in new global standards for the compilation of macroeconomic statistics, especially for national accounts and balance of payments. Our team in the Reserve Bank is closely tracking these developments,” the RBI Governor said.

The surge in computing power is being increasingly harnessed in combination with statistical methods to improve efficiency in decision-making and enrich end-user experience in various fields of human knowledge, he added,

The celebration of Statistics Day in India coincides with the birth anniversary of Professor Prasanta Chandra Mahalanobis whose contributions in laying the foundations of modern-day official statistics in India have been pioneering. Inspired by his work, Indian statisticians are making their presence felt – both domestically and globally in traditional as well as in newer applications of statistics, he added.

Governor Das highlighted the areas in which the Reserve Bank’s cutting-edge information management is contributing to the formulation of public policies and the overall economic development in India.

“One year ago, we launched our next-generation data warehouse, i.e., the Centralised Information Management System (CIMS) at the Statistics Day Conference. Several new features were introduced in the new system. Scheduled commercial banks (SCBs), urban co-operative banks (UCBs) and non-banking financial companies (NBFCs) have already been onboarded for reporting on the new portal,” he said.

The new CIMS is also facilitating research on the Indian economy, minimising reporting burden, exploiting the technological advances and improving the experience of both data providers and users, Governor Das added.
 

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)



Source link

]]>
Scientists Develop Speech Recognition Tool To Predict Alzheimer’s Onset https://artifex.news/scientists-develop-speech-recognition-tool-to-predict-alzheimers-onset-5974976/ Wed, 26 Jun 2024 14:16:28 +0000 https://artifex.news/scientists-develop-speech-recognition-tool-to-predict-alzheimers-onset-5974976/ Read More “Scientists Develop Speech Recognition Tool To Predict Alzheimer’s Onset” »

]]>

Alzheimer’s disease is the most common form of dementia and impacts one’s everyday activities

New Delhi:

A new AI-based model could predict the onset of Alzheimer’s disease by analysing an individual’s speech, the developers said.

Trained on audio recordings of patients with mild cognitive impairment — early stages of memory loss, the model achieved 78.5 per cent accuracy in forecasting whether patients would remain stable or progress to dementia within six years, according to the researchers.

Alzheimer’s disease is the most common form of dementia and impacts one’s everyday activities by impairing memory and thinking.

The researchers at Boston University, US, used recordings of initial interviews of 166 patients aged 63-97 and trained the model using machine learning to discern patterns between speech, demographics, diagnosis, and how their condition was worsening.

The model analyses interview content such as spoken words and sentence structure, rather than speech features such as enunciation or speed, showed the study published in the journal Alzheimer’s and Dementia.

“We combine the information we extract from the audio recordings with some very basic demographics – age, gender, and so on – and we get the final score,” said Ioannis C. Paschalidis, a professor of engineering and the study’s corresponding author.

“You can think of the score as the likelihood, the probability, that someone will remain stable or transition to dementia. It had significant predictive ability,” said Paschalidis.

The researchers said that the model was able to perform well, despite challenges like low-quality recordings and background noise.

The researchers emphasised that early prediction is crucial as current diagnostic tests often identify Alzheimer’s disease only after significant cognitive decline has occurred such as memories starting to slip away and personality traits beginning to shift.

The team aims to make their model accessible through an app to make it accessible for patients in remote areas, potentially increasing the number of people getting screened.
 

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

Waiting for response to load…



Source link

]]>
AI, ChatGPT, Social Media Can Worsen The Climate Crisis, Researchers Say https://artifex.news/ai-chatgpt-social-media-can-worsen-the-climate-crisis-researchers-say-5630217/ Sat, 11 May 2024 02:17:22 +0000 https://artifex.news/ai-chatgpt-social-media-can-worsen-the-climate-crisis-researchers-say-5630217/ Read More “AI, ChatGPT, Social Media Can Worsen The Climate Crisis, Researchers Say” »

]]>

AI, ChatGPT, social media can worsen climate crisis, researchers have claimed. (Representational)

New Delhi:

Generative artificial intelligence (AI) which includes large language models like OpenAI’s ChatGPT, and social media can undermine efforts to address climate change, said researchers in a new forum article published in the journal Global Environmental Politics, on Friday.

Researchers from the University of British Columbia (UBC) noted that it is a common conception that AI, social media, and other tech products and platforms are either neutral or potentially net positive in their impact on climate change action.

Further, these can reduce human capacities for creative thinking and problem-solving — crucial for tackling climate change.

Additionally, the platforms also work to take away attention from pressing global issues and foster feelings of hopelessness, they said.

According to Dr Hamish van der Ven, Assistant Professor of sustainable business management of natural resources at UBC, “These technologies are influencing human behaviour and societal dynamics, shaping attitudes and responses to climate change.”

He noted that AI and social technologies can lessen our focus on the climate crisis, as they always offer “new, ever-changing content.”

Recurrent exposure to “negative news on social media may also erode optimism and increase feelings of hopelessness. All this could prevent us from organising or taking collective action on climate change,” he noted, calling for a cautious review of generative AI.

Increased dependence on these technologies may decrease the “capacity for creativity and forward-thinking solution,” noted Dr van der Ven.

Both social media and AI are also known to contribute to the spread of false or biased information — which can hobble the actions we need to take on climate change.
 

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

Waiting for response to load…



Source link

]]>
AI Can Identify Clinically Anxious Youth Based On Brain Structure: Study https://artifex.news/ai-can-identify-clinically-anxious-youth-based-on-brain-structure-study-5268029/ Tue, 19 Mar 2024 09:12:16 +0000 https://artifex.news/ai-can-identify-clinically-anxious-youth-based-on-brain-structure-study-5268029/ Read More “AI Can Identify Clinically Anxious Youth Based On Brain Structure: Study” »

]]>

Artificial intelligence (AI) can help recognise individuals with anxiety disorders. (Representational)

New Delhi:

Artificial intelligence (AI) can help recognise individuals with anxiety disorders based on their unique brain structure, according to a study.

The research, published in the journal Nature Mental Health, involved about 3,500 youth between 10 and 25 years old from across the globe.

The researchers used machine learning (ML) – a type of AI that help machines learn and improve from data analysis without explicit programming – looked at cortical thickness and surface area, along with volumes of deep-lying brain regions.

To improve the results, the algorithms must be further refined and other types of brain data, such as brain function and connections, must be added, they said.

These initial results tend to hold – are generalisable – in such a diverse group of youngsters in terms of ethnicity, geographical location and clinical characteristics, the researchers said.

This renders the study outcomes rather fascinating, they said.

According to lead researcher Moji Aghajani, Assistant Professor at Leiden University in Netherlands, the study could eventually facilitate a more personalised approach to prevention, diagnostics and care.

Anxiety disorders typically first emerge during adolescence and early adulthood. These disorders cause major emotional, social and economic problems for millions of youngsters worldwide.

However, it is unclear which brain processes are involved in these anxiety disorders, the researchers said.

“This incomplete understanding of underlying brain bases is largely due to our simplistic approach to mental disorders among youths, in which clinical studies are often too small in size, with way too much focus on the ‘average patient’ rather than the individual,” said Aghajani.

“This, moreover, concurs with use of traditional analytical techniques, which are unable to produce individual-level outcomes,” the researcher added.

However, the field is slowly changing, with more focus on individuals and their unique brain characteristics, through the use of large and diverse datasets – also known as “big data” – combined with AI.
 

(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)

Waiting for response to load…



Source link

]]>
AI Starts Creating Fake Legal Cases, Making Its Way Into Real Courtrooms https://artifex.news/ai-starts-creating-fake-legal-cases-making-its-way-into-real-courtrooms-5248314/ Sat, 16 Mar 2024 04:01:18 +0000 https://artifex.news/ai-starts-creating-fake-legal-cases-making-its-way-into-real-courtrooms-5248314/ Read More “AI Starts Creating Fake Legal Cases, Making Its Way Into Real Courtrooms” »

]]>

Its hardly surprising, then, that AI also has a strong impact on our legal systems. (Representational)

We’ve seen deepfake, explicit images of celebrities, created by artificial intelligence (AI). AI has also played a hand in creating music, driverless race cars and spreading misinformation, among other things.

It’s hardly surprising, then, that AI also has a strong impact on our legal systems.

It’s well known that courts must decide disputes based on the law, which is presented by lawyers to the court as part of a client’s case. It’s therefore highly concerning that fake law, invented by AI, is being used in legal disputes.

Not only does this pose issues of legality and ethics, it also threatens to undermine faith and trust in global legal systems.

How do fake laws come about?

There is little doubt that generative AI is a powerful tool with transformative potential for society, including many aspects of the legal system. But its use comes with responsibilities and risks.

Lawyers are trained to carefully apply professional knowledge and experience, and are generally not big risk-takers. However, some unwary lawyers (and self-represented litigants) have been caught out by artificial intelligence.

AI models are trained on massive data sets. When prompted by a user, they can create new content (both text and audiovisual).

Although content generated this way can look very convincing, it can also be inaccurate. This is the result of the AI model attempting to “fill in the gaps” when its training data is inadequate or flawed, and is commonly referred to as “hallucination”.

In some contexts, generative AI hallucination is not a problem. Indeed, it can be seen as an example of creativity.

But if AI hallucinated or created inaccurate content that is then used in legal processes, that’s a problem – particularly when combined with time pressures on lawyers and a lack of access to legal services for many.

This potent combination can result in carelessness and shortcuts in legal research and document preparation, potentially creating reputational issues for the legal profession and a lack of public trust in the administration of justice.

It’s happening already

The best known generative AI “fake case” is the 2023 US case Mata v Avianca, in which lawyers submitted a brief containing fake extracts and case citations to a New York court. The brief was researched using ChatGPT.

The lawyers, unaware that ChatGPT can hallucinate, failed to check that the cases actually existed. The consequences were disastrous. Once the error was uncovered, the court dismissed their client’s case, sanctioned the lawyers for acting in bad faith, fined them and their firm, and exposed their actions to public scrutiny.

Despite adverse publicity, other fake case examples continue to surface. Michael Cohen, Donald Trump’s former lawyer, gave his own lawyer cases generated by Google Bard, another generative AI chatbot. He believed they were real (they were not) and that his lawyer would fact check them (he did not). His lawyer included the cases in a brief filed with the US Federal Court.

Fake cases have also surfaced in recent matters in Canada and the United Kingdom.

If this trend goes unchecked, how can we ensure that the careless use of generative AI does not undermine the public’s trust in the legal system? Consistent failures by lawyers to exercise due care when using these tools has the potential to mislead and congest the courts, harm clients’ interests, and generally undermine the rule of law.

What’s being done about it?

Around the world, legal regulators and courts have responded in various ways.

Several US state bars and courts have issued guidance, opinions or orders on generative AI use, ranging from responsible adoption to an outright ban.

Law societies in the UK and British Columbia, and the courts of New Zealand, have also developed guidelines.

In Australia, the NSW Bar Association has a generative AI guide for barristers. The Law Society of NSW and the Law Institute of Victoria have released articles on responsible use in line with solicitors’ conduct rules.

Many lawyers and judges, like the public, will have some understanding of generative AI and can recognise both its limits and benefits. But there are others who may not be as aware. Guidance undoubtedly helps.

But a mandatory approach is needed. Lawyers who use generative AI tools cannot treat it as a substitute for exercising their own judgement and diligence, and must check the accuracy and reliability of the information they receive.

In Australia, courts should adopt practice notes or rules that set out expectations when generative AI is used in litigation. Court rules can also guide self-represented litigants, and would communicate to the public that our courts are aware of the problem and are addressing it.

The legal profession could also adopt formal guidance to promote the responsible use of AI by lawyers. At the very least, technology competence should become a requirement of lawyers’ continuing legal education in Australia.

Setting clear requirements for the responsible and ethical use of generative AI by lawyers in Australia will encourage appropriate adoption and shore up public confidence in our lawyers, our courts, and the overall administration of justice in this country.The Conversation

(Authors:Michael Legg, Professor of Law, UNSW Sydney and Vicki McNamara, Senior Research Associate, Centre for the Future of the Legal Profession, UNSW Sydney)

(Disclosure Statement:Vicki McNamara is affiliated with the Law Society of NSW (as a member). Michael Legg does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment)

This article is republished from The Conversation under a Creative Commons license. Read the original article.
 

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

Waiting for response to load…



Source link

]]>
Union Cabinet Approves IndiaAI Mission With Rs 10,371 Crore Budget Outlay https://artifex.news/union-cabinet-approves-indiaai-mission-with-rs-10-371-crore-budget-outlay-5196347rand29/ Thu, 07 Mar 2024 17:45:42 +0000 https://artifex.news/union-cabinet-approves-indiaai-mission-with-rs-10-371-crore-budget-outlay-5196347rand29/ Read More “Union Cabinet Approves IndiaAI Mission With Rs 10,371 Crore Budget Outlay” »

]]>

The cabinet meeting was chaired by Prime Minister Narendra Modi. (File)

New Delhi:

As part of the vision of ‘Making AI in India’ and ‘Making AI Work for India,’ the Union Cabinet has approved the comprehensive national-level India Artificial Intelligence mission with a budget outlay of Rs 10,371.92 crore.

Briefing media after a meeting of the union cabinet, Commerce and Industry Minister Piyush Goyal said that IndiaAI mission will establish a comprehensive ecosystem catalyzing AI innovation through strategic programmes and partnerships across the public and private sectors.

“In order to develop the ecosystem of Artificial Intelligence (AI), a framework of India’s AI mission has been prepared in a comprehensive manner. In December 2023, at the Global Partnership on AI summit which concluded in the national capital, the vision of India AI was put forth by Prime Minister Narendra Modi, under which, several aspects including the adoption of AI on a bigger scale, providing innovators with wider opportunities, and skill development in this field, have been included,” Goyal said.

“Over 10,000 Graphics Processing Units (GPUs) will be procured in the public-private partnership to aid a computing system for a high-end AI ecosystem, which will also be conducive to the design of an AI marketplace,” he added.

An official release said that by democratizing computing access, improving data quality, developing indigenous AI capabilities, attracting top AI talent, enabling industry collaboration, providing startup risk capital, ensuring socially impactful AI projects and bolstering ethical AI, the mission will drive responsible, inclusive growth of India’s AI ecosystem.

The cabinet meeting was chaired by Prime Minister Narendra Modi.

The release said that the mission will be implemented by ‘IndiaAI’ Independent Business Division (IBD) under Digital India Corporation (DIC) and has various components like IndiaAI Compute Capacity, IndiaAI Innovation Centre, IndiaAI Datasets Platform, IndiaAI Application Development Initiative, IndiaAI FutureSkills, IndiaAI Startup Financing and Safe & Trusted AI.

The release said IndiaAI Compute Capacity, the compute pillar, will build a high-end scalable AI computing ecosystem to cater to the increasing demands from India’s rapidly expanding AI start-ups and research ecosystem.

“The ecosystem will comprise AI compute infrastructure of 10,000 or more Graphics Processing Units (GPUs), built through a public-private partnership. Further, an AI marketplace will be designed to offer AI as a service and pre-trained models to AI innovators. It will act as a one-stop solution for resources critical for AI innovation,” the release said.

IndiaAI Innovation Centre will undertake the development and deployment of indigenous Large Multimodal Models (LMMs) and domain-specific foundational models in critical sectors.

The IndiaAI Datasets Platform will streamline access to quality non-personal datasets for AI Innovation. A unified data platform will be developed to provide a one-stop solution for seamless access to non-personal datasets to Indian Startups and researchers.

The release said that the IndiaAI Application Development Initiative will promote AI applications in critical sectors for the problem statements sourced from central ministries, state departments, and other institutions. The initiative will focus on developing/scaling/promoting the adoption of impactful AI solutions with the potential for catalyzing large-scale socio-economic transformation.

IndiaAI FutureSkills is conceptualized to mitigate barriers to entry into AI programmes and will increase AI courses in undergraduate, master-level, and PhD programmes. Further, Data and AI Labs will be set up in Tier 2 and Tier 3 cities across India to impart foundational-level courses.

The IndiaAI Startup Financing pillar is conceptualized to support and accelerate deep-tech AI startups and provide them streamlined access to funding to enable futuristic AI Projects.

Recognizing the need for adequate guardrails to advance the responsible development, deployment, and adoption of AI, the Safe Trusted AI pillar will enable the implementation of Responsible AI projects including the development of indigenous tools and frameworks, self-assessment checklists for innovators, and other guidelines and governance frameworks.

The release said that the approved IndiaAI Mission will propel innovation and build domestic capacities to ensure the tech sovereignty of India.

“It will also create highly skilled employment opportunities to harness the demographic dividend of the country. IndiaAI Mission will help India demonstrate to the world how this transformative technology can be used for social good and enhance its global competitiveness,” the release said.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)



Source link

]]>
This AI Worm Can Steal Data, Break Security Of ChatGPT And Gemini https://artifex.news/this-ai-worm-can-steal-data-break-security-of-chatgpt-and-gemini-5173985/ Mon, 04 Mar 2024 10:29:56 +0000 https://artifex.news/this-ai-worm-can-steal-data-break-security-of-chatgpt-and-gemini-5173985/ Read More “This AI Worm Can Steal Data, Break Security Of ChatGPT And Gemini” »

]]>

The researchers also warned about “bad architecture design” within the AI system.

As generative AI systems like OpenAI’s ChatGPT and Google’s Gemini become more advanced, researchers are now developing AI worms which can steal your confidential data and break security measures of the generative AI systems, as per a report in Wired.

Researchers from Cornell University, Technion-Israel Institute of Technology, and Intuit created the first generative AI worm called ‘Morris II’ which can steal data or deploy malware and spread from one system to another. It has been named after the first worm which was launched on the internet in 1988. Ben Nassi, a Cornell Tech researcher, said, “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,”

The AI worm can breach some security measures in ChatGPT and Gemini by attacking a generative AI email assistant with the intent of stealing email data and sending spam, as per the outlet.

The researchers used an “adversarial self-replicating prompt” to develop the generative AI worm. According to them, this prompt causes the generative AI model to generate a different prompt in response. To execute it, the researchers then created an email system that could send and receive messages using generative AI, adding into ChatGPT, Gemini, and open-source LLM. Further, they discovered two ways to utilise the system- by using a self-replicating prompt that was text-based and by embedding the question within an image file.

In one case, the researchers took on the role of attackers and sent an email with an adversarial text prompt. This “poisons” the email assistant’s database by utilising retrieval-augmented generation, which allows LLMs to get more data from outside their system. According to Mr Nassi, the retrieval-augmented generation “jailbreaks the GenAI service” when it retrieves an email in response to a user inquiry and sends it to GPT-4 or Gemini Pro to generate a response. This eventually results in the theft of data from the emails.

“The generated response containing the sensitive user data later infects new hosts when it is used to reply to an email sent to a new client and then stored in the database of the new client,” he added.

For the second method, the researcher mentioned, “By encoding the self-replicating prompt into the image, any kind of image containing spam, abuse material, or even propaganda can be forwarded further to new clients after the initial email has been sent.”

A video showcasing the findings shows the email system repeatedly forwarding a message. The researchers claim that they could also obtain email data.”It can be names, it can be telephone numbers, credit card numbers, SSN, anything that is considered confidential,” Mr Nassi said.

The researchers also warned about “bad architecture design” within the AI system. They also reported their observations to Google and OpenAI. “They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn’t been checked or filtered,” a spokesperson for OpenAI told the outlet. Further, they mentioned that they are working to make systems “more resilient” and developers should “use methods that ensure they are not working with harmful input.” 

Google declined to comment on the subject.

Waiting for response to load…



Source link

]]>
China Calls For Global Cooperation At AI Summit, Elon Musk Wants “Referee” https://artifex.news/china-calls-for-global-cooperation-at-ai-summit-elon-musk-wants-referee-4536494/ Wed, 01 Nov 2023 16:30:06 +0000 https://artifex.news/china-calls-for-global-cooperation-at-ai-summit-elon-musk-wants-referee-4536494/ Read More “China Calls For Global Cooperation At AI Summit, Elon Musk Wants “Referee”” »

]]>

The summit is focused on highly capable general-purpose models called “frontier AI”.

Bletchley Park, England:

China said on Wednesday it wanted to work with international partners to manage the oversight of artificial intelligence as political leaders and technology executives gathered at an inaugural AI Safety Summit in Britain to plot the way forward.

Some tech bosses and political leaders have warned the rapid development of AI poses an existential threat to the world, sparking a race by governments and international institutions to design safeguards and regulation for the future.

In a first for Western efforts to manage the safe development of AI, a Chinese vice minister joined leaders from the United States and the European Union, alongside tech bosses such as Elon Musk and ChatGPT’s Sam Altman.

“China is willing to enhance our dialogue and communication in AI safety with all sides, contributing to an international mechanism with global participation in governance framework that needs wide consensus,” Wu Zhaohui said at the start of the summit, according to an official translation of his remarks.

“Countries regardless of their size and scale have equal rights to develop and use AI,” he added.

Elon Musk, who has warned about the risks of AI, said the summit wanted to establish a “third-party referee” for companies developing the technology, so it could sound the alarm when risks develop, and so instil confidence in the public.

The meeting – held at Bletchley Park, home of Britain’s World War Two code-breakers – is the brainchild of Prime Minister Rishi Sunak. He wants to carve out a role for Britain as an intermediary between the economic blocs of the United States, China and the EU.

The summit is focused on highly capable general-purpose models called “frontier AI”

Participating governments produced a “Bletchley Declaration”, with 28 countries and the EU agreeing the need for transparency and accountability from actors in frontier AI technology, including how they will measure, monitor and mitigate potentially harmful capabilities.

A collective plan

British digital minister Michelle Donelan said it was an achievement just to get so many key players in one room.

“For the first time, we now have countries agreeing that we need to look not just independently but collectively at the risk around frontier AI,” she told reporters.

China is a key participant, given the country’s role in developing AI technology. However, some British lawmakers have questioned whether it should be there given the low level of trust between Beijing, Washington and many European capitals when it comes to Chinese involvement in technology.

The United States made clear on the eve of the summit that the call to Beijing had very much come from Britain, with its Ambassador to London, Jane Hartley, telling Reuters: “This is the UK invitation, this is not the US”.

US Vice President Kamala Harris also spoke in London on Wednesday, away from the summit, setting out her government’s response to AI, after US President Joe Biden signed an executive order on Monday.

The timing and location of her speech has raised eyebrows among some in the UK’s governing Conservative Party who suggest Washington is trying to overshadow Sunak’s summit – a charge denied by British officials who say they want as many voices as possible. Kamala Harris will meet Rishi Sunak later on Wednesday and attend the summit’s second day on Thursday.

US Secretary of Commerce Gina Raimondo used the summit to announce the launch of a US. AI Safety Institute, and said it would cooperate with Britain’s recently announced institute.

Canada’s minister of innovation, science and industry Francois-Philippe Champagne said AI would not be constrained by national borders, and therefore interoperability between different regulations being put in place was important.

“The risk is that we do too little, rather than too much, given the evolution and speed with which things are going,” he told Reuters.

On the agenda are topics like how AI systems might be used by terrorists to build bioweapons and the technology’s potential to outsmart humans and wreak havoc on the world.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

Waiting for response to load…



Source link

]]>
Government Says Working On Using AI To Reduce Pendency Of Consumer Cases https://artifex.news/government-says-working-on-using-ai-to-reduce-pendency-of-consumer-cases-4404087/ Tue, 19 Sep 2023 13:13:49 +0000 https://artifex.news/government-says-working-on-using-ai-to-reduce-pendency-of-consumer-cases-4404087/ Read More “Government Says Working On Using AI To Reduce Pendency Of Consumer Cases” »

]]>

Consumer Affairs Ministry said it is working on using AI to reduce the pendency of cases

New Delhi:

The Consumer Affairs Ministry today said it is working on using artificial intelligence (AI) to reduce the pendency of cases in various consumer courts in the country.

The ministry also said the National Consumer Dispute Redressal Commission (NCDRC) has successfully resolved 854 cases during August, the highest disposal rate in the current year.

This was possible due to proactive steps taken by the NCDRC, streamlined processes and advanced technology like E-daakhil, which helped resolve cases faster than ever before, it added.

“In furtherance of keeping the same momentum of disposal of cases, the Department has made filing of cases through E-daakhil in consumer commissions compulsory and soon going to launch the feature of VC (video conference) on E-daakhil,” the ministry said in a statement.

As the scope of artificial intelligence is increasing rapidly, the ministry is “also working on using the AI facilities in reducing the pendency of cases in the National, State and District Consumer Commissions,” it added.

The case filed in the Consumer Commissions will be analysed through AI and will generate the summary of the case and many more actions will be done through AI in resolving the case, the statement said.

According to the ministry, NCDRC has significantly improved the disposal of consumer cases in the commission in 2023.

NCDRC and the Department of Consumer Affairs successfully resolved 854 consumer cases in August, whereas the filing of cases during the same period was 455, making it the highest disposal rate of 188 per cent this year, it said.

This achievement underscores the NCDRC’s unwavering commitment to safeguarding consumer rights and ensuring prompt redressal of grievances, the ministry said.

Even the regular monitoring of consumer cases by the ministry and holding various one-day regional workshops in Guwahati and Chandigarh helped expedite the process.

The ministry also conducted sector-specific brainstorming sessions on insurance and real estate to reduce the pendency in the consumer commissions.

State-specific meetings in various states like Jharkhand, Madhya Pradesh, Karnataka, Rajasthan, Bihar, Maharashtra and Kerala were also held, the statement added.
 

Waiting for response to load…



Source link

]]>