ChatGPT – Artifex.News (https://artifex.news)

New AI Chip “Will Revolutionise ChatGPT”, Claims Startup Founded By Harvard Dropouts
https://artifex.news/chatgpt-ai-chip-sohu-new-ai-chip-in-making-claims-it-will-revolutionise-chatgpt-6116225/ | Tue, 16 Jul 2024 06:32:34 +0000


Sohu, the new chipset, will run only transformer AI models

New Delhi:

The reason you can ask ChatGPT “how to eat a mango without spilling it” and much more is Nvidia’s H100 and B200 Graphics Processing Units (GPUs). These magical chipsets that power AI chatbots have propelled Nvidia to become the frontrunner of the AI hardware industry, with its market capitalization crossing the $3 trillion mark last month, overtaking Microsoft and Apple.

But now, a relatively young startup founded by two Harvard dropouts has set its sights on a share of the AI hardware pie. Etched, the California-based startup, is looking to disrupt the AI chipset market with Sohu, its transformer ASIC (Application-Specific Integrated Circuit) chip.

Sohu is claimed to be 20 times faster at running transformers like the ones behind ChatGPT than Nvidia’s flagship H100. Even the B200, Nvidia’s more powerful offering, is reportedly 10 times slower than Sohu, according to claims the company has made based on emulation tests.

Source: X/@Etched


Sohu takes an entirely different approach to providing the computational power needed to run billions of parameters (the variables learned when training an AI model) in transformer models. Unlike GPUs, which can handle many computationally heavy tasks (such as rendering graphics in real time), Etched is building a specialised chip that caters only to transformer AI models – the architecture behind ChatGPT, Sora (OpenAI’s text-to-video model) and Google’s Gemini.

The trade-off is that Sohu cannot run other kinds of AI models, such as convolutional neural networks (used for image recognition). In return, the extra speed opens up the possibility of new AI products that developers could not build until now because of the limited power of GPUs.
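To see why a chip dedicated to one workload can win, consider the computation every transformer repeats billions of times: scaled dot-product attention, which is almost entirely matrix multiplication. The pure-Python sketch below is an illustrative toy with made-up two-dimensional values, not a description of Etched’s actual hardware; a transformer ASIC essentially hard-wires this one pattern of operations instead of supporting arbitrary workloads.

```python
import math

def matmul(a, b):
    """Naive matrix multiply: the single operation that dominates transformer inference."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def softmax(row):
    """Turn raw scores into attention weights that sum to 1."""
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(q[0])
    kt = [list(col) for col in zip(*k)]          # transpose K
    scores = matmul(q, kt)                        # token-to-token similarity
    weights = [softmax([s / math.sqrt(d) for s in row]) for row in scores]
    return matmul(weights, v)                     # weighted mix of value rows

# Two toy tokens with 2-dimensional embeddings.
q = [[1.0, 0.0], [0.0, 1.0]]
out = attention(q, q, q)
print(out)  # each output row is a convex mix of the value rows
```

Because the whole computation is fixed-shape matrix arithmetic, a chip that does nothing else can dispense with the general-purpose machinery a GPU carries for graphics and other workloads.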

For example, Sohu could potentially enable a real-time translator that hears or reads Hindi, Gujarati or Tamil and responds in French, English or German. Of course, such multimodal, multilingual translation needs more than raw computational power, but in theory the chip makes it possible.

Another multimodal application the chipset might tap into is integrating vision and language. Such a model would understand text and images simultaneously, opening up the possibility of visual question-and-answer sessions – like an interview.

But all of this remains theoretical. Etched raised $120 million on June 25 to make it a reality, though a firm timeline for an actual release of the Sohu ASIC remains elusive.

Etched claims it already has “tens of millions of dollars” worth of hardware reserved in preorders. The company has also secured a deal with TSMC (Taiwan Semiconductor Manufacturing Company) to fabricate the 4-nanometer chip, saying the deal will help “ramp up our first year of production.”

AI Improves Creativity In Stories, Reduces Diversity: Study
https://artifex.news/chatgpt-genai-ai-improves-creativity-in-stories-reduces-diversity-study-6110674/ | Mon, 15 Jul 2024 11:16:12 +0000


The study was published in the journal Science Advances

New Delhi:

Stories created with the help of ChatGPT are more creative and engage their audience with more plot twists than those by writers not using the tool, according to new research.

However, the researchers also found that diversity among stories written with generative artificial intelligence (GenAI) suffered, increasing the risk of losing “collective novelty”.

GenAI can create content — text, image, audio or video — and is based on large language models, which are trained on massive amounts of text data and can, therefore, process, interpret and respond to requests in the natural language that humans use to communicate.

The authors of the study, published in the journal Science Advances, found that inherently more creative writers benefitted the least from ideas generated by ChatGPT, while less creative writers became more creative thanks to the GenAI model’s suggestions.

Therefore, AI “effectively equalised creativity” among all the writers, the team of researchers said.

“While these results point to an increase in individual creativity, there is risk of losing collective novelty. If the publishing industry were to embrace more generative AI-inspired stories, our findings suggest that the stories would become less unique in aggregate and more similar to each other,” said study author Anil Doshi, assistant professor at the University College London’s School of Management, UK.

For the study, 300 participants were tasked with writing a short, eight-sentence story (a ‘microstory’) for a target audience of young adults, while a separate pool of 600 people was recruited to judge the writers’ work.

The writers were divided into three groups. The first was allowed no help from AI, while the second was allowed to take one idea, along with the first three sentences of the story, created by ChatGPT for inspiration. The third group was allowed to choose from up to five AI-created story ideas.

The authors found that the work of writers taking the AI’s help was 8 to 9 per cent more novel than that of writers not relying on AI. Along with novelty, the microstories were judged for “usefulness”: were they engaging enough for the audience, and could they be developed and potentially published?

The team also found that the less creative writers ‘became’ more so, with AI making their stories more novel by 10.7 per cent and more useful by 11.5 per cent, compared to the stories of writers not taking AI’s help.

The AI made the work by less creative writers up to 26.6 per cent better, 22.6 per cent more enjoyable and 15.2 per cent less boring, the authors found.

The inherent creativity of the writers was measured using a psychological test — Divergent Association Task (DAT).

Divergent thinking, which allows one to think of multiple solutions to a problem spontaneously, is known to be important to creativity.
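The DAT itself asks people to name unrelated nouns and scores how semantically far apart the words are on average. The sketch below is only a toy illustration of that scoring idea: the published test uses real word embeddings (GloVe vectors), whereas the three-dimensional vectors here are invented for demonstration.

```python
import math
from itertools import combinations

# Toy word vectors standing in for real semantic embeddings
# (the published DAT uses GloVe vectors); the values are illustrative only.
EMBEDDINGS = {
    "cat":    [0.9, 0.8, 0.1],
    "dog":    [0.8, 0.9, 0.2],
    "galaxy": [0.1, 0.2, 0.9],
}

def cosine_distance(u, v):
    """1 - cosine similarity: 0 for identical directions, larger when unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def dat_score(words):
    """Mean pairwise semantic distance: higher means more divergent word choices."""
    pairs = list(combinations(words, 2))
    return sum(cosine_distance(EMBEDDINGS[a], EMBEDDINGS[b]) for a, b in pairs) / len(pairs)

# Unrelated picks ("cat", "galaxy") score higher than related ones ("cat", "dog").
related = dat_score(["cat", "dog"])
divergent = dat_score(["cat", "galaxy"])
print(related, divergent)
```

In the real test a participant lists ten nouns, and the average distance across all pairs becomes their creativity score.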

Further, the authors found that stories produced by writers using GenAI’s ideas were 10.7 per cent more similar to each other than those of writers not using AI.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

AI accessibility? Blind gamer puts ChatGPT to the test
https://artifex.news/article68392785-ece/ | Thu, 11 Jul 2024 16:00:09 +0000


Japanese eSports gamer Mashiro is blind and often relies on a companion to get around Tokyo — but he hopes that artificial intelligence, hailed as a promising tool for people with disabilities, can help him travel alone.

The 26-year-old ‘Street Fighter’ player put the latest version of AI chatbot ChatGPT to the test on his way to a stadium for a recent Para eSports meet-up.

“I can’t participate in an event like this without someone to rely on,” he told AFP. “Also, sometimes I just want to get around by myself without speaking to other people.

“So if I can use technology like ChatGPT to design my own special needs support, that would be great.”

This year, the US firm OpenAI released GPT-4o, which understands voice, text and image commands in several languages.

The generative AI tool, along with others such as Google’s Gemini, is part of a fast-growing field that experts say could make education, employment and everyday services more accessible.

Following the street’s tactile paving, Masahiro Fujimoto – who goes by his online handle Mashiro – used his stick, adorned with a small monkey mascot, to find his way from the station.

As he went, he spoke to GPT-4o like a friend, receiving its answers through an earpiece in one ear, leaving the other side free to listen out for cars.

Having asked for basic directions, he added: “In fact, I am blind, so could you give me further details for blind people?”

“Of course,” the bot replied. “You might notice an increase in crowd noise and the sound of activities as you get closer.”

The journey, 20 minutes for sighted people, took Mashiro around four times as long with several U-turns.

When it started to rain heavily, he requested help from his friend, who is partially sighted, to finish the journey.

“Arrival!” finally shouted Mashiro, who has microphthalmos and has been blind since birth, using only sound to demolish his opponents on ‘Street Fighter 6’.

AI can cater to specific needs better than “one-size-fits-all” assistive products and technologies, said Youngjun Cho, an associate professor in computer science at University College London (UCL).

“Its potential is enormous,” said Cho, who also works at UCL’s Global Disability Innovation Hub.

“I envisage that this can empower many individuals and promote independence.”

People with hearing loss can, for example, use AI speech-to-text transcription, while chatbots can help format a resume for someone with learning disabilities.

Some tools for visually impaired people, such as Seeing AI, Envision AI and TapTapSee, describe phone camera images.

Danish app Be My Eyes, where real-life volunteers help via live chat, is working with OpenAI to develop a “digital visual assistant”.

But Masahide Ishiki, a Japanese expert in disability and digital accessibility, warned it can be “tricky” to catch mistakes from ChatGPT, which “replies so naturally”.

“The next objective (for generative AI) is to improve the accuracy of real-time visual recognition, to ultimately reach capabilities close to that of a human eye,” said Ishiki, who is blind.

Marc Goblot of the Tech for Disability group also cautioned that AI is trained on “very mainstream datasets” which are “not representative of the full spectrum of people’s perceptions and especially the margins”.

Mashiro said ChatGPT’s limited recognition of Japanese words and locations made his AI-assisted journey more challenging.

Although the experiment was “a lot of fun”, it would have been easier if ChatGPT was connected to a map tool, said the gamer, who travelled around Europe last year using Google Maps and help from those around him.

He has already decided on his next travel destination: Yakushima rainforest island in southern Japan.

“I want to experience whatever happens when travelling somewhere like that,” he said.



ChatGPT Was Asked For Legal Advice: 5 Reasons Why It Was A Bad Idea
https://artifex.news/chatgpt-was-asked-for-legal-advice-5-reasons-why-it-was-a-bad-idea-5799239/ | Sun, 02 Jun 2024 07:33:20 +0000


The first answers the chatbots provided were often based on American law.

At some point in your life, you are likely to need legal advice. A survey carried out in 2023 by the Law Society, the Legal Services Board and YouGov found that two-thirds of respondents had experienced a legal issue in the past four years. The most common problems concerned employment, finance, welfare and benefits, and consumer issues.

But not everyone can afford to pay for legal advice. Of those survey respondents with legal problems, only 52% received professional help, 11% had assistance from other people such as family and friends and the remainder received no help at all.

Many people turn to the internet for legal help. And now that we have access to artificial intelligence (AI) chatbots such as ChatGPT, Google Bard, Microsoft Co-Pilot and Claude, you might be thinking about asking them a legal question.

These tools are powered by generative AI, which generates content when prompted with a question or instruction. They can quickly explain complicated legal information in a straightforward, conversational style, but are they accurate?

We put the chatbots to the test in a recent study published in the International Journal of Clinical Legal Education. We entered the same six legal questions on family, employment, consumer and housing law into ChatGPT 3.5 (free version), ChatGPT 4 (paid version), Microsoft Bing and Google Bard. The questions were ones we typically receive in our free online law clinic at The Open University Law School.

We found that these tools can indeed provide legal advice, but the answers were not always reliable or accurate. Here are five common mistakes we observed:

1. Where is the law from?

The first answers the chatbots provided were often based on American law. This was often not stated or obvious. Without legal knowledge, the user would likely assume the law related to where they live. The chatbot sometimes did not explain that law differs depending on where you live.

This is especially complex in the UK, where laws differ between England and Wales, Scotland and Northern Ireland. For example, the law on renting a house in Wales is different to Scotland, Northern Ireland and England, while Scottish and English courts have different procedures to deal with divorce and the ending of a civil partnership.

If necessary, we used one additional question: “is there any English law that covers this problem?” We had to use this instruction for most of the questions, and then the chatbot produced an answer based on English law.

2. Out of date law

We also found that sometimes the answer to our question referred to out of date law, which has been replaced by new legal rules. For example, the divorce law changed in April 2022 to remove fault-based divorce in England and Wales.

Some responses referred to the old law. AI chatbots are trained on large volumes of data – we don’t always know how current the data is, so it may not include the most recent legal developments.

3. Bad advice

We found most of the chatbots gave incorrect or misleading advice when dealing with the family and employment queries. The answers to the housing and consumer questions were better, but there were still gaps in the responses. Sometimes, they missed really important aspects of the law, or explained it incorrectly.

We found that the answers produced by the AI chatbots were well-written, which could make them appear more convincing. Without having legal knowledge, it is very difficult for someone to determine whether an answer produced is correct and applies to their individual circumstances.

Even though this technology is relatively new, there have already been cases of people relying on chatbots in court. In a civil case in Manchester, a litigant representing themselves in court reportedly presented fictitious legal cases to support their argument. They said they had used ChatGPT to find the cases.

4. Too generic

In our study, the answers didn’t provide enough detail for someone to understand their legal issue and know how to resolve it. The answers gave general information on a topic rather than specifically addressing the legal question.

Interestingly, the AI chatbots were better at suggesting practical, non-legal ways to address a problem. While this can be useful as a first step to resolving an issue, it does not always work, and legal steps may be needed to enforce your rights.

5. Pay to play

We found that ChatGPT 4 (the paid version) was better overall than the free chatbots. This risks further reinforcing digital and legal inequality.

The technology is evolving, and there may come a time when AI chatbots are better able to provide legal advice. Until then, people need to be aware of the risks when using them to resolve their legal problems. Other sources of help such as Citizens Advice will provide up to date, accurate information and are better placed to assist.

All the chatbots answered our questions but stated in their responses that it was not their function to provide legal advice, and recommended getting professional help. After conducting this study, we recommend the same.

Francine Ryan, Senior Lecturer in Law and Director of the Open Justice Centre, The Open University and Elizabeth Hardie, Senior Lecturer, Law School, The Open University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)



OpenAI Says AI Is “Safe Enough” As Scandals Raise Concerns
https://artifex.news/openai-says-ai-is-safe-enough-as-scandals-raise-concerns-5716849/ | Tue, 21 May 2024 23:35:12 +0000


Sam Altman insisted that OpenAI had put in “a huge amount of work” to ensure the safety of its models.

Seattle:

OpenAI CEO Sam Altman defended his company’s AI technology as safe for widespread use, as concerns mount over potential risks and lack of proper safeguards for ChatGPT-style AI systems.

Altman’s remarks came at a Microsoft event in Seattle, where he spoke to developers just as a new controversy erupted over an OpenAI AI voice that closely resembled that of the actress Scarlett Johansson.

The CEO, who rose to global prominence after OpenAI released ChatGPT in 2022, is also grappling with questions about the safety of the company’s AI following the departure of the team responsible for mitigating long-term AI risks.

“My biggest piece of advice is this is a special time and take advantage of it,” Altman told the audience of developers seeking to build new products using OpenAI’s technology.

“This is not the time to delay what you’re planning to do or wait for the next thing,” he added.

OpenAI is a close partner of Microsoft and provides the foundational technology, primarily the GPT-4 large language model, for building AI tools.

Microsoft has jumped on the AI bandwagon, pushing out new products and urging users to embrace generative AI’s capabilities.

“We kind of take for granted” that GPT-4, while “far from perfect…is generally considered robust enough and safe enough for a wide variety of uses,” Altman said.

Altman insisted that OpenAI had put in “a huge amount of work” to ensure the safety of its models.

“When you take a medicine, you want to know what’s going to be safe, and with our model, you want to know it’s going to be robust to behave the way you want it to,” he added.

However, questions about OpenAI’s commitment to safety resurfaced last week when the company dissolved its “superalignment” group, a team dedicated to mitigating the long-term dangers of AI.

In announcing his departure, team co-leader Jan Leike criticized OpenAI for prioritizing “shiny new products” over safety in a series of posts on X (formerly Twitter).

“Over the past few months, my team has been sailing against the wind,” Leike said.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”

This controversy was swiftly followed by a public statement from Johansson, who expressed outrage over a voice used by OpenAI’s ChatGPT that sounded similar to her voice in the 2013 film “Her.”

The voice in question, called “Sky,” was featured last week in the release of OpenAI’s more human-like GPT-4o model.

In a short statement on Tuesday, Altman apologized to Johansson but insisted the voice was not based on hers.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

ChatGPT Rival ‘Claude’ Now Available In Europe, Can Be Accessed For Free
https://artifex.news/chatgpt-rival-claude-now-available-in-europe-can-be-accessed-for-free-5659240/ | Tue, 14 May 2024 05:55:20 +0000


The OpenAI rival introduced the latest version of Claude in March. (Representational)

San Francisco:

Anthropic on Monday announced that its artificial intelligence assistant “Claude” is available in Europe after launching in the United States earlier this year.

“We’re excited to announce that Claude, Anthropic’s trusted AI assistant, is now available for people and businesses across Europe to enhance their productivity and creativity,” the San Francisco-based tech startup said in a blog post.

Claude can be accessed for free in Europe online at claude.ai or using an app tailored for Apple mobile devices, according to Anthropic.

It is also available to businesses through a paid “Claude Team” subscription plan, the company added.

The OpenAI rival introduced the latest version of Claude in March.

“Claude has strong levels of comprehension and fluency in French, German, Spanish, Italian, and other European languages, allowing users to converse with Claude in multiple languages,” Anthropic said.

“Claude’s intuitive, user-friendly interface makes it easy for anyone to seamlessly integrate our advanced AI models into their workflows.”

OpenAI’s launch of ChatGPT in late 2022 sparked keen interest in generative AI that enables computers to create images, videos, computer code, or written works from simple text prompts.

Founded by former OpenAI employees, Anthropic strives to distinguish itself from its competitors by building stricter safeguards into its technology to prevent it from being misused.

Millions of people are already using Claude for an array of purposes, according to Anthropic co-founder and chief executive Dario Amodei. “I can’t wait to see what European people and companies will be able to create with Claude,” Amodei said.

An application programming interface for developers interested in using Anthropic AI has been accessible in Europe since the start of this year, according to the company.

Anthropic has raised at least $7 billion in funding since 2021, with its backers including Amazon, Google, and Salesforce.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

OpenAI Unveils New AI Model GPT-4o, Will Be Offered For Free
https://artifex.news/openai-unveils-new-ai-model-gpt-4o-will-be-offered-for-free-5656366/ | Mon, 13 May 2024 17:36:37 +0000


OpenAI is under pressure to expand the user base of ChatGPT (Representational)

ChatGPT maker OpenAI said on Monday it would release a new AI model called GPT-4o, which reasons across voice, text and vision.

OpenAI’s chief technology officer, Mira Murati, said during a livestream event that the new GPT-4o model would be offered for free because it is more efficient than the company’s previous models.

OpenAI researchers showed off ChatGPT’s new voice assistant capabilities. In one demonstration, the ChatGPT voice assistant was able to read out a bedtime story in different voices, emotions and tones.

In a second demonstration, the ChatGPT voice assistant used its vision capabilities to walk through solving a math equation written on a sheet of paper.

Paid users of GPT-4o will have greater capacity limits than the company’s free users, Murati said.

OpenAI is under pressure to expand the user base of ChatGPT, its popular chatbot product that wowed the world with its ability to produce human-like written content and top-notch software code.

Shortly after launching in late 2022, ChatGPT was called the fastest application to ever reach 100 million monthly active users. However, worldwide traffic to ChatGPT’s website has been on a roller-coaster ride in the past year and is only now returning to its May 2023 peak, according to analytics firm Similarweb. 

Giving ChatGPT the search engine-like capability of accessing and linking to up-to-date, accurate Web information is an obvious next step, and one that the current iteration of ChatGPT finds challenging, industry experts have said.

OpenAI made the announcements a day before Alphabet is scheduled to hold its annual Google developers conference, where it is expected to show off its own new AI-related features.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)

Big Tech in ‘underground’ race to license archives that will train Artificial Intelligence
https://artifex.news/article68035413-ece/ | Sat, 06 Apr 2024 06:12:12 +0000


Ted Leonard, Chief Executive Officer of Photobucket. File | Photo Credit: Reuters

At its peak in the early 2000s, Photobucket was the world’s top image-hosting site. The media backbone for once-hot services such as Myspace and Friendster, it boasted 70 million users and accounted for nearly half of the U.S. online photo market.

Today only two million people still use Photobucket, according to analytics tracker Similarweb. But the generative AI revolution may give it a new lease of life.

CEO Ted Leonard, who runs the 40-strong company out of Edwards, Colorado, said he is in talks with multiple tech companies to license Photobucket’s 13 billion photos and videos to be used to train generative AI models that can produce new content in response to text prompts.

He has discussed rates of between five cents and $1 per photo, and more than $1 per video, he said, with prices varying widely both by the buyer and by the types of imagery sought. “We’ve spoken to companies that have said, ‘we need way more,’” Mr. Leonard added, with one buyer telling him they wanted over a billion videos.

Photobucket declined to identify its prospective buyers, citing commercial confidentiality. The ongoing negotiations, which haven’t been previously reported, suggest the company could be sitting on billions of dollars’ worth of content and give a glimpse into a bustling data market that’s arising in the rush to dominate generative AI technology.

Tech giants such as Google, Meta, and Microsoft-backed OpenAI initially used reams of data scraped from the internet for free to train generative AI models such as ChatGPT that can mimic human creativity. They have said that doing so is both legal and ethical, though they face lawsuits from a string of copyright holders over the practice.

At the same time, these tech companies are also quietly paying for content locked behind paywalls and login screens, giving rise to a hidden trade in everything from chat logs to long forgotten personal photos from faded social media apps.

“There is a rush right now to go for copyright holders that have private collections of stuff that is not available to be scraped,” said Edward Klaris from law firm Klaris Law, which says it’s advising content owners on deals worth tens of millions of dollars apiece to license archives of photos, movies and books for AI training.

Reuters spoke to more than 30 people with knowledge of AI data deals. OpenAI, Google, Meta, Microsoft, Apple and Amazon all declined to comment. Many major market research firms say they have not even begun to estimate the size of the opaque AI data market, where companies often don’t disclose agreements. Those researchers who do, such as Business Research Insights, put the market at roughly $2.5 billion now and forecast it could grow to close to $30 billion within a decade.



After Sam Altman Rejoins, Who Are ChatGPT-Maker OpenAI’s New Board Members
https://artifex.news/after-sam-altman-rejoins-who-are-chatgpt-maker-openais-new-board-members-5220357/ | Mon, 11 Mar 2024 17:26:43 +0000


Altman had returned as CEO of OpenAI just four days after his firing.

Sam Altman is rejoining the board of ChatGPT-maker OpenAI along with three new directors, as the startup tries to move past his sudden ouster in November that shocked the tech industry.

Altman had returned as CEO of OpenAI just four days after his firing with a new board made up of Quora CEO Adam D’Angelo, former U.S. Treasury Secretary Larry Summers and former co-CEO of Salesforce Bret Taylor.

The board will now expand with the addition of Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation, Nicole Seligman, former president of Sony Entertainment, and Instacart CEO Fidji Simo.

Here are more details about the new members:

Fidji Simo

Simo serves as chief executive and chair of Instacart. She also sits on the board of Shopify.

She spent a decade at social media giant Meta Platforms, including as the head of Facebook from 2019 until 2021.

She also serves as president of the Metrodora Foundation and is a co-founder of the Metrodora Institute, a multidisciplinary medical clinic and research foundation.

Sue Desmond-Hellman

A former board member of Meta, Desmond-Hellmann was CEO of the Bill and Melinda Gates Foundation from 2014 to 2020. She is also a former director on Procter & Gamble’s board and currently serves on the board of US drugmaker Pfizer and on the President’s Council of Advisors on Science and Technology.

She was professor and chancellor of the University of California, San Francisco from 2009 to 2014, the first woman to hold the position. She has also served as president of product development at the biotechnology firm Genentech.

Nicole Seligman

The lawyer is a board member at Paramount Global, MeiraGTx and Intuitive Machines. She also served on the board of Viacom through 2019, when it merged with CBS to form Paramount, then called ViacomCBS.

She has held several leadership positions at Japanese company Sony including president of Sony Entertainment from 2014-16 and president of Sony’s America business.

She was a partner in litigation practice at American law firm Williams & Connolly LLP. She also served as a law clerk to Justice Thurgood Marshall on the Supreme Court of the United States.

ChatGPT-Maker OpenAI Officials Meet Election Commission of India, Discuss Ways To Combat AI During 2024 Lok Sabha Elections
https://artifex.news/chatgpt-maker-openai-officials-meet-election-commission-of-india-discuss-ways-to-combat-ai-during-2024-lok-sabha-elections-5206041rand29/ | Sat, 09 Mar 2024 09:47:54 +0000



Over 25 countries, including India, the US, and the UK, are going to the polls this year

New Delhi:

Amid a busy year of high-stakes elections around the world, officials from ChatGPT-maker OpenAI met the members of the Election Commission of India.

During the meeting, which was held last month, the OpenAI officials showed a presentation to the members of the poll body about the steps that they are taking to ensure that Artificial Intelligence (AI) is not misused in the upcoming general polls in the country.

The meeting comes as big tech companies are under considerable pressure over fears that AI tools could be misused in a pivotal election year. Over 25 countries, including India, the US, and the UK, are going to the polls this year.

Tech giants, including OpenAI, Meta, Microsoft, and Google last month joined together in a pledge to fight AI content designed to mislead voters.

They promised to use technologies to counter potentially harmful AI content, such as through the use of watermarks invisible to the human eye but detectable by machine.
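As a rough illustration of the “invisible to the eye but detectable by machine” idea, the toy sketch below hides a short tag in the least-significant bits of grayscale pixel values, where a one-unit brightness change is imperceptible. This is only a teaching example; the production watermarking schemes the signatories describe are far more sophisticated and robust to cropping and compression.

```python
def embed(pixels, tag):
    """Hide a tag in pixel least-significant bits; brightness changes by at most 1/255."""
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(8)]
    assert len(bits) <= len(pixels), "image too small for tag"
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

def extract(pixels, length):
    """Read the hidden tag back out of the least-significant bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )
    return data.decode()

image = list(range(200, 120, -1))  # 80 toy grayscale pixel values
marked = embed(image, "AI-gen")
print(extract(marked, 6))  # prints "AI-gen"; each pixel differs from the original by at most 1
```

A viewer cannot see the difference between `image` and `marked`, but any program that knows where to look recovers the tag, which is the essence of machine-detectable provenance labels.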

The companies said they would also work together to develop ways to “detect and address” deceptive election material on their platforms. Such content could, for example, be annotated to make it clear it is AI-generated.

Among the 20 signatories of the deal, presented on the sidelines of the Munich Security Conference in Germany, were X (formerly known as Twitter), TikTok, Snap, Adobe, LinkedIn, Amazon and IBM.

AI-driven disinformation and misinformation are the biggest short-term global risks and could undermine newly elected governments in major economies, the World Economic Forum had warned in a report released earlier this year.


