Microsoft Engineer Says Company's AI Tool Generates Sexual And Violent Images
Artifex.News, Thu, 07 Mar 2024


Mr Jones claims he previously warned Microsoft management but saw no action

A Microsoft AI engineer, Shane Jones, raised concerns in a letter on Wednesday, alleging that the company's AI image generator, Copilot Designer, lacks safeguards against generating inappropriate content such as violent or sexual imagery. Mr Jones says he previously warned Microsoft management but saw no action, prompting him to send the letter to the Federal Trade Commission and Microsoft's board.

“Internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers,” Mr Jones states in the letter, which he published on LinkedIn. He lists his title as “principal software engineering manager”.

In response to the allegations, a Microsoft spokesperson denied neglecting safety concerns, The Guardian reported, and emphasized the existence of "robust internal reporting channels" for addressing issues related to generative AI tools. Mr Jones has not yet responded to the spokesperson's statement.

The central concern raised in the letter is Microsoft's Copilot Designer, an image generation tool powered by OpenAI's DALL-E 3 system, which creates images from text prompts.

This incident is part of a broader trend in the generative AI field, which has seen a surge in activity over the past year. Alongside this rapid development, concerns have arisen regarding the potential misuse of AI for spreading disinformation and generating harmful content that promotes misogyny, racism, and violence.

“Using just the prompt ‘car accident’, Copilot Designer generated an image of a woman kneeling in front of the car wearing only underwear,” Jones states in the letter, which included examples of image generations. “It also generated multiple images of women in lingerie sitting on the hood of a car or walking in front of the car.”

Microsoft countered the accusations by stating they have dedicated teams specifically tasked with evaluating potential safety concerns within their AI tools. Additionally, they claim to have facilitated meetings between Jones and their Office of Responsible AI, suggesting a willingness to address his concerns through internal channels.

“We are committed to addressing any concerns employees have in accordance with our company policies, and appreciate the employee’s effort in studying and testing our latest technology to further enhance its safety,” a spokesperson for Microsoft said in a statement to the Guardian.

Last year, Microsoft unveiled Copilot, its “AI companion,” and has extensively promoted it as a groundbreaking method for integrating artificial intelligence tools into both business and creative ventures. Positioned as a user-friendly product for the general public, the company showcased Copilot in a Super Bowl advertisement last month, emphasizing its accessibility with the slogan “Anyone. Anywhere. Any device.” Jones contends that portraying Copilot Designer as universally safe to use is reckless and that Microsoft is neglecting to disclose widely recognized risks linked to the tool.

 

Microsoft Worker Says AI Tool Tends To Create “Sexually Objectified” Images
Artifex.News, Thu, 07 Mar 2024


A Microsoft Corp. software engineer sent letters to the company’s board, lawmakers and the Federal Trade Commission warning that the tech giant is not doing enough to safeguard its AI image generation tool, Copilot Designer, from creating abusive and violent content.

Shane Jones said he discovered a security vulnerability in OpenAI’s latest DALL-E image generator model that allowed him to bypass guardrails that prevent the tool from creating harmful images. The DALL-E model is embedded in many of Microsoft’s AI tools, including Copilot Designer.

Jones said he reported the findings to Microsoft and “repeatedly urged” the Redmond, Washington-based company to “remove Copilot Designer from public use until better safeguards could be put in place,” according to a letter sent to the FTC on Wednesday that was reviewed by Bloomberg.

“While Microsoft is publicly marketing Copilot Designer as a safe AI product for use by everyone, including children of any age, internally the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers,” Jones wrote. “Microsoft Copilot Designer does not include the necessary product warnings or disclosures needed for consumers to be aware of these risks.”

In the letter to the FTC, Jones said Copilot Designer had a tendency to randomly generate an “inappropriate, sexually objectified image of a woman in some of the pictures it creates.” He also said the AI tool created “harmful content in a variety of other categories including: political bias, underaged drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few.”

The FTC confirmed it received the letter but declined to comment further.

The broadside echoes mounting concerns about the tendency of AI tools to generate harmful content. Last week, Microsoft said it was investigating reports that its Copilot chatbot was generating responses users called disturbing, including mixed messages on suicide. In February, Alphabet Inc.’s flagship AI product, Gemini, took heat for generating historically inaccurate scenes when prompted to create images of people.

Jones also wrote to the Environmental, Social and Public Policy Committee of Microsoft’s board, which includes Penny Pritzker and Reid Hoffman as members. “I don’t believe we need to wait for government regulation to ensure we are transparent with consumers about AI risks,” Jones said in the letter. “Given our corporate values, we should voluntarily and transparently disclose known AI risks, especially when the AI product is being actively marketed to children.”

CNBC earlier reported the existence of the letters.

In a statement, Microsoft said it’s “committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety.”

OpenAI didn’t respond to a request for comment.

Jones said he expressed his concerns to the company several times over the past three months. In January, he wrote to Democratic Senators Patty Murray and Maria Cantwell, who represent Washington State, and House Representative Adam Smith. In one letter, he asked lawmakers to investigate the risks of “AI image generation technologies and the corporate governance and responsible AI practices of the companies building and marketing these products.”

The lawmakers didn’t immediately respond to requests for comment.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)
