AI ethicist says today’s AI boom will amplify social problems if we don’t act now


AI developers must move quickly to develop and deploy systems that address algorithmic bias, said Kathy Baxter, Principal Architect of Ethical AI Practice at Salesforce. In an interview with ZDNET, Baxter emphasized the need for diverse representation in data sets and in user research to ensure AI systems are fair and unbiased. She also stressed the importance of making AI systems transparent, understandable, and accountable while protecting individual privacy. Baxter called for cross-sector collaboration, like the model used by the National Institute of Standards and Technology (NIST), so that we can develop robust and safe AI systems that benefit everyone.

One of the fundamental questions in AI ethics is how to ensure that AI systems are developed and deployed without reinforcing existing social biases or creating new ones. To this end, Baxter stressed the importance of asking who benefits from and who pays for AI technology. It is important to scrutinize the datasets being used and make sure they represent everyone's voice. Inclusiveness in the development process and identifying potential harms through user research are also essential.

Also: ChatGPT’s intelligence is zero, but it’s a revolution in usefulness, says AI expert

“This is one of the fundamental questions we have to discuss,” Baxter said. “Women of color, in particular, have been asking this question and doing research in this field for many years now. I’m glad to see a lot of people talking about this, especially with the use of generative AI. But what we need to do, fundamentally, is ask who benefits and who pays for this technology. Whose voices are included?”

Societal biases can be fed into AI systems through the datasets used to train them. Non-representative datasets containing biases, such as image datasets that are predominantly of one race or lack cultural distinctions, can lead to biased AI systems. Furthermore, the uneven adoption of AI systems across society may perpetuate existing stereotypes.
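As a rough illustration of the kind of dataset scrutiny described above, the sketch below counts how often each group appears in a labeled dataset and flags any group whose share falls below a chosen threshold. The group tags, the threshold value, and the function name are all hypothetical, not something Salesforce or Baxter prescribes.

```python
from collections import Counter

def representation_report(labels, threshold=0.10):
    """Report each group's share of the dataset and flag groups
    whose share falls below `threshold` (names are illustrative)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {
        group: {
            "share": count / total,
            "underrepresented": count / total < threshold,
        }
        for group, count in counts.items()
    }

# Toy example: 19 samples tagged "group_a", only 1 tagged "group_b".
report = representation_report(["group_a"] * 19 + ["group_b"])
```

A check like this only catches gross imbalances in whatever tags the dataset happens to carry; it is a starting point for the user research the article describes, not a substitute for it.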

To make AI systems transparent and understandable to the average person, prioritizing explainability during development is key. Techniques such as chain-of-thought prompting can help AI systems show their work and make their decision-making process easier to understand. User research is also important to ensure that explanations are clear and that users can identify the uncertainties in AI-generated content.
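To make the chain-of-thought idea concrete, here is a minimal sketch of how a question might be wrapped in a prompt that asks the model to expose its intermediate reasoning and its uncertainty. The instruction wording and the function name are illustrative assumptions, not a specific vendor API.

```python
def chain_of_thought_prompt(question: str) -> str:
    # Ask the model to show each reasoning step and state its
    # confidence, so users can inspect how the answer was reached.
    return (
        f"Question: {question}\n"
        "Let's think step by step. Show each intermediate conclusion, "
        "then state the final answer and how confident you are in it."
    )

prompt = chain_of_thought_prompt(
    "Does this loan decision rely on a protected attribute?"
)
```

The value of the technique for transparency is that the generated steps give users something concrete to audit, which is exactly where the user research mentioned above comes in.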

Also: AI could automate 25% of all jobs. Here's which are most (and least) at risk

Protecting individuals’ privacy and ensuring responsible use of AI requires transparency and consent. Salesforce follows a set of guidelines for responsible generative AI, which includes respecting the provenance of data and only using customer data with consent. Allowing users to opt in, opt out, or control how their data is used is critical to privacy.

“We only use our customers’ data with their consent,” Baxter said. “It’s really important to be transparent when you’re using someone’s data, allowing them to opt-in and allowing them to come back and say when they don’t want to put their data in anymore.”

As the race to innovate in generative AI intensifies, maintaining human control over increasingly autonomous AI systems is more important than ever. Empowering users to make informed decisions about AI-generated content and keeping a human in the loop can help maintain that control.

Ensuring AI systems are safe, reliable, and usable is critical, and industry-wide collaboration is essential to achieving this. Baxter praised the AI Risk Management Framework created by NIST with input from more than 240 experts across different fields. This collaborative approach provides a common language and framework for identifying risks and sharing solutions.

Failure to address these AI ethics issues could have serious consequences, as seen in cases of people wrongfully arrested because of facial recognition errors, or in the generation of harmful images. Investing in safeguards and focusing on the here and now, rather than only on potential future harms, can help mitigate these problems and ensure AI systems are developed and used responsibly.

Also: How ChatGPT Works

While the future of AI and the possibility of artificial general intelligence are hot topics, Baxter emphasizes the importance of focusing on the present. Ensuring responsible AI use and addressing today's societal biases will better prepare society for future AI advances. By investing in ethical AI practices and cross-industry collaboration, we can help create a safer, more inclusive future for AI technology.

“I think the timeline is very important,” Baxter said. “We really have to invest in the here and now and create this muscle memory, create these resources, create regulations that allow us to keep moving forward but do it safely.”
