Mozilla says the leading challenge to the health of the Internet is AI power disparity and harm


The Mozilla stylized sign (moz://a) at the company's office building in San Francisco's SOMA district.
Image: Sundry Photography / Adobe Stock

The leading challenge to the health of the Internet is the power disparity between who benefits from AI and who is harmed by it, according to Mozilla's new 2022 Internet Health Report.

Once again, this new report puts AI in the spotlight for how companies and governments use the technology. The Mozilla report scrutinized the nature of the AI-driven world, citing real-life examples from different countries.

TechRepublic spoke with Solana Larsen, editor of Mozilla's Internet Health Report, to shed light on the concept of "responsible AI from the start," black box AI, the future of regulation, and AI projects that lead by example.

SEE: Artificial intelligence ethics policy (TechRepublic Premium)

Larsen explains that AI systems should be built from the ground up with ethics and accountability in mind, not patched up later when harms start to emerge.

“It sounds logical, but it really doesn’t happen enough,” Larsen said.

As Mozilla finds, centralizing influence and control over AI does not benefit most people. As AI adoption spreads worldwide, this issue has become a top concern.

Market reports on AI disruption show how big the sector has become. 2022 opened with over $50 billion in new opportunities for AI companies, and the sector is expected to grow to $300 billion by 2025.

Adoption of AI at all levels is now inevitable. Thirty-two countries have adopted an AI strategy, more than 200 projects with over $70 billion in public funding have been announced in Europe, Asia and Australia, and startups are raising billions of dollars in thousands of deals around the world.

More importantly, AI applications have moved from rule-based AI to data-driven AI, and the data these models use is personal data. Mozilla recognizes the potential of AI but warns that it is causing harm every day around the globe.

“We need AI builders from different backgrounds who understand the complex interplay of data, AI, and how it can affect different communities,” Larsen told TechRepublic. She called for regulations to ensure AI systems are built to help, not harm.

Mozilla's report also focuses on AI's data problem: large, frequently reused datasets are put to work even though they do not guarantee the results that smaller datasets designed specifically for a project can deliver.

The data used to train machine learning algorithms is usually obtained from public websites like Flickr. The organization warns that many of the most popular datasets are made up of content scraped from the internet, "overwhelmingly reflecting words and images that skew English, American, white, and male."

Black box AI: Unraveling artificial intelligence

AI often seems to escape scrutiny for much of the harm it causes, thanks to its reputation for being too technical and advanced for anyone to understand. In the AI industry, when a system uses a machine learning model that cannot be understood by humans, it is called black box AI and is criticized for its lack of transparency.

Larsen says that to unravel AI, companies need to be transparent about what the code is doing, the data it is collecting, the decisions it is making, and who is benefiting from it.

"We really need to dispel the notion that AI is too advanced for people to have an opinion unless they are data scientists," says Larsen. "If you're experiencing harm from a system, you know something about it that probably not even its own designers do."

Companies like Amazon, Apple, Google, Microsoft, Meta and Alibaba top the list of those that reap the most benefits from AI-based products, services and solutions. But other fields and applications are raising red flags because of the harm they create: military AI, surveillance, computational propaganda (used in 81 countries as of 2020) and disinformation, as well as AI bias and discrimination in the medical, financial and legal sectors.

Regulating AI: From talk to action

Big tech companies are known for pushing back against regulation. Military and government-run AI also operates in an unregulated environment, often in conflict with human rights and privacy advocates.

Mozilla believes that, far from being a barrier to innovation, regulations can help build trust and level the playing field.

“It’s good for businesses and consumers,” Larsen said.

Mozilla supports regulations like the DSA in Europe and is following the EU AI Act closely. The organization also supports bills in the US to make AI systems more transparent.

Data privacy and consumer rights are also part of the regulatory landscape that could help pave the way for more responsible AI. But regulations are only part of the equation. Without enforcement, regulations are nothing but words on paper.

“A huge number of people are calling for change and accountability, and we need AI builders who put people before profits,” says Larsen. “Currently, a large part of AI research and development is funded by big tech companies, and we need alternatives here as well.”

SEE: Metaverse cheat sheet: Everything you need to know (free PDF) (TechRepublic)

Mozilla's report links AI projects to harms across several companies, countries and communities. The organization cites AI projects that are affecting contract workers and their working conditions. This includes the invisible army of low-wage workers who train AI on sites like Amazon Mechanical Turk, earning average wages as low as $2.83 per hour.

"In real life, over and over again, AI disproportionately harms those who do not benefit from global systems of power," Larsen said.

The organization is also actively taking action.

An example of these actions is Mozilla's RegretsReporter browser extension. It turns everyday YouTube users into YouTube watchdogs, crowdsourcing information about how the platform's recommendation AI works.

Working with tens of thousands of users, Mozilla's investigation revealed that YouTube's algorithm recommends videos that violate the platform's own policies. The investigation had results: YouTube is now more transparent about how its recommendation AI works. But Mozilla has no plans to stop there, and today it continues the study in different countries.

Larsen explains that Mozilla believes it is paramount to unravel and document AI systems that operate under opaque conditions. In addition, the organization calls for dialogue with technology companies to understand problems and find solutions, and engages regulators to discuss the rules that should apply.

AI that leads by example

While Mozilla's 2022 Internet Health report paints a rather bleak picture of AI, often amplifying problems the world has always had, the organization also highlights AI projects built and designed for legitimate purposes.

One example is the Drivers Cooperative in New York City, an app used, and owned, by more than 5,000 ride-share drivers, giving contract workers a real stake in the ride-hailing industry.

Another example is Melalogic, a Black-owned business in Maryland that crowdsources images of dark skin to better detect cancer and other skin problems, in response to severe racial bias in dermatology machine learning.

“There are many examples around the world of AI systems being built and used in trusted and transparent ways,” Larsen said.
