
Can governments turn AI safety talk into action?


Andriy Onufriyenko/Getty Images

At the Asia Tech x Singapore 2024 summit, several speakers were ready for high-level discussions and awareness about artificial intelligence (AI) safety to turn into action. Many are looking to equip everyone, from organizations to individuals, with the tools to deploy this technology properly.

Also: How to use ChatGPT to analyze PDFs for free

“Practical and practicable action. That’s what’s missing,” said Ieva Martinekaite, head of research and innovation at Telenor Group, who spoke to ZDNET on the sidelines of the summit. Martinekaite is a board member of the Norwegian Open AI Lab and a member of Singapore’s Advisory Council on the Ethical Use of AI and Data. She also served as an expert member of the European Commission’s High-Level Expert Group on AI from 2018 to 2020.

Martinekaite noted that senior officials are also starting to recognize this problem.

Delegates at the conference, including top government ministers from various countries, quipped that they were simply burning jet fuel by attending the series of high-level summits on AI safety, most recently in South Korea and the UK, because these meetings have so far yielded little in terms of concrete steps.

Martinekaite said it is time for governments and international organizations to start implementing playbooks, frameworks, and benchmarking tools to help businesses and users ensure they are deploying and using AI safely. She added that continued investment is also needed to facilitate those efforts.

AI-generated deepfakes, in particular, carry significant risks and could affect critical infrastructure, she warned. They are already a reality today: images and videos of politicians, public figures, and even Taylor Swift have surfaced.

Also: More political deepfakes exist than you think

Martinekaite added that technology is more complex today than it was a year ago, making it increasingly difficult to identify deepfakes. Cybercriminals can exploit this technology to help them steal credentials and illegally access systems and data.

“Hackers aren’t hacking, they’re logging in,” she said. This is a significant problem in some sectors, such as telecommunications, where deepfakes can be used to penetrate critical infrastructure and amplify cyberattacks. Martinekaite noted that employee IDs can be faked and used to access data centers and IT systems, adding that if this issue remains unaddressed, the world risks a devastating attack.

Users need to be equipped with the necessary training and tools to identify and combat such risks, she said. Technology to detect and prevent this kind of AI-generated content, spanning text and images, also needs to be developed, such as digital watermarking and media forensics. Martinekaite believes these efforts should go hand in hand with legislation and international cooperation.
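To make the watermarking idea concrete, here is a minimal sketch, assuming Python with the Pillow imaging library: it hides a short provenance tag in the least-significant bits of an image’s red channel and reads it back. This is a toy illustration of the concept only, not a robust scheme of the kind Martinekaite describes; the tag label and file handling are hypothetical.

```python
# Toy invisible watermark: embed a provenance tag in the red channel's
# least-significant bits. Illustrative only; real schemes are far more
# robust to compression and editing. Requires Pillow (pip install Pillow).
from PIL import Image

TAG = "ai-generated"  # hypothetical provenance label

def embed_watermark(src_path: str, dst_path: str, tag: str = TAG) -> None:
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    # Encode the tag as bits, followed by a NUL byte as a terminator.
    bits = "".join(f"{byte:08b}" for byte in tag.encode()) + "00000000"
    if len(bits) > img.width * img.height:
        raise ValueError("image too small for tag")
    for i, bit in enumerate(bits):
        x, y = i % img.width, i // img.width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite red LSB
    img.save(dst_path, "PNG")  # lossless format preserves the bits

def extract_watermark(path: str) -> str:
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    out, byte = [], 0
    for i in range(img.width * img.height):
        x, y = i % img.width, i // img.width
        byte = (byte << 1) | (pixels[x, y][0] & 1)  # collect red LSBs
        if i % 8 == 7:
            if byte == 0:  # NUL terminator: tag complete
                break
            out.append(byte)
            byte = 0
    return bytes(out).decode(errors="replace")
```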

However, she noted that regulatory frameworks should not regulate the technology itself, or innovation in AI could be hindered, affecting potential advances in fields such as healthcare.

Instead, regulations should address the areas where deepfake technology has the greatest impact, such as critical infrastructure and government services, Martinekaite said. Requirements such as watermarking, source authentication, and safeguards around data access and tracing can then be rolled out to high-risk sectors and the providers of the relevant technology.
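As a sketch of what source authentication could look like in practice, the following example, assuming Python with the third-party `cryptography` package, signs a media file with an Ed25519 key so a verifier holding the publisher’s public key can detect tampering or a false origin claim. The file name and key handling are illustrative, not a production provenance scheme.

```python
# Minimal source-authentication sketch: a publisher signs content so
# downstream systems can verify its origin and integrity.
# Requires the `cryptography` package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a signing key and sign the content.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open("clip.mp4", "rb") as f:  # hypothetical media file
    content = f.read()
signature = private_key.sign(content)

# Verifier side: given the publisher's public key, confirm the content
# was not altered and really came from that publisher.
try:
    public_key.verify(signature, content)
    print("source verified")
except InvalidSignature:
    print("content altered or not from the claimed source")
```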

Microsoft has seen an increase in deepfakes, non-consensual images, and cyberbullying, according to Natasha Crampton, the company’s chief responsible AI officer. During a summit discussion, she said Microsoft is focusing on tracking deceptive online content surrounding elections, especially with several elections taking place this year.

Stefan Schnorr, state secretary at Germany’s Federal Ministry for Digital and Transport, said deepfakes have the potential to spread misinformation and mislead voters, leading to a loss of trust in democratic institutions.

Also: What TikTok’s content credentials mean for you

Protecting against this also involves a commitment to safeguarding personal data and privacy, Schnorr added. He emphasized the need for technology companies and international cooperation to adhere to cyber laws put in place to promote AI safety, such as the EU AI Act.

Zeng Yi, director of the Brain-inspired Cognitive Intelligence Laboratory and the International Research Center for AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences, said that, if allowed to spread unhindered, deepfakes can influence decision-making.

Also emphasizing the need for international cooperation, Zeng suggested establishing a worldwide deepfake “observatory” to promote better understanding and exchange of information on disinformation, so that efforts can be made to prevent such content from spreading across countries.

A global infrastructure that checks facts and misinformation could also help inform the public about deepfakes, he said.

Singapore updates its generative AI governance framework

Meanwhile, Singapore has released the final version of its governance framework for generative AI, which expands on the existing AI governance framework, first introduced in 2019 and last updated in 2020.

The Model AI Governance Framework for GenAI sets out a “systematic and balanced” approach that Singapore says balances the need to address GenAI concerns with the push to promote innovation. It covers nine dimensions, including incident reporting, content provenance, security, and testing and assurance, and provides recommendations on initial steps to take.

At a later stage, AI Verify, the team behind the framework, will add more detailed guidance and resources across the nine dimensions. To support interoperability, they will also map the governance framework to international AI principles, such as the G7 Hiroshima Principles.

Also: Apple’s AI features and Nvidia’s AI training speed top the Innovation Index

Josephine Teo, Singapore’s Minister for Communications and Information and Minister-in-charge of Smart Nation and Cybersecurity, said during the summit that good governance is as important as innovation in delivering on Singapore’s AI vision and can help drive sustainable innovation.

“We need to acknowledge that addressing the harmful effects of AI is one thing, but preventing them from happening in the first place is another…through appropriate design and upstream measures,” Teo said. She added that risk-mitigation measures are essential and that new “evidence-based” regulations can deliver more meaningful and effective AI governance.

Besides establishing AI governance frameworks, Singapore is also looking to build up its governance capabilities, such as a hub for advanced technology in online safety that focuses on harmful AI-generated content.

Users also need to understand the risks. Teo noted that it is in the interest of organizations using AI to understand the technology’s advantages as well as its limitations.

Teo believes that businesses should equip themselves with the right mindset, capabilities and tools to do that. She added that Singapore’s model AI governance framework provides practical guidance on what should be implemented as safeguards. It also sets out the basic requirements for implementing AI, regardless of a company’s size or resources.

For Telenor, AI governance also means monitoring the use of new AI tools and reassessing potential risks, according to Martinekaite. The Norwegian telecommunications company is currently testing Microsoft Copilot, which is built on OpenAI’s technology, against Telenor’s own ethical AI principles.

When asked whether OpenAI’s recent controversy involving its Voice Mode had affected her trust in using the technology, Martinekaite said large businesses that operate critical infrastructure, such as Telenor, have the capacity and checks in place to ensure they are deploying trusted AI tools, including third-party platforms such as OpenAI’s. This also includes working with partners such as cloud providers and smaller solution providers to understand and learn about the tools they are using.

Telenor established a task force last year to oversee its responsible adoption of AI. Martinekaite explained that this entails establishing principles its employees must adhere to, creating rules and tools to guide the use of AI, and setting standards that its partners, including Microsoft, must comply with.

These measures are meant to ensure the technology the company uses is lawful and secure, she added. Telenor also has an internal team reviewing its governance and risk-management structures to account for its use of GenAI. Martinekaite noted that the company will evaluate the tools and safeguards needed to ensure it has the right governance structure in place to manage its use of AI in high-risk areas.

Also: Enterprise cloud security failures are ‘worrying’ as AI threats accelerate

As organizations use their own data to train and fine-tune large language models and smaller AI models, Martinekaite thinks businesses and AI developers will increasingly discuss how this data is used and managed.

She also said the need to comply with new laws, such as the EU AI Act, will further fuel such conversations, as companies work to ensure they meet the additional requirements for high-risk AI deployments. For example, they will need to know how their AI training data is managed and traced.
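A minimal sketch of what such training-data tracing could look like, assuming plain Python: each file entering a training corpus gets a ledger entry with its content hash, source, and license, so it can be audited later. The ledger format, file paths, and fields here are hypothetical, not a prescribed compliance mechanism.

```python
# Toy training-data lineage ledger: record a content hash, source, and
# license for every file that enters a training set, so provenance can
# be traced later. Uses only the Python standard library.
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

LEDGER = Path("training_data_ledger.csv")  # hypothetical lineage log

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_source(path: Path, source: str, license_id: str) -> None:
    """Append one provenance record for an incoming training file."""
    is_new = not LEDGER.exists()
    with LEDGER.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["sha256", "file", "source", "license", "ingested_at"])
        writer.writerow([
            sha256_of(path),
            path.name,
            source,
            license_id,
            datetime.now(timezone.utc).isoformat(),
        ])

# Example: log a document before it joins the fine-tuning corpus.
record_source(Path("docs/report.txt"), "internal-wiki", "CC-BY-4.0")
```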

There is much scrutiny and concern from organizations, which will want to take a closer look at their contractual agreements with AI developers.
