
The UN calls for a moratorium on the use of AI that endangers human rights

GENEVA: The UN High Commissioner for Human Rights has called for a moratorium on the use of artificial intelligence technologies that pose a serious risk to human rights, including face-scanning systems that track people in public.
Michelle Bachelet, the UN High Commissioner for Human Rights, also said on Wednesday that countries should explicitly ban AI applications that cannot be used in compliance with international human rights law.
Applications that should be banned include government “social scoring” systems that assess people based on their behavior and certain AI-based tools that categorize people into clusters such as ethnicity or gender.
AI-based technologies can be a force for good, but they can also “have negative, even catastrophic, effects if used without sufficient consideration of how they affect human rights,” Bachelet said in a statement.
Her comments came in conjunction with a new UN report examining how countries and companies have rushed to apply AI systems that affect people’s lives and livelihoods without putting in place proper safeguards to prevent discrimination and other harm.
“This is not about not having AI,” Peggy Hicks, the head of the Thematic Engagement Office, told reporters as she presented the report in Geneva. “It is about realizing that if AI is to be used in these very critical human rights areas, it must be done in the right way. And we have simply not put in place a framework that ensures that this happens.”
Bachelet did not demand a direct ban on facial recognition technology, but said governments should stop scanning people’s features in real time until they can show that the technology is accurate, non-discriminatory and meets certain privacy and data protection standards.
While the report does not name specific countries, China has been among those that have deployed facial recognition technology – particularly for surveillance in the western region of Xinjiang, home to many of its minority Uyghurs. The report’s key authors said that naming specific countries was not part of their mandate and that doing so could even be counterproductive.
“In the Chinese context, as in other contexts, we are concerned about transparency and discriminatory applications aimed at specific communities,” said Hicks.
She cited several lawsuits in the United States and Australia where artificial intelligence had been misapplied.
The report also expresses caution about tools that try to infer people’s emotional and mental states by analyzing their facial expressions or body movements, saying that such technology is prone to bias and misinterpretation and lacks a scientific basis.
“The use of emotion recognition systems by public authorities, for example to single out individuals for police stops or arrests or to assess the accuracy of statements during questioning, risks undermining human rights, such as the rights to privacy, liberty and a fair trial,” the report says.
The report’s recommendations reflect the thinking of many political leaders in Western democracies, who hope to exploit AI’s economic and societal potential while addressing growing concerns about the reliability of tools that can track and profile individuals and provide recommendations on who gets access to jobs, loans and educational opportunities.
European regulators have already taken steps to curb the riskiest AI applications. Proposed rules outlined by EU officials this year would ban certain uses of AI, such as real-time scanning of facial features, and tightly regulate others that could threaten people’s safety or rights.
US President Joe Biden’s administration has expressed similar concerns, although it has not yet outlined a detailed approach to curbing such uses. A newly formed group called the Trade and Technology Council, jointly led by US and European officials, has sought to develop common rules for AI and other technology policies.
Efforts to limit the riskiest uses of AI have been backed by Microsoft and other US technology giants that hope to shape the rules affecting the technology. Microsoft has worked with and provided funding to the UN human rights office to improve its use of technology, but funding for the report came from the office’s regular budget, Hicks said.
Western countries have been at the forefront of expressing concern about the discriminatory use of AI.
“If you think about how AI can be used in a discriminatory way, or to further reinforce discriminatory tendencies, it’s pretty scary,” US Secretary of Commerce Gina Raimondo said during a virtual conference in June. “We have to make sure we do not let that happen.”
She spoke alongside Margrethe Vestager, the European Commission’s executive vice president for the digital age, who suggested that certain AI uses should be completely off-limits in “democracies like ours.” She cited social scoring, which can shut off someone’s privileges in society, and the “broad, comprehensive use of remote biometric identification in public space.”

