International organizations are campaigning for governments and tech companies to adopt a declaration meant to protect human rights in the age of artificial intelligence and machine learning.
Amnesty International and Access Now published a new draft this month of their proposed "Toronto Declaration," also endorsed by Human Rights Watch and Wikimedia Foundation. It first surfaced in May at Canada's RightsCon Toronto, an international conference on human rights in the digital age.
The declaration, drafted by researchers and experts in human rights and technology, addresses people's rights to equality and non-discrimination. Though not legally binding, it calls for international human rights standards and data ethics to be applied to the development and use of systems that rely on machine learning, or ML, a branch of artificial intelligence, or AI.
It urges governments and tech companies to prevent machine learning systems from violating international human rights laws. These kinds of systems, such as self-driving cars and translation software, give computers an ability to learn and improve from experience without direct programming.
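The phrase "learn and improve from experience without direct programming" can be made concrete with a minimal sketch (not drawn from the declaration itself): a program that is never given the Celsius-to-Fahrenheit conversion rule, only example pairs, and discovers the rule by gradient descent.

```python
# Illustrative sketch: a system that learns a rule from examples
# rather than being directly programmed with it. The conversion
# formula never appears in the code; only example data does.

def learn_linear(examples, steps=100_000, lr=0.001):
    """Fit y = w*x + b to (x, y) pairs by simple gradient descent."""
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in examples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in examples) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Experience": Celsius/Fahrenheit pairs.
data = [(0, 32), (10, 50), (20, 68), (30, 86), (40, 104)]
w, b = learn_linear(data)
print(round(w, 2), round(b, 2))  # converges toward 1.8 and 32
```

The point of the toy example is the one the article makes: the behavior comes from the data the system was exposed to, not from rules a programmer wrote down.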
Proponents argue that human rights must always be considered as technology transforms our lives, changing everything from transportation and manufacturing to healthcare and education. The many uses of AI include remote sensing, medical imaging and educational and industrial robots.
"As machine learning systems advance in capability and increase in use, we must examine the impact of this technology on human rights," the declaration begins.
"We acknowledge the potential for machine learning and related systems to be used to promote human rights, but are increasingly concerned about the capability of such systems to facilitate intentional or inadvertent discrimination against certain individuals or groups of people," it says. "We must urgently address how these technologies will affect people and their rights. In a world of machine learning systems, who will bear accountability for harming human rights?"
Amnesty International says in a statement that current human rights laws and standards already provide solid foundations for developing ethical frameworks for machine learning, including provisions for accountability and means for remedy.
But one of the biggest risks with machine learning is that bias and discrimination against certain groups can creep in and become amplified when technology uses historical data about people without adequate safeguards, argues Sherif Elsayed-Ali, Amnesty's director of global issues.
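Elsayed-Ali's point can be illustrated with a hypothetical sketch: a naive model trained on past decisions simply reproduces the bias baked into those decisions. The data, group labels and "model" below are invented for illustration only.

```python
# Hypothetical illustration of bias in historical data: group "B" was
# rarely hired even when qualified, so a naive model that learns from
# past hiring decisions inherits that pattern.

from collections import defaultdict

# Invented historical records: (group, qualified, hired).
history = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

# A crude "model": predict future hiring from each group's past hire rate.
outcomes = defaultdict(list)
for group, _, hired in history:
    outcomes[group].append(hired)
hire_rate = {g: sum(v) / len(v) for g, v in outcomes.items()}

print(hire_rate)  # group B's qualified candidates inherit a rate of 0.0
```

Nothing in the code mentions discrimination, which is exactly the hazard the declaration highlights: the bias arrives silently through the training data, not through any explicit rule.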
"When Amnesty started examining the nexus of artificial intelligence and human rights, we were quickly struck by two things: the first was that there appeared to be a widespread and genuine interest in the ethical issues around AI, not only among academics, but also among many businesses," Elsayed-Ali wrote this month in OpenGlobalRights, an online platform hosted in the United States by the University of Minnesota. He has published similar commentaries in other online outlets.
"This was encouraging — it seemed like lessons were learned from the successive scandals that hit social media companies and there was a movement to proactively address risks associated with AI," he wrote. "The second observation was that human rights standards were largely missing from the debate on the ethics of AI; often there would be mention of the importance of human rights, but usually nothing more than a passing reference."
Taking control of the context
Human Rights Watch says machine learning and artificial intelligence "both impact a broad array of human rights, including the right to privacy, freedom of expression, participation in cultural life, the right to remedy, and the right to life."
"Fueled by access to large data sets and powerful computers, machine learning and artificial intelligence can offer significant benefits to society," the organization says in a statement. "At the same time, left unchecked, these rapidly expanding technologies can pose serious risks to human rights by, for example, replicating biases, hindering due process and undermining the laws of war."
The declaration has three main sections. The first part lists the duties of nations to prevent discrimination by identifying risks, ensuring transparency and accountability, enforcing oversight and promoting equality.
The second part lists the duties of tech companies and others in the private sector that use machine learning systems. That means mapping and assessing risks, taking effective action, and ensuring transparency with technical information and data.
The third part says people harmed by such systems have a right to an effective remedy and those responsible for abuses should be held accountable.
Machine learning for development aid
Artificial intelligence, including machine learning, is also being studied as a potential tool for sustainable development and poverty reduction by a consortium of international organizations, tech giants and academics.
The United Nations Development Program, or UNDP, which operates in 177 of the U.N.’s 193 member nations, is joining the Partnership on Artificial Intelligence founded by Amazon, DeepMind, Facebook, Google, IBM and Microsoft two years ago. Since then, others such as Accenture, eBay, Human Rights Watch, Intel, UNICEF and the University of Oxford have become part of the group.
They aim to ensure that artificial intelligence is used for safe, ethical and transparent purposes, and to advance public understanding of AI, formulate best practices, and serve as an open platform for discussion and engagement about AI and its influences on people and society.
Among the uses of artificial intelligence that UNDP says it already has adopted are drones and remote sensing for collecting data that, for example, is helping the Maldives to better prepare for disasters and Uganda to create better places for refugees to live.
Another twist on machine learning is its potential as a tool for human rights advocacy itself.
Megan Price, executive director of the Human Rights Data Analysis Group, argues that the same machine learning methods used by businesses to learn more about their customers, or to improve speech recognition and identify the faces of pets, can be applied to questions about conflict violence.
Price designs strategies and methods for statistical analysis of human rights data in places like Colombia, Guatemala and Syria, where the Office of the U.N. High Commissioner for Human Rights, or OHCHR, commissioned her group to estimate the number of people killed in the Syrian war. Her group found 191,369 documented, identifiable people killed in Syria between March 2011 and April 2014. The latest U.N. estimate, which is two years old, put the death toll at 400,000.
"And if you think about violence, and conflict violence, it should rapidly become obvious that that's a subset of all victims, because not every victim is identified. Not every victim's story is told right away," she told Stanford University's 2017 Global Women in Data Science Conference. "It may be days, or weeks, or months, or years before we hear certain stories from the conflict."
Price, who also serves on a technical advisory board for the prosecutor's office of the International Criminal Court in The Hague, Netherlands, describes her San Francisco-based group as the "behind-the-scenes scientists" for other human rights advocacy organizations.
"Our job as scientists is to estimate what we don't know," she told the 2016 Strata + Hadoop World conference in San Jose, California.
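One classical statistical idea behind "estimating what we don't know" is capture-recapture, also called multiple systems estimation: when several groups independently document victims, the overlap between their lists indicates how many victims no list recorded. The sketch below uses the simplest two-list estimator (Lincoln-Petersen) with invented victim identifiers; the real analyses behind figures like the Syria count are far more elaborate.

```python
# Illustrative sketch of multiple systems estimation (capture-recapture):
# estimating a total population from overlapping, incomplete lists.
# Victim IDs are invented for illustration.

def lincoln_petersen(list_a, list_b):
    """Two-list population estimate: |A| * |B| / |A intersect B|."""
    overlap = len(set(list_a) & set(list_b))
    if overlap == 0:
        raise ValueError("lists must overlap for this estimator")
    return len(list_a) * len(list_b) / overlap

# Two documentation groups each recorded some of the same victims.
group_1 = ["v01", "v02", "v03", "v04", "v05", "v06"]
group_2 = ["v04", "v05", "v06", "v07", "v08"]

# 6 * 5 / 3 = 10: the estimate exceeds the 8 names actually documented,
# quantifying the victims no list captured.
print(lincoln_petersen(group_1, group_2))  # 10.0
```

The gap between the 8 documented names and the estimate of 10 is the statistical analogue of Price's point that documented victims are "a subset of all victims."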
"AI for Good"
The idea that artificial intelligence, including machine learning systems, can transform society for the better was the theme of a 2017 conference in Geneva hosted by the U.N.'s International Telecommunication Union, or ITU, in partnership with the Silicon Valley-based X Prize Foundation.
The conference focused on how AI can be put to use for sustainable development, eliminating poverty and hunger, while also protecting the environment. U.N. Secretary-General António Guterres told the conference that AI has the potential to accelerate progress for everyone, and it is time for the world to come to terms with how it will affect the future.
The World Health Organization's former director-general Margaret Chan said that making better use of data in health can speed up results, add precision to predictions, and reduce health care costs, though she allowed that machines would never be able to replicate human compassion.
The advent of artificial intelligence and machine learning presents "a fork in the road," with "a clear choice in front of us," Salil Shetty, Amnesty's secretary-general, said at the conference.
"In the future, we could have artificial intelligence systems that detect and correct bias in data, rather than doubling down on human bias; we have automation that takes people out of dangerous and degrading jobs, but also educational and economic policies that create opportunities for dignified and fulfilling jobs," he said. "Governments could ban fully automated weapons systems — so that killer robots never come into existence."
When nations signed the U.N.'s 1948 Universal Declaration of Human Rights, he said, they were not simply reflecting the world they lived in but an aspirational world that would stand up for and protect every human being’s dignity.
"We must today challenge ourselves to be aspirational again as we prepare for a future world where AI and technology are integrated into every aspect of people’s lives," he said. "Governments have binding human rights obligations and corporations have a responsibility to respect human rights. We strongly believe that enshrining AI ethics in human rights is the best way to make AI a positive force in our collective future."