Updates in the Governance of Artificial Intelligence
Updated: Mar 17, 2020
May has been a big month for Artificial Intelligence (AI) and the development of rules to govern it. As we wrap up the month, it's worth reviewing recent important actions in AI governance and discussing what's next.
Just a few days ago, on May 29, the World Economic Forum (WEF) held the first meeting of its newly established AI Council. The purpose of the Council is to "focus on creating international standards for artificial intelligence." This is an important topic to tackle, given decades of interest and investment in AI with little to no oversight.
The rapid and expansive development of AI in recent years, along with concerns about its potential misuse in both known and as-yet-unknown scenarios, has spurred a flurry of activity on the governance front. For a list of AI governance activities through March 2019, see "Wrestling With AI Governance Around The World" on Forbes.com from March 27, 2019. The tipping point, however, seems to have come recently with the release of several new reports.
In April of this year, the European Commission released its Ethics Guidelines for Trustworthy AI, which outlines "seven requirements that should be met" for the "implementation and realization of trustworthy AI." Following on the heels of the European Commission report, the Organisation for Economic Co-operation and Development (OECD) published the Recommendation of the Council on Artificial Intelligence. That report covers "five complementary values-based principles for the responsible stewardship of trustworthy AI" and "five recommendations to policy-makers pertaining to national policies and international co-operation for trustworthy AI."
As of this writing, only 33 of 193 United Nations member states have adopted unified national AI plans, though many individual efforts are in progress. In February 2019, President Trump signed an executive order that, among other things, declared that "the United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people." The US has also pledged support for the recent OECD principles. Also in May of this year, the government of New Zealand began examining an ethical framework and action plan for "the impacts of artificial intelligence." In July, the United Kingdom will consider guidelines (including rules for ethics, governance, development, deployment, and operations) for acquiring AI systems for government use.
Of course, this is just the tip of the iceberg. Future posts will cover additional AI governance topics, including US-specific initiatives, ethics, the efforts of other countries, and defense-related policies.
There is a lot to unpack, but the experts at GP Nichols & Company are here to help. Find out more at www.gpncompany.com and follow us on social media.