By Federico Zecchi
Artificial Intelligence (AI) now plays a crucial role across industries, spanning healthcare, finance, retail, and manufacturing. With global expenditure forecast at $110 billion for this year alone, businesses leverage AI for its ability to process data swiftly, cut production costs, and expedite research and development. Nonetheless, concerns have been raised that these powerful yet opaque systems might cause social harms.
Private companies employ AI software to make decisions in areas such as health, employment, credit assessment, and criminal justice. Typically, there is no clear accountability for addressing biases that might be encoded in these algorithms, which poses risks when their use is not guided by ethical principles. Moreover, the corporations developing AI tools are concentrated in a handful of countries and staffed by predominantly male teams, which can result in a lack of cultural and gender diversity.
The risks associated with AI include embedded biases, contributions to climate degradation, and threats to human rights, worsening existing disparities and negatively impacting disadvantaged communities. One example of discrimination is machine learning algorithms that perform unevenly across sub-groups, as evidenced in a case of diabetic retinopathy diagnostics [1] in which detection accuracy was higher for lighter-skinned individuals.
To mitigate these risks, it is imperative for AI actors to prioritize social justice, fairness, and non-discrimination, adopting an inclusive approach to ensure equitable access to AI benefits. Furthermore, there is a need for Member States and businesses to evaluate the environmental impact of AI systems, considering factors like energy consumption and raw material extraction. For instance, the energy-intensive process of training large AI models can result in a significant carbon footprint [2].
Recognizing these challenges, the international community, under the guidance of UNESCO’s Director-General Audrey Azoulay, developed the most comprehensive framework for AI technology to date. The Recommendation on the Ethics of Artificial Intelligence, adopted by 193 Member States in November 2021, outlines values aligned with the rights and dignity of individuals and the preservation of the environment. It emphasizes principles such as transparency, accountability, and the rule of law online.
The Recommendation’s applicability is further enhanced by its extensive Policy Action Areas, which give policymakers a roadmap for converting its fundamental principles into concrete policy measures. The framework aims to ensure that trustworthy AI is not only lawful and ethical but also robust in both technical and social terms.
Bibliography
[1] Burlina, P., Joshi, N., Paul, W., Pacheco, K. D., and Bressler, N. M. Addressing artificial intelligence bias in retinal diagnostics. Transl Vis Sci Technol. 10(2), 13 (2021).
[2] Strubell, E., Ganesh, A., and McCallum, A. Energy and policy considerations for modern deep learning research. AAAI 34(9), 13693-13696 (2020).
[3] Ramos, G. UNESCO’s Recommendation on the Ethics of Artificial Intelligence: key facts (2023).