Implications of algorithmic fairness in artificial intelligence
DOI: https://doi.org/10.31381/perfilesingenieria.v21i22.7056

Keywords: Algorithmic, algorithmic fairness, artificial intelligence

Abstract
The use of Artificial Intelligence algorithms is not limited, as is sometimes assumed, to effective procedures; the vocabulary itself gives rise to several conceptions, interpretations and problems. In order not to get lost in this linguistic labyrinth, we adopt a characterization that is very common in the psychological sense: intelligence conceived as the capacity of certain organisms or mechanisms to adapt to new situations by using knowledge acquired in previous adaptation processes. Artificial intelligence (AI) is increasingly integrated into society and is generally used to make timely decisions that affect society, and therefore people, in many different areas. In the development of AI algorithms, systematic and repeatable errors can occur in a computer system and produce unfair results, such as privileging an arbitrary group of users over others. These so-called biased algorithms are generally characterized by biases or distortions in the training data. The scientific community and government institutions have launched proposals to combat these risks and reduce their negative impact on society; it is imperative to resolve the issues that undermine the ethics, justice, transparency and equity of data, algorithms and their predictions. The present work aims to raise awareness that algorithms developed to make decisions must meet three requirements: first, guarantee a balance between the data set used and the programming of the algorithm, with a notion of fairness that avoids discrimination and bias; second, guarantee transparency in the results, that is, the result obtained must be explainable to any user in a clear and simple way; and third, comply with the regulation of requirements for the development and use of AI, which should not be ignored and must be aligned with respect for fundamental human rights.
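To make the first requirement more concrete, the following minimal sketch (illustrative only, not taken from the article) shows how the fairness of a system's binary decisions can be audited with two widely used group metrics, the demographic parity difference and the disparate impact ratio. The data, group labels and function names are assumptions introduced here for the example.

```python
# Minimal illustrative sketch (not from the article): auditing the group
# fairness of binary decisions with two common metrics. It assumes a binary
# outcome (1 = favourable decision) and a protected attribute with exactly
# two groups; all data and names here are hypothetical.

def selection_rate(decisions, groups, value):
    """Fraction of favourable decisions received by one group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == value]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def demographic_parity_difference(decisions, groups):
    """Absolute gap between the favourable-decision rates of the two groups."""
    values = sorted(set(groups))
    rates = [selection_rate(decisions, groups, v) for v in values]
    return abs(rates[0] - rates[1])

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rates = sorted(selection_rate(decisions, groups, v) for v in set(groups))
    return rates[0] / rates[1] if rates[1] > 0 else 0.0

if __name__ == "__main__":
    # Toy decisions of a hypothetical classifier and the protected attribute.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    print("Demographic parity difference:",
          demographic_parity_difference(decisions, groups))  # 0.2
    print("Disparate impact ratio:",
          disparate_impact_ratio(decisions, groups))          # ~0.67
```

A difference close to 0 (equivalently, a ratio close to 1) indicates that both groups receive favourable decisions at similar rates; rules of thumb such as the four-fifths (80 %) threshold for the disparate impact ratio are sometimes applied in practice, but the appropriate metric and threshold depend on the application and its legal context.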
License
Copyright (c) 2024 Augusto Parcemon Cortez Vasquez, Maria Manyari Monteza, Gilberto Salinas Azaña, Jorge Luis Chávez Soto
This work is licensed under a Creative Commons Attribution 4.0 International License.
If the manuscript is approved for publication, the authors retain the copyright and assign to the journal the right of publication, edition, reproduction, distribution, exhibition and communication in the country of origin as well as abroad, through print and electronic media in different databases. It is therefore established that, after publication of the articles, the authors may enter into other independent or additional agreements for the non-exclusive dissemination of the version of the article published in this journal (publication in books or institutional repositories), provided that it is explicitly indicated that the work was first published in this journal.
To record this procedure, the author must complete the following forms: