AI impacts everyone, but marginalized groups – and particularly indigenous voices – are not sufficiently represented in debates about the technology and its governance.
Authors:
- Edson Prestes, Institute of Informatics, Federal University of Rio Grande do Sul, Brazil, and lead RAM expert for Brazil
- Lutiana Valadares Fernandes Barbosa, Federal Public Defender's Office, Brazil
- Viviane Ceolin Dallasta Del Grossi, Federal Public Defender's Office and University of São Paulo Law School, Brazil
- Cynthia Picolo Gonzaga de Azevedo, Laboratory of Public Policy and Internet - LAPIN, Brazil
- Gustavo Macedo, Institute of Advanced Studies, University of São Paulo, Brazil
- Renan Maffei, Institute of Informatics, Federal University of Rio Grande do Sul, Brazil
Artificial intelligence (AI) is increasingly present in almost all aspects of human life and can impact the global community in different ways. Yet several groups affected by AI are not represented in the teams developing these technologies, or in the standards and legislative processes that govern them. This is the case for indigenous and traditional communities, such as quilombos. These relevant voices, traditionally silenced by prevailing dynamics of power, remain at the margins of the AI revolution, without their values, customs, and traditions being embedded in AI systems, which severely compromises the exercise of their rights and, consequently, democracy as a whole. Indigenous peoples and traditional communities not only have the right to be involved in debates about AI development and regulation; their participation is also essential to ensure that the diversity of human beings and communities is reflected in AI systems and governance. The lack of participation risks maintaining or worsening discriminatory structures that are already being reproduced in AI-based systems, which could lead to the replication of this discriminatory bias in government strategies, public policies and other areas. In line with UNESCO's Recommendation on the Ethics of AI, this paper calls for reflection on an inclusive, decolonial artificial intelligence and on the urgent need for a global effort to include diverse voices in AI debate and governance.
According to the Brazilian Institute of Geography and Statistics (IBGE), 1,693,535 indigenous people were living in Brazil in 2022. Although Portuguese is the country's official language under Article 13 of the Brazilian Constitution, Brazil has 274 indigenous languages that hold no official status. Despite this significant indigenous population, these groups are still not fully represented in decision-making processes, even after the election of an indigenous federal deputy in 2022 and the creation of a Ministry of Indigenous Peoples in 2023.
The problem is no different in debates on AI development and regulation. This is particularly concerning because traditional populations often have distinct cosmovisions that must be considered in such discussions; otherwise, their democratic representation and legitimacy are at risk. The current processes for developing public policies and a future AI law in Brazil exemplify the silencing of these voices.
Today, Brazil has a National AI Strategy (EBIA, 2021) and a main bill under discussion that intends to regulate the development and use of AI in the country - Bill n. 2338/2023 (PL2338, 2023). The Brazilian AI Strategy, launched in 2021 by the Ministry of Science and Technology, represents a major milestone in national public policy. It was developed through a complex process involving specialized consultancy, national and international benchmarking, and public consultation. Although the team behind the drafting of the Strategy was balanced in terms of the number of men and women, racial and cultural diversity and the participation of traditional peoples, such as indigenous people, were not considered. The result of this lack of representation is clear: there is no mention of indigenous peoples in the document.
Likewise, discussions about AI in the Brazilian National Congress have not consistently included traditional groups. Since 2021, Brazilian parliamentarians have been debating the regulation of AI, holding public hearings and creating special commissions, while maintaining the deep-rooted silencing of historically excluded voices. It is worth noting that the only participation of an indigenous person in a public hearing was possible thanks to the efforts of organized civil society. In October 2023, two years after the parliamentary discussions began, Time’i Assurini, from the Xingu region, State of Pará, expressed his concerns about the advancement of AI before the Federal Senate.
To raise awareness of the democratic relevance of indigenous peoples' participation in AI debates, the National School of the Federal Public Defender's Office, through its research group Ethics, Human Rights and Artificial Intelligence (EDHIA) and in dialogue with the Brazilian Conference at Harvard University, held the event Invisible Voices and AI in April 2023, at which Time’i Assurini discussed the relevance of indigenous participation.
In 2023, our group, under the leadership of Professor Edson Prestes, was invited by UNESCO to apply the Readiness Assessment Methodology in Brazil, a tool of the Recommendation on the Ethics of Artificial Intelligence for assessing a state's readiness to implement AI ethically and responsibly for the benefit of all its citizens. The research group conducted public hearings and provided recommendations. In the call for the public hearings, a great effort was made to foster the participation of traditionally silenced voices, including representatives of indigenous and Quilombola communities.
Despite this nascent effort, Brazil, through government, academia, civil society, and the private sector, still has a long way to go to foster and ensure the participation of indigenous, Quilombola, and other communities in AI regulatory debates, standards, and system development. One illustrative example: the way the registration of artisanal and traditional fishers has been conducted and required violates ILO Convention 169 on Indigenous and Tribal Peoples. The criminalization and marginalization of vulnerable groups can be seen in the implementation of measures to modernize means of work, with exclusionary policies that disregard the organizations of artisanal fishermen and fisherwomen in the debate on the re-registration of fishing communities. Among the various obstacles to re-registration using SisRGP 4.0, the issue of facial recognition deserves attention. The system fails to recognize the artisanal fishing population, made up of people from different ethnic groups, especially quilombolas and indigenous people, constituting a flagrant violation of access to rights on the grounds of race and a form of institutional racism that cannot be tolerated in a democratic state. In this context, in November 2022, the Federal Public Defender's Office recommended a series of measures to the Brazilian Government to address these violations.
In terms of international frameworks, in November 2021 all of UNESCO's Member States adopted the Recommendation on the Ethics of Artificial Intelligence, the first global soft-law normative instrument on AI. This document reinforces the importance of inclusive participation in AI-related issues, stressing that Member States should implement policies to promote and increase diversity and inclusiveness that reflect their populations in AI development teams and training datasets. UNESCO has also released a report entitled Indigenous Peoples: Perspectives from Latin America and the Caribbean, which provides recommendations for multiple stakeholders on how to implement the Recommendation on the Ethics of AI with a focus on indigenous peoples. Nonetheless, the report does not address the Brazilian situation and mentions Brazil only in a list of indigenous organizations.
With regard to the right to self-determination and autonomy, the United Nations Declaration on the Rights of Indigenous Peoples highlights, in its Articles 4 and 19, that states have the duty to “consult and cooperate in good faith with the indigenous peoples concerned through their own representative institutions in order to obtain their free, prior and informed consent before adopting and implementing legislative or administrative measures that may affect them”. The International Labour Organization's Indigenous and Tribal Peoples Convention, 1989 (No. 169) likewise establishes indigenous peoples' right to prior and informed consultation when states take measures that may affect them.
Based on these considerations, the principle of democratic participation, present in the above-mentioned instruments, is a corollary of democratic states grounded in the rule of law and human rights. It requires that individuals and communities be able to participate effectively in state decisions and public policies, whether directly or indirectly, for example by voting, taking part in public consultations, and engaging in other decision-making spaces. For the development and use of AI not to violate or threaten the principle of democratic participation, it is necessary to foster and ensure effective participation. AI systems can empower and strengthen the voices of these communities but, without adequate safeguards, might also be used as another tool of oppression, reproducing colonial patterns of dominance.
At the event UNGPs lens to managing human rights risks from Generative AI, held at the 2023 Forum on Business and Human Rights, the panelist Mohamad Najem raised the issue of the Western languages that dominate generative AI training data. In his words, from the point of view of the global South, or global majority, there is much to be done: AI and generative AI models are trained mainly on data in Western languages (chiefly English), so some of these models may be missing the context of global majority regions.
At the Invisible Voices and Artificial Intelligence event organized by the Brazilian Federal Public Defender's Office in April 2023, the indigenous leader Time’i of the Awaete people said: "I, Time'i, try to seek dialogue. Because we have always been in our territory, and suddenly many different people begin to arrive with tractors and other materials with the task of destroying the forest and rivers, but not only that... also polluting people's minds with false illusions. Today, human desire is something that we cannot control. (...) We, indigenous people, who do not speak Portuguese, are suffering, and deforestation continues. There is so much digital technology, but for what and for whom? What are we building with this? We want a tool that will strengthen us, that guarantees the forest remains standing. We don't just want to keep reporting that the forest is ending, otherwise we will never safeguard our memory. (...) Unfortunately, [after contact with non-indigenous peoples] the disease began to reach us in different ways - the disease that was not part of our people. We always had the cure within our territory, but with the arrival of technology in these years, a fragile relationship was created… A culture, a tree, a river, it lasts for centuries, but today, they are threatened by various types of aggression such as mining, logging, and also new ideas... Our science has always worked, but now that a new type of digital technology is coming in, they are getting confused. Our [indigenous and non-indigenous] sciences are important. And often these two sciences do not come together because they are arguing over who is more, who is God. We are not God, we are people."
Only a plural, diverse, and inclusive AI has a place in a democratic society. Diversity is one of the main principles supporting innovation and social resilience. Social resilience is also promoted by decentralization, that is, by implementing AI technologies adapted to the cultural context and particular needs of different regions. Therefore, when regulating AI, the perspective of traditional communities must be taken into account, as widely guaranteed in international human rights declarations, combating every form of discrimination and, above all, ensuring and promoting their effective participation in the processes of developing and using AI.
Brazil, like many other countries in the global South, is debating AI and how to regulate it. However, it is necessary to reflect, from a decolonial perspective, on the mechanisms of power that surround the development, use and regulation of AI, so that adopted practices and future legislation take into account the values and dynamics of global diversity, thereby preventing technical and governmental processes from reproducing mechanisms of domination by the global North that silence the global South, even if only as a side effect.
Brazil is a multicultural nation that carries with it the values of ethnic plurality, and it must take indigenous and other traditional communities into account. Adopting foreign technology and imported regulations could lead to new forms of dependence and technological colonialism that silence the very voices that matter and deserve to be heard in this debate.
The ideas and opinions expressed in this article are those of the authors and do not necessarily represent the views of UNESCO. The designations employed and the presentation of material throughout the article do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, city or area or of its authorities, or concerning its frontiers or boundaries.