Opinion & Analysis

How to help AI developers understand the societal implications of their creations

Artificial intelligence presents unique challenges related to bias and discrimination. AI systems are only as good as the data they are trained on. Biased input data can perpetuate and amplify existing societal inequalities and discriminatory behaviour. Vishal Rana and Peter Woods propose using philosopher Paulo Freire’s critical consciousness framework to educate AI developers on the societal implications of what they create.

The intricate algorithms of generative AI have dramatically transformed how we approach learning and teaching, opening avenues for personalised learning experiences and global classrooms while drastically reducing administrative workloads.

However, this transformation is not without its challenges. The integration of AI in education necessitates examining the potential for digital oppression, the widening of the digital divide and the risk of fostering passive processing and consumption of information. These challenges, deeply intertwined with societal and educational structures, emphasise the importance of adopting a critical consciousness approach while navigating the educational landscape in the AI era.

Paulo Freire’s concept of critical consciousness, or “conscientização,” is a pertinent framework for such an examination. The Brazilian educator and philosopher’s theory has been central to many pedagogical strategies and models, underlining the importance of developing a critical awareness and understanding of one’s societal context to challenge and change oppressive elements within it.

In Freire’s perspective, education isn’t the mere act of depositing knowledge; instead, it’s a dialogical process that fosters critical thinking, allowing individuals to question, critique, and take action to transform their world. Applied to the AI context, this implies that the use of AI in education should not involve the passive acceptance of new technology but a critical engagement that evaluates its potential impact on learners, educators, and society at large. All of these stakeholders must ask fundamental questions. Who does this technology serve? Who is left behind? What are the implications for privacy, control, accessibility, and quality of education? Can AI become an agent of oppression, imposing the values of its developers, or could it be a tool to democratise education and foster critical consciousness?

Due to its capacity to learn from its data inputs, AI also presents unique challenges related to bias and discrimination. AI systems are only as good as the data they are trained on. If the input data is biased, the output will also be biased, perpetuating and amplifying existing societal inequalities and discriminatory attitudes and behaviour. That is why the importance of critical consciousness extends beyond educators and learners to include AI developers, who must critically understand the societal implications of their creations.
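
To make this mechanism concrete, the sketch below is a deliberately simplified, hypothetical illustration (the data, features and approval rule are invented and stand in for no real system): a standard classifier trained on historically skewed decisions simply learns and reproduces that skew, even though the two groups in the data are equally qualified.

```python
# Hypothetical, simplified illustration only -- invented data, no real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=42)
n = 5000

# Two groups with identical underlying qualifications.
group = rng.integers(0, 2, size=n)        # 0 or 1: a demographic attribute (or proxy)
score = rng.normal(0.0, 1.0, size=n)      # qualification, same distribution for both groups

# Historical decisions were skewed: at the same qualification level,
# group 1 was approved far less often. That bias is now "in the data".
p_historical_approval = 1.0 / (1.0 + np.exp(-(score - 1.5 * group)))
approved = rng.random(n) < p_historical_approval

# Train an ordinary classifier on the biased history.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, approved)

# The model faithfully reproduces the historical skew: equally qualified
# groups receive very different predicted approval rates.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

The same dynamic holds, and is harder to detect, when the sensitive attribute itself is removed but correlated proxies (such as postcodes or school names) remain in the data, which is precisely why developers need the critical awareness this article argues for.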

The advent of AI in education is indeed transformative, but this transformation must be steered by critical consciousness. By actively recognising, questioning, and addressing the potential challenges posed by AI, educators, learners, and developers can ensure that the technology serves as a tool to foster critical consciousness and societal transformation, instead of becoming an instrument of digital oppression. The core of this engagement is for learners to develop a deep, critical understanding of our societal structures and a persistent commitment to challenging and changing these structures towards a more equitable and inclusive future.

AI vs human consciousness
Despite its transformative capabilities and increasing complexity, AI fundamentally lacks the depth of human consciousness. As a product of programming and algorithms, its capabilities are confined by the borders of its coding, a condition that contrasts starkly with the richness and complexity inherent in human consciousness. In human cognition, the dynamic interaction of perception, emotions, morality, and subjective experiences produces a rich tapestry of consciousness. While human consciousness arises organically from a complex confluence of thoughts, feelings, values and experiences, whatever semblance of consciousness AI appears to possess is merely a reflection of the algorithmic inputs provided by its human creators. Humans possess an innate capacity for philosophising, introspection, personal reflexivity, and a profound understanding of socio-political contexts. These elements combine to form the basis of critical consciousness, a capability exclusive to human cognition and emotion. Herein lies a key distinction between AI and human consciousness.

Can AI experience critical consciousness?
Currently, although AI possesses the capacity to learn, reason, and even create, it does not experience emotions (yet), choose values or possess subjectivity. The decisions made by AI are not motivated by intrinsic desires or innate ethical considerations but are determined by programmed algorithms and learned patterns. Thus, while AI can replicate or mimic certain aspects of human intelligence, it is fundamentally different from human consciousness.

This difference is pivotal when considering AI’s role in fostering critical consciousness. Critical consciousness, according to Freire, is not just about problem-solving within given parameters—it requires questioning the parameters themselves. It involves challenging the status quo, scrutinising underlying assumptions, and taking action towards socio-political transformation. These are uniquely human capabilities that AI, with its current technology, is incapable of achieving.

AI systems are inevitably influenced by the perspectives, biases, and values of their creators. The information they provide, the patterns they recognise, and the recommendations they make are all reflections of the input they receive. In this sense, AI is not an independent entity, but a mirror reflecting the intentions, assumptions, and biases of those who create and control it.

AI can certainly support educational processes, streamline administrative tasks, and personalise learning experiences. However, its role in fostering critical consciousness in learners should not be overestimated. Promoting critical consciousness—encouraging students to question, challenge, and change oppressive structures, especially those that currently affect them—still fundamentally depends on dialogue between educators, learners and broader society. The advent of AI makes the role of educators in co-creating educational experiences with learners even more critical.

Digital literacy
Informed by Freire’s philosophy, integrating AI into education calls for a paradigm shift in our understanding of literacy. It mandates the cultivation of digital literacy, extending the conventional understanding of literacy beyond reading and writing to include the critical comprehension, interrogation, and navigation of digital technologies.

Freire’s concept of ‘reading the world’ gains a new dimension in the context of AI. In the digital age, ‘reading the world’ necessitates an understanding of the digital landscape, its tools, language and dynamics. The challenge is not only to acquire technical skills but also to develop a critical understanding of the digital world’s operations, potentials, and threats.

Just as Freire emphasised the importance of questioning and challenging oppressive structures, digital literacy involves interrogating the digital tools we use, including AI. This means understanding the mechanics of these tools, the assumptions and biases they might carry, and the implications and limitations of their use. It also involves understanding how these tools might be used to uphold or challenge oppressive structures, and how they can conversely be used to promote social justice.

The responsibility for fostering this critical digital literacy lies with educators. As facilitators of learning in the AI era, educators need to navigate the challenge of integrating AI into education, while fostering critical consciousness. This involves not only using AI tools to support learning but also helping learners to understand, interrogate, and critically engage with these tools.

Towards a just integration
As AI’s influence in the education sector continues to grow, it is imperative that we approach this transformation through the lens of critical consciousness. Echoing Freire’s philosophy, a critical evaluation of AI’s integration into education is crucial to ensure its responsible and equitable deployment. The democratisation of AI in education extends beyond merely providing access to digital resources. It involves acknowledging and countering the potential oppressive structures in AI, fostering inclusivity, and recognising the diversity of global perspectives.

Democratising AI in education requires the active participation of diverse, global stakeholders. The voices and perspectives of those from the Global South, as well as underrepresented groups in the Global North, must be included in the development, deployment and evaluation of AI in education. This ensures that AI is not simply another tool to reinforce the existing socio-cultural hegemony, but a catalyst for a global dialogue, fostering mutual understanding and respect for diversity.

In conclusion, a just integration of AI into the education sector demands a critical examination of AI’s influence. Drawing upon Paulo Freire’s philosophy, this involves fostering a global critical consciousness that transcends borders and challenges oppressive structures. The democratisation of AI in education calls for the integration of diverse, global perspectives, promoting inclusivity and critical engagement. It urges us to view AI not merely as a tool for learning, but as a subject of learning and a platform for global dialogue, truly embodying the spirit of Paulo Freire’s pedagogy.

About the authors

Vishal Rana is a Lecturer in the Department of Business, Strategy, and Innovation at Griffith University, Australia.

Peter Woods is an Associate Professor at Griffith Business School, where he leads the EQUIS accreditation process.
