Much of my work is at the nexus of feminist decolonial, Indigenous, and queer STS; African studies; and socio-legal studies. We are currently working on a project that conducts a discourse analysis of how media sources directed at and by audiences across the continent of Africa are articulating and giving meaning to technologies of artificial intelligence and machine learning. In reading across several sets of literature for this project, I have been considering two questions:
How does the field of critical code studies draw upon and diverge from feminist decolonial, Indigenous, and queer STS? I wonder what intellectual genealogies are informing the "critical" in critical code studies?
How might the study of source code enable new understandings of the human and the normative body, while also offering possibilities for building alternative ways of knowing and becoming?
In our research, we have found that media sources, as producers of culture, promote an understanding of AI technologies through a language of modernity and progress. For example, media sources promise that AI will be fast, efficient, productive, adaptable, certain, and precise. How does this inform our conceptions of what it means to be valued as human and/or devalued as less than human?
What would an alternative vision of AI technologies look like? Is it possible to imagine a more inclusive AI future with source code that enables slower movement, sideways thinking (Puar), queer use (Ahmed), situated knowledges (Haraway), queer failure (Halberstam)?
Comments
"How does this inform our conceptions of what it means to be valued as human and/or devalued as less than human?"
The supremacy of human intelligence (in which I do not believe; I do not think humans are the most intelligent beings) has been measured by the human's ability to control the machine (plow, factory, algorithm), and increasingly our humanity is coming to be measured by our ability to become at one with the machine. Embodiment is changing.
The thing is that although embodiment changes, the rules around it most likely will retain existing power relations.
Our social compass in the West has been centered around notions of Human Rights. Long before the UN formalized stipulations of Human Rights, this term was used by a person of color, George Washington Williams, in a letter written in 1890 to King Leopold II of Belgium, asking the King to cease the enslavement, murder, maiming, and labor malpractices happening in the Congo Free State, Leopold's personal colony. It is one of the earliest examples we have of image leaks as we know them today, and one of the earliest examples of photographs changing policy (see the Alice Seeley Harris photography collection).
So, when we talk about human value we are really talking about classifications of humans and what is ascribed to each classification. Bodies racialized as non-white were enslaved and exploited despite being as human as white aristocrats and kings. I'm bringing all this up because inevitably, hierarchies are formed, and that is where I situate the beginning of any injustice. So, to question how humans are valued or devalued within a social superstructure that views AI as "fast, efficient, productive, adaptable, certain, and precise" — each of these is a concept/assessment based on a hierarchic structure. Can humans ever escape hierarchic structures as a mode of relations?
If we abandon the orders we know and form smaller groups of decentralized order based on shared affinities/goals/processes, will that increase or decrease how much value is ascribed to each human? Would this "break" from Dominant Order create an opportunity to define value outside of post-industrial/capitalistic value systems (fast, efficient, productive) or will it lessen our humanity, and lower our adaptability, our capacity for precision?
Suggested reading on the Congo Free State and the reform movement: King Leopold's Ghost, Adam Hochschild, 1999.