Participants: Derya Akbaba * Ben Allen * Natalia-Rozalia Avlona * Kirill Azernyi * Erin Kathleen Bahl * Natasha Bajc * Lucas Bang * Tully Barnett * Ivette Bayo * Eamonn Bell * John Bell * kiki benzon * Liat Berdugo * Kathi Berens * David Berry * Jeffrey Binder * Philip Borenstein * Gregory Bringman * Sophia Brueckner * Iris Bull * Zara Burton * Evan Buswell * Ashleigh Cassemere-Stanfield * Brooke Cheng * Alm Chung * Jordan Clapper * Lia Coleman * Imani Cooper * David Cuartielles * Edward de Jong * Pierre Depaz * James Dobson * Quinn Dombrowski * Amanda Du Preez * Tristan Espinoza * Emily Esten * Meredith Finkelstein * Caitlin Fisher * Luke Fischbeck * Leonardo Flores * Laura Foster * Federica Frabetti * Jorge Franco * Dargan Frierson * Arianna Gass * Marshall Gillson * Jan Grant * Rosi Grillmair * Ben Grosser * E.L. (Eloisa) Guerrero * Yan Guo * Saksham Gupta * Juan Gutierrez * Gottfried Haider * Nabil Hassein * Chengbo He * Brian Heim * Alexis Herrera * Paul Hertz * shawné michaelain holloway * Stefka Hristova * Simon Hutchinson * Mai Ibrahim * Bryce Jackson * Matt James * Joey Jones * Masood Kamandy * Steve Klabnik * Goda Klumbyte * Rebecca Koeser * achim koh * Julia Kott * James Larkby-Lahet * Milton Laufer * Ryan Leach * Clarissa Lee * Zizi Li * Lilian Liang * Keara Lightning * Chris Lindgren * Xiao Liu * Paloma Lopez * Tina Lumbis * Ana Malagon * Allie Martin * Angelica Martinez * Alex McLean * Chandler McWilliams * Sedaghat Payam Mehdy * Chelsea Miya * Uttamasha Monjoree * Nick Montfort * Stephanie Morillo * Ronald Morrison * Anna Nacher * Maxwell Neely-Cohen * Gutierrez Nicholaus * David Nunez * Jooyoung Oh * Mace Ojala * Alexi Orchard * Steven Oscherwitz * Bomani Oseni McClendon * Kirsten Ostherr * Julia Polyck-O'Neill * Andrew Plotkin * Preeti Raghunath * Nupoor Ranade * Neha Ravella * Amit Ray * David Rieder * Omar Rizwan * Barry Rountree * Jamal Russell * Andy Rutkowski * samara sallam * Mark Sample * Zehra Sayed * Kalila Shapiro * Renee Shelby * Po-Jen Shih * Nick Silcox * Patricia Silva * Lyle Skains * Winnie Soon * Claire Stanford * Samara Hayley Steele * Morillo Stephanie * Brasanac Tea * Denise Thwaites * Yiyu Tian * Lesia Tkacz * Fereshteh Toosi * Alejandra Trejo Rodriguez * Álvaro Triana * Job van der Zwan * Frances Van Scoy * Dan Verständig * Roshan Vid * Yohanna Waliya * Sam Walkow * Kuan Wang * Laurie Waxman * Jacque Wernimont * Jessica Westbrook * Zach Whalen * Shelby Wilson * Avery J. Wiscomb * Grant Wythoff * Cy X * Hamed Yaghoobian * Katherine Ye * Jia Yu * Nikoleta Zampaki * Bret Zawilski * Jared Zeiders * Kevin Zhang * Jessica Zhou * Shuxuan Zhou

Guests: Kayla Adams * Sophia Beall * Daisy Bell * Hope Carpenter * Dimitrios Chavouzis * Esha Chekuri * Tucker Craig * Alec Fisher * Abigail Floyd * Thomas Forman * Emily Fuesler * Luke Greenwood * Jose Guaraco * Angelina Gurrola * Chandler Guzman * Max Li * Dede Louis * Caroline Macaulay * Natasha Mandi * Joseph Masters * Madeleine Page * Mahira Raihan * Emily Redler * Samuel Slattery * Lucy Smith * Tim Smith * Danielle Takahashi * Jarman Taylor * Alto Tutar * Savanna Vest * Ariana Wasret * Kristin Wong * Helen Yang * Katherine Yang * Renee Ye * Kris Yuan * Mei Zhang
Coordinated by Mark Marino (USC), Jeremy Douglass (UCSB), and Zach Mann (USC). Sponsored by the Humanities and Critical Code Studies Lab (USC), and the Digital Arts and Humanities Commons (UCSB).

Code Critique: Artistic Convergence, AI + Critical Code + Critical Data Studies

Title: Artistic Convergence, AI + Critical Code + Critical Data Studies
Author: Imani Cooper
Language: No specific code, just an invitation to think with me about AI, Code, and Data :)

Hi all, I wanted to start a thread inviting you to consider the role of code in contemporary gallery/museum spaces as it is being used to further AI innovation.

Background info––

I am currently a PhD student, and I study code and algorithms as they intersect with experimental writing and creative technologies that center notions of ancestral knowledge, movement, and self-becoming within black diasporas. My dissertation project examines five artistic projects that convey a genealogy of creative and critical approaches to data (both analog and digital) through a politics of gender and race. The analysis ends with a case study of a current art project that uses data- and algorithm-driven technology, specifically Recursive Neural Networks (RNNs), to make an AI sculpture engendered through data on three generations of black women's experiences. In short, I encounter code most often in galleries, museums, artists' studios, and creative community-based workshops. The code critique I am posting concerns an art exhibition I attended at the Barbican Centre in London called "AI: More than Human". While I initially contemplated my experience from a Critical Data Studies perspective, I am most certainly interested in how a Critical Code Studies perspective can advance this line of thought!
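
For readers less familiar with the technique named above, here is a minimal, purely illustrative sketch (in Python/PyTorch) of the kind of recurrent text model such a data-driven sculpture might build on. The transcript file name and the hyperparameters are hypothetical; nothing here is drawn from the actual project's code.

```python
# Illustrative sketch only: a small character-level recurrent model of the
# sort often used to generate conversational text from an interview corpus.
# The file name and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.rnn = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.out(h), state

# Build a character vocabulary from a (hypothetical) transcript file.
text = open("interview_transcripts.txt", encoding="utf-8").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

model = CharRNN(vocab_size=len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step: predict each next character from the previous ones.
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
optimizer.zero_grad()
logits, _ = model(x)
loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
loss.backward()
optimizer.step()
```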

Code Critique / Inquiry––

In the exhibition AI: More than Human (2019), interactive digital art captivated the audience. Imagine a large, dimly lit, crescent-shaped room. Inside, a mild, discordant hum of central processing units (CPUs) and flashing touch screens carrying various visual materials: these were some of the elements that characterized the breadth of artwork on display. An array of works by corporations and artists was presented, including Affectiva (the self-described leader in Human Perception AI), Mario Klingemann's "Circuit Training", and Sony CSL's "Kreyon City". For these select works (though not only these), the exhibition was a site to further train AI neural networks and to grow large databases with participants as data. I went to this exhibition to examine the speech patterns of "Not The Only One" by Stephanie Dinkins (after its latest training session) but was struck by the exhibition as a whole. I read the moments of audience participation and their data contributions (some known to participants, some unknown) as a transformation of the Barbican gallery space into a kind of information research lab, and I am curious about this approach to data collection for AI and about this use of code.
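
To make that data-collection dynamic concrete, here is a purely hypothetical sketch of the kind of logging an interactive gallery work could perform when it treats participants as data. The field names and the consent flag are invented for illustration; they do not describe any of the Barbican works, and whether anything like that flag exists in practice is exactly what is at issue.

```python
# Hypothetical sketch of an interactive installation logging visitor
# interactions as training data. Field names and the consent flag are
# invented for illustration; they do not describe any specific work.
import json
import time
from pathlib import Path

LOG = Path("participant_interactions.jsonl")

def record_interaction(screen_id, gesture, transcript, consented):
    """Append one visitor interaction to the installation's dataset."""
    event = {
        "timestamp": time.time(),
        "screen_id": screen_id,
        "gesture": gesture,        # e.g. touch coordinates or a pose label
        "transcript": transcript,  # e.g. speech-to-text of what was said
        "consented": consented,    # was explicit consent actually collected?
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def load_training_examples(require_consent=True):
    """Yield logged interactions, optionally filtering on the consent flag."""
    with LOG.open(encoding="utf-8") as f:
        for line in f:
            event = json.loads(line)
            if not require_consent or event["consented"]:
                yield event
```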

This line of inquiry is part of a larger ongoing thought process, but I would love to incite a generative discussion around the following: What is the role of code and data in the 21st-century gallery/museum space (especially in the collaboration/juxtaposition of corporate and independent artists' uses)? How is code being enacted in this nuanced mode of artist-driven data collection? Would programming in Indigenous languages change the ethical terms of this data collection (especially for underserved communities)?

Feel free to add to these questions and/or push them further!

Comments

  • Regardless of the means of collection, I believe it unethical to train an AI off of human data without the human in question consenting. While AI art exhibits are cool and I understand creators wanting to reach a diverse group of people, it reminds me of Google's attempt to fix its facial-recognition bias problem by sending out hired contractors to collect pictures of people's faces without their knowing what the pictures were for (or even knowing that their faces were being recorded; some thought they were just playing a game, and others couldn't properly consent). Here's a good podcast episode about that. Using artistic exhibitions to train an AI also seems like a way to limit access to those exhibits for people who don't want their data collected.

    As in the Google case, I think the focus on Indigenous languages might make it even more unethical. @joncorbett explained how some communities want to keep their languages strictly to themselves. This type of experiment could lead to the exploitation of those languages, or to their corruption by outside data.

    These are just my thoughts! I think this is a very interesting set of questions.

  • @KalilaShapiro said:
    Regardless of the means of collection, I believe it unethical to train an AI off of human data without the human in question consenting.

    I agree that it is unethical to artificially learn without consent in some cases. However, as a blanket statement this concerns me. Privacy, intellectual property, and even the sacred are all good values, but they are only some values -- they should be weighed against other values, such as the common good or justice.

    For example, in cases where there is a difference of power, those in power may decline to let the subaltern learn from them. Many political and corporate entities would prefer that you not observe their behavior, and they would benefit greatly from a learning-enclosure movement that, in the name of privacy, privatizes everything apprehensible about people (and corporations, which are "people") after the model of likeness rights. That principle could limit access to shared reality to those who can afford to orchestrate consent (and who have the legal resources to do it correctly), and it might also limit participation in the construction of shared reality to those who can afford to give consent, or who can afford to participate in the infrastructure through which consent is permissibly collected.

    While an AI is a very different thing from a person, I believe that this is also important for thinking about the ethics of future AI agency. I myself am an intelligence who was trained off of human data without explicit consent -- observing people around me, including strangers, really helped me learn to walk, talk, eat, dress myself, play games, dance, and so forth. This makes me want to insist that it cannot be categorically unethical to learn from observing people without their explicit consent, as our current idea of socialization often depends on it.

  • I'd also like to think more about questions of learning from human data without consent. Because of its requirement for large amounts of data, machine learning feels to me like it has pushed the boundaries of data-scraping ethics. The recent articles in the New York Times about Clearview AI have brought these questions to everyday uses of Twitter and Facebook. The number of academics who are also involved in this sort of scraping and--in what might be even more of a problem--in the publication of these datasets is also concerning. ImageNet is an obvious example, but there are many others. This group at UC Irvine, for example, http://archive.ics.uci.edu/ml/datasets.php, provides almost 500 datasets, and it is hard to imagine that many of those that contain data from observed humans were produced with consent to share in this form.
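
    As a rough illustration of how low the technical barrier is, here is a hypothetical sketch of the kind of scraper that assembles such datasets. The URL and the CSS selector are invented, and, notably, nothing in the code itself asks whether the people behind the posts consented to being collected or republished.

    ```python
    # Hypothetical scraping sketch: collects public text posts into a dataset
    # without any consent check. The URL and CSS selector are invented.
    import json
    import requests
    from bs4 import BeautifulSoup

    SEED_URLS = ["https://example.com/public-posts?page=1"]  # placeholder URL

    dataset = []
    for url in SEED_URLS:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        for post in soup.select("div.post-text"):  # invented selector
            dataset.append({"source": url, "text": post.get_text(strip=True)})

    # Nothing above records whether the authors of these posts agreed to be
    # included; publishing the resulting file makes that decision for them.
    with open("scraped_posts.json", "w", encoding="utf-8") as f:
        json.dump(dataset, f, ensure_ascii=False, indent=2)
    ```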

  • This thread brings up such an important, complex question, and it really resonates with the work Feminist Search is trying to do with their "data donation" model (in the main thread and code thread for week 3).

    I also make AI-driven participatory artwork, sometimes in gallery settings, but I felt icky about the way the Barbican show was advertised (I didn't get to see it, however). Large models like GPT2 in particular, which scrape wide social media sources, raise questions about consent, publication, public/private boundaries, and authorship.

    In my own recent show, the works were each explicit about being community-built through data contributed via interaction with the work. The intention is to engage users and their data with tools like GPT2 or DeepSpeech in order to bring awareness and engagement to these tensions (a minimal sketch of what that can look like in code follows at the end of this comment). And I frame my practice as artistic research that will go on to incorporate that data in future iterations, inviting participants to collaborate in what's being researched and built.

    However, given the blurriness of the boundaries in the questions mentioned above, data is obviously often collected and used in ways not originally intended, and trust in these systems is problematic in different ways for different groups, with good reason (which leads to the issues of access you mention). I'm not sure how to resolve these tensions, but I try to lay them bare in the work, be explicit about what's weird and uncomfortable about them to me, try to use them toward some alternative purpose, and approach them with an ethics of care and interconnectedness with the audience.
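
    For anyone curious what engaging users and their data with a tool like GPT2 can mean at the code level, here is a minimal sketch using the Hugging Face transformers library to generate text from a visitor-contributed prompt. The prompt is invented, and this is only a sketch of the general pattern, not a description of the works discussed above.

    ```python
    # Minimal sketch: generate text from a visitor-contributed prompt with
    # GPT-2 via the Hugging Face transformers library. The prompt is invented;
    # whether and how contributions are stored or reused is the ethical
    # question under discussion.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    visitor_prompt = "My grandmother taught me that memory is"  # hypothetical contribution
    result = generator(visitor_prompt, max_length=60, num_return_sequences=1)

    print(result[0]["generated_text"])
    ```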

  • @Imani.Cooper -- have you considered reaching out to any of the "AI more than Human" participants and asking them if they would share their code with you?

  • @jeremydouglass No, I haven't, but that is a brilliant idea! I'm definitely interested in the response I'll get, but also in what the code could potentially say. Thanks for the suggestion!
