
Participants: Derya Akbaba * Ben Allen * Natalia-Rozalia Avlona * Kirill Azernyi * Erin Kathleen Bahl * Natasha Bajc * Lucas Bang * Tully Barnett * Ivette Bayo * Eamonn Bell * John Bell * kiki benzon * Liat Berdugo * Kathi Berens * David Berry * Jeffrey Binder * Philip Borenstein * Gregory Bringman * Sophia Brueckner * Iris Bull * Zara Burton * Evan Buswell * Ashleigh Cassemere-Stanfield * Brooke Cheng* Alm Chung * Jordan Clapper * Lia Coleman * Imani Cooper * David Cuartielles * Edward de Jong * Pierre Depaz * James Dobson * Quinn Dombrowski * Amanda Du Preez * Tristan Espinoza * Emily Esten * Meredith Finkelstein * Caitlin Fisher * Luke Fischbeck * Leonardo Flores * Laura Foster * Federica Frabetti * Jorge Franco * Dargan Frierson * Arianna Gass * Marshall Gillson * Jan Grant * Rosi Grillmair * Ben Grosser * E.L. (Eloisa) Guerrero * Yan Guo * Saksham Gupta * Juan Gutierrez * Gottfried Haider * Nabil Hassein * Chengbo He * Brian Heim * Alexis Herrera * Paul Hertz * shawné michaelain holloway * Stefka Hristova * Simon Hutchinson * Mai Ibrahim * Bryce Jackson * Matt James * Joey Jones * Masood Kamandy * Steve Klabnik * Goda Klumbyte * Rebecca Koeser * achim koh * Julia Kott * James Larkby-Lahet * Milton Laufer * Ryan Leach * Clarissa Lee * Zizi Li * Lilian Liang * Keara Lightning * Chris Lindgren * Xiao Liu * Paloma Lopez * Tina Lumbis * Ana Malagon * Allie Martin * Angelica Martinez * Alex McLean * Chandler McWilliams * Sedaghat Payam Mehdy * Chelsea Miya * Uttamasha Monjoree * Nick Montfort * Stephanie Morillo * Ronald Morrison * Anna Nacher * Maxwell Neely-Cohen * Gutierrez Nicholaus * David Nunez * Jooyoung Oh * Mace Ojala * Alexi Orchard * Steven Oscherwitz * Bomani Oseni McClendon * Kirsten Ostherr * Julia Polyck-O'Neill * Andrew Plotkin * Preeti Raghunath * Nupoor Ranade * Neha Ravella * Amit Ray * David Rieder * Omar Rizwan * Barry Rountree * Jamal Russell * Andy Rutkowski * samara sallam * Mark Sample * Zehra Sayed * Kalila Shapiro * Renee Shelby * Po-Jen Shih * Nick Silcox * Patricia Silva * Lyle Skains * Winnie Soon * Claire Stanford * Samara Hayley Steele * Morillo Stephanie * Brasanac Tea * Denise Thwaites * Yiyu Tian * Lesia Tkacz * Fereshteh Toosi * Alejandra Trejo Rodriguez * Álvaro Triana * Job van der Zwan * Frances Van Scoy * Dan Verständig * Roshan Vid * Yohanna Waliya * Sam Walkow * Kuan Wang * Laurie Waxman * Jacque Wernimont * Jessica Westbrook * Zach Whalen * Shelby Wilson * Avery J. Wiscomb * Grant Wythoff * Cy X * Hamed Yaghoobian * Katherine Ye * Jia Yu * Nikoleta Zampaki * Bret Zawilski * Jared Zeiders * Kevin Zhang * Jessica Zhou * Shuxuan Zhou

Guests: Kayla Adams * Sophia Beall * Daisy Bell * Hope Carpenter * Dimitrios Chavouzis * Esha Chekuri * Tucker Craig * Alec Fisher * Abigail Floyd * Thomas Forman * Emily Fuesler * Luke Greenwood * Jose Guaraco * Angelina Gurrola * Chandler Guzman * Max Li * Dede Louis * Caroline Macaulay * Natasha Mandi * Joseph Masters * Madeleine Page * Mahira Raihan * Emily Redler * Samuel Slattery * Lucy Smith * Tim Smith * Danielle Takahashi * Jarman Taylor * Alto Tutar * Savanna Vest * Ariana Wasret * Kristin Wong * Helen Yang * Katherine Yang * Renee Ye * Kris Yuan * Mei Zhang
Coordinated by Mark Marino (USC), Jeremy Douglass (UCSB), and Zach Mann (USC). Sponsored by the Humanities and Critical Code Studies Lab (USC), and the Digital Arts and Humanities Commons (UCSB).

Code Critique: Possibility and Injustice/Bias in AI Transfer Learning

Author/s: TensorFlow
Language/s: Python, NumPy, TensorFlow, Keras, TensorFlow Hub
Year/s of development: Current Learning Materials at tensorflow.org
Location of code: https://www.tensorflow.org/tutorials/images/transfer_learning_with_hub

Overview:

In machine learning with neural networks, the practice of transfer learning reuses a network trained on one task for a related but different task, without fully retraining the original network. The approach works well in practice, and it seems to mirror how humans draw on previous experience when learning new tasks. The learning of the new task in ML could then be thought of as “emergent” behavior, as the network classifies new input data under new categories.

This emergent behavior offers plenty of room for philosophical reflection, but transfer learning also demonstrates especially clearly how machine learning can be biased in potentially dangerous or unjust ways. In fact, some of the early papers on multi-task and transfer learning report that learning outcomes improve when ML learns with “bias,” a fact wholly accepted by the authors (see, for example, R. Caruana, “Multitask Learning,” Machine Learning, vol. 28, 1997, p. 44).

That some bias is necessary in learning or knowledge production is an insight philosophers have also reached through science studies and much contemporary thought. But this does not remove the potential danger or injustice. Consider, as an example of transfer learning’s possibility, multilingual language learning; as an example of its injustice, consider facial recognition. These possibilities and injustices are present in neural networks in general. The code I would like to consider, however, dramatizes this bias more fully.

The code to consider comes from an ML tutorial on the TensorFlow.org website. TensorFlow is a higher-level programming framework for neural network-based ML. Interestingly, this tutorial uses TensorFlow Hub, a repository of reusable, pre-trained models. In some ways, this repository is the central gesture of transfer learning made into a software platform of its own.

To demonstrate the disconnect, and the potential bias, between the classifying task of the originally trained network and the transferred network applied to a new, related task, consider first that the pre-trained model is loaded from this repository and configured with just three lines of code (plus the tutorial’s imports):

# Imports from the tutorial's setup
import tensorflow as tf
import tensorflow_hub as hub

# Pre-trained MobileNetV2 ImageNet classifier published on TensorFlow Hub
classifier_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/2" #@param {type:"string"}

IMAGE_SHAPE = (224, 224)

# Wrap the entire pre-trained model as a single Keras layer expecting 224x224 RGB input
classifier = tf.keras.Sequential([
    hub.KerasLayer(classifier_url, input_shape=IMAGE_SHAPE+(3,))
])

Second, near the beginning of the tutorial, a photograph of Grace Hopper, the famous woman in technology, is fed in as test data once the pre-trained classifier has been set up. The bias of the original network shows in the fact that it classifies the image as “military uniform” (which Dr. Hopper is wearing) rather than “Grace Hopper,” “important early computer scientist,” etc.
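For reference, the prediction step that produces this label looks roughly like the following (a sketch rather than an exact quote of the notebook; the image and ImageNet label file are hosted by TensorFlow, but the exact URLs and preprocessing should be checked against the tutorial):

import numpy as np
import PIL.Image as Image

# Download the Grace Hopper photograph, resize it to the network's input size,
# and scale pixel values to [0, 1]
hopper_path = tf.keras.utils.get_file(
    'grace_hopper.jpg',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg')
hopper = np.array(Image.open(hopper_path).resize(IMAGE_SHAPE)) / 255.0

# Run the pre-trained classifier and take the highest-scoring ImageNet class
result = classifier.predict(hopper[np.newaxis, ...])
predicted_class = np.argmax(result[0], axis=-1)

# Map the class index back to a human-readable ImageNet label
labels_path = tf.keras.utils.get_file(
    'ImageNetLabels.txt',
    'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
print(imagenet_labels[predicted_class])  # the tutorial reports "military uniform" here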

Continuing through the tutorial, a pre-trained network is loaded for a second example, and its weights are explicitly set not to be trained further (the whole point of transfer learning being that they do not have to be retrained):

feature_extractor_layer.trainable = False
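For context, the feature_extractor_layer being frozen here is itself a hub.KerasLayer, built from a “feature vector” version of MobileNetV2, i.e., the same pre-trained network with its ImageNet classification head removed. A minimal sketch of how it is constructed, with the Hub URL as I recall it from the tutorial (the exact handle should be checked against the notebook):

# MobileNetV2 "feature vector" model: the pre-trained network minus its classifier
feature_extractor_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/2"

# Wrap it as a Keras layer expecting 224x224 RGB input; freezing it (trainable = False)
# means its ImageNet-learned weights are reused as-is on the new flower task
feature_extractor_layer = hub.KerasLayer(feature_extractor_url,
                                         input_shape=(224, 224, 3))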

The final layer of the original network is removed, however, and a classifying layer (a “classification head”) for recognizing flower species is attached as the new output layer:

from tensorflow.keras import layers

# Frozen MobileNetV2 feature extractor followed by a new, trainable classification
# head sized to the number of flower classes in the tutorial's dataset (image_data)
model = tf.keras.Sequential([
  feature_extractor_layer,
  layers.Dense(image_data.num_classes, activation='softmax')
])
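The tutorial then compiles and briefly trains this model; only the weights of the new Dense head are updated, since the feature extractor beneath it is frozen. Roughly (a sketch; the exact optimizer settings, epoch count, and progress callbacks in the notebook differ):

# Train only the new classification head; the frozen feature extractor is reused as-is
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss='categorical_crossentropy',
    metrics=['accuracy'])

history = model.fit(image_data, epochs=2)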

Until the network is modified further in the tutorial, adding and training the classification head, the flower detection is not as “accurate.” So in this case, the neural network is potentially unjust until it is accurate. That the recognition is at first inaccurate shows the limitations of transfer learning. But after the classification head is added and trained, the network identifies most of the flowers accurately. Herein lies transfer learning’s philosophical possibility, though this possibility has its own limitations.

Questions

How does the TensorFlow Hub model repository demonstrate both an AI of possibility and an AI of injustice?

Does transfer learning effect a truer “reuse”, much more powerful than traditional reusable software components and libraries?

If ML/AI can be unjust, despite the role of bias in all learning, is this because it is tied to software and legal policy that impinges upon democracy (by excluding some members of a commonwealth)?

Are there limitations to looking at transfer learning as emergent, given that a learning network, while not a simulacrum of a human brain, could nevertheless be a mirror of “Reason” in a way that would be problematic for 20th-century critiques of Enlightenment (e.g., Adorno, Horkheimer, et al.)?

Comments

  • Thank you for sharing this very interesting example. Being able to walk through the tutorial and experiment with it hands-on really adds something to the example.

    Could you articulate a bit more what "just" vs "unjust" behavior looks like for you in the context of these particular models / algorithms? For context, one well-known example of algorithmic bias in popular culture is racist face recognition -- the face recognition camera tracks a white face, but not a black face, so the model leads the application to offer features (unlocking, autofocus, et cetera) unequally, offering services to some kinds of bodies rather than others. A laptop can unlock when it sees its owner, but this works frequently for white bodies and seldom for black bodies; this is inequitable and unjust.

    In your initial example, is Dr. Hopper not recognized, but Alan Turing recognized? Or is Hopper recognized as a military uniform, while men in military uniforms are recognized in additional ways that women are not? Or is the prioritization of some recognition categories over others (for example, systematically identifying object categories such as military uniforms rather than identities) being done in an even-handed way, yet still evidence of a cultural bias about what counts as worth recognizing?



    A secondary location for the code -- not executable, but with some attribution / blame and version history -- is here:

    https://github.com/tensorflow/docs/blob/66d51334e055b08affd272bcbb204c368fc57be7/site/en/tutorials/images/transfer_learning_with_hub.ipynb

    I'm not sure that this gets to the bottom of who the tutorial author(s) are -- from a quick glance it seems like much of the material was perhaps imported at some point from a previous source? -- but apart from being a product of the organization, it might have been predominantly written by only a few specific people. Documentation often is.

  • @jeremydouglass , thank you for the comments and the Github link.

    With this particular model, the “unjust” behavior stems from priorities of what counts to recognize. For example, I don’t think this particular pre-trained model would recognize Alan Turing either, as it likely wasn’t trained on a facial recognition dataset (let’s hope not!).

    My point in contrasting the label of the uniform with the identity was to suggest that the algorithm decides in a seemingly arbitrary fashion, because the model/network classifies input images as a black box whose original training images are unknown to the programmer.

    I could see this network labeling Dr. Turing as “mathematician” or “professor” instead of by his style of clothing. That would point to gender bias in the original training data, in which men had more often been assigned these labels by the researchers who created the reusable model/network.

    As regards facial recognition harboring bias, it is not out of the question that transfer learning (though not necessarily the network of the tutorial) could be implicated here too. If the number of faces entered into a model were enormous, and the faces carried proper-noun/identity labels, the technical construct of transfer learning on pre-trained data would work as an algorithmic approach. It would then be possible to reuse the network for facial recognition across a variety of human communities, albeit problematically as surveillance, etc.

    Note that this is a different facial recognition problem from tracking a face to unlock a device (unlocking a device is a binary classification, for example; see the sketch at the end of this comment).

    As far as “just,” I wonder if the tutorial only approaches this when it turns the model into a classifier for flowers, which is where the “possibility” comes in (since it largely identifies them). That is, it does less violence once the researcher has actually decided to do something specific. Maybe machine learning applications need to heed the cautions of science studies and stick to particulars and local knowledge. This imperative certainly helps reduce injustice/bias/violence in other types of research production.
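    To make the binary/multi-class distinction concrete, here is a minimal, purely illustrative sketch (not from the tutorial), assuming the same frozen feature extractor as above: an unlock-style check ends in a single sigmoid unit, while identifying a person requires a softmax over many known identities (num_identities is a hypothetical count):

    # Purely illustrative; neither model appears in the tutorial.
    # "Is this the device owner?" -- binary classification, one sigmoid output:
    unlock_model = tf.keras.Sequential([
        feature_extractor_layer,
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])

    # "Which of N known people is this?" -- multi-class identification over identities:
    identify_model = tf.keras.Sequential([
        feature_extractor_layer,
        tf.keras.layers.Dense(num_identities, activation='softmax')  # num_identities: hypothetical
    ])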
