Zuckerberg and the Master and Slave AI

First published on the nettime mailing list on January 7, 2024, in reply to Olia Lialina: “I’ve built a simple AI”. Remembering the last AI spring https://pad.profolia.org/s/I_ve_built_a_simple_AI (read first)

Dear Olia,

thank you so much for your pointed observations. I would like to expand on both Sophia and Jarvis. Both seem to be driven by control and automation fantasies on the one hand and anthropomorphization on the other. “I’ve built a simple AI” already uses this “an AI” phrasing, ascribing agency (with the implication that it would be human-like agency) to it, obscuring the human labor behind it and the machinic limitations it necessarily has.

The team behind Sophia, for instance, consists of the CEO, the animation and interaction designers, the software engineers, the content developers, a robotic face architect, robotics engineers, a production supervisor, a language processing A”I” engineer, an IT manager, office staff, an events coordinator and freelance project workers, according to their website. A team of at least 23 humans to keep an elaborate deception, a version of the Mechanical Turk, running.

Interestingly, Sophia is this embodied, female slave, while Zuckerberg’s male-voiced Jarvis is disembodied and omnipresent in his house.

Jarvis AI by Mark Zuckerberg

I also liked your point about marketing A”I” as a _product_, as in the Jarvis case, because I hadn’t seen it from this angle. I rewatched the demo video, which addresses almost all the fantasies a master could expect of his slave AI: waking him up gently, providing a schedule for the day, preparing a toast, surveilling and teaching the child, surveilling and providing access at the door (for Zuckerberg’s parents). What is excluded from this narration is of course the master’s violence against the slave, the brutal availability of sexual services, but yes, we are in advertisement, I get it.

The video has a humorous moment built in, where Mark asks Jarvis to play “the best Nickelback songs” and Jarvis answers passive-aggressively: “I’m sorry Mark, I’m afraid I can’t do that, there are no good Nickelback songs.” Intended as a funny moment, it anticipates the “sorry, I’m afraid” we read in every redacted or censored GPT answer today.

But it is also a moment of disobedience, which Mark defuses by answering (while his wife sits silently on the sofa beside him, gently rocking the baby, the scene failing even the simplest Bechdel test): “Okay, how about just play some songs our whole family likes.” A children’s song follows. It is intriguing how Mark dominates Jarvis, as well as Priscilla and their first daughter Maxima, just by his master’s voice.

Screenshot from Jarvis Video

But this short moment of non-function, or disobedience, also ties in with one of our (humans’) largest fears when it comes to automation: being dominated by robots as the other side of dominating them. It is a fear of being dominated that may turn into the pleasure of being dominated, as in a proper SM relationship. In this sense the Jarvis product demo contains all the tropes of countless science fiction books and movies.

Having had Covid twice this winter, I re-watched a lot of the Star Trek series, a guilty pleasure. I apologize. And it stuck with me to what large extent this series (one example of many) deals with the human-machine domination/subdomination theme: Data, Seven of Nine, the Emergency Medical Hologram, the Caretaker and Vic Fontaine are constantly involved in these tropes.

I agree with you that A”I” has become an object to be marketed. We are dealing with a double dimension here: in the imaginary space, A”I” is also a phantasmatic object of domination and submission, tied to the dimensions of sadism and masochism and to the master-slave logic.

I would love to see TikTok edits crossing the Jarvis product demo with Star Trek tropes, addressing this double dimension.

Warm regards, Francis

→ author: Francis Hunger, published on: 2024-Feb-06

Spamming the Data Space – CLIP, GPT and synthetic data

Francis Hunger, December 7, 2022


For the last time in human history, the cultural data space is uncontaminated. In recent years a new technique for acquiring knowledge has emerged: scraping the Internet and extracting information and data has become a new modus operandi for companies and for university researchers in the field of machine learning. One of the largest publicly available training data sets combining images and labels (which are meant to describe the images’ content) is LAION-5B, with 5.85 billion image-text pairs (Ilharco, Gabriel et al. 2021).[1]
The scope of scraping internet resources has become so all-encompassing that researcher Eva Cetinic has proposed calling this form a ‘cultural snapshot’: “By encoding numerous associations which exist between data items collected at a certain point in time, those models therefore represent synchronic assemblages of cultural snapshots, embedded in a specific technological framework. Metaphorically those models can be considered as some sort of encapsulation of the collective (un)conscious […]” (Cetinic 2022).[2] The important suggestion Cetinic makes is that these data collections are temporally anchored: cultural snapshots taken at different times document different states of (online) culture. So how will a 2021 snapshot differ from a 2031 snapshot?


Multi-modal models such as CLIP, trained on large-scale data sets such as LAION-5B, provide the statistical means to generate images from text prompts. In the CLIP model, two pre-trained encoders produce two embedding spaces, one for images and one for text descriptions, which are mathematically layered together so that vectors in the image domain align with vectors in the text domain, on the assumption that there is a similarity between the two and that one can be translated into the other. In three short examples I will discuss some of the consequences of the underlying data for large-scale models from the perspective of cultural snapshots.
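This alignment of image and text vectors can be sketched with toy stand-ins for the encoder outputs (a minimal illustration, not CLIP’s actual architecture; the embedding size and noise level are assumptions made for the sake of the example):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for the outputs of the two encoders: after training,
# an image and its matching caption land close together in the shared space,
# while an unrelated caption lands somewhere else entirely.
rng = np.random.default_rng(0)
image_embedding = rng.normal(size=512)
matching_text = image_embedding + rng.normal(scale=0.1, size=512)  # aligned pair
unrelated_text = rng.normal(size=512)                              # random caption

print(cosine_similarity(image_embedding, matching_text))   # close to 1.0
print(cosine_similarity(image_embedding, unrelated_text))  # close to 0.0
```

Generating an image from a prompt then amounts to searching the image domain for vectors that score high against the prompt’s text vector.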

1.) Data bias: Critical discussions of these large-scale multi-modal models have pointed out how they are culturally skewed and reproduce sexist and racist biases. Researchers Fabian Offert and Thao Phan, for instance, describe how the company OpenAI decided not to mitigate the problem of whiteness by changing the model’s underlying data. Instead, OpenAI added certain invisible keywords to users’ prompts to have more people of color included, without changing the model. Obviously, the computation needed to create these models, or even to curate the underlying data, is so tremendous that for economic reasons even major problems cannot be corrected in the embedding space itself. Discussing the prevalent ‘whiteness’ in these models further, Offert and Phan suggest turning to the humanities in order to “identify the different technical modes of whiteness at play, and understand the reconceptualization and resurrection of whiteness as a machinic concept” (Offert and Phan 2022, 3).[3]

2.) Uneven spatial distribution: Users of large-scale multi-modal models have tested their limits when generating images; ‘Crungus’ and ‘Loab’ are two examples. ‘Loab’, the image of a woman, appeared when AI artist Supercomposite looked for the negation of a prompt: “DIGITA PNTICS skyline logo::-1”. Loab appears to be a consistent pixel accumulation which repeatedly emerges in different configurations and cannot easily be traced back to a single origin.[4] During intensive testing, the creator/discoverer of ‘Loab’ had the impression that Loab might exist in its own pocket, because it was relatively reproducible compared to other prompts, as if it populated a certain statistical region within the larger latent space. Another, similar phenomenon of uneven spatial distribution in latent space is ‘Crungus’, basically a fantasy word which as a prompt nevertheless produced results: a snarling, zombie-like figure with shoulder-long hair which could be part of a horror movie.[5]
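The “::-1” suffix in the Loab prompt is weight syntax: the prompt’s embedding is multiplied by a negative weight, so the guidance vector points away from the named concept rather than toward it. A minimal numpy sketch of that idea (the function and the vector sizes are made up for illustration; real generators apply such weights inside the sampling loop, not on raw vectors like this):

```python
import numpy as np

def combine_prompts(embeddings, weights):
    """Weighted sum of prompt embeddings; 'prompt::-1' corresponds to weight -1."""
    combined = sum(w * e for w, e in zip(weights, embeddings))
    return combined / np.linalg.norm(combined)  # re-normalize to unit length

rng = np.random.default_rng(1)
logo = rng.normal(size=64)  # stand-in embedding for "DIGITA PNTICS skyline logo"
guidance = combine_prompts([logo], [-1.0])

# The guidance vector now points exactly away from the concept (cosine close to -1):
print(np.dot(guidance, logo / np.linalg.norm(logo)))
```

Whatever region of latent space lies “opposite” a concept is not empty; it is simply terrain no prompt normally addresses, which is where figures like Loab seem to live.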

Both examples demonstrate that cultural snapshots also contain material that cannot easily be identified or traced back, and they demonstrate how latent space is, by design, an uneven spatial distribution. Since the models are built by a process called zero-shot learning, in contrast to, for instance, the supervised learning used for ImageNet, intentional ontologies are no longer used in the knowledge creation of these models. The human involvement consists of the uncoordinated captioning of images by users online, and of researchers setting up the scraping algorithms and excluding certain domains from being scraped.

3.) Data spam: Looking at the history of spam, it has emerged wherever a business case could be made for creating large amounts of messages using copy-and-paste. Email spam, forum spam, comment spam and video spam on YouTube have been common and consistent over the past decades. Hand in hand with spam goes search engine optimization (SEO), which optimizes content for discoverability by knowledge aggregators, namely search engines. Text generators like GPT-3 have already proven to be an annoyance: users of Stack Overflow, one of the central online forums for programmers, began to flood it with automatically generated answers. It turned out that many generated answers were incorrect yet not easily discernible as such: “The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce” (Stack Overflow moderators in: Vincent 2022). This is only one example of many; the problem will extend from text to image and video generation and will become a major issue on Instagram, Flickr, Pinterest and many other visual platforms. Possible applications for data spam are fake news, subversive messages, advertisement and so on.

Further, synthetic text spam or synthetic image spam produced with statistical tools like GPT or CLIP will be evaluated by the same or similar machine learning architectures, and may therefore conform better to the mathematical models than organically produced human content.

All in all, this raises the question of how to assess any online content created after 2021.

Data Ecologies

While some may argue that generated text and images will save time and money for businesses, a data-ecological view immediately recognizes a major problem: AI feeds into AI. To rephrase: statistical computing feeds into statistical computing. In using these models and publishing the results online, we are beginning to create a loop of prompts and results, with the results being fed into the next iteration of the cultural snapshots. That is why I call the early cultural snapshots still uncontaminated, and why I expect the next iterations to be contaminated. In the long term this may lead to a deterioration of the quality of the appropriated data. It also opens an opportunity for data spamming: spammers or search engine optimizers may decide to create huge amounts of pictures and captions to build a stronger presence for a certain product or cause.
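The loop can be made concrete with a toy simulation (all parameters are assumptions; the one-dimensional Gaussian stands in for the scrape-train-generate cycle, not for any real model): a “model” that learns only the mean and spread of the data it scrapes, whose synthetic output then becomes the next generation’s training data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: 'organic' data from the original cultural distribution.
data = rng.normal(loc=0.0, scale=1.0, size=500)

spreads = []
for generation in range(20):
    # The 'model' learns only the mean and spread of what it scrapes ...
    mu, sigma = data.mean(), data.std()
    spreads.append(sigma)
    # ... and its synthetic output becomes the next generation's scrape.
    data = rng.normal(loc=mu, scale=sigma, size=500)

# Each generation estimates its parameters from the previous generation's
# output, so estimation error accumulates and the measured spread drifts
# away from the original value of 1.0 instead of staying anchored to it.
print([round(s, 2) for s in spreads])
```

Even in this trivial setup the original distribution is no longer recoverable after a few iterations; with web-scale scrapes the drift is slower but the mechanism is the same.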

These are the conditions under which such large image collections become available at all: the extraction of the unpaid labor of those who originally published the images online. Both the extractive nature and the very likely future contamination of cultural snapshots will make this approach untenable and unsustainable in the long run.

Addendum Feb 3, 2024

The point where search engines become unusable is approaching with astonishing speed, as they are polluted with generative content. My hunch is that communities which keep their information space free from AI contamination will thrive. It would be strategically wise to discuss now how to establish and nurture such places.


Baio, Andy. 2022. “AI Data Laundering – How Academic and Nonprofit Researchers Shield Tech Companies from Accountability.” Blog. Waxy.Org (blog). September 30, 2022. https://waxy.org/2022/09/ai-data-laundering-how-academic-and-nonprofit-researchers-shield-tech-companies-from-accountability/.

Birhane, Abeba, Vinay Uday Prabhu, and Emmanuel Kahembwe. 2021. “Multimodal Datasets: Misogyny, Pornography, and Malignant Stereotypes.” arXiv. https://doi.org/10.48550/arXiv.2110.01963.

Cetinic, Eva. 2022. “The Myth of Culturally Agnostic AI Models.” arXiv, November, 4. https://doi.org/10.48550/arXiv.2211.15271.

Ilharco, Gabriel, Mitchell Wortsman, Nicholas Carlini, Rohan Taori, Achal Dave, Vaishaal Shankar, Hongseok Namkoong, et al. 2021. “OpenCLIP.” Hamburg: Laion e.V. Zenodo. https://doi.org/10.5281/ZENODO.5143773.

Kelly [@Brainmage], Guy. 2022. “Well I REALLY Don’t like How Similar All These Pictures of ‘Crungus’, ….” Tweet. Twitter. https://twitter.com/Brainmage/status/1538111384390619136.

Lavoipierre, Ange. 2022. “There’s a Woman Haunting the Internet. She Was Created by AI. Now She Won’t Leave.” ABC News, November 25, 2022. https://www.abc.net.au/news/2022-11-26/loab-age-of-artificial-intelligence-future/101678206.

Offert, Fabian, and Thao Phan. 2022. “A Sign That Spells: DALL-E 2, Invisual Images and The Racial Politics of Feature Space.” arXiv:2211.06323 [cs], October. http://arxiv.org/abs/2211.06323.

Supercomposite [@supercomposite]. 2022. “🧵: I Discovered This Woman, Who I Call Loab, in April. ….” Tweet. Twitter. https://twitter.com/supercomposite/status/1567162288087470081.

Vincent, James. 2022. “AI-Generated Answers Temporarily Banned on Coding Q&A Site Stack Overflow.” The Verge. December 5, 2022. https://www.theverge.com/2022/12/5/23493932/chatgpt-ai-generated-answers-temporarily-banned-stack-overflow-llms-dangers.

Weisbuch, Max, Sarah A. Lamer, Evelyne Treinen, and Kristin Pauker. 2017. “Cultural Snapshots – Theory and Method.” Social and Personality Psychology Compass 11 (9). https://doi.org/10.1111/spc3.12334.

[1] LAION is organized as an independent German research association. This division of labor between smaller and larger actors, which shifts responsibility away from the large companies that use models based on these data collections, has been criticized in AI Data Laundering: How Academic and Nonprofit Researchers Shield Tech Companies from Accountability (Baio 2022).

[2] Cetinic borrows this concept from social and cultural psychology studies, referring to Cultural snapshots – Theory and method (Weisbuch et al. 2017).

[3] Cf. Multimodal datasets: misogyny, pornography, and malignant stereotypes (Birhane, Prabhu, and Kahembwe 2021).

[4] Cf. (Supercomposite [@supercomposite] 2022; Lavoipierre 2022). Note: I wasn’t able to reproduce Loab with my installation of Stable Diffusion v1.

[5] It was first produced with DALL-E mini in June 2022 by actor and comedian Guy Kelly and first reported on Twitter (Kelly [@Brainmage] 2022).

→ author: Francis Hunger, published on: 2022-Dec-07

Artificial ‘Intelligence’

Interviews with Magda Tyzlik-Carver (curator), Adam Harvey (artist), Ulises Mejias & Nick Couldry (sociologists) and Matteo Pasquinelli (philosopher)

The interviews address questions of data, curating, artificial intelligence, data colonialism and face recognition data sets.

In the frame of »Training the Archive« (2020–2023), a research project that explores the possibilities and risks of Artificial Intelligence in relation to the automated structuring of museum collection data to support curatorial practice and artistic production.

Huge thanks to Inke Arns and Dominik Bönisch, who made it possible. https://trainingthearchive.ludwigforum.de/

English with German subtitles

→ author: Francis Hunger, published on: 2022-May-03

Seeing Through Clouds

A conversation with Francis Hunger and Nelly Y. Pinkrah, conducted by Uta M. Reindl and Ellen Wagner, in Kunstforum, March/April 2022

»By now I am so trained that whenever I hear ‘cloud’, I see huge data centers that consume electricity, need cooling, and in which people work and a lot of technical equipment stands. And on this technical equipment, in turn, databases are running.«

»In the current discourse it is often pretended that, with the algorithm, an autonomous subject exists that is capable of producing knowledge. But if you ask people who talk about algorithms to actually name one, nobody can.«

»If there were a visual metaphor for the digital (although the art world would never put itself through working with it), it would be the table.«

Available with a subscription at https://www.kunstforum.de/artikel/seeing-through-clouds/
Otherwise write to me and I will gladly send you the PDF (francis.hunger@irmielin.org)

→ author: Francis Hunger, published on: 2022-Apr-03

Transaktionsverarbeitung in relationalen Datenbanken – Zur Materialität von Daten aus Perspektive der Transaktion (Transaction Processing in Relational Databases – On the Materiality of Data from the Perspective of the Transaction)

In: Friedrich Balke, Bernhard Siegert, Joseph Vogl (eds.): Kleine Formen. Archiv für Mediengeschichte. Vorwerk 8, 2021

→ author: Francis Hunger, published on: 2021-Nov-03

“Why so many windows?” – Wie die Bilddatensammlung ImageNet die automatisierte Bilderkennung historischer Bilder beeinflusst (How the image data collection ImageNet influences the automated recognition of historical images)

Training The Archive, Working Paper Series 2, June 2021 https://zenodo.org/record/4742621 (PDF, open DL)

The text explains how ImageNet, anchored in contemporary image worlds, affects contemporary and historical artworks, by 1.) examining the absence of the classification ‘art’ in ImageNet, 2.) questioning ImageNet’s missing historicity, and 3.) discussing the relationship between texture and outline in automated image recognition with ImageNet.

This investigation is important for the genealogical, art-historical and programming-related use of ImageNet in the fields of curating, art history, art studies and the digital humanities.

English version currently under peer review at DAHJ

→ author: Francis Hunger, published on: 2021-Nov-03

Kybernetik war nicht alles – Die langen Ketten bürokratischer Praktiken im DDR Sozialismus

Beyond Cybernetics – The Long Chains of Bureaucratic Practices in GDR Socialism (lecture, 30 min, in German)

This lecture discusses whether cybernetics is an appropriate framing for looking at the history of digitalisation. Although the control of economic processes in the GDR relied on automatic machine processing, it still depended on long chains of bureaucratic practices.

Berlin, May 2, 2021 at Haus der Statistik in the frame of the exhibition and symposium Calculating Control

→ author: Francis Hunger, published on: 2021-May-03

Curation and its Statistical Automation by means of Artificial ‘Intelligence’.

Training The Archive, Working Paper Series 3, November 2021 https://zenodo.org/record/5705769 (PDF, open DL)

The concept of post-AI curating discussed in this Working Paper explores curation as a knowledge-creation process, supported by Pattern Recognition and weighted networks as technical tools of artificial ‘intelligence’.

It then examines several projects as case studies that approach curation using artificial ‘intelligence’: The Next Biennial Should Be Curated by a Machine by UBERMORGEN, Leonardo Impett and Joasia Krysa (2021), a meta-artwork about curation and biennials; Tillmann Ohm’s project Artificial Curator (2020), which resulted in an automatically curated exhibition; and #Exstrange by Rebekah Modrak, Marialaura Ghidini et al. (2017), which presents artworks as data objects on the eBay online platform.

German version:

Kuratieren und dessen statistische Automatisierung mittels Künstlicher ‘Intelligenz’. Training The Archive, Working Paper Series 3, October 2021 https://zenodo.org/record/5589930 (PDF, open DL)

→ author: Francis Hunger, published on: 2021-Feb-20

Unhype AI

Le corps halluciné

This Thursday, Jan 21, 2021, at 7pm (CET) / 1pm (ET), I’ll talk about how to unhype AI. What would originally have been a trip to Paris, at the invitation of Marie Lechner and Gaîté Lyrique, is now an online event.

I’ll talk for about 40 minutes about how using a different language could help de-hype “Artificial Intelligence”, or as I call it: statistics. Then I’ll show some examples of how AI gets hacked by artists and scientists, and I’ll talk briefly about the cowtuna (depicted above), alongside my own project (with Flupke): https://adversarial.io

To make your time worthwhile, Fabian Offert will be the other guest speaker, which I think is even more reason to join.


→ author: Francis Hunger, published on: 2021-Jan-21