The following discussion of computational capital takes the electronic database, an infrastructure for storing in-formation, as its vantage point. Following a brief look into how database systems serve in-formation desires, Mark Poster’s notion of ‘database as discourse’ is explored and further developed. Database as discourse establishes a machinic agency, directed towards the individual in a specific mode of hailing. This mode of hailing in turn leads to a scattered form of subjectivity that is identified, with Michaela Ott and Gerald Raunig, as dividual. How does dividualization emerge from database infrastructure? What is the specific quality of the data that is produced by and harvested from in/dividuals into databases, and what are the consequences of such a shifted view?
Pattern Recognition (or so-called Artificial Intelligence) can be tricked. An overview.
Do you aim to become a luddite? Here is your guide to hacking pattern recognition and disturbing the technocratic wet dreams of engineers, managers, businesses and government agencies.
Most of the current processes attributed to Artificial Intelligence are actually pattern recognition. Artists and scientists have begun to work with adversarial patterns, either to test the existing techniques or to initiate a discussion of the consequences of so-called Artificial Intelligence. They create disturbances and misreadings for trained neural networks that are evaluated against incoming data.
Do neural networks dream of sheep?
Janelle Shane looks into how neural networks mis-categorize information. In her article Do neural nets dream of electric sheep? she discusses some mis-categorizations by Microsoft’s Azure Computer Vision API, which is used for creating automatic image captions. Shane points out that the underlying training data seems to be fuzzy, since sheep were detected in many landscape pictures where there are actually none. »Starting with no knowledge at all of what it was seeing, the neural network had to make up rules about which images should be labeled ›sheep‹. And it looks like it hasn’t realized that ›sheep‹ means the actual animal, not just a sort of treeless grassiness.«
The author then looks into how this particular pattern recognition API can be further tricked, pointing out that the neural network only finds sheep where it actually expects them, for instance in a landscape setting. »Put the sheep on leashes, and they’re labeled as dogs. Put them in cars, and they’re dogs or cats. If they’re in the water, they could end up being labeled as birds or even polar bears. … Bring sheep indoors, and they’re labeled as cats. Pick up a sheep (or a goat) in your arms, and they’re labeled as dogs«, Shane mocks the neural network. I’ll call this the abuse scope method. It applies whenever you can determine or reverse-engineer (aka guess) the scope and domain to which a neural network is directed, and insert information that lies beyond that scope. The abuse scope method could be used for photo collages that trick a neural network while maintaining the relevant information for humans.
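The core of the abuse scope method can be sketched with a toy classifier. The following is a minimal, purely illustrative example (all feature names and training data are invented, and this is of course not the Azure API): a classifier whose training data only ever shows sheep in grassy landscapes ends up keying on the context, not on the animal, and so mislabels a sheep that has been moved out of scope.

```python
# Toy sketch of the "abuse scope method": the training data only
# ever shows sheep in grass, so the classifier latches onto context.
# All data is invented for illustration.
from collections import Counter

# Each training example: (feature tuple, label)
training_data = [
    (("woolly", "grass"), "sheep"),
    (("woolly", "grass"), "sheep"),
    (("furry", "indoors"), "cat"),
    (("furry", "indoors"), "cat"),
    (("furry", "indoors"), "cat"),
    (("furry", "leash"), "dog"),
    (("furry", "car"), "dog"),
]

def predict(features):
    """Label by largest feature overlap; ties resolved by frequency."""
    def overlap(feats):
        return len(set(features) & set(feats))
    best = max(overlap(feats) for feats, _ in training_data)
    candidates = [label for feats, label in training_data
                  if overlap(feats) == best]
    return Counter(candidates).most_common(1)[0][0]

# In scope: a woolly animal in grass is recognized as a sheep.
print(predict(("woolly", "grass")))    # → sheep

# Out of scope: the same woolly animal indoors inherits the
# context's dominant label, just like Shane's indoor sheep.
print(predict(("woolly", "indoors")))  # → cat
```

The point is not the particular algorithm but the structural weakness: whatever statistical regularity dominates the training set becomes the de facto definition of the category, and anything inserted from outside that regularity derails it.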
Shane went further and asked her Twitter followers for images depicting sheep. Richard Leeming came up with a photo taken in the English countryside, showing orange-dyed sheep; the dye is meant to deter rustlers from stealing the animals.
This photo is fucking with the neural network’s expectations and leads to a categorization as »a group of flowers in a field« (Shane 2018).
Panel at the European Media Art Festival 2018, 20 April 2018 in Osnabrück
Moderated by Tobias Revell
Participants: Luba Elliot, Anna Ridler, Francis Hunger, Igor Schwarzmann
The ability of computers to fake reality convincingly is going to become more and more of a critical problem as hackers, extremist news organisations and politicians seek to control the media narrative through increasingly convincing visuals. The presentation includes the video ‘Synthesizing Obama’, which demonstrated the ability to synthesize a life-like rendering of Obama in real time.
Organized in collaboration with the Impakt Festival, the Netherlands / www.impakt.nl
Yuk Hui, Francis Hunger, Jussi Parikka, Ana Teixeira Pinto
Moderated by Jussi Parikka
Transmediale 2018, Berlin, 04.02.2018
As the mystification of artificial intelligence (AI) and fantasies of transhumanism continue to appear in fictions and speculations on possible futures, concerns arise about the biases and forms of discrimination that tomorrow’s systems might involve. These troubling aspects are exemplified by the Neoreactionary Movement’s interest in AI, which is based on the belief that technology can only serve humanity to its fullest if it is liberated from democratic standards. In order to critically examine the build-up of symbolic mystifications and real infrastructures of futuristic liberatory discourses, the speakers of this panel will speculate on the changes that AI can bring to territories, cultures, or groups of people, and discuss emerging political counter-fictions and imaginaries.
exhibition at Kunstraum D21 Leipzig, curated by Lena Brüggemann, Fabian Reiman and Francis Hunger (Dec 27 2017–Jan 27 2018)
Information in relation to computers can be described in at least two ways. The most popular notion of information stems from Norbert Wiener’s concept of the 1940s, rooted in cybernetics. Information appears as a statistical property, where time-series of measurements are created as mathematical entities. This supports a perception of information in which everything is calculable and can be expressed through a model. The model is the decision about which part of reality gets included as data and which part gets discarded. In this sense, data is what gets included and what gets excluded.
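This deciding character of the model is easy to make concrete. A minimal sketch (the record type and its fields are invented for illustration): a data model that includes a person’s name and date of birth thereby discards everything else about that person.

```python
# A data model decides what enters the database as data (here: name
# and date of birth) and what is discarded (everything else about
# the person: voice, mood, biography ...). Invented example.
from dataclasses import dataclass

@dataclass
class PersonRecord:
    name: str            # included as data
    date_of_birth: str   # included as data
    # a person's gait, humour, or history have no field: excluded

record = PersonRecord(name="Ada Lovelace", date_of_birth="1815-12-10")
print(record)
# → PersonRecord(name='Ada Lovelace', date_of_birth='1815-12-10')
```

Whatever has no field in the model simply does not exist for the database; the exclusion is performed silently, before any query is ever run.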
However, the cybernetic view of information and data comprises the idea of a black box, where input and output can be observed but the inner workings remain unknown – which in turn allowed for a problematic conceptualization of the computer as analogous to the human brain. The black-box concept in turn served as the basis for a control machinery that could employ feedback mechanisms to control processes.
In contrast, Markus Krajewski (Krajewski 2007; Krajewski 2011) developed a spatial notion of information, where information consists of data placed in a spatial dimension accessible through diagrammatic operations of the human brain. He starts out from the librarian’s folio of the 1700s, which gets cut into single sheets (or later cards) to hold the information on each book, structured through the use of white space and typography specific to the data. The form of the table further evolves into the punched card, from which tape and disk memory emerged as forms of spatial organization. Since this historical process brings data in formation, information with Krajewski is actually in-formation.
While Wiener’s notion of information is situated closer to the command-and-control structures of the war effort and to the cybernetic idea of feedback loops that should tend towards a state of equilibrium, Krajewski’s in-formation is more rooted in bio-political techniques of statistical data collection and evaluation in bureaucratic and managerial practices. Both positions describe two ends of a continuum: while Wiener’s notion of signal, data, model and information refers to the machinic organization within today’s computing machinery, Krajewski’s notion of in-formation leans towards the medial usage that is shaped through algorithms, code and database usage. The latter notion of in-formation has the advantage that data and in-formation are not simply there, as is the signal that is on or off; rather, the in-formation object is something that has been created through intertwined human labour and machinic agency.
For a critical examination of the black box concept in cybernetics see Galloway, Alexander R. (2011): »Black Boxes, Schwarzer Block«, in: Hörl, Erich (ed.): Die technologische Bedingung. Beiträge zur Beschreibung der technischen Welt. Berlin: Suhrkamp Verlag, p. 267–280; English version online at http://cultureandcommunication.org/galloway/pdf/Galloway,%20Black%20Box%20Black%20Bloc,%20New%20School.pdf
Cf. Galison, Peter (1994): »The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision«, Critical Inquiry 21 (1), p. 228–266.
Krajewski, Markus (2007): »In Formation – Aufstieg und Fall der Tabelle als Paradigma der Datenverarbeitung«, in: Nach Feierabend. Zürcher Jahrbuch für Wissensgeschichte: Datenbanken. Zürich: Diaphanes, p. 37–55.
Krajewski, Markus (2011): Paper Machines. About Cards & Catalogs, 1548–1929. Cambridge, Mass.: MIT Press (History and Foundations of Information Science).
Part I: There is no Artificial Intelligence.
It’s pattern recognition, stupid!
A friend of mine recently exclaimed that since her Siri speech recognition has become much better compared to speech recognition ten years ago, Artificial Intelligence (AI) now has the potential to rule the world. What if there is no Artificial Intelligence at all? What if the so-called AI revolution is in fact an enhanced form of pattern recognition? I agree that today’s pattern recognition shows better quality in recognizing patterns in language, images, orientation and similar fields. But is pattern recognition equal to intelligence, even to human intelligence?
Pattern recognition is about perception, and it is about statistical inference over a body of data. These are two areas that have become increasingly better over the past decade. Not only have businesses (like Amazon or Google) developed new techniques for distributed large-scale computing using consumer hardware in large quantities. They have also developed decentralized, large-scale solutions for data storage, labeled Big Data, that form the basis for more successful statistical inference. We see how both these quantitative changes have turned into a perceived new quality of enhanced pattern recognition (EPR).
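How little "intelligence" the statistical core of pattern recognition requires can be shown in a few lines. The following is a deliberately reduced sketch (nearest-mean classification on invented two-dimensional data, standing in for the far larger models used in practice): a new data point is simply assigned to the class whose statistical average it is closest to.

```python
# Pattern recognition reduced to its statistical core: assign a new
# point to the class whose mean it is closest to. No understanding
# involved, only arithmetic. Data is invented for illustration.

def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_mean(point, classes):
    def dist2(a, b):  # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))
    means = {label: mean(vs) for label, vs in classes.items()}
    return min(means, key=lambda label: dist2(point, means[label]))

# Two invented "patterns" in a 2-d feature space.
classes = {
    "a": [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2]],
    "b": [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9]],
}
print(nearest_mean([0.15, 0.1], classes))  # → a
print(nearest_mean([1.0, 0.95], classes))  # → b
```

Today's neural networks are vastly more elaborate, but the underlying operation remains of this kind: a statistical comparison against previously seen data, not comprehension.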
Better algorithms to search unstructured information
Three factors play into the overall growth in automation. First of all, search engine technology has grown and become better at sifting through large amounts of structured and unstructured data, especially since Google introduced tools such as MapReduce and Bigtable in the mid-2000s, and since open source software for data mining in unstructured information collections, such as Hadoop, became available.
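The MapReduce pattern behind tools like Hadoop can be sketched in miniature. A minimal single-machine illustration (a word count over two invented text snippets; a real cluster distributes the same three phases across many machines): map emits key–value pairs, shuffle groups them by key, reduce aggregates each group.

```python
# The MapReduce pattern in miniature: map emits (word, 1) pairs,
# shuffle groups them by key, reduce sums each group. On a real
# cluster each phase runs distributed over many machines.
from collections import defaultdict

def map_phase(document):
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

documents = ["sheep in a field", "a field of flowers"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["field"])  # → 2
print(counts["sheep"])  # → 1
```

The significance of the pattern is less the word count itself than that arbitrary amounts of unstructured text can be processed this way by simply adding more machines.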
The artist Francis Hunger presents his video installation Deep Love Algorithm at the recent exhibition »Mood Swings – On Mood Politics, Sentiment Data, Market Sentiments and Other Sentiment Agencies«. In conversation with curator Sabine Winkler he explains why we should no longer talk fearfully of algorithms.
Sabine Winkler: Your video essay Deep Love Algorithm reconstructs the evolution and history of databases as a love story between the cyborg and writer Margret and Jan, a journalist. Margret embodies a kind of resistant position emerging from history, and also a linkage between the human and technology. The relation of human and database (technology), told through a failing love story, is a unique approach. How did this topic evolve, or rather, how is this relationship structured, and why does it fail?
Francis Hunger: Margret is not necessarily a cyborg; actually it is only implied that she lives longer than her appearance suggests. This doesn’t, however, exclude the possibility that she is a cyborg. The original idea for Margret was to create a figure who travels through time. A figure who, unlike the ahistorical Samantha from the movie Her or the movie figure Adaline, was and is part of political struggles.
»These essays constitute an added value, providing an extended account of databases’ historical development, informing the reader about a strategic past that is often overlooked.« Read the full review on neural.it.
lecture at the Kulturen des Kuratorischen in May 2017 for the seminar »Leaking Bodies (and Machines)« by Julia Kurz and Anna Jehle. The session was joined by students of Peggy Buth who work on the topics of Big Data and Post-Internet art.
While the seminar was developed around reading Updating to Remain the Same: Habitual New Media (MIT Press) by Wendy Hui Kyong Chun, Francis Hunger was invited to add a more materialist perspective. Two lectures were delivered: the first developed a historical perspective on the development of computing technology in general and discussed relational databases in closer detail. It aimed to give an overview of the state of the art in computing technology and to interconnect this with the social dimension of labor and cultural questions such as the crisis of the public/private and the cultural function of the archive. The second lecture introduced basic concepts regarding electronic infrastructure as developed by Bowker, Ruhleder and Star and ended with a discussion of data centers, the ideology of the »cloud«, and the function and meaning of data in »big data«. The following discussion evolved into a rather broad and general argument about how the mediocene emerged and influences current social relations.