Digital Aesthetics

Digital aesthetics on a visual level is often equated with the pixel and any kind of pixelated structure. While at first glance the pixel appears to be a workable, applicable metaphor for »the digital«, even a cursory survey reveals a few other possible candidates, which I'll sketch here.

When talking about the digital, the notion of digital aesthetics refers to things created with a computer, narrowing the digital realm down to the electronic computer. The notion of the digital allows us to differentiate the digitized (distinguishable, separated, discrete and calculable) from the analog, which is continuous and varying over time. The digital, in contrast to analog continuity, is a sequence of discrete units.

Looking into the history of output devices for digital calculation: diagrammatically organized rows and columns of blinking lights in early computers such as the Zuse Z3 or the UNIVAC, electromechanical printers (also diagrammatically organized through their monospaced fonts), and the tabular structure of the punch card informed digital aesthetics even before the use of pixelated cathode-ray tube (CRT) monitors. While these output devices may have been used for artistic creation, to my knowledge such uses remained marginal.

Yet there are other devices that shaped very early digital aesthetics. From the mid-1960s on, the mathematicians Georg Nees and Frieder Nake both used a plotter to generate vector graphics and named the genre »Generative Computer Art«. One early plotter was the Zuse Z64 Graphomat, which read its data from a punched tape (connecting it to the above-mentioned punch card). The plotter as a device draws points and lines (or vectors, or geographic coordinates), so the programmed print definitions consisted mainly of start and end points, which were to be connected by a line, and only to a small extent of individual points. To make a long argument short: the vector-oriented plotter was a digital output device even before the pixel-oriented raster CRT monitor came into widespread use for computer graphics. The raster CRT monitor only appeared when computers had enough capacity to actually calculate each pixel of a screen raster. Prior to that, CRT monitors had been used with vector descriptions, which specify only a start point, an end point and possibly a third value for amplitude, saving scarce memory. In his 1963 Sketchpad dissertation, Ivan Sutherland notes that a point requires 20 bits to be described on the CRT display of the MIT Lincoln Laboratory TX-2 computer, and that points, lines and circles (or parts of circles) can be drawn on screen.
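The difference between the two modes can be made concrete with a small sketch (modern Python, not period code): a vector device such as the plotter stores a line as just two endpoints, while a raster display must compute every single pixel in between. Bresenham's classic line algorithm performs exactly this conversion.

```python
# A minimal illustrative sketch: converting a vector description
# (two endpoints) into the full list of raster pixels.
# Bresenham's all-quadrant integer line algorithm.

def rasterize_line(x0, y0, x1, y1):
    """Return the list of pixel coordinates approximating a line segment."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:          # step in x
            err += dy
            x0 += sx
        if e2 <= dx:          # step in y
            err += dx
            y0 += sy
    return pixels

# Two endpoints suffice for the plotter; the raster display needs them all:
print(rasterize_line(0, 0, 5, 3))
```

This is why memory scarcity favored vector descriptions: the segment above costs two coordinate pairs as a vector, but six pixels once rasterized, and the ratio grows with resolution.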

So the aesthetics of the line dominated early digital aesthetics up to the mid-1980s [cf. Blobel/Schneider/Wegener: Prints & Plots: Computerkunst ’86. Gladbeck, 1986].

Frieder Nake: Hommage à Paul Klee, 1965. Source: Computers and Automation, Newtonville/Mass., no. 8, August 1966

Frieder Nake: Zufälliger Polygonzug, 1965. Source: Computers and Automation, Newtonville/Mass., no. 8, August 1966

Frieder Nake: Rechteckschraffuren, 1965. Source: Computers and Automation, Newtonville/Mass., no. 8, August 1966

It may be necessary to introduce another distinction at this point. (Since this is only a blog post, I take the liberty of not looking into this intensively; very likely Frieder Nake or Georg Trogemann [Code und Material. Springer 2010] would be able to clarify.) The distinction is this: some of the prints of that time were generative in the sense that algorithms with variables were used to generate an image. These variables were often based on a re-calculation of the former value of the same variable, an iteration. The resulting aesthetics of early generative art can be described as modernist, often structured, often ornamental or organic, to a large extent non-figurative and iterative. [Compare:]
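The iterative principle described above can be sketched in a few lines. This is a hypothetical illustration only, mimicking the logic of a random polyline of the Zufälliger Polygonzug type, not any actual program by Nees or Nake: each new coordinate is a re-calculation of the former value of the same variable.

```python
import random

# A hypothetical sketch of iterative generative drawing:
# each point is derived from the previous point plus a random offset,
# i.e. the new value of a variable is computed from its former value.

def random_polyline(n_points, step=10.0, seed=None):
    """Generate the vertices of a random polyline starting at the origin."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = [(x, y)]
    for _ in range(n_points - 1):
        x = x + rng.uniform(-step, step)  # re-calculation of the former x
        y = y + rng.uniform(-step, step)  # re-calculation of the former y
        points.append((x, y))
    return points

# A plotter would then connect these vertices with straight line segments.
pts = random_polyline(20, seed=1965)
print(len(pts))  # 20 vertices, hence 19 line segments
```

Note that the output consists entirely of start and end points of lines, matching the vector-oriented print definitions described earlier.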

Other prints were made from descriptive data, where the coordinates of the pen were not calculated but described through digital data, either entered directly as a stream of numbers or digitized using a scanner. These descriptive rasterizations, along with scanner devices, opened the way to the pixel aesthetics that today we simplistically identify with digital aesthetics. Another vector CRT in use was the 1963 PDP-1 Precision CRT Display (Type 30). Coupled with a Type 33 Symbol Generator unit, however, it became possible to address single points in such a way that they would form pixelated letters, thus paving the way towards the rasterized CRT display. [PDP-1 Manual, 1963, pages 33–36]

With pixels, realism, as in photo-realism and three-dimensional spatial illusion, returned to digital aesthetics. And so returned indexicality, where the depicted image refers to an existing image. However, indexicality was soon diluted when the use of filter algorithms on pixelated images was explored. Here also begin the variations which Lev Manovich has explored in The Language of New Media, 2001. One early example may be Waldemar Cordeiro's work in Brazil, from 1969 on. []


However, it would be too narrow to portray the pixel as a mode of indexicality and realism only. The pixel-oriented display or dot-matrix printer allowed for the use of generative vectors as well; they only had to be re-calculated into pixels before being displayed. Some time later, in the mid-1980s, color CRT displays opened up new ways of generative art, maybe best exemplified by the iterative fractal-graphics hype of the late 1980s [].
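The fractal hype rests on a strikingly simple iteration. As a brief sketch (my own illustration, not tied to any particular historical program): the Mandelbrot set assigns each pixel a complex number c and iterates z = z² + c, coloring the pixel by how quickly |z| escapes beyond 2.

```python
# A minimal sketch of the fractal iteration behind images like the
# Mandelbrot set: one pixel = one complex constant c, iterated until escape.

def mandelbrot_iterations(c, max_iter=50):
    """Number of iterations of z = z*z + c before |z| exceeds 2."""
    z = 0j
    for i in range(max_iter):
        if abs(z) > 2.0:
            return i          # escaped: the point lies outside the set
        z = z * z + c
    return max_iter           # never escaped: the point is (likely) inside

# Render a coarse ASCII view of the complex plane.
for row in range(11):
    y = 1.2 - row * 0.24
    line = ""
    for col in range(40):
        x = -2.0 + col * 0.075
        line += "#" if mandelbrot_iterations(complex(x, y)) == 50 else " "
    print(line)
```

The pixel grid here is purely generative: every value is computed per pixel, with no indexical reference to any existing image.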

Mandelbrot Set Image by Wikipedia User Binette228, CC-BY


→ author: Francis Hunger, published on: 2018-Oct-15

Summerschool – Center for Digital Cultures, Lueneburg

September 16-19, 2018

Faculty: Monika Dommann, Thomas Haigh, Ben Peters, Claus Pias, Daniela Wentz

The CDC summer school discussed issues along a set of questions:

1. concepts and theories

What are the theoretical models that are able to contribute to a better understanding of the history and historiography of digital cultures? But also: How do digital cultures affect and shape common and current theoretical models of (media) historiographies?

2. methods and methodologies

What are the methods that meet the challenge of bridging digital media technologies with the field of history? How do the methods of the digital humanities affect the methodology of historic research?

3. critical revision of the so-called digital history

Does the source, under digital conditions, also change the construction of history and the rhetoric of its narration? Which “politics of the archive” can be observed in the course of, or as a result of, digitization?


In this framework I presented an excerpt from my thesis that dealt with the histories of the relational database model. It discussed how three actors and their institutional backgrounds shaped early developments of what later became known as the relational model. E.F. Codd, C.T. Davies and David Childs, in various constellations, discussed issues of set theory, machine independence, data independence and time-sharing against the backdrop of the IBM System/360 and its predecessors. The talk aimed at decentering a narrative that concentrates solely on the person of E.F. Codd, placing it instead in the field of tension between university and industry research.

It was great to meet this particular faculty, because they are all deeply involved in the histories of computing and were able to make very helpful suggestions. It was also great to discuss the fellow PhD candidates' proposals, ranging from algorithmic structures to the history of object-oriented programming to research about



→ author: Francis Hunger, published on: 2018-Sep-20

Epistemic Harvest – The electronic database as means of data production.

Full text:

The following discussion of computational capital takes the electronic database, an infrastructure for storing in-formation, as its vantage point. Following a brief look into how database systems serve in-formation desires, Mark Poster's notion of ‘database as discourse’ is explored and further developed. Database as discourse establishes a machinic agency directed towards the individual in a specific mode of hailing. This mode of hailing in turn leads to a scattered form of subjectivity that is identified, with Michaela Ott and Gerald Raunig, as dividual. How does dividualization emerge from database infrastructure? What is the specific quality of the data that is produced by and harvested from in/dividuals into databases, and what are the consequences of such a shifted view?

Reality is depicted as a circle, which is cut into two parts by a line named model. One part is named data, the other part non-data.
→ author: Francis Hunger, published on: 2018-Jul-15

How to Hack Artificial Intelligence

Pattern Recognition (or so-called Artificial Intelligence) can be tricked. An overview.

Do you aim to become a luddite? Here is your guide to hacking pattern recognition and disturbing the technocratic wet dreams of engineers, managers, businesses and government agencies.

Most of the current processes attributed to Artificial Intelligence are actually pattern recognition[1], and artists and scientists[2] have begun to work with adversarial patterns, either to test the existing techniques or to initiate a discussion of the consequences of so-called Artificial Intelligence. They create disturbances and misreadings for trained neural networks[3] that get calculated against incoming data.

Do neural networks dream of sheep?

Janelle Shane looks into how neural networks mis-categorize information. In her article Do neural nets dream of electric sheep? she discusses some mis-categorizations by Microsoft's Azure Computer Vision API, which is used for creating automatic image captions.[4] Shane points out that the underlying training data seems to be fuzzy, since sheep got detected in many landscape pictures where there actually are none. »Starting with no knowledge at all of what it was seeing, the neural network had to make up rules about which images should be labeled ›sheep‹. And it looks like it hasn’t realized that ›sheep‹ means the actual animal, not just a sort of treeless grassiness.«[5]

Do neural nets dream of electric sheep? Example of mis-categorized images with no sheep in it, tested by Janelle Shane. (Shane 2018)

The author then looks into how this particular pattern-recognition API can be further tricked, pointing out that the neural network looks for sheep only where it actually expects them, for instance in a landscape setting. »Put the sheep on leashes, and they’re labeled as dogs. Put them in cars, and they’re dogs or cats. If they’re in the water, they could end up being labeled as birds or even polar bears. … Bring sheep indoors, and they’re labeled as cats. Pick up a sheep (or a goat) in your arms, and they’re labeled as dogs«, Shane mocks the neural network. I’ll call this the abuse-scope method. It applies whenever you can determine or reverse-engineer (a.k.a. guess) the scope and domain to which a neural network is directed, and insert information that lies beyond that scope. The abuse-scope method could be used for photo collages that trick a neural network while maintaining information relevant to humans.

According to Shane, NeuralTalk2 identified these goats in a tree eating argan nuts as »A flock of birds flying in the Air« and Microsoft Azure as »a group of giraffe standing next to a tree.« (image: Dunn 2015)

Shane went further and asked Twitter followers for images depicting sheep. Richard Leeming came up with a photo taken in the English countryside: sheep dyed orange to deter rustlers from stealing the animals.

Orange Sheep. Ambleside, England (Leeming 2016)

This photo is fucking with the neural network’s expectations and leads to a categorization as »a group of flowers in a field« (Shane 2018). Continue reading

→ author: Francis Hunger, published on: 2018-May-17

Deep Fake or Rendering the Truth

Panel at the European Media Art Festival 2018, April 20, Osnabrück

Moderation by Tobias Revell

Participants: Luba Elliot, Anna Ridler, Francis Hunger, Igor Schwarzmann

The ability of computers to fake reality convincingly is going to become more and more of a critical problem as hackers, extremist news organisations and politicians seek to control the media narrative through increasingly convincing visuals. The presentation includes the video ‘Synthesizing Obama’, which demonstrated the ability to synthesize a life-like rendering of Obama in real time.

Organized in collaboration with the Impakt Festival, the Netherlands /

Tobias Revell, Francis Hunger, Anna Ridler, Luba Elliot

→ author: Francis Hunger, published on: 2018-Apr-27

Transmediale: Biased Futures

Yuk Hui, Francis Hunger, Jussi Parikka, Ana Teixeira Pinto
Moderated by Jussi Parikka

Transmediale 2018, Berlin, 04.02.2018

As the mystification of artificial intelligence (AI) and fantasies of transhumanism continue to appear in fictions and speculations on possible futures, concerns arise about the biases and forms of discrimination that tomorrow’s systems might involve. These troubling aspects are exemplified by the Neoreactionary Movement’s interest in AI, which is based on the belief that technology can only serve humanity to its fullest if it is liberated from democratic standards. In order to critically examine the build-up of symbolic mystifications and real infrastructures of futuristic liberatory discourses, the speakers of this panel will speculate on the changes that AI can bring to territories, cultures, or groups of people, and discuss emerging political counter-fictions and imaginaries.




→ author: Francis Hunger, published on: 2018-Mar-14

Rosebuds – Hidden Stories of Things (Exhibition and Symposium)

exhibition at Kunstraum D21 Leipzig, curated by Lena Brüggemann, Fabian Reiman and Francis Hunger (Dec 27 2017–Jan 27 2018)

Rosebuds – Hidden Stories of Things (2017/18)


→ author: Francis Hunger, published on: 2018-Feb-04

Information vs. In-Formation

Information in relation to computers can be described in at least two ways. The most popular notion of information stems from Norbert Wiener’s 1940s concept, rooted in cybernetics. Information appears as a statistical property, where a time series of measurements is treated as a set of mathematical entities. Shannon/Weaver accordingly discuss a signal that is passed from sender to receiver. Information is defined as something that flows from A to B, encoded in a signal that is differentiated from noise.
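A brief illustrative sketch of this statistical view (my own example, not part of the text's argument): in the Shannon/Weaver sense, the information content of a message is a purely statistical property of its symbol frequencies, and meaning plays no role in the calculation.

```python
from collections import Counter
from math import log2

# Shannon entropy: information as a statistical property of symbols.

def shannon_entropy(message):
    """Average information per symbol, in bits."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Two messages with different "meanings" but identical symbol statistics
# carry exactly the same Shannon information:
print(shannon_entropy("aabb"))  # 1.0 bit per symbol
print(shannon_entropy("abab"))  # 1.0 bit per symbol
```

That the two prints agree makes the point of the paragraph that follows: this concept roots information entirely in the signal.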

However, the cybernetic view of information and data comprises the idea of a black-box, where input and output can be observed, but the inner workings remain unknown – which in turn allowed for a problematic conceptualization of the computer being analogous to the human brain. The black-box concept again served as the basis for a control-machinery that could employ feedback-mechanisms to control processes.

The major problem with this concept of information, widely used in media theory, is that it roots information in the signal and ignores the question of meaning.

In contrast, Markus Krajewski (Krajewski 2007; Krajewski 2011) developed a spatial notion of information, where information consists of data placed in a spatial dimension, accessible through diagrammatic operations of the human brain. He starts out from the librarian’s folio of the 1700s, which gets cut into single sheets (or later cards) to hold the information on each book, organized through the use of spacing and typography specific to the data. The form of the table further evolves into the punched card, from which tape and disk memory as spatial organization emerged. Since this historical process brings data into formation, information for Krajewski is actually in-formation.

While Wiener’s notion of information is situated closer to the command-and-control structures of the war effort and to the cybernetic idea of feedback loops that should tend towards a state of equilibrium, Krajewski’s in-formation is more rooted in the bio-political techniques of statistical data collection and evaluation in bureaucratic and managerial practices. Both positions describe two ends of a continuum: while Wiener’s notions of signal, data, model and information refer to the machinic organization within today’s computing machinery, Krajewski’s notion of in-formation leans towards medial usage, which is shaped through the model, algorithms, code and database usage.

This supports a perception of in-formation where everything is calculable and can be expressed through a model. The model is the decision about which part of reality gets included as data and which part gets discarded. In this sense, data is what gets included or excluded.

The latter notion of in-formation has the advantage that data and in-formation are not just simply there, as is a signal that is on or off; rather, the in-formation object is something that has been created through intertwined human labour and machinic agency.


For a critical examination of the black-box concept in cybernetics see Galloway, Alexander R. (2011): »Black Boxes, Schwarzer Block«, in: Hörl, Erich (2011): Die technologische Bedingung. Beiträge zur Beschreibung der technischen Welt. Berlin: Suhrkamp Verlag, pp. 267–280; English version online as »Black Box, Black Bloc«.

Cf. Galison, Peter (1994): »The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision«, Critical Inquiry 21 (1): 228–266.

Krajewski, Markus (2007): »In Formation – Aufstieg und Fall der Tabelle als Paradigma der Datenverarbeitung«, in: Nach Feierabend: Zürcher Jahrbuch für Wissensgeschichte: Datenbanken. Zürich: Diaphanes, pp. 37–55.

Krajewski, Markus (2011): Paper Machines – About Cards & Catalogs, 1548–1929. Cambridge, Mass.: MIT Press (History and Foundations of Information Science).

→ author: Francis Hunger, published on: 2018-Jan-25

Artificial Des-Intelligence or Why machines will not take over the world. At least not now.

Part I: There is no Artificial Intelligence.

Illustration by Karen Zack, March 2016,

It’s pattern recognition, stupid!

A friend of mine recently exclaimed that since her Siri speech recognition has become much better, compared to speech recognition ten years ago, Artificial Intelligence (AI) now has the potential to rule the world. What if there is no Artificial Intelligence at all? What if the so-called AI revolution is indeed an enhanced form of pattern recognition? I agree that today’s pattern recognition shows better quality in recognizing patterns in language, images, orientation and similar fields. But is pattern recognition equal to intelligence, even to human intelligence?

Pattern recognition is about perception, and it is about statistical inference over a body of data. These are two areas that have become increasingly better over the past decade. Not only have businesses (like Amazon or Google) developed new techniques for distributed large-scale computing using consumer hardware in large quantities. They have also developed decentralized, large-scale solutions for data storage, labeled Big Data, which form the basis for more successful statistical inference. We see how both these quantitative changes have turned into a perceived new quality of enhanced pattern recognition (EPR).

Better algorithms to search unstructured information

Three factors play into the overall growth in automation. First of all, search-engine technology has grown and become better at sifting through large amounts of structured and unstructured data, especially since Google introduced tools such as MapReduce and Bigtable in the mid-2000s, and since new open-source software for data mining in unstructured information collections, such as Hadoop, became available. Continue reading
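The MapReduce idea named above can be sketched as a toy, single-process word count (the real Google MapReduce and Hadoop distribute these phases across many machines; this is only an illustration of the programming model):

```python
from collections import defaultdict
from itertools import chain

# Toy sketch of the MapReduce model: a map phase emits key/value pairs,
# a shuffle/reduce phase groups them by key and aggregates.

def map_phase(documents):
    """Map: emit (word, 1) for every word in every document."""
    return chain.from_iterable(
        ((word, 1) for word in doc.split()) for doc in documents
    )

def reduce_phase(pairs):
    """Shuffle + reduce: group pairs by key and sum the counts."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["structured data", "unstructured data", "data mining"]
print(reduce_phase(map_phase(docs)))
```

Because the map phase is stateless per document and the reduce phase only depends on groups of identical keys, both phases parallelize over thousands of machines, which is what made sifting through unstructured collections at scale feasible.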

→ author: Francis Hunger, published on: 2017-Nov-15

Algorithms are made by humans

The artist Francis Hunger presents his video installation Deep Love Algorithm at the recent exhibition »Mood Swings – On Mood Politics, Sentiment Data, Market Sentiments and Other Sentiment Agencies«. In conversation with curator Sabine Winkler he explains why we should no longer talk fearfully of algorithms.

Sabine Winkler: Your video essay Deep Love Algorithm reconstructs the evolution and history of databases as a love story between Margret, a cyborg and writer, and Jan, a journalist. Margret embodies a kind of resistant position emerging from history, and also a linkage between the human and technology. The relation of human and database (technology), told through a failing love story, is a unique approach. How did this topic evolve, or rather, how is this relationship structured, and why does it fail?

Francis Hunger: Margret is not necessarily a cyborg; actually, it is only implied that she has lived longer than her appearance suggests. This doesn’t, however, exclude that she is a cyborg. The original idea for Margret was to create a figure who travels through time. A figure who, unlike the ahistorical Samantha from the movie Her or the film figure Adaline, was and is part of political fights.

Continue reading

→ author: Francis Hunger, published on: 2017-Sep-08