From Table to Database – Teaching a one day workshop at Aarhus University

Teaching

At the invitation of Prof. Magda Tyzlik-Carver, I gave a one-day workshop at the School of Communication and Culture for Digital Design students.

In the morning we looked into individual and shared table practices, examining how ideas, calculations, projects, plans and calendars get tabulated, and then turned to four specific table categories: mathematical tables, knowledge tables, statistical tables and transaction tables.

The second part was an introduction to SQL (Structured Query Language), demonstrating in practice how, in databases, the reading and writing of tables became a formalized and computable practice of querying in-formation.
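The querying practice demonstrated in the workshop can be illustrated with a minimal sketch using Python's built-in sqlite3 module (the table name and values here are invented for illustration):

```python
import sqlite3

# An in-memory database: the table schema decides which attributes
# become data at all.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE expenses (item TEXT, amount REAL)")
con.executemany("INSERT INTO expenses VALUES (?, ?)",
                [("paper", 3.50), ("ink", 12.00), ("paper", 1.20)])

# Reading the table is a formalized, computable act: a query.
total = con.execute(
    "SELECT item, SUM(amount) FROM expenses GROUP BY item ORDER BY item"
).fetchall()
print(total)  # [('ink', 12.0), ('paper', 4.7)]
```

The point of the demonstration is that tabulation, which used to be a manual practice, here becomes a declarative statement the machine executes.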

Looking for a lecturer? Contact me.

 

Hartmut Winkler on tables
→ author: Francis Hunger, published on: 2019-Nov-15

information model–data–algorithm

Originally published on nettime https://nettime.org/Lists-Archives/nettime-l-1910/msg00027.html in reaction to an open call.

 

Hi Hanns and everybody,

> Rather than understanding algorithms as existing and transparent tools,
> the ALMAT Symposium is interested in their genealogical, processual
> aspects and their transformative potential. We seek critical approaches
> that avoid both mystification and commodification, that aim at opening
> the black box of “wonder” that is often presented to the public when
> utilising algorithms.

That’s very much needed. And I think there is a conceptual problem which this conference shares with many others that talk about “the algorithm”. I agree that the specialized field of generative art concentrates on algorithms (those that generate the visual or auditory experience) and that algorithms matter on a larger scale in optimization (like B-tree sorting, or the fast gradient sign method in pattern recognition).

However, from the perspective of “gray media” (Fuller/Goffey) and “logistical media” (Rossiter) on the one hand, and “habitual media” (Wendy Hui Kyong Chun) on the other, I think “algorithm” is the wrong terminology. Approaching it from the perspective of the database, and referring to actual practices of application programming, I would argue that algorithms are a minor issue. Of much more importance is the information model.

The information model is usually the decision about which information, and subsequently which data, should be included in the processable reality of computing, and what to exclude. In short: data is what gets included according to the information model. Everything else is non-data, non-existent (under the closed world assumption) to the computer. So if you aim to look into the genealogy of algorithms, you may look into mathematics and maybe operations research.
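The closed world assumption can be made concrete with a small sketch (the schema and records are invented for illustration): whatever the information model does not include simply does not exist for the query.

```python
import sqlite3

# The information model: a person is reduced to name and salary.
# Everything else (care work, health, mood) is non-data by design.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (name TEXT, salary REAL)")
con.execute("INSERT INTO person VALUES ('Ada', 4200.0)")

# Under the closed world assumption, a query about anyone not in the
# table returns an empty result: to the database, they do not exist.
rows = con.execute("SELECT * FROM person WHERE name = 'Grace'").fetchall()
print(rows)  # []
```

The empty result is not an error; the system treats absence from the model as non-existence.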

You will, however, miss out on the genealogy of _data_ and the material qualities of the _information model_. If we look, for instance, into how bias enters software, we usually won’t find much in algorithms. B-tree sorting or the training of a neural network is always tied to weights, and actually needs and creates bias.

Since a computer cannot understand meaning, meaning needs to be ascribed (through classification), which is done by the mentioned algorithms moving numerical weights towards a certain result that is meaningful to humans. Much more relevant to the question of bias is how the _information model_ is organized, because it inscribes the reality of the computable.

Much more relevant is the question of how _data_ is collected, curated and used, as we can see in the current projects of Adam Harvey (https://megapixels.cc/), !Mediengruppe Bitnik (https://werkleitz.de/en/ostl-hine-ecsion-postal-machine-decision-part-1), or the Data Workers Union (https://dataworkers.org/).

I get that ‘algorithm’ is often used as a common notion, in a similarly blurry way as ‘digital’. However, a stronger concern for the information model and for data would open up the avenue for a stronger political stance, since it looks into who decides about inclusions and exclusions, and how these decisions are shaped. I’m talking about identifying addressable actors who can be held responsible. So let’s look further into the trinity: information model–data–algorithm (and the infrastructure in and around it).

best Francis

→ author: Francis Hunger, published on: 2019-Oct-10

Database Histories Workshop in Siegen

Thomas Haigh talks about his work on the revised version of A Modern History of Computing with Paul E. Ceruzzi.

On July 4, 2019, a Database Histories workshop took place at the SFB 1187, Medien der Kooperation at Siegen University, initiated by Thomas Haigh. There I presented one part of my Ph.D. research, from the chapter »Unified Software Within the Discourse of the GDR as Socialist State.« A fruitful discussion centered around the question: how central are databases for Enterprise Resource Planning (ERP) systems? The full program: http://www.socialstudiesof.info/issi/dbhist2019/

→ author: Francis Hunger, published on: 2019-Jul-04

Subversion and Infrastructural Inversion of Predictive Assemblages

Lecture at Gesellschaft für Wissenschafts- und Technikforschung (Society for Science and Technology Studies), annual conference

The model distinguishes between what is data and what is non-data. It is a reference to reality, much more than the algorithm.

 

Instead of focusing on »the algorithm«, the trinity of Model–Data–Algorithm should be taken into account, since the algorithm alone indicates nothing, while data is a reference to real-world events, and the model is the decision about what gets included in and what gets excluded from computation.

Prediction, from this perspective, is statistical correlation. The lecture further demonstrated a few examples of how prediction can be tricked by introducing smut or fake data that lies beyond the scope of the model.

Further Info at: http://gwtf.de/ (In German)

Invite me for a lecture!

→ author: Francis Hunger, published on: 2018-Dec-17

Digital Aesthetics

Digital aesthetics on a visual level is often equated with the pixel, and with any kind of pixelated structure. While at first glance the pixel appears to be a working, applicable metaphor for »the digital«, a cursory view reveals a few other possible candidates, which I’ll draft here.

When talking about the digital, the notion of digital aesthetics refers to things created with computers, narrowing the digital realm down to the electronic computer. The notion of the digital allows us to differentiate the digitized, distinguishable, separated, discrete and calculable from the notion of the analog, which is continuous and varies over time. The digital, in contrast to analog continuity, is a sequence of discrete units.
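This distinction between analog continuity and digital discreteness can be sketched as sampling and quantization (the signal and resolution here are invented for illustration): a continuous function is reduced to a finite sequence of discrete units.

```python
import math

# A continuous signal, here a sine wave, has a value at every instant...
signal = lambda t: math.sin(2 * math.pi * t)

# ...while its digital counterpart is a finite sequence of discrete,
# separately addressable units: sampled in time, quantized in value
# (8 samples, integer amplitudes in the range -127..127).
samples = [round(signal(n / 8) * 127) for n in range(8)]
print(samples)  # [0, 90, 127, 90, 0, -90, -127, -90]
```

Everything between two samples, and between two quantization steps, is lost; that loss is precisely what makes the sequence calculable.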

Looking into the history of output devices for digital calculation: diagrammatically organized rows and columns of blinking lights on early computers such as the Zuse Z3 or the UNIVAC, electromechanical printers (also diagrammatically organized through their monospaced fonts), and the tabular structure of the punch card informed digital aesthetics even before the use of pixelated cathode-ray-tube (CRT) monitors. While these output devices may have been used for artistic creation, to my knowledge such uses remained marginal.

Yet there are other devices that shaped very early digital aesthetics. The mathematicians Georg Nees and Frieder Nake both used a plotter from the mid-1960s on to generate vector graphics, and named the genre »Generative Computer Art«. One early plotter was the Zuse Z64 Graphomat, which read its data from a punched tape (connecting it to the above-mentioned punched card). The plotter as a device draws points and lines (or vectors, or geographic coordinates), so the programmed print definitions consisted mainly of start and end points to be connected by a line, and only to a small extent of individual points. To make a long argument short: the vector-oriented plotter was a digital output device even before the pixel-oriented raster CRT monitor came into widespread use for computer graphics. The raster CRT monitor only appeared when computers had enough capacity to actually calculate each pixel of a screen raster. Prior to that, CRT monitors were used with vector descriptions, which describe only the start and end point and possibly a third value for amplitude, saving scarce memory. In his Sketchpad dissertation of 1963, Ivan Sutherland notes that one point needs 20 bits to be described on the CRT display of the MIT Lincoln Laboratory TX-2 computer, and that points, lines and circles (or parts of circles) can be drawn on screen.
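Sutherland’s 20-bit point description hints at why vector displays saved scarce memory. A back-of-the-envelope comparison (the raster resolution is an assumption chosen purely for illustration):

```python
# A straight line as a vector description: two endpoints.
# Following Sutherland's figure of 20 bits per point on the TX-2 display:
vector_bits = 2 * 20

# The same line on a raster display: every pixel of the frame must be
# stored, even at 1 bit per pixel (512 x 512 is an illustrative size).
raster_bits = 512 * 512 * 1

print(vector_bits, raster_bits)        # 40 262144
print(raster_bits // vector_bits)      # 6553
```

Even under these generous assumptions, the raster representation costs several thousand times more memory than the vector description of a single line.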

So the aesthetics of the line dominated early digital aesthetics up to the mid 1980s [cf. Blobel/Schneider/Wegener: Prints & Plots: Computerkunst ’86. Gladbeck, 1986].

Frieder Nake: Hommage a Paul Klee, 1965, Source: Computers and Automation. Newtonville/Mass. no 8, August 1966, http://bitsavers.informatik.uni-stuttgart.de/pdf/computersAndAutomation/196608.pdf

Frieder Nake, Zufälliger Polygonzug, 1965, Source: Computers and Automation. Newtonville/Mass. no 8, August 1966, http://bitsavers.informatik.uni-stuttgart.de/pdf/computersAndAutomation/196608.pdf

Frieder Nake Rechteckschraffuren, 1965, Source: Computers and Automation. Newtonville/Mass. no 8, August 1966, http://bitsavers.informatik.uni-stuttgart.de/pdf/computersAndAutomation/196608.pdf

It may be necessary to introduce another distinction at this point. (Since this is only a blog post, I take the freedom of not looking intensively into this; very likely Frieder Nake or Georg Trogemann [Code und Material. Springer 2010] would be able to clarify.) The distinction is this: some of the prints of that time were generative in the sense that algorithms with variables were used to generate an image. These variables were often based on a re-calculation of the former value of the same variable, an iteration. The resulting aesthetics of early generative art can be described as modernist: often structured, often ornamental or organic, to a large extent non-figurative, and iterative. [Compare: http://dada.compart-bremen.de/browse/artwork]
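The iterative re-calculation described above can be sketched minimally (this is not Nake’s actual algorithm, just an illustration of the principle, in the spirit of a random polygon trace): each new point is derived from the former value of the same variables, and a plotter would connect consecutive points with lines.

```python
import random

random.seed(1965)  # fixed seed so the 'random' polygon is reproducible

# Start somewhere on an imagined plotting area...
x, y = 100.0, 100.0
polygon = [(x, y)]

# ...and derive each new point from the previous values of x and y:
# the iteration that drives early generative graphics.
for _ in range(9):
    x += random.uniform(-40, 40)
    y += random.uniform(-40, 40)
    polygon.append((round(x, 1), round(y, 1)))

# The drawing is then just the lines between consecutive points.
print(len(polygon))  # 10
```

The image is fully determined by the algorithm and its seed, yet reads as irregular, which is part of what made these works aesthetically interesting.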


→ author: Francis Hunger, published on: 2018-Oct-15

Surveillancism

I would like to critically explore the often-used notion of »surveillance« when it comes to data collections. We currently see a lot of approaches that address the surveillance aspect of data collection, for example: Branden Hookway’s Panspectron (Hookway/Kwinter/Mau 1999), Mark Poster’s Super-Panopticon (Poster 1995, 87), Didier Bigo’s Banopticon (Bigo 2006), Zygmunt Bauman’s Liquid Surveillance (Bauman/Lyon 2013), Shoshana Zuboff’s Information Panopticon and Surveillance Capitalism (Zuboff 2015), and Metahaven’s Black Transparency (Metahaven/Velden/Kruk (eds.) 2015), followed by emerging academic centers and publications on the subject of »surveillance studies«. The discourse about surveillance keeps us busy: in academia, in media art, in mass media, and in discussions at home with our partners and friends.

The major argument of surveillancism follows Michel Foucault’s discussion of Jeremy Bentham’s Panopticon prison concept and the practices in and around it, which Foucault developed in his book Surveiller et punir (Foucault 1976). A lot of the current discussion regarding data circles around this concept of Panopticism, demonstrating how strong Foucault’s Panopticon metaphor is. I can only note to what extent these arguments rely on one single theoretical impulse: the Foucauldian theorization of Bentham’s concept. For reasons of time and space I cannot go deeper into a possible critique of these theories and will only bring forward a few arguments for why I’m not using the surveillance metaphor.

Surveillance takes place. Data is part of electronic policing and serves as a basis for politics. Human rights activists, political opponents and victims of police actions know that, and so does everybody whose smartphone got confiscated at a demonstration, as happened here in Leipzig in January 2015 and as happens all over the world. Amazon’s warehouse workers and Uber drivers know it as well.

Still I would maintain:


→ author: Francis Hunger, published on: 2018-Sep-30

Summerschool – Center for Digital Cultures, Lueneburg

September 16-19, 2018

Faculty: Monika Dommann, Thomas Haigh, Ben Peters, Claus Pias, Daniela Wentz

The CDC summer school discussed issues along a set of questions:

1. concepts and theories

What are the theoretical models that are able to contribute to a better understanding of the history and historiography of digital cultures? But also: How do digital cultures affect and shape common and current theoretical models of (media) historiographies?

2. methods and methodologies

What are the methods that meet the challenge of bridging digital media technologies with the field of history? How do the methods of the digital humanities affect the methodology of historic research?

3. critical revision of the so-called digital history

Does the source under digital conditions also change the construction of history and the rhetorics of its narration? Which “politics of the archive” can be observed in the course of or as a result of digitization?

 

In this framework I presented an excerpt from the thesis that dealt with the histories of the relational database model. It discussed how three actors and their institutional backgrounds shaped early developments of what later became known as the relational model. E.F. Codd, C.T. Davies and David Childs, in various constellations, discussed issues of set theory, machine independence, data independence and time-sharing against the backdrop of the IBM System/360 and its predecessors. The talk aimed at decentering a narrative that concentrates solely on the person of E.F. Codd, putting it into the tension field between university and industry research.

It was great to meet this particular faculty, because they are all deeply involved in the histories of computing and were able to make very helpful suggestions. The fellow Ph.D. proposals were also great to discuss, ranging from algorithmic structures to the history of object-oriented programming to research about salesforce.com.

 

 

→ author: Francis Hunger, published on: 2018-Sep-20

Epistemic Harvest – The electronic database as means of data production.

Full text: http://www.aprja.net/epistemic-harvest-the-electronic-database-as-discourse-and-means-of-data-production/

The following discussion of computational capital takes the electronic database, an infrastructure for storing in-formation, as its vantage point. Following a brief look into how database systems serve in-formation desires, the notion of ‘database as discourse’ by Mark Poster is explored and further developed. Database as discourse establishes a machinic agency, directed towards the individual in a specific mode of hailing. This mode of hailing in turn leads to a scattered form of subjectivity that is identified, with Michaela Ott and Gerald Raunig, as dividual. How does dividualization emerge from database infrastructure? What is the specific quality of the data that is produced by and harvested from in/dividuals into databases, and what are the consequences of such a shifted view?

Reality is depicted as a circle, which is cut into two parts by a line named model. One part is named data, the other part non-data.
→ author: Francis Hunger, published on: 2018-Jul-15

How to Hack Artificial Intelligence

Pattern recognition (so-called Artificial Intelligence) can be tricked. An overview.

Do you aim to become a luddite? Here is your guide to hacking pattern recognition and disturbing the technocratic wet dreams of engineers, managers, businesses and government agencies.

Most of the current processes attributed to Artificial Intelligence are actually pattern recognition[1], and artists and scientists[2] have begun to work with adversarial patterns, either to test the existing techniques or to initiate a discussion of the consequences of so-called Artificial Intelligence. They create disturbances and misreadings for trained neural networks[3] that get calculated against incoming data.

Do neural networks dream of sheep?

Janelle Shane looks into how neural networks mis-categorize information. In her article Do neural nets dream of electric sheep? she discusses some mis-categorizations by Microsoft’s Azure Computer Vision API, used for creating automatic image captions.[4] Shane points out that the underlying training data seems to be fuzzy, since sheep got detected in many landscape pictures where there actually are none. »Starting with no knowledge at all of what it was seeing, the neural network had to make up rules about which images should be labeled ›sheep‹. And it looks like it hasn’t realized that ›sheep‹ means the actual animal, not just a sort of treeless grassiness.«[5]

Do neural nets dream of electric sheep? Example of mis-categorized images with no sheep in it, tested by Janelle Shane. (Shane 2018)

The author then looks into how this particular pattern recognition API can be tricked further, pointing out that the neural network looks for sheep only where it actually expects them, for instance in a landscape setting. »Put the sheep on leashes, and they’re labeled as dogs. Put them in cars, and they’re dogs or cats. If they’re in the water, they could end up being labeled as birds or even polar bears. … Bring sheep indoors, and they’re labeled as cats. Pick up a sheep (or a goat) in your arms, and they’re labeled as dogs«, Shane mocks the neural network. I’ll call this the abuse-scope method. It applies whenever you can determine or reverse-engineer (i.e. guess) the scope and domain to which a neural network is directed, and insert information that is beyond that scope. The abuse-scope method could be used for photo collages that trick a neural network while maintaining the relevant information for humans.

According to Shane, NeuralTalk2 identified these goats in a tree eating Argane nuts as »A flock of birds flying in the Air« and Microsoft Azure as »a group of giraffe standing next to a tree.« (image: Dunn 2015)

Shane went further and asked her Twitter followers for images depicting sheep. Richard Leeming came up with a photo taken in the English countryside, where orange-dyed sheep are meant to deter rustlers from stealing the animals.

Orange Sheep. Ambleside, England (Leeming 2016)

This photo is fucking with the neural network’s expectations and leads to a categorization as »a group of flowers in a field« (Shane 2018).

→ author: Francis Hunger, published on: 2018-May-17

Deep Fake or Rendering the Truth

Panel at the European Media Art Festival 2018, April 20, in Osnabrück

Moderation by Tobias Revell

Participants: Luba Elliot, Anna Ridler, Francis Hunger, Igor Schwarzmann

The ability of computers to fake reality convincingly is going to become more and more of a critical problem, as hackers, extremist news organisations and politicians seek to control the media narrative through increasingly convincing visuals. The presentation included the video ‘Synthesizing Obama’, which demonstrated the ability to synthesize a life-like rendering of Obama in real time.

Organized in collaboration with the Impakt Festival, the Netherlands / www.impakt.nl

Tobias Revell, Francis Hunger, Anna Ridler, Luba Elliot

→ author: Francis Hunger, published on: 2018-Apr-27