Bureaucracksy

Workshop at Constant, Brussels / online, Dec 7 – Dec 12, 2020

The work session Bureaucracksy brought together artivist practices around the imaginative re-appropriation of rules and regulations. It looked into critical attitudes, artistic positions and creative bureaucrats that re-purpose and re-orient the operators of bureaucratic power.

“This work-session investigates the governance of techno-social systems through the prism of bureaucracy. The execution of rules is an essential element of computation, of digital infrastructures, and of the societies that they operate with. Critical practices of listing, naming, filtering, tool making, care and sharing require that some books are kept” (from the announcement)

I gave a one-hour talk about Table Practices – Knowledge, Statistical, Transactional and Mathematical Tables.

https://constantvzw.org/site/-Bureaucracksy,231-.html?lang=en

→ author: Francis Hunger, published on: 2020-Dec-16

Vergiss den Algorithmus! Daten als Ziel gesellschaftlicher Intervention (Forget the Algorithm! Data as a Target of Societal Intervention)

Lecture at the invitation of Prof. Olia Lialina (Merz Akademie Stuttgart) and Stadtbibliothek Stuttgart, online

Images of the digital carry no reference to the materiality of the digital. Starting from this thesis, I will show why all the images of digitization that we have are above all images of what people imagine it to be, and not of what follows from a materiality of the digital.

From this perspective, what needs to be discussed comes into focus. Using historical examples, I show what a materiality of the digital can mean in the first place, which also sharpens the notion of digitization. I speak about three essential concepts: 1. data, 2. in-formation model, 3. algorithm.

Speaking too often of ›the algorithm‹ obstructs the view of where political intervention should begin. In my view, this point is the information model: the decision about how reality is modelled in the computer.

45-minute lecture + Q&A

→ author: Francis Hunger, published on: 2020-Dec-02

The Social Dilemma – Reaction

the engineer’s astonishment in The Social Dilemma at how “problem solved” has social repercussions #engineeringcultures

so the “father” has figured out what The Social Dilemma circumvents: the word #capitalism. Oops. I said it.

dealing in fetishized commodities was fair game as long as hardware was sold and surplus value skimmed, while exploiting audience #attention #labor is clearly disgusting

Now some scary, scary truths. Because #surveillance is something really scary. It affects you. Be scared! People love to hear that. Let’s not mention #collective data ownership. Let’s not even mention #communized social media.

Data is not being harvested or scraped or created or colonized. People and processes are pouring #data out. Data shoves itself towards becoming revenue and value.

Oops. I’m just an engineer. Let’s just automate this. No human is responsible. Problem solved. In the end it was those systems on their own that started it. #socialdilemma

Continue reading
→ author: Francis Hunger, published on: 2020-Sep-25

From Table to Database – Teaching a one-day workshop at Aarhus University

Teaching

At the invitation of Prof. Magda Tyzlik-Carver, I gave a one-day workshop at the School of Communication and Culture for Digital Design students.

In the morning we looked into individual and common table practices, asking how ideas, calculations, projects, plans and calendars get tabulated, and examined four specific table categories: mathematical tables, knowledge tables, statistical tables and transaction tables.

The second part was an introduction to SQL (Structured Query Language), demonstrating in practice how, in databases, the reading and writing of tables became a formalized and computable practice of querying in-formation.
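As a minimal sketch of the kind of exercise this involved (the table, columns and rows are illustrative, not the workshop's actual material), querying a small transaction table via Python's built-in sqlite3 module could look like this:

import sqlite3

# In-memory database: the table is declared as a schema, then queried.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE transactions (
        id     INTEGER PRIMARY KEY,
        date   TEXT,
        item   TEXT,
        amount REAL
    )
""")
con.executemany(
    "INSERT INTO transactions (date, item, amount) VALUES (?, ?, ?)",
    [("2019-11-01", "paper", 4.20),
     ("2019-11-03", "ink", 12.50),
     ("2019-11-07", "paper", 4.20)],
)

# Reading the table becomes a formal, computable statement: a query.
for row in con.execute(
        "SELECT item, SUM(amount) FROM transactions GROUP BY item"):
    print(row)  # e.g. ('ink', 12.5) and ('paper', 8.4)

The point of the demonstration is that reading a table is no longer leafing through pages but issuing a formalized statement that the machine executes.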

Looking for a lecturer? Contact me.

 

Hartmut Winkler on tables

→ author: Francis Hunger, published on: 2019-Nov-15

information model–data–algorithm

Originally published on nettime https://nettime.org/Lists-Archives/nettime-l-1910/msg00027.html in reaction to an open call.

 

Hi Hanns and everybody,

> Rather than understanding algorithms as existing and transparent tools,
> the ALMAT Symposium is interested in their genealogical, processual
> aspects and their transformative potential. We seek critical approaches
> that avoid both mystification and commodification, that aim at opening
> the black box of “wonder” that is often presented to the public when
> utilising algorithms.

That’s very much needed. And I think there is a conceptual problem which this conference shares with many others that talk about “the algorithm”. I agree that the specialized field of generative art concentrates on algorithms (which generate the visual or auditory experience) and that algorithms matter on a larger scale in optimization (like B-tree sorting or the fast gradient step method in pattern recognition).

However, from the perspective of “gray media” (Fuller/Goffey) and “logistical media” (Rossiter) on the one hand, and “habitual media” (Wendy Hui Kyong Chun) on the other, I think “algorithm” is the wrong terminology. Approaching the question from the perspective of the database, and referring to actual practices of application programming, I would argue that algorithms are a minor issue. Of much more importance is the information model.

The information model is the decision about which information, and subsequently which data, should be included in the processable reality of computing, and what should be excluded. In short: data is what gets included according to the information model. Everything else is non-data, or non-existent to the computer (under the closed world assumption). So if you aim to look into the genealogy of algorithms, you may look into mathematics and perhaps operations research.
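To make the inclusion/exclusion point concrete, here is a toy sketch in Python; the field names and the record are invented for illustration:

# The information model is the schema: whatever it does not name
# never becomes data at all.
PERSON_MODEL = ("name", "date_of_birth", "address")  # the decision

def include(raw_record: dict) -> dict:
    # Only attributes foreseen by the model enter the database;
    # everything else is non-data to the computer.
    return {field: raw_record.get(field) for field in PERSON_MODEL}

observed = {
    "name": "F. Hunger",
    "date_of_birth": "1976-01-01",
    "address": "Leipzig",
    "care_work_hours": 20,  # real, but not modelled: silently dropped
}
stored = include(observed)
print(stored)  # no trace of 'care_work_hours'

# Closed world assumption: what is not recorded is treated as absent.
print("care_work_hours" in stored)  # False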

You will, however, miss out on the genealogy of _data_ and the material qualities of the _information model_. If we look, for instance, into how bias enters software, we usually won’t find much in the algorithms. A B-tree sort or the training of a neural network is always tied to weights, and actually needs and creates bias.

Since a computer cannot understand meaning, meaning needs to be ascribed (through classification), which is done by the aforementioned algorithms moving numerical weights towards a result that is meaningful to humans. Much more relevant for the question of bias is how the _information model_ is organized, because it inscribes the reality of the computable.
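A deliberately crude sketch of that mechanism (all feature names and numbers invented): the “meaning” of the result exists only in our reading of a weighted sum.

# The model decides which features exist; the weights move the score.
features = {"income": 0.3, "postcode_risk": 0.9}   # what gets included
weights  = {"income": -1.0, "postcode_risk": 2.5}  # learned, i.e. biased

score = sum(weights[f] * value for f, value in features.items())
label = "deny credit" if score > 0 else "grant credit"
print(score, label)  # a number, plus the meaning we ascribe to it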

Much more relevant is the question of how _data_ is collected, curated and used, as we can see in the current projects of Adam Harvey (https://megapixels.cc/), !Mediengruppe Bitnik (https://werkleitz.de/en/ostl-hine-ecsion-postal-machine-decision-part-1), or the Data Workers Union (https://dataworkers.org/).

I get that ‘algorithm’ is often used as a common notion, in a similarly blurry way as ‘digital’. However, a stronger concern for the information model and for data would open up the avenue for a stronger political stance, since it looks into who decides about inclusions and exclusions, and how these decisions are shaped. I’m talking about identifying addressable actors who can be held responsible. So let’s look further into the trinity: information model–data–algorithm (and the infrastructure in and around it).

best Francis

→ author: Francis Hunger, published on: 2019-Oct-10

Database Histories Workshop in Siegen

Thomas Haigh talks about his work with Paul E. Ceruzzi on the revised version of A History of Modern Computing.

On July 4, 2019, a Database Histories workshop initiated by Thomas Haigh took place at the SFB 1187 Medien der Kooperation at Siegen University. There I presented one part of my Ph.D. research, from the chapter »Unified Software Within the Discourse of the GDR as Socialist State«. A fruitful discussion centered around the question: how central are databases for Enterprise Resource Planning (ERP) systems? The full program: http://www.socialstudiesof.info/issi/dbhist2019/

→ author: Francis Hunger, published on: 2019-Jul-04

Subversion and Infrastructural Inversion of Predictive Assemblages

Lecture at Gesellschaft für Wissenschafts- und Technikforschung (Society for Science and Technology Studies), annual conference

The model distinguishes between what is data and what is non-data. Much more than the algorithm, it is the reference to reality.

 

Instead of focusing on »the algorithm«, the trinity of Model-Data-Algorithm should be taken into account: the algorithm alone indicates nothing, while data is a reference to real-world events, and the model is the decision about what gets included in and what gets excluded from computation.

Prediction, from this perspective, is statistical correlation. The lecture further demonstrated a few examples of how prediction can be tricked by introducing smut or fake data that lies beyond the scope of the model.
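As an illustration of that tricking argument (not material from the lecture itself; the word list and inputs are invented), a minimal Python sketch: the model’s vocabulary is its whole world, and a token it does not foresee is simply non-data.

# A 'predictive' model is a correlation over the features its
# information model foresees; everything outside is invisible.
SPAM_WORDS = {"viagra", "lottery", "prince"}  # the model's entire world

def predict_spam(text: str) -> bool:
    tokens = text.lower().split()
    return any(token in SPAM_WORDS for token in tokens)

print(predict_spam("win the lottery now"))   # True: matches the model
print(predict_spam("win the l0ttery now"))   # False: 'l0ttery' is non-data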

Further Info at: http://gwtf.de/ (In German)

Invite me for a lecture!

→ author: Francis Hunger, published on: 2018-Dec-17

Digital Aesthetics

On the visual level, digital aesthetics is often equated with the pixel and any kind of pixelated structure. While at first glance the pixel appears to be a workable, applicable metaphor for »the digital«, even a cursory view reveals a few other possible candidates, which I’ll sketch here.

When talking about the digital, the notion of digital aesthetics refers to things created with a computer, narrowing the digital realm down to the electronic computer. The notion of the digital differentiates the digitized, distinguishable, separated, discrete and calculable from the analog, which is continuous and varies over time. The digital, in contrast to analog continuity, is a sequence of discrete units.
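A small Python sketch of this distinction (the signal and sampling rate are chosen arbitrarily): the analog function is defined for every instant, while its digitization is a finite sequence of discrete values.

import math

# The 'analog' signal is continuous: defined for every instant t.
def analog(t: float) -> float:
    return math.sin(2 * math.pi * t)

# Digitizing replaces it with a sequence of discrete, countable units.
samples = [round(analog(n / 8), 2) for n in range(8)]
print(samples)  # [0.0, 0.71, 1.0, 0.71, 0.0, -0.71, -1.0, -0.71]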

Looking into the history of output devices for digital calculation: the diagrammatically organized rows and columns of blinking lights in early computers such as the Zuse Z3 or the UNIVAC, electromechanical printers, likewise diagrammatically organized through their monospaced fonts, and the tabular structure of the punch card all informed digital aesthetics even before the use of pixelated cathode-ray-tube (CRT) monitors. While these output devices may have been used for artistic creation, to my knowledge such uses remained marginal.

Yet there are other devices that shaped very early digital aesthetics. The mathematicians Georg Nees and Frieder Nake both used a plotter from the mid-1960s on to generate vector graphics, naming the genre »Generative Computer Art«. One early plotter was the Zuse Z64 Graphomat, which read data from a punched tape (connecting it to the above-mentioned punched card). A plotter draws points and lines (or vectors, or geographic coordinates), so the programmed drawing definitions consisted mainly of start and end points to be connected by a line, and only to a small extent of individual points.

To make a long argument short: the vector-oriented plotter was a digital output device even before the pixel-oriented raster CRT monitor came into widespread use for computer graphics. The raster CRT monitor only appeared when computers had enough capacity to actually calculate each pixel of a screen raster. Prior to that, CRT monitors were used with vector descriptions, which specify only the start and end point, and possibly a third value for amplitude, saving scarce memory. In his 1963 Sketchpad dissertation, Ivan Sutherland notes that one point needs 20 bits to be described on the CRT display of the MIT Lincoln Laboratory TX-2 computer, and that points, lines and circles (or parts of circles) can be drawn on screen.
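To get a feel for the memory argument, a back-of-the-envelope calculation in Python; only the 20 bits per point is Sutherland’s figure, while the raster resolution below is my illustrative assumption:

# Why vector descriptions saved scarce memory: a rough comparison.
POINT_BITS = 20  # one point on the TX-2 display (Sutherland 1963)

# A line as a vector description: start point plus end point.
line_vector_bits = 2 * POINT_BITS  # 40 bits, regardless of line length

# The same line on an assumed 1024 x 1024 raster at 1 bit per pixel
# requires storing the whole frame.
raster_bits = 1024 * 1024 * 1

print(line_vector_bits, raster_bits, raster_bits // line_vector_bits)
# 40 1048576 26214 -> orders of magnitude more memory for the raster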

So the aesthetics of the line dominated early digital aesthetics up to the mid-1980s [cf. Blobel/Schneider/Wegener: Prints & Plots: Computerkunst ’86. Gladbeck, 1986].

Frieder Nake: Hommage a Paul Klee, 1965, Source: Computers and Automation. Newtonville/Mass. no 8, August 1966, http://bitsavers.informatik.uni-stuttgart.de/pdf/computersAndAutomation/196608.pdf

Frieder Nake: Zufälliger Polygonzug, 1965, Source: Computers and Automation. Newtonville/Mass. no 8, August 1966, http://bitsavers.informatik.uni-stuttgart.de/pdf/computersAndAutomation/196608.pdf

Frieder Nake: Rechteckschraffuren, 1965, Source: Computers and Automation. Newtonville/Mass. no 8, August 1966, http://bitsavers.informatik.uni-stuttgart.de/pdf/computersAndAutomation/196608.pdf

It may be necessary to introduce another distinction at this point. (Since this is only a blog post, I take the freedom of not looking intensively into this; very likely Frieder Nake or Georg Trogemann [Code und Material. Springer 2010] would be able to clarify.) The distinction is this: some of the prints of that time were generative in the sense that algorithms with variables were used to generate an image. These variables were often based on a recalculation of the former value of the same variable, i.e. an iteration. The resulting aesthetics of early generative art can be described as modernist: often structured, often ornamental or organic, to a large extent non-figurative and iterative. [Compare: http://dada.compart-bremen.de/browse/artwork]
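In the spirit of such iterative pieces (explicitly not a reconstruction of Nake’s actual program), a few lines of Python that derive each new point from the previous value of the same variables and write the result as an SVG polyline:

import random

# Each new point is a recalculation of the former value of the same
# variables: an iteration, rendered as connected lines, plotter-style.
random.seed(23)
x, y = 200.0, 200.0
points = [(x, y)]
for _ in range(40):
    x += random.uniform(-20, 20)  # new value derived from the old one
    y += random.uniform(-20, 20)
    points.append((x, y))

path = " ".join(f"{px:.1f},{py:.1f}" for px, py in points)
svg = (f'<svg xmlns="http://www.w3.org/2000/svg" width="400" height="400">'
       f'<polyline points="{path}" fill="none" stroke="black"/></svg>')
with open("polygonzug.svg", "w") as f:
    f.write(svg)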

Continue reading

→ author: Francis Hunger, published on: 2018-Oct-15

Surveillancism

I would like to critically explore the often-used notion of »surveillance« when it comes to data collections. We currently see a lot of approaches that address the surveillance aspect of data collection, for example: Branden Hookway’s Panspectron (Hookway/Kwinter/Mau 1999), Mark Poster’s Super-Panopticon (Poster 1995, 87), Didier Bigo’s Banopticon (Bigo 2006), Zygmunt Bauman’s Liquid Surveillance (Bauman/Lyon 2013), Shoshana Zuboff’s Information Panopticon and Surveillance Capitalism (Zuboff 2015), and Metahaven’s Black Transparency (Metahaven/Velden/Kruk (eds.) 2015), followed by emerging academic centers and publications on the subject of »surveillance studies«. The discourse about surveillance keeps us busy: in academia, in media art, in mass media, and in discussions at home with our partners and friends.

The major argument of surveillancism follows Michel Foucault’s discussion of Jeremy Bentham’s Panopticon prison concept and the practices in and around it, which Foucault developed in his book Surveiller et punir (Foucault 1976). A lot of the current discussion regarding data circles around this concept of Panopticism, demonstrating how strong Foucault’s Panopticon metaphor is. I can only note to what extent these arguments rely on one single theoretical impulse: the Foucauldian theorization of Bentham’s concept. For reasons of time and space I cannot go deeper into a possible critique of these theories, and will only bring forward a few arguments for why I am not using the surveillance metaphor.

Surveillance takes place. Data is part of electronic policing and serves as a basis for politics. Human rights activists, political opponents and victims of police actions know that; so does everybody whose smartphone got confiscated at a demonstration, as happened here in Leipzig in January 2015 and as happens all over the world. Amazon’s warehouse workers and Uber drivers know it as well.

Still I would maintain:

Continue reading

→ author: Francis Hunger, published on: 2018-Sep-30

Summer School – Center for Digital Cultures, Lueneburg

September 16-19, 2018

Faculty: Monika Dommann, Thomas Haigh, Ben Peters, Claus Pias, Daniela Wentz

The CDC summer school discussed issues along a set of questions:

1. concepts and theories

What theoretical models can contribute to a better understanding of the history and historiography of digital cultures? And conversely: how do digital cultures affect and shape common and current theoretical models of (media) historiography?

2. methods and methodologies

What methods meet the challenge of bridging digital media technologies with the field of history? How do the methods of the digital humanities affect the methodology of historical research?

3. critical revision of the so-called digital history

Does the source under digital conditions also change the construction of history and the rhetoric of its narration? Which “politics of the archive” can be observed in the course of, or as a result of, digitization?

 

In this framework I presented an excerpt from the thesis dealing with the histories of the relational database model. It discussed how three actors and their institutional backgrounds shaped the early developments of what later became known as the relational model. E.F. Codd, C.T. Davies and David Childs, in various constellations, discussed issues of set theory, machine independence, data independence and time-sharing against the backdrop of the IBM System/360 and its predecessors. The presentation aimed at decentering a narrative concentrated solely on the person of E.F. Codd, placing it instead in the field of tension between university and industry research.

It was great to meet this particular faculty, because they are all deeply involved in the histories of computing and were able to make very helpful suggestions. The fellow PhD proposals were also great to discuss, ranging from algorithmic structures to the history of object-oriented programming to research about salesforce.com.


→ author: Francis Hunger, published on: 2018-Sep-20