Artificial Intelligence or Why machines will not take over the world. At least not now.

Part I: There is no Artificial Intelligence.

 

It’s pattern recognition, stupid!

A friend of mine recently exclaimed that since her Siri speech recognition has become much better than the speech recognition of ten years ago, Artificial Intelligence (AI) now has the potential to rule the world. What if there is no Artificial Intelligence at all? What if the so-called AI revolution is in fact an empowered form of pattern recognition? I agree that today’s pattern recognition is better at recognizing patterns in language, images, orientation and similar fields. But is pattern recognition equal to intelligence, or even to human intelligence?

Pattern recognition is about perception, and it is about statistical inference over a body of data. Both areas have become markedly better over the past decade. Not only have businesses (like Amazon or Google) developed new techniques for distributed large-scale computing using consumer hardware in large quantities; they have also developed decentralized, large-scale solutions for data storage, labeled Big Data, which form the basis for more successful statistical inference. We see how both of these quantitative changes have turned into a perceived new quality of empowered pattern recognition (EPR).

Better algorithms to search unstructured information

Three factors play into the overall growth in automation. First of all, search-engine technology has grown better at sifting through large amounts of structured and unstructured data, especially since Google introduced tools such as MapReduce and Bigtable in the mid-2000s, and since open-source software for data mining in unstructured information collections, such as Hadoop, became available. Structured data is, for example, held in tables, where each column contains a certain kind of information (e.g. a date, the weather condition, a color) and each row represents a record. Unstructured data, by contrast, is just loosely broken into a new line for each record; one can never say where a certain piece of information is held within the line, or whether it is there at all. Unstructured data collections also hold data that is unrecognized in the sense that they contain »words« whose meaning is unclear to the algorithm. Machine searching of unstructured information has become better.
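
The difference is easy to make concrete in code. Below is a minimal sketch (in Python, with made-up records): the structured row can be addressed by column name, while the same fact must first be searched for in the unstructured line, with no guarantee that it is there.

import csv, io, re

# Structured: every column has a known meaning, every row is a record.
structured = io.StringIO("date,weather,color\n2017-10-15,rain,red\n")
for row in csv.DictReader(structured):
    print(row["weather"])  # addressable directly by column name: 'rain'

# Unstructured: one record per line, but where (and whether) a given piece
# of information occurs is unknown; it has to be searched for.
unstructured = "On 15 October it rained all day and the red flag hung limp."
match = re.search(r"\brain\w*", unstructured)
print(match.group() if match else "no weather information found")  # 'rained'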

Attention direction through sentiment analysis

Secondly, sentiment analysis (the analysis of the meaning of a succession of words) has become somewhat better, based on statistical learning techniques. This has driven advertising platforms such as Google AdWords, which analyzes for instance users’ emails or website contents to provide possibly related ads, and Facebook’s ability to analyze user-generated content streams and appropriate them as pools of attention. For the past 200 years attention has been the driver of value generation in traditional mass media: generating relevant content such as news or entertainment to redirect the reader’s attention towards advertisements. Niklas Luhmann notes that mass media does not necessarily produce consent; it lives from debate, critique and dissent, so that readers engage with it mentally. It seems that this mixture of consent and dissent continues to create spaces of attention in individualized mass entertainment such as Facebook, Twitter and the like.

There is no learning in machine learning (not even recognizing)

Third, the area of pattern recognition for visual and auditive content has advanced in terms of its algorithms. This field has been dubbed »machine learning«, a term coined by IBM engineer Arthur Samuel in 1959.[1] The term on the one hand informs us that it is about machines, and machines it is: calculation machines turned into symbol-processing machines. Computers are Tayloristic machines for the division of mental labor into its smallest calculable pieces. What about the learning part? Learning is used here in a very specific and narrow sense – in the sense of generating meaning in information through statistical inference over large amounts of existing, categorized information. In practice this means that in a first step specialists need to train a specific neural network by feeding it massive amounts of information, such as pictures of dogs that have actually been labeled »dog« (by humans, who have assigned the meaning in a conscious act). Or you buy a pre-trained set and hope it works. During its constitution the neural network »looks« at these pictures, which means it processes all pixels of an image and calculates the pixels’ values to identify repetitions and similarities in color, brightness, position and the like among the different images labeled »dog«. »Machine learning is very brittle, and it requires lots of preparation by human researchers or engineers, special-purpose coding, special-purpose sets of training data, and a custom learning structure for each new problem domain. Today’s machine learning is not at all the sponge-like learning that humans engage in.« (Rodney Brooks)[2]
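
What such feeding amounts to can be sketched in a few lines. The following toy example is not any real system’s training code: random pixel vectors stand in for labeled photos, and a single artificial »neuron« stands in for a deep network. The principle, however, is the same – adjust weights until the output correlates with the human-assigned labels.

import numpy as np

rng = np.random.default_rng(0)

# 200 hypothetical »images«: 32x32 grey values flattened to vectors, each
# carrying a human-assigned label (1 = »dog«, 0 = not). The labels here are
# derived from mean brightness purely so the toy problem is learnable.
X = rng.random((200, 32 * 32))
y = (X.mean(axis=1) > 0.5).astype(float)

# One artificial »neuron«: its weights are nudged until its output
# statistically matches the given labels. Nothing more is »learned«.
w, b, lr = np.zeros(X.shape[1]), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of »dog«
    grad = p - y                            # cross-entropy gradient
    w -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean()

print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")

The sketch restates Brooks’ point in miniature: the data, the labels and the training loop all have to be supplied by humans before anything resembling recognition appears.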

Not only should we use the term empowered pattern recognition instead of the wrong label »artificial intelligence«; we should also no longer talk of machine learning but of machine feeding when it comes to the training of neural networks.

It exists if you can calculate it

A given neural network is restricted to information that can actually be calculated, information that exists as a stream of distinct numbers. All other information is non-existent to the neural network. Unlike humans, a computer neural network trained to process visual information cannot – upon realizing it cannot »recognize«, or better, process information – decide by itself to fall back on other senses such as touch or hearing to generate meaning from a given thing.
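
To see how narrow this restriction is, consider what a picture looks like from the network’s side: nothing but a flat stream of numbers (the values below are made up for illustration).

import numpy as np

# A hypothetical 2x2 grey image: four brightness values, nothing else.
image = np.array([[0.1, 0.8],
                  [0.7, 0.2]])
print(image.flatten())  # [0.1 0.8 0.7 0.2] – the network’s entire »world«
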
Microsoft’s engineers learned this the hard way with their public »AI« Tay. When they put Tay online, equipped with neural-network feeding algorithms that needed user input as a basis for functioning, the users understood: a group of trolls, techies and teenage geeks fed Tay with racist and anti-Semitic information and turned it into an extremist right-wing propaganda blurter. They might just as well have decided to pick up on any other bias. Because it is what it is: the algorithms are there, but the input training data is not only the basis for recognizable patterns but also for bias. The trained neural network calculates the »meaning«, or let’s better say the statistical correlation, of incoming information against the existing data body. Microsoft’s engineers switched off Tay after less than 24 hours. They tried to better it, to repair it, so it would »better recognize malicious intent«.[3] Are Microsoft’s engineers programmers, or are they educators, or are they psychotherapists?

When the industry today uses the term machine learning, it is willingly deceiving us, because, as I have argued, there is no learning in machines.

Splitting it off

Another discussion looms behind these developments: the questioning of the premises that led to comparing the human brain with neural networks. Proponents of this idea cannot think of the human brain other than as an apparatus that computes. Only on this premise do they have reason to believe they could eventually produce a machine, equipped with algorithms, that could replace the human brain. They are not completely wrong: indeed, certain parts of human brain activity can be described through calculation processes similar to neural networks. But the story is far more complex.

It makes little sense to reduce thinking to the brain and simply ignore that the whole body, with its nervous system, is involved in thinking. I can tell from my gut.
In addition, there is not one single mode of thinking, as Kevin Kelly, editor of the Whole Earth Catalog and co-founder of Wired, reminds us: »We contain multiple species of cognition that do many types of thinking: deduction, induction, symbolic reasoning, emotional intelligence, spatial logic, short-term memory, and long-term memory.«[4] And even within Kelly’s type-of-thinking list there is an emptiness that is typical of many engineers, mathematicians and the like. They are missing out on that part of the body that is dirty, that is emotional or dysfunctional. They are missing out on the effects on thinking conditioned by low blood pressure, by depression, or by desire. In short, they are missing out on those psychic and physiological effects that make each body individual. What they are left with is a premise – the human brain is a computer – built on splitting off the incalculable.

Input, Output and what happens in-between

Even when thinking is reduced to logical brain functions, the degree of the brain’s complexity was shown recently when the question was turned upside down. Instead of applying neuroscience models to the human brain, the scientists Eric Jonas and Konrad Kording evaluated standard neuroscience models against a vintage microprocessor. It once sat in computers such as the Atari 800, the Apple I or the Commodore VIC-20 of the late 1970s and early 1980s, and it has the property that one can determine every actual state the processor is in during a calculation process. It was expected that the neuroscience models could explain what the computer was actually computing, but surprisingly they didn’t. Jonas and Kording question whether these models should then be used for researching the functioning of the human brain, because the actual state of the brain’s »computing« is largely unknown, and the brain is by orders of magnitude more complex than a MOS 6502 integrated circuit. Still, they must have had fun, especially when they used the classic games Donkey Kong, Pitfall and Space Invaders to evaluate the computer’s »brain«.[5]

It may be possible to produce complex neuro-models that compute information with outcomes that look similar to what the brain does.

The outcome, however, for instance the decision whether a picture depicts a dog or a muffin, does not say anything about the underlying computational processes, even when it seems close to human perception. »Deep learning, however, produces a convolutional neural network that may not so easily reveal its weights and thresholds, nor how it has learned to ›chunk‹ the gridded input.«[6] What appears as Artificial Intelligence turns out to be a combination of algorithms and input data able to produce output that is statistically close to the human perception of distinct chunks of reality.[7]
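
A sketch with made-up numbers shows what such an output is: the network emits scores, a softmax turns them into probabilities, and the »decision« is whichever number is larger. Nothing in these numbers distinguishes perceiving a dog from correlating pixels.

import numpy as np

logits = np.array([2.1, 1.9])                  # hypothetical network scores
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores to probabilities
for label, p in zip(["chihuahua", "muffin"], probs):
    print(f"{label}: {p:.2f}")                 # ~0.55 vs. ~0.45 – a close call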

There is no artificial intelligence

The Portland-based copywriter Karen Zack (Twitter: @teenybiscuit) created a series of pictures from late 2015 until March 2016 that turned into memes. She used Google image search and then rearranged the pictures of cats and biscuits, or chihuahuas and muffins, using her phone’s album function.[8] Flip Phillips, professor of neuroscience and psychology at Skidmore College, ran this meme through the image-recognition algorithm of the online calculation engine Wolfram Alpha (built on Mathematica) and demonstrated a 50% hit rate and a 10–15% false-alarm rate.[9] This is where we stand with pattern recognition using day-to-day tools such as Wolfram Alpha.
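
The two figures are standard signal-detection measures; with hypothetical counts (Phillips’ raw numbers are not reproduced here), the arithmetic looks like this:

# Hypothetical counts for, say, 20 chihuahua and 20 muffin pictures:
hits, chihuahuas = 10, 20      # chihuahuas correctly identified as such
false_alarms, muffins = 3, 20  # muffins wrongly identified as chihuahuas

print(f"hit rate:         {hits / chihuahuas:.0%}")       # 50%
print(f"false-alarm rate: {false_alarms / muffins:.0%}")  # 15%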

Above I have shown that the terms used in the field of Artificial Intelligence are often inappropriate and misleading. It would be worthwhile to analyze them further with regard to the wishes, fears, traumata and desires they express. This analysis could be extended to related terms such as »the cloud«, »big data«, »self-driving car«, »data mining«, »crowd-sourcing« and so on.

I have also briefly discussed the premises that reduce brain activity to something that is computable[10] and how they determine false claims about AI. It becomes obvious that we need to develop another language to talk about empowered pattern recognition (EPR). We need an understanding of what EPR means as a machine-algorithmic technology that does not lead to a super-intelligence, but has the potential to replace certain areas of human activity and labor.

 

Notes

[1] Cf. Roger Parloff, http://fortune.com/ai-artificial-intelligence-deep-machine-learning/; more on the genesis of machine learning by Eren Golge at https://chatbotnewsdaily.com/since-the-initial-standpoint-of-science-technology-and-ai-scientists-following-blaise-pascal-and-804ac13d8151

[2] Rodney Brooks, https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions/

[3] Peter Lee, https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/

[4] Kevin Kelly, https://backchannel.com/the-myth-of-a-superhuman-ai-59282b686c62

[5] Eric Jonas & Konrad Paul Kording, http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268

[6] Ken Regan, https://rjlipton.wordpress.com/2016/02/07/magic-to-do/

[7] Cf. Lance Fortnow, http://blog.computationalcomplexity.org/2017/04/understanding-machine-learning.html

[8] Karen Zack, https://twitter.com/teenybiscuit/status/707004279324696577/photo/1

[9] Flip Phillips, https://academics.skidmore.edu/blogs/flip/?p=712

[10] Ed Yong, https://www.theatlantic.com/science/archive/2017/02/how-brain-scientists-forgot-that-brains-have-owners/517599/

→ author: Francis Hunger, published on: 2017-Oct-15

Algorithms are made by humans

The artist Francis Hunger presents his video installation Deep Love Algorithm in the current exhibition »Mood Swings – On Mood Politics, Sentiment Data, Market Sentiments and Other Sentiment Agencies«. In conversation with curator Sabine Winkler he explains why we should no longer talk fearfully of algorithms.

Sabine Winkler: Your video essay Deep Love Algorithm reconstructs the evolution and history of databases as a love story between Margret, a cyborg and writer, and Jan, a journalist. Margret embodies a kind of resistant position emerging from history, and also a linkage between the human and technology. The relation of human and database (technology), told through a failing love story, is a unique approach. How did this topic evolve, or rather: how is this relationship structured, and why does it fail?

Francis Hunger: Margret is not necessarily a cyborg; actually, it is only implied that she has lived longer than her appearance suggests. This doesn’t, however, exclude that she is a cyborg. The original idea for Margret was to create a figure who travels through the times – a figure who, unlike the ahistorical Samantha from the movie Her or the movie figure Adaline, was and is part of political struggles.


→ author: Francis Hunger, published on: 2017-Sep-08

Review of »Search Routines« by Neural

»These essays constitute an added value, providing an extended account of databases’ historical development, informing the reader about a strategic past that is often overlooked.« Read the full review on neural.it.

→ author: Francis Hunger, published on: 2017-Jun-30

Inside the Data Bank. Privatizing profits, socializing losses.

Lecture at the Kulturen des Kuratorischen in May 2017 for the seminar »Leaking Bodies (and Machines)« by Julia Kurz and Anna Jehle. The session was joined by students of Peggy Buth who work on the topic of Big Data and post-Internet art.

Lecture and discussion about data banks

While the seminar was developed around reading Updating to Remain the Same: Habitual New Media (MIT Press) by Wendy Hui Kyong Chun, Francis Hunger was invited to add a more materialist perspective. Two lectures were delivered. The first developed a historical perspective on the development of computing technology in general and discussed relational databases in closer detail. It gave an overview of the state of the art in computing technology and interconnected this with the social dimension of labor and with cultural questions such as the crisis of the public/private and the cultural function of the archive. The second lecture introduced basic concepts regarding electronic infrastructure as developed by Bowker, Ruhleder and Star, and ended with a discussion of data centers, the ideology of the »cloud«, and the function and meaning of data in »big data«. The following discussion evolved into a rather broad and general argument about how the mediocene emerged and influences current social relations.

→ author: Francis Hunger, published on: 2017-Jun-15

Deep Love Algorithm exhibited and Database Talk in Osnabrück

Deep Love Algorithm – exhibition views at the European Media Art Festival 2016, Osnabrück:

  • »black tower« with a digital print showing two workers, one operating a punch-card machine, the other working at the IBM calculation machine
  • Main projection screen with narration. Video animation, HD, 32 min
  • Office workers at punch-card machines – digital print

Big Data as permanent future

Panel discussion with Marcus Burkhardt (Uni Paderborn) and Francis Hunger (Leipzig); moderator: Lena Brüggemann (D21 Kunstraum, Leipzig)

Under the banner of big data, states and enterprises are collecting data with the intention of using it at some point in the future. Seen from the perspective of databases, humans are transformed into data bodies and data potentials that are to be stored and algorithmically processed. While states, for example, allow their police to experiment with systems that predict criminal activity, and political parties mobilise the electorate for their election campaigns using big data, enterprises such as Amazon, Allianz Insurance and Deutsche Bank use their customers’ data for strategic business development purposes.

Panel discussion with Francis Hunger, Lena Brüggemann and Marcus Burkhardt
→ author: Francis Hunger, published on: 2016-Jun-09

Tables in the Mirror of 19th-Century Innovations in Printing Technology.

This text describes the interdependence of table making and innovations in printing technology during the 19th century. It is largely based on Doron Swade’s article »The ‘unerring certainty of mechanical agency’: machines and table making in the nineteenth century«, in Martin Campbell-Kelly et al. (eds.), The History of Mathematical Tables: From Sumer to Spreadsheets (2003). I had to cut this part from my larger essay about tables, and for the sake of saving it somewhere, it is published here.

 

Tables in the Mirror of 19th-Century Innovations in Printing Technology.

The calculation of the tables was interwoven with a complex printing and publication process. Until the beginning of the 20th century[1], publications were typeset by hand: the compositor read off the sequence of numbers in the manuscript, took the individual letters from a type case and arranged them on the composing galley into the columns, groups and blocks that formed the page. The individual letters were not checked for correctness when they were taken out; rather, the compositor reached into the cases habitually – one might say »blindly«.[2] It was therefore a new printing technique that made an essential difference for book printing in general and for table producers in particular: stereotyping, the use of printing plates made of metal. An impression was taken from the galley, in which the letters were temporarily fixed, and cast into a metal plate. This plate could be stored and reused.


→ author: Francis Hunger, published on: 2016-Apr-26

Computing In-Formation: Data and its Base

Workshop with Francis Hunger, artist (Leipzig)
18th + 19th January, 10am-4pm
Research Center for Proxy Politics (Hito Steyerl, Vera Tollmann, Boaz Levin, Maximillian Schmoetzer, Anil Jain)
UdK Berlin, Room 115

This workshop aims to establish a notion of computing history that is oriented towards database software. During the first day we look into diverse practices of database usage and their historical and social origins.
Knowledge production by way of the library, the collection, the processing of mathematical equations in the age of human computing, and bio-political practices such as statistics, data collection, resource management and the insurance business have all informed database technologies.
Lately, notions like big data or large-scale search engines have been added to this set of practices. During the second day the discussion focuses on the tables and relations that form, and put in form, the base of data. And we go for a database dérive, which means we go outside to observe databases in their natural habitat, to sense the infrastructural dimension of database usage today.
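
What »tables and relations« mean in practice can be previewed in a few lines, here using Python’s built-in sqlite3 module and a made-up record; the table puts the data in form, and the query addresses it by that form.

import sqlite3

con = sqlite3.connect(":memory:")  # a throwaway in-memory database
con.execute("CREATE TABLE records (date TEXT, weather TEXT, color TEXT)")
con.execute("INSERT INTO records VALUES ('2016-01-18', 'cold', 'grey')")

# The relational table makes information addressable by column and condition.
for (weather,) in con.execute(
        "SELECT weather FROM records WHERE date = '2016-01-18'"):
    print(weather)  # 'cold'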

Day 1:

  • Lecture »Computing In-Formation: Data and its Base«
  • Introduction to SQL and relational databases
  • Close Reading – Mark Poster: Databases as Discourse or Electronic Interpellations. In: The Second Media Age. Polity Press 1995

Day 2:

  • Continuation of Close Reading – Mark Poster
  • Database Dérive
Close Reading Mark Poster

Close reading session of Mark Poster’s text about databases as electronic interpellations. We read the text paragraph by paragraph and subsequently discuss each section until it is understood.

Database Dérive

On our way to the Berlin Stock Exchange building. As we learned, the stock exchange has moved out, because trading is computer-based now. A few people went into a hotel to find out what data needs to be stored in their booking system in order to get a room.

Database Dérive

The Theater des Westens has an old-fashioned paper-card ticketing system for subscriptions. But you can buy tickets online via a third-party service.

Database Dérive – ID

Database Dérive: ID number at a door – electric switch room for the U2 subway line

Notes: This time the database dérive took place in February, which turned out to be cold. So we went mostly into buildings and asked the businesses about their database practices. One could do so individually any day, but here it seemed that the group’s supportive existence encouraged inquiring.
Some of the participants were dissatisfied because »it seems with every workshop comes a we-go-into-the-city-and-experience-it-differently session«. That made me think about it and understand that a dérive may be more exciting for a diverse crowd, such as the one drawn by the Galerie Wedding database dérive last year, but may be less interesting for art students. However, what worked well on both occasions: the sheer fact of moving around, instead of sitting in a room, fosters communication amongst the participants – it is informal, but the conversation still connects to the overall topic. Even the sceptics agree.
The close reading (paragraph by paragraph) of Mark Poster’s essay was much appreciated by the participants, both because the text itself is inspiring and because reading and discussing it paragraph by paragraph enabled an intense debate within the group. This session also benefited from the comments of the RCPP members, who provided additional input on a high academic level.
→ author: Francis Hunger, published on: 2016-Jan-15

Universal Concept

The database became a universal concept for software, much as the von Neumann principle did for computing hardware.

→ author: Francis Hunger, published on: 2016-Jan-02

Search Routines – Publication

The publication »Search Routines: Tales of Databases« expands on the topics discussed in the exhibition, the workshop and the symposium which took place at D21 Kunstraum and the sublab hackerspace in Leipzig in 2014. A series of interviews with Francis Hunger, Kernel, Pil and Galia Kollectiv, and Sebastian Schmieg reviews artistic strategies such as narration or the translation of data and algorithms to address the invisibility of databases. Reports from the workshops with Heath Bunting and WaiWai tell of the potential of making the invisible visible, or simply of hiding oneself from the databases’ range of view. The symposium discusses databases from a sociological and cultural-science perspective.

Download: Lena Brüggemann / Francis Hunger (eds.): Search Routines: Tales of Databases. D21 Kunstraum Leipzig, 2015

Search Routines – book cover and views of the publication

Design: Paul Spehr
Copy Edit: William Clapp, Juliane Richter
Printed by: BoD, Norderstedt 2015
Authors: Lena Brüggemann, Marcus Burkhardt, Cesca Golodnaya, Francis Hunger, Daniel Pauselius
Artists: Francis Hunger, Kernel, Pil and Galia Kollectiv, Sebastian Schmieg, Jonas Lund and Johannes P Osterhoff

Funded by the Kulturstiftung des Freistaates Sachsen. Realized in cooperation with the Hybrid Publishing Lab, Innovation Incubator, Leuphana University Lüneburg.

→ author: Francis Hunger, published on: 2015-Nov-10