Smart Curation – Algorithmic Phenomenology
Algorithmic Phenomenology is a short input I gave for an online colloquium on smart curation.
If Smart Curation focuses on the tension between human and algorithmic curation, then we should not forget that algorithms are already infused with humanity. When we talk about algorithms in our context, we mean software and how that software handles information towards a specific goal. Both software and datasets are already humanized and biased by the way they are constructed, collected and curated by humans.
On the other hand, the way algorithms work these days leaves traces in their creators, programmers and data scientists. An algorithm demands that everything is quantified, and the act of quantifying is a specific mode of capitalism and the exact sciences. This methodology reinforces the idea that everything can potentially be treated as an engineering problem.
Thank you very much for the opportunity to give a little input today, and thanks to everybody for attending. Today I want to talk a little bit about the technical aspects of our collective inquiry. If Smart Curation focuses on the tension between human curation and algorithmic culture, then we should not forget that algorithms are already infused with humanity. And that this is not very well reflected upon by the creators of algorithms.
When we talk about algorithms in our context, we usually mean pieces of software. An algorithm is a series of steps we can follow to produce a desired outcome, some knowledge maybe. Let's say I want to calculate the greatest common divisor of two numbers; then there is an algorithm from Euclid to do so. At the heart of what interests us here are also algorithms. They are used to search through, cluster and classify huge amounts of data.
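Euclid's procedure is a good example of how compact an algorithm can be. A minimal sketch in Python (the function name and example numbers are my own illustration):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # → 12
```

A handful of lines, yet it terminates for any pair of positive integers – the kind of crisp, self-contained recipe the word "algorithm" originally evoked, long before software.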
The important point here is that algorithms in our context are not worth much without software and data. If we want to think about smart curation, we can't forget about these, because both can be highly problematic.
“The problem is that we are attempting to build systems that are beyond our ability to intellectually manage.” And that is because software doesn't behave like traditional engineering. You can't come up with a good plan, execute it, then rigorously and physically test the produced object and see where it fails. Software will always execute exactly as it's told to. Software will not break – a bug is not some material's fault, but a mistake in thinking.
The king of algorithms these days in our context is machine learning. Deep learning in particular is popular and is commonly referred to as artificial intelligence. Deep learning is a kind of algorithm where you write a piece of software and give it an input. The algorithm does its calculation magic and, when done, adjusts itself depending on whether we were happy with the result or not. In order for the algorithm to learn something in this manner, it needs gigantic datasets. If you want an AI to learn what a cat looks like, you need not ten or a hundred, but tens of thousands of images of cats in all kinds of variations. Deep learning algorithms are structured so that they extract from the images what might be a cat.
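The adjust-yourself-after-the-result loop described above can be made concrete with a toy example. This is not deep learning itself, only its core mechanic shrunk to a single adjustable number; the data points and learning rate are invented for illustration:

```python
# Toy "learning" loop: find a weight w so that w * x ≈ y for all examples.
# All numbers here are invented; the data happens to follow y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0                 # the parameter the algorithm adjusts
learning_rate = 0.05

for step in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y          # how "unhappy" we are with the result
        w -= learning_rate * error * x  # nudge w to reduce that unhappiness

print(round(w, 2))  # converges toward 2.0
```

A real deep network does the same thing with millions of such weights at once – which is exactly why it needs those gigantic datasets to nudge them all into place.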
Now the problem is that these huge datasets are highly biased. If a data scientist curates a dataset, that person's assumptions and worldview manifest in it, and the AI learns exactly that as well. The images chosen matter, and there is no objective way to gather information.
In software and data, we have streams of humanized perception and thinking going into these algorithms that might be governing our lives.
But it's not a one-way stream. This year I learned how to program AIs and participated in a course titled “Creative Applications of Deep Learning with Tensorflow”. Tensorflow is an industry-standard AI framework that makes coding a bit easier. Maybe some of you have heard of Deep Dream – the AI that produces psychedelic images out of photos. Around three-fourths of the course was about computer vision, computer science, some math, and a lot about the different kinds of neural networks.
What really took me by surprise in the course was how everything needs to be brutally quantified. You can't just use random images; they all need to have the same format, size and color space, for example. Only then can you transform an image that we can see with our eyes into a string of numbers that the software can read.
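That quantification step can be shown directly. A minimal sketch using NumPy (the pixel values and the 4×4 size are stand-ins I chose for illustration; real pipelines first resize every photo to one fixed shape):

```python
import numpy as np

# A tiny stand-in for a photo: 4×4 pixels, 3 color channels (RGB),
# integer values from 0 to 255.
image = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

# Quantify: scale every value into the range 0.0–1.0 and flatten the
# picture into one long row of numbers – the only form the network "sees".
numbers = (image.astype(np.float32) / 255.0).ravel()

print(numbers.shape)  # (48,) – the whole picture is now just 48 floats
```

Whatever the photo showed – a cat, a face, a landscape – it arrives at the algorithm as nothing but this uniform row of floats.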
So the AI forces us to reduce our environment to small common denominators, stripping what we're looking at of all its relationality and complexity. This inability of our algorithmic software to handle complexity maps back onto us, or at least onto those who work directly with AIs – software developers, data scientists and such.
To conclude, I believe we need people in the field of algorithmic curation to reflect on their processes of production – maybe similar to what happened in design with critical design, for example.