New Media, Instagram & Homogenisation

Lev Manovich speaks with Adam Peacock

The following video and transcript are edited from a conversation held via video link at the Vigeland Museum in Oslo, on 6th September 2021 as part of PRAKSIS residency 18, Perfection / Speculation.

2021

Lev Manovich is an artist, author, and one of the most influential digital culture theorists in the world. He was included on the "25 People Shaping the Future of Design" (Complex, 2013) and "50 Most Interesting People Building the Future" (Verge, 2014) lists. Manovich is the Director of the Cultural Analytics Lab and a Presidential Professor of Computer Science at The Graduate Center, City University of New York. He has authored and edited 15 books, including The Language of New Media (2001), Software Takes Command (2013), Instagram and Contemporary Image (2016), AI Aesthetics (2018), Cultural Analytics (2020), and Artificial Aesthetics (2023). Manovich received a visual arts education and began using computers to create art in 1984. His digital art has been exhibited in 12 solo and 120 international group exhibitions at many prestigious institutions, including the ICA, London; the Centre Pompidou; the Shanghai Biennale; and ZKM | Center for Art and Media.


Please watch full screen

 

Extended transcript

Adam: Can we start by discussing you, yourself, Lev, as a human living in the age of new media? As you have argued in your 2020 book Cultural Analytics, “the human languages that developed rather recently [within the evolution of new media]... are not good at capturing the analogue properties of human sensorial and cultural experience”.¹ As a human being living within this system, how has your understanding of new media affected the way that you conduct yourself online, and the way that you perceive the online choices of others? Are you able to separate yourself from online life, or do you find yourself constantly gauging the effects of new media on yourself and others?

Lev: I'm glad you've raised this because it’s something I’ve been thinking about a lot, but so far nobody has asked me about it. There are various possible answers, and one has to do with a fundamental shift in contemporary culture, including visual culture, over the last fifteen years. In areas such as politics or music, people now ask information retrieval and analysis software for recommendations. For example, people use photo software to select the five best images from an album. The software does a good job, so it gets used routinely by both corporations and individuals worldwide. It's an interesting new turn in media culture, but it's also a trap, because it allows individuals' online lives to be quantified and graphed. Another example is the way platforms including Facebook, Instagram and Twitter alert you that you haven’t posted recently and have lost a number of followers, and tell you the best day and time to post.

The analytics are free, built into social media platforms used globally, and they lead to a situation where everyone becomes a kind of company. Fifteen years ago, this rationalisation and quantification of communication was only done by companies; now it's done by hundreds of millions of individuals. This stops me using Instagram for pleasure. If I post pictures of, let's say, coffee cups, I’m not going to get many likes, so either I have to close the account and get on with my life (which is very scary) or keep it going as yet another promotional communication channel for my professional life. Maybe it’s not so bad; I also put all my new ideas on Facebook to get feedback. However, because of this constant quantification, it’s very, very hard to use these media in an intuitive, impressionistic way.

Adam: This makes me think about the possibility of teaching a machine to read "genetic strength"² from visual data, or of developing a "science of culture", a phrase that you’ve used in your writing. Discussing the idea of teaching a machine to make aesthetic judgments, using the analogy of a magazine editor choosing imagery, you've argued that "probabilistic vision is not the same as understanding". Can we teach an algorithm to read beauty, sexiness or visual appeal—constructs that an evolutionary biologist might term ‘perception of genetic strength’—and if so, how? Could it be done through statistics, or do we need the computer to synthesise the perception of strong genes? Do you think that quantifying human genetic strength via digital analysis is far off? As you explain in Cultural Analytics, the Spotify API (Application Programming Interface) "curates" automated music playlists based on computational analysis of acoustic and stylistic properties. What would it look like if a genetic algorithm could match human beings with one another based on a synthesis of their genetic compatibility, and therefore of their appeal or attraction? What could it be used for, and would it benefit society?

Lev: OK! I'll try to select half a question from that, because you just gave me twenty! That's fine—it's why I agreed to the interview, because these questions are overwhelming, and figuring out what to focus on is a challenge. Let's begin with the background. Social science, which emerged at the end of the 19th century, wanted to imitate the success of natural sciences such as physics, chemistry and biology. It aimed to quantify society, to find fixed relations and social laws, and between the 1890s and the 1930s the quantitative approach became the default, so that today, governments, oil companies, the media and so on all rely on statistics: on voting patterns, for example. We take this for granted, but none of it existed 120 years ago. Then, around 2005, with the arrival of big data, billions of people started sharing their media likes, dislikes and thoughts online, and around 2010 came the emergence of computational social science, a new paradigm asserting that social science had a completely new tool. Statistics and statistical models wouldn't be discarded; the new tool enabled their application to much bigger data and the discovery of all kinds of social laws: the laws of attractiveness, of why people walk a certain way—you name it. Forget about experimenting on twenty, thirty, forty people: now, you can collect data from billions. Via Facebook, you can run an experiment simply by changing something on the website and seeing how people respond. All these social questions become answerable. This new paradigm has led to profound insights into human behaviour.

In the second decade of the 21st century, a new tool arrived: supervised machine learning using deep networks. As an example, I've just searched Google Scholar for papers attempting to predict attractiveness judgments using deep networks, and of course tens of thousands have come up. I picked just one of them, a recent study from the top of the list, in which the machine is given some data (photographs of people plus selected human judgments of their attractiveness) and learns to predict those judgments with about 70% accuracy. The remaining 30% is noise, but with development such systems will be able to get to 95% or even 100%. So, your questions relate to something much larger—to a longer history of humans aiming to quantify themselves. In the 19th century, with population growth and the growth of cities and new democratic societies, the need for statistics relating to demographics, politics, economics and so on became urgent: even then, society had become statistical. So, the present big data period is an extension of this, an expansion of statistics and quantification into areas of life that were previously difficult to measure. I could go on, but I think we’re probably out of time!
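
To make the kind of study Lev mentions concrete, here is a minimal sketch of that pipeline, under stated assumptions: a pretrained vision network is fine-tuned to regress human attractiveness ratings from photographs. The dataset of (photo, rating) pairs, the hyperparameters and the RatedFaces helper are hypothetical illustrations, not the actual method of any paper he refers to.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from torchvision import models, transforms
from torchvision.datasets.folder import default_loader

class RatedFaces(Dataset):
    """Pairs of (photograph, mean human rating): a stand-in for the
    crowd-labelled data such studies collect."""
    def __init__(self, samples):  # samples: list of (image_path, rating)
        self.samples = samples
        self.tf = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        path, rating = self.samples[i]
        return self.tf(default_loader(path)), torch.tensor(rating, dtype=torch.float32)

# Start from a network pretrained on ImageNet and swap its classifier
# head for a single regression output: the predicted rating.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

def train(samples, epochs=5):
    loader = DataLoader(RatedFaces(samples), batch_size=32, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()  # minimise squared error against human ratings
    for _ in range(epochs):
        for images, ratings in loader:
            opt.zero_grad()
            loss = loss_fn(model(images).squeeze(1), ratings)
            loss.backward()
            opt.step()
```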

Adam: My next question to you concerns eugenics, racial issues, cultural limitations, and the evolution of language. We were curious to know if, within your projects, you've had to make conscious decisions about the statistics you’ve observed, to ensure that your analysis does not reveal hurtful, damning, or racist insights, or perhaps even truths about humanity. Unfortunately, the internet is rife with narrow assumptions, biases, and stereotypes that could all be reflections of the reality of human culture today. Does the study of new media yield truths that society is not ready for? In the study of new media, how are ethics, responsibility and social benefit to be defined?

Lev: Well, I wish I had a month to answer you, because I have a lot to say. Firstly, I don’t set up my work on purpose to do social good; what I hope is that my work is read and that it provokes questions which help artists, designers, filmmakers and other people within visual media, by giving them better dimensions and concepts to support their artistic production. It’s not about humanity at large, it’s really about the creative class, which of course is very large. Secondly, if you look at the millions of studies of online data, social scientists typically organise studies around a research question: this is how the social sciences have been conducted since their beginnings at the end of the 19th century—for instance, as Durkheim did in his foundational 1897 book Suicide: A Study in Sociology. They always have a question in focus when they work out their methodology and collect data.

So how do we limit stereotyping? For me, what is important is to avoid a research question. Because I’m an artist, I approach social media as a kind of landscape. Take the example of a cinematographer who shows you a panorama of the city and city life, before focusing on the film’s main characters. I look at everything in the way a painter would look at the landscape, or a cinematographer would look at the city. I don’t have biases. I just want to look at everything we have produced: the culture of today, of tomorrow, to examine the patterns and see what’s going on, not looking at images in terms of their content, which is what a lot of people do, but basically looking at them as visual signs, as symbols of reality, and noting what happens online.

Because I work mostly with Instagram and visual images—or at least I did five years ago—these questions of hate and so on in social media were not so present, and I was able to avoid them. But what’s most important to me is to avoid starting with a research question. For example, I want to avoid asking what constitutes a "good use" of Instagram. When I give lectures about Instagram, historians say, "Why would you study Instagram? It's not interesting; the only interesting question is how artists use Instagram". That’s an example of bias, and in my view, I don’t have that bias. I just want to look at everything and see what’s going on. Maybe this is a very radical perspective, one where you don’t look for "bad guys" and "good guys": you look at everything, and you ask different questions.

Adam: That leads nicely to my next question, which relates to Gustav Vigeland. He lived from 1869 to 1943, so evidently long before new media's dissemination of social media, pornography and edited self-imagery, and their effects on perceptions of the human body. If Vigeland were alive today, how might he update or modify his sculptural practice, I wonder? How might the effects of new media play out in relation to this practice?

Lev: Firstly, I would say that I don’t completely agree with you that in his time, he and other people were not affected by media, in terms of body perception. It's a documented fact that pornography was one of the first and biggest users of all the new media. Huge quantities of pornographic photography were produced and distributed in the 19th and early 20th centuries. In addition, from the middle of the 19th century painters such as Manet and Degas started using photographs to paint from, and of course that affected their visual language. I don’t see any of this media influence on Vigeland's works, but you never know.

Here's an answer to this question. So-called big data and data science give us new and better ways to understand and to map the history of culture and the ways that different people and different works relate to each other. If you look at twentieth century art history you find a bunch of isms, and everything which doesn’t belong to these isms gets greeted with perplexity: "How do we talk about him?". For instance, Vigeland doesn’t go into abstraction, but he’s also not strictly classical, he’s somewhere in-between. However, 99% of all the artists, sculptors and photographers of the 19th century were also somewhere in between, in terms of isms. Traditional art history follows artists' manifestos and studies artists' PR, allowing most people to pretend that only a few hundred artists (who fit the isms and were good at PR) are very important. Everything else is not accounted for.

A project that interests me, and that I've been trying to approach for fifteen years (hopefully, I’ll do it soon) is to collect a sufficient number of images of 20th century paintings and try to cluster them according to their differing kinds of visual language. By finding and naming clusters, we can try to develop a language that allows us to talk about 99% of modern art, including figures such as Vigeland. We can see he simplifies forms, so he’s definitely part of the modernist movement, but he doesn’t go all the way, and it's difficult to talk about him, because he didn’t belong to the Futurists or the Constructivists; he doesn't fit the established, canonical categories. That’s why I like him so much, and that’s why I think computational methods have such great promise for the humanities. They can help us to account for and develop a better language and a more nuanced understanding of human cultural production.
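
For a sense of what such a clustering project could look like in practice, here is a minimal sketch under stated assumptions: each painting is reduced to a crude visual descriptor (a colour histogram plus simple tonal statistics), and the collection is grouped with k-means. Serious cultural analytics work would use far richer features; the list of image paths and the choice of twelve clusters are hypothetical.

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def visual_features(path, bins=8):
    """A deliberately crude 'visual language' descriptor: a joint RGB
    colour histogram plus overall brightness and contrast statistics."""
    img = np.asarray(Image.open(path).convert("RGB").resize((128, 128)))
    hist, _ = np.histogramdd(
        img.reshape(-1, 3), bins=(bins, bins, bins), range=[(0, 256)] * 3
    )
    hist = hist.flatten() / hist.sum()   # normalised colour distribution
    brightness = img.mean() / 255.0      # overall tonal value
    contrast = img.std() / 255.0         # rough measure of tonal spread
    return np.concatenate([hist, [brightness, contrast]])

def cluster_paintings(image_paths, k=12):
    """Group paintings by visual similarity into k clusters."""
    X = np.stack([visual_features(p) for p in image_paths])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return dict(zip(image_paths, labels))
```

The resulting clusters would still need to be inspected and named by a human eye: the algorithm proposes groupings; it does not supply the vocabulary.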

Adam: Thanks Lev! That leads nicely into my next question: are new media leading to the rise of homogenised identities? With this, I wanted to ask you whether we are justified in thinking of ourselves as unique individuals, or whether we are all products of a designed ontology that promotes specific affects and patterns of consumption, for instance via the averaging of data and faces, and the foregrounding of ‘successful faces’. You have commented that "large-scale media analytics is often used in making decisions about what cultural products to create, their contents and aesthetics, and how they should be marketed and to what groups"³. Could it be said that people’s identities, or the way we choose and aspire to present ourselves today, are subject to homogenisation? Could the way that Netflix learns and adapts its content to users’ behaviours be applied to exploring the forms of identity expression that we are being (accidentally) led towards within this post-ontological human identity? Further, what is choice today? Are all our present-day choices pre-designed for us? Do we even have free will?

Lev: Well, I think you and I have free will. I’m not sure about others, but don’t worry. So again, this is a wonderful set of questions, which I’d like to spend the rest of my life studying and answering, and in fact I'm doing that: I’m presently writing a couple of articles around these topics, but I will limit myself to a few short points.

One is that I’m not that young. I developed my career in the 1990s and 2000s and I often have an intuitive sense that there's a certain homogenisation going on. But maybe the way to describe it is different. Let's think about what’s called the "long tail". In every field, you find specific figures or phenomena that dominate: let’s say fashion looks, or the way to talk, or certain film stars or pop stars. Their market dominance can range from 10% to 70%; everything else is called the long tail. My hunch is that in the last twenty years, the mechanism has changed; the part of the cultural marketplace that is dominated by a very few choices, stars, looks, ways to behave, ways to think and so on has expanded, while everything else that exists, the long tail, has become relatively invisible. I definitely feel this is happening in the intellectual sphere, and I think the algorithms used by the internet and social media probably contribute to it.

However, maybe we shouldn't blame the algorithms, because people basically copy each other, don't they? Very few people are original; maybe one in a million. Most people don’t have time to carve out their own path, or they aren't brave enough to think for themselves—so they copy one another. In the present, it’s very easy to see what other people are saying and thinking, because you can find it online—you don’t need to go to the library. This could be the main mechanism behind this apparent homogenisation, which we could also think of as a kind of inequality. Particular ways of thinking, keywords, and ways of behaving or dressing enjoy maybe 70-80% dominance. Alternatives exist, but they are much harder to find: this is what I think is happening.
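
A toy calculation illustrates the shift Lev describes: under a Zipf-like rank-popularity law, a modest increase in the distribution's exponent sharply increases the share of total attention captured by the head, leaving the long tail relatively invisible. The item counts and exponents below are arbitrary illustrations, not measurements from any real platform.

```python
import numpy as np

def head_share(n_items=100_000, exponent=1.0, head_fraction=0.01):
    """Share of total popularity captured by the top head_fraction of
    items when popularity falls off with rank as rank**(-exponent)."""
    ranks = np.arange(1, n_items + 1, dtype=float)
    popularity = ranks ** -exponent       # Zipf-like rank-popularity law
    head = int(n_items * head_fraction)
    return popularity[:head].sum() / popularity.sum()

# A slightly steeper curve concentrates far more attention in the head.
for exp in (0.8, 1.0, 1.2):
    print(f"exponent {exp}: top 1% of items holds {head_share(exponent=exp):.0%}")
```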

Against this, the claim of cultural analytics is that we can’t trust these kinds of intuitions, because we can only see microscopic details of the whole landscape, while big data analysis has much more powerful tools to study and quantify it. However, as with all interesting social questions, does this approach yield a single answer? Maybe not. One study finds that yes, globalisation leads to more homogeneity, while another will say no, globalisation leads to more choice. There are papers published by companies such as Spotify, based on their own data analysis, which claim that over time, on average, users listen to a widening range of types of music—but of course, that’s what Spotify would want to say; I’m not sure we can trust claims such as this.

So, the whole idea of cultural analytics is that now that we have big data, we should not trust our intuitions but instead try to quantify the dimensions that can help us answer these questions. Finally, though, I want to stress that the digital universe is only one of many factors. Globalisation began around 1990, before the web; the internet is by no means the only force making the world more closely connected. Perhaps certain network laws or biological principles lead to more homogeneity—but it's very easy to blame the Internet or Facebook for everything, and the media have been doing that for the last five years. You can’t blame Facebook for everything. It's just an algorithm, right? It can’t defend itself. Reality is much more complicated.

Adam: Lev, thank you. You have given us a tremendous amount of rich material to process. This has been a very valuable exploration.

Lev: Thanks Adam! I look forward to seeing the result.

You can find more on Lev’s research at www.manovich.net

 

[1] Manovich, L. (2020) 'Automation: Five Ideas', in Cultural Analytics. Cambridge, MA: MIT Press, p. 10.
[2] Dawkins, R. (1976) The Selfish Gene. Oxford: Oxford University Press.
[3] Manovich, L. (2020) 'Automation: Media Actions', in Cultural Analytics. Cambridge, MA: MIT Press, p. 66.

 
