The subtitle of this post ought to be “apparently,” since I have growing doubts about substituting digital surveillance systems and complex computer programs for the considered, humane work of culture.
Case in point: about six weeks ago, Galley Cat reported on a new Kindle-related initiative called “Popular Highlights,” which Amazon.com had just rolled out onto the web for beta testing. In a nutshell, Amazon is now going public with information about which Kindle books are the most popular, as well as which passages within them have been the most consistently highlighted by readers.
How does Amazon determine this? Using the 3G connection built into your Kindle, the company automatically uploads your highlights, bookmarks, marginal notes, and more to its server array, or computing cloud. Amazon calls this service “back up,” but the phrase is something of a misnomer. Sure, there’s goodwill on Amazon’s part in helping to ensure that your Kindle data never gets deleted or corrupted. At the same time, it’s becoming abundantly clear that “back up” exists as much for Amazon’s benefit as for your convenience, since the company mines all of your Kindle-related data. The Galley Cat story only confirms this.
This isn’t really news. For months I’ve been writing here and elsewhere about the back-up/surveillance issue, and I even have an academic journal article on the topic appearing this fall. Now, don’t get me wrong: this is an important issue. But the focus on surveillance has obscured another pressing matter: the way in which Amazon, and indeed other tech companies, are altering the idea of culture through these types of services. Hence my concern with what I’m calling, following Alex Galloway, “algorithmic culture.”
In the old paradigm of culture (you might call it “elite culture,” although I find the term “elite” so overused these days as to be almost meaningless), a small group of well-trained, trusted authorities determined not only what was worth reading, but also which aspects of a given selection were the most important to focus on. The basic principle is similar with algorithmic culture, which is likewise concerned with sorting, classifying, and hierarchizing cultural artifacts.
Here’s the twist, however, which is apparent from the “About” page on the Amazon Popular Highlights site:
We combine the highlights of all Kindle customers and identify the passages with the most highlights. The resulting Popular Highlights help readers to focus on passages that are meaningful to the greatest number of people.
Using its computing cloud, Amazon aggregates all of the information it’s gathered from its customers’ Kindles to produce a statistical determination of what’s culturally relevant. In other words, significance and meaningfulness are decided by a massive — and massively distributed — group of readers, whose responses to texts are measured, quantified, and processed by Amazon.
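To make the abstraction concrete, here is a minimal sketch, in Python, of the kind of aggregation the “About” page describes. The event layout, the passage identifiers, and the cutoff are my assumptions for illustration; the actual pipeline is proprietary and unknown.

```python
from collections import Counter

# Hypothetical highlight events, one per act of highlighting.
# Amazon's real schema is proprietary; this layout is an assumption.
highlight_events = [
    ("Book A", "passage-17"),
    ("Book A", "passage-17"),
    ("Book A", "passage-03"),
    ("Book B", "passage-41"),
    ("Book A", "passage-17"),
]

# "Combine the highlights of all Kindle customers and identify
# the passages with the most highlights."
counts = Counter(highlight_events)

# Rank passages by raw popularity; the top of the list becomes
# a "Popular Highlight."
for (book, passage), n in counts.most_common(3):
    print(f"{book} / {passage}: highlighted {n} times")
```

Note what even this toy version makes plain: everything the ranking “knows” is a count. Whatever sense of significance the output carries is manufactured entirely by tallying.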
I realize that in raising doubts about this type of cultural work, I’m opening myself to charges of elitism. So be it. Anytime you question what used to be called “the popular,” and what is now increasingly referred to as “the crowd,” you open yourself to those types of accusations. Honestly, though, I’m not out to impugn the crowd.
To my mind, the whole elites-versus-crowd debate is little more than a red herring, one that distracts from a much deeper issue: Amazon’s algorithm and the mysterious ways in which it renders culture.
When people read, on a Kindle or elsewhere, there’s context. For example, I may highlight a passage because I find it to be provocative or insightful. By the same token, I may find it to be objectionable, or boring, or grammatically troublesome, or confusing, or…you get the point. When Amazon uploads your passages and begins aggregating them with those of other readers, this sense of context is lost. What this means is that algorithmic culture, in its obsession with metrics and quantification, exists at least one level of abstraction beyond the acts of reading that first produced the data.
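A sketch of that loss of context, under the same assumed record layout as above: whatever a reader’s reason for highlighting, the uploaded record looks identical, so the aggregate cannot tell admiration from objection.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HighlightRecord:
    # A hypothetical uploaded record. Note that there is no field
    # recording *why* the reader marked the passage.
    book: str
    passage_id: str

# One reader highlights because the passage is insightful...
admiring = HighlightRecord("Book A", "passage-17")
# ...another because it is confusing or grammatically troublesome.
objecting = HighlightRecord("Book A", "passage-17")

# To the aggregator, the two acts are indistinguishable:
assert admiring == objecting
print("Both count equally toward a 'Popular Highlight.'")
```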
I’m not against the crowd, and let me add that I’m not even against this type of cultural work per se. I don’t fear the machine. What I do fear, though, is the black box of algorithmic culture. We have virtually no idea of how Amazon’s Popular Highlights algorithm works, let alone who made it. All that information is proprietary, and given Amazon’s penchant for secrecy, the company is unlikely to open up about it anytime soon.
In the old cultural paradigm, you could question authorities about their reasons for selecting particular cultural artifacts as worthy while dismissing or neglecting others. Not so with algorithmic culture, which wraps abstraction inside of secrecy and sells it back to you as “the people have spoken.”
…or is there more of a dynamic at work? As “the people” decipher the rules/purpose of the algorithm, do they alter what data they contribute?
Now that I know how Amazon, Netflix, dating sites, etc. use my information for “recommendations,” whether explicitly, through seeing the algorithm, or implicitly, through using the results, do I alter how I write my “reviews,” which pages I “like,” and how I structure my dating profile? (I do.)
To your point, though, we may not see all of the consequences of these algorithms… and these “don’t know we don’t knows” are legitimate concerns.
@od: Thanks very much for your comment. I agree that there’s a dynamic at work, and that what I’m calling “algorithmic culture” has some form of human action at its core, both at the level of the programming and, as you note, at the level of the user. What’s intriguing to me is what I take to be a presumption in your comment: that the tailoring of my choices to “game” the recommendation engine is an act of humanism, or human intervention. Is it? Or is it a strange way in which we begin altering our proclivities and predispositions to meet the presumed expectations of a machine? Perhaps I’m reading too much into your comment, but if I’m not, I’d be curious to hear your thoughts on the matter.
So I’ve been thinking about this a lot too. In an environment where the major content platforms, whether it’s Amazon or Google or Apple, are amassing enormous quantities of content and trying very hard to offer us ways of navigating through it to find the pieces we want, or are willing to purchase, we’re seeing a lot of these mechanisms, from “most viewed” to “people who purchased this” to these massive aggregates of “relevance.” I’m trying to decide what the important questions are around this. There’s certainly the claim, which we hear from the Amazons of the world, that this is some true and novel glimpse into the actual public sense of importance. (I’m working on an article now that links our contemporary faith in the algorithm with a traditional faith in ‘objectivity’ in knowledge production.) Maybe even an idea that these aggregate insights offer ways for us to see what’s relevant that we could never see otherwise.
Then there’s the counter-concern you raise, which I share: these are not in fact neutral or unmediated assessments; they’re algorithmic, which means they make (or pre-make) choices about how to judge relevance. And with that, they are socially constructed, interested. The amazonfail incident made this most visible: to me, the story wasn’t that Amazon was ‘censoring’ GLBT books; it was that, all the time, when Amazon lists the “top sellers,” it’s in fact the “top sellers” within a boundary, outside of which “adult” books, even big sellers, do not count. And of course, these are not just interested assessments; they’re made by an organization with a financial interest at stake.
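A minimal illustration of the boundary this comment describes, with invented titles and sales figures; Amazon’s actual exclusion rules are not public. The point is that the filter runs before the ranking, invisibly to the person reading the list.

```python
# Invented sales data; the numbers don't matter, the boundary does.
sales = [
    {"title": "Novel A", "sold": 9000, "adult": False},
    {"title": "Memoir B", "sold": 12000, "adult": True},  # the actual top seller
    {"title": "History C", "sold": 7000, "adult": False},
]

# The exclusion happens *before* the ranking, so the user never
# sees that a boundary was drawn at all.
eligible = [b for b in sales if not b["adult"]]
top_sellers = sorted(eligible, key=lambda b: -b["sold"])

for book in top_sellers:
    print(book["title"], book["sold"])  # Memoir B never appears
```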
I am with you on the point about how the context around what people count as “relevant” is lost, though I could imagine the counterargument that, in a way, it doesn’t matter that much. Prior to these kinds of techniques, certain texts, passages, and ideas were brought to cultural visibility through commentary, review, advertising, and criticism, often because the text was thought by someone to be of value, though not always of positive value. One might say that, regardless of the assessment, the text did resonate for someone, and this was reason enough to put it into circulation as an element of the culture.
Then there’s the concern your commenter brought up: that people might game the system, either purposefully or, as this system begins to produce certain kinds of effects, more inadvertently. Beyond the obvious search-engine-optimization kinds of gaming, there’s the question of how one could game these kinds of massive data collections, but it’s intriguing to me that this concern comes up so regularly.
I wonder if there are other ways to think about this issue. Maybe a question about the kinds of things that simply cannot be captured in these kinds of algorithmic nets? (I’m imagining books that simply resonate with people, but not for the punch of particular passages, or art forms that do not fit neatly with these kinds of ‘tagging’ behaviors. There may be other ways to raise this question.)
I really like the breadth of a question like “what kind of culture might this produce?” and I want to be sure that we’ve thought about it from all sides. Thanks for the thoughts.
Tarleton,
Wow, thanks for offering such provocative questions and considered commentary. For whatever it’s worth, I’m constantly amazed at how much our work travels in the same orbit. When I float ideas like these out into the world, it’s reassuring to receive a response from someone whose work I’m already implicitly in dialogue with.
The counter-argument you raise in response to my point about context is a good one. In many respects, context is indeed irrelevant, or at least it’s so idiosyncratic as to have little value beyond matters of personal taste or motivation. I’m with you on that. But then again, what strikes me about questions of culture, taste, and judgment is the communicative aspect, that is, how decisions about what’s culturally important are the stuff of argument and debate. In that sense, context seems to me to be vitally important, for without it, how would one justify one’s decisions about what deserves to stand out and what ought to recede into the background?
The point about “gaming the system” is an intriguing one. Someone else brought this up in response to my piece on privacy concerns and the Amazon Kindle, posted here. I’ll be honest, though: I’m skeptical of that possibility, particularly on an individual level. I became even more skeptical this month when I read a Wired profile of Sergey Brin, who talked about Google’s efforts to scrub what he called “noisy data.” Of course, what counts as “noise” is always a statistical or algorithmic representation of what someone’s deemed relevant (and what they haven’t), so perhaps it is indeed possible to game the system by exploiting that representational aspect somehow. The point, though, is that there’s already resistance to the resistance, as it were, which is tantamount to saying that folks concerned about algorithmic culture will need to out-Google Google.
Last thing: I love how you’ve ended your comment, namely, by underscoring the degree to which algorithms are, at the end of the day, systems of representation. This seems to me like an important inroad to explore, especially since (I’d suspect) many people believe that recommendation engines provide a more or less transparent account of what one likes, or ought to like.
Great stuff! Let’s keep the dialogue going!
How does Amazon know if the highlighting is positive (“this is really good”) or negative (“this is complete nonsense”)? It’s a bit like PageRank: lots of people may be linking to somewhere and saying “this site is utter trash,” but that can make it bubble up, and so other people get fooled.
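A quick sketch of the analogy in this comment, with made-up links. A naive count-based ranker (PageRank proper is considerably more elaborate, but shares the property) has no channel for sentiment, so hostile links raise a page’s score just as approving ones do.

```python
# Hypothetical links: (source_page, target_page, anchor_text).
links = [
    ("blog1", "site-x", "this site is utter trash"),
    ("blog2", "site-x", "avoid this nonsense"),
    ("blog3", "site-x", "a great resource!"),
    ("blog4", "site-y", "a useful reference"),
]

# A link is a link, whatever its text says: count them.
scores = {}
for _source, target, _text in links:
    scores[target] = scores.get(target, 0) + 1

# site-x "bubbles up" on the strength of two insults and one compliment.
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```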