Friday, December 18, 2020

"Springs of Action"

I've long been a fan of the thoughtful work of Louise Amoore, whose interests touch on many of the themes at the heart of my own research, though from quite a different direction. So, I approached her recent, influential book, Cloud Ethics, with great interest. However, there is an inversion in one of the book's linchpin concepts that masks a symptomatic omission characteristic of the book's theoretical alignment with recent strands of so-called "new" materialism. It is hard to miss this concept, because it is one of the book's key conceptual refrains: the notion of the "spring of action" invoked by the philosopher Richard Braithwaite in a well-documented exchange with Alan Turing on machine learning aired on the BBC. For Braithwaite, the concept seems relatively straightforward: for some entity to think about the world, that is, to focus on and isolate aspects of the environment in order to understand or make sense of them, there must be an underlying desire or motivation. The question of desire, of course, is crucial to any discussion of machine cognition, AI, or machine learning. A computer program may learn how to beat a chess grandmaster, but does it even want to play chess in the first place? Does it want to win? It may compose music and recognize cats, but does it care about music or cats? Does it have any meaningful concept of music or cats and why they might be of interest? Does it care at all about the variation in the different tasks that might be set it? Does it matter to the machine if it is learning how to identify cats or anomalies in budget reports? I take this set of questions to cluster around Braithwaite's point about desire (which Amoore quotes, in part, as a chapter epigraph):

A machine can easily be constructed with a feed-back device so that the programming of the machine is controlled by the relation of its output to some feature in its external environment—so that the working of the machine in relation to the environment is self-corrective. But this requires that it should be some particular feature of the environment to which the machine has to adjust itself. The peculiarity of men and animals is that they have the power of adjusting themselves to almost all the features. The feature to which adjustment is made on a particular occasion is the one the man is attending to and he attends to what he is interested in. His interests are determined, by and large, by his appetites, desires, drives, instincts—all the things that together make up his ‘springs of action’. If we want to construct a machine which will vary its attention to things in its environment so that it will sometimes adjust itself to one and sometimes to another, it would seem to be necessary to equip the machine with something corresponding to a set of appetites.

The somewhat surprising aspect of Amoore's uptake of the term "springs of action" is that she transforms it from a motivating force into an outcome: the moment at which an action takes place. So, whereas Braithwaite sees appetite/desire as setting the whole process of learning in motion, Amoore takes the term "spring of action" to refer to the precipitation of desire into an action of some sort -- it manifests only when something happens. For example, in her description of how surgeons work together with a surgical robot, she frames the "spring of action" as the moment when a particular action results from the human/machine assemblage:

The spring to action of surgical machine learning is not an action that can be definitively located in the body of human or machine but is lodged within a more adaptive form of collaborative cognitive learning. Intimately bound together by machine learning algorithms acting on a cloud database of medical [data], the we of surgeon and robot restlessly seeks an optimal spring of action — the optimal incision, the optimal target or tumor or diseased organ, the optimal trajectory of movement.

The shift from "spring of action" to "spring to action" is perhaps significant in this formulation. She is interested in the moment when learning occurs: when the human and machine conspire to make an "optimal" move: an incision or some other (optimal, again) trajectory of movement. The "spring of action" is the result: something happens: a cut is made. Of course, what gets glossed over in this moving description of human-machine collaboration in the name of optimization is what motivates the machine (or the human, for that matter). It turns out the "spring of action" as framed by Amoore requires a prior desire -- whatever it is that makes the human-machine assemblage "seek" in the first place. This is Braithwaite's point -- desire gets the whole assemblage moving. It is perhaps telling, again, that in this particular formulation, the "optimal" result is a cut -- we might describe it, drawing on Karen Barad's work, as an "agential cut." What looks like a failure to distinguish between cause and effect, motive and outcome, desire as motive force and desire as a product of the assemblage, is characteristic of the fate of causality in recent versions of "new" materialism -- and its related Deleuze-inflected ontology of desire. In such formulations, causality is emergent -- there is no meaningful distinction between cause and effect, which means the version of desire invoked by Braithwaite is elided. The fact that a cut occurs retroactively constitutes the fact of the "seeking" -- in fact, this is perhaps the only meaningful way, in such a framework, that we might approach the notion of desire. It's hard to imagine that Braithwaite would endorse this reconfiguration of his "spring" of action, which is what makes its repeated, inverted invocation come across as so jarring in Amoore's account -- not least because she fails to acknowledge the inversion, taking it as read.
Perhaps the assumption is that whenever we talk about desire, we are only talking about it as a post-subjective, post-human formulation: something that is co-extensive with its effects and manifestations: the mark of desire is the optimal cut. 

Sunday, July 5, 2020

Pessimism of the intellect...

A couple of thoughts in response to Alex Burns's meditations on a somewhat anguished Tweet I launched on a Friday afternoon. In most cases, the various responses to the Tweet took it in the spirit in which it was offered: as concern about the seemingly inevitable colonization of higher education by the tech industry, exacerbated by the restrictions ushered in by the current pandemic.

Some read it, a bit more hostilely, as a selfish focus on cashing out before succumbing to a system that is antithetical to my academic values and commitments. 

Such is the destiny of Tweets. 

For the record, I intend to do what I can to resist the commercial platforming of higher education, in keeping with an academic career that has been devoted to identifying and critiquing the pathologies of digital capitalism. That does not mean I'm particularly optimistic with respect to the outcome. There is too much administrative support for such a move -- and, as Burns's post indicates, significant buy-in among academics, at least in certain contexts. At the same time, I've had the good fortune to work and be trained in academic institutions that will likely be among the holdouts, and for that I applaud them and will continue to support them however I can. 

I don't know Alex, though I take him to be a person of good will, and I suspect the future belongs to him and other like-minded people. Maybe that's good news for them. I don't think I share their particular vision for the social role of higher education, and I worry about the consequences of such a vision for the character of our social world.

I am just going by the one post -- so I am likely missing some very important context -- but there are some moments in the post that prompted this response. The first is a conspicuous absence in the definitions of education on offer. The choices Burns provides are higher education as contributing to "knowledge creation," serving as a form of "elite reproduction," or, finally, one more version of capitalist alchemy: a way of turning student fees into land grabs and retirement savings (a dig at the original Tweet).

None of these really speak to the core mission of the University, as I understand it: education. Missing, for me, in this list, is the process itself: fostering a community and culture of thought (for both researchers and students) informed by the achievements of humankind in ways that contribute to a critical and constructive engagement with the world. 

I realize the definition of "achievements" is a contested one, and the field for enabling and recognizing these has long been warped -- but this contestation and an ongoing reckoning with the forms of power that shape knowledge production seem part of the culture of thought promoted by such an education. 

I imagine the reference to elitism in Burns's post is meant to encompass this view of the role of education. The charge of elitism is often deflected toward the content of particular forms of thought and the knowledge with which these are associated, when perhaps the actual target is the conditions of its production and reproduction. To restrict the forms of understanding and knowledge to which I'm referring to a select, privileged group (through, for example, making a liberal arts degree prohibitively expensive) is elitist. The way to challenge this form of elitism is not to do away with such an education altogether, but to make it available to all who are interested and, in so doing, to transform and enrich it (to reconfigure the content by addressing the deformations associated with the elite conditions of access that shaped it).

By way of response, I would press against the ready equation of technology with the commercial tech sector. I realize that the latter is at the forefront of technological development, but I think there is still a meaningful difference between imagining constructive uses for new technological affordances and merging higher ed with the commercial tech sector.

What worries me about the spectre of a tech-sector takeover is that the result may well be regressively elitist: reserving the type of education I associate with the University to a few pricey holdouts. Perhaps this is simply a function of my being woefully out of touch with my time. However, I would resist the accusation of nostalgia: the version of higher education to which I remain wedded is one that has only ever appeared as a potentiality. The commercial, corporate capture of the University would most likely extinguish it altogether. 

It's hard for me to get enthusiastic about the platform commercialization of research metrics. Burns refers to the prospect of commercial platforms showing us the "20% of your research outputs that are having the 80% readership impacts." I suppose this is meant to shape our research the way audience ratings might shape the development of a TV show, or the way market research might contribute to product development. Who wants to spend their time on research that doesn't have "impact"?

Nonetheless, I don't think we should take for granted the currently fashionable term "impact" and its relation to the various proxy measures that represent it. In the highly metricised research environment in which we operate, it means how many times an article gets cited, shared, or mentioned (not necessarily read). It is a quantitative measure that doesn't tell me whether or how the piece changed how someone might see or think about or act in the world. It doesn't tell me how this research might influence the classroom experience of my students and their understanding of the world.

It is, for all practical purposes, a metric that, through a series of transformations, can be monetized (citations=impact=rankings=student fees). Platform capitalism, natch. That doesn't mean important qualities are necessarily excluded from "impact" or that citation numbers don't have any relation to what they're meant to serve as a proxy for. We all want our work to enter into the conversation.

However, it does underwrite a tendency for the proxy to displace the real goal, and we know how that plays out. The notion -- imported from marketing -- that the proxy has some kind of predictive value is, I suspect, a deeply problematic one. I've got a couple of friends who, very early on in the era of digital media studies, started working on copyright issues. At the time, very few in the field were working on the topic, so who else would cite them, anyway? 

It turned out they saw something others did not, and they built successful careers on the foundations of this work. By contrast, platform algorithms give us the kind of endless repetition that has come to characterize so much of our data-driven culture. I doubt they're much good for guiding original, ground-breaking research. They can tell us, after the fact, that the research got attention -- which is fine -- but that's about it.

The other provocative moment in the post, for me, is the reference to the increasing cost and allegedly diminishing productivity of academic labor. I'm not sure what the reference point here is, but the stats I've seen show some measures of productivity on the rise. Research outputs have been increasing. Although this varies across fields, student-faculty ratios have also been increasing. I suppose this speaks to productivity in some way, but I don't greet either of these as positive developments -- they are driven by economic logics that have been promulgated by another trend: the increase in administrators per student (perhaps this speaks to the issue of diminished productivity?). 

None of this should be read as a blanket critique of technological innovation. My target is the commercialization of higher education. I have yet to see evidence that commercial platforms are equipped to support the type of intellectual engagement and culture that is crucial to higher education as I understand and experience it. There is certainly a version of higher education that tech companies will be able to implement, and they will likely do it much more efficiently and profitably than universities can. However, I worry it will be unable to provide the type of thought and understanding we will need to address the pathologies of the current moment -- many of which are associated with those companies most likely to take a lead in "disrupting" higher education. I'm wary of the recurring tech industry promise that only the spear that inflicted the wound can heal it.

Tuesday, February 25, 2020

A Response to Jill Walker Rettberg

Note: Upon first receiving a link to Jill Walker Rettberg's review of Automated Media in the journal Convergence from its author, I asked her if she would support my request to the journal to publish a response. Professor Walker Rettberg graciously agreed to this, so I approached the editors with this request. They replied that current editorial policy does not provide them with the latitude to publish my response, but agreed to promote it via social media.

Mark Andrejevic
Monash University

For the record, I don’t believe journals or reviewers have any obligation to promote new books in the field or to be positive about them in the interest of collegiality, solidarity, or politics. I do think, however, that reviewers have the fundamental obligation to be roughly accurate in their description of the book under review. It is this belief that prompted me to ask the editors of Convergence -- and the author of its recently released review of Automated Media (2019) -- to support the publication of a response to the review.

This decision was bolstered by the fact that I first learned of Jill Walker Rettberg’s (2020) review from several tweets she directed toward me on the occasion of the review’s online publication. This social media flurry felt like a direct invitation to respond in some way. I replied that I thought her review misconstrued the book’s main arguments, but that I didn't feel Twitter was suited to productive academic discussion, especially when there are substantive misunderstandings to be sorted out. My goal in this response is not to take issue with Professor Walker Rettberg’s core arguments, but to suggest that they miss their target. The strange thing to me about reading the review is how much I agree with the arguments she arrays against what she takes to be my own -- precisely because she gets the book's central claims exactly the wrong way around. There may be the makings of a debate here, but it cannot get off the ground until the mischaracterizations in the review are addressed.

The main ones center upon what the book describes as “the bias of automation” and also upon the notion that automated data collection might live up to the promise of “total coverage” or what the book describes as “framelessness” (that is, the fantasy of digitizing information about the world “in its entirety”).

The book starts off by differentiating between two meanings of the term “bias”: the first is a familiar one that refers to the fact that automated processes can systematically privilege or disadvantage particular groups. The examples here are myriad, ongoing, and alarming, warranting the robust critical attention they receive. The second meaning of bias invoked in the book is less common and draws on the work of Harold Innis -- discussed in some detail -- to suggest that the very choice to use a specific media technology (in a particular social context) can privilege particular logics and tendencies. The book notes that critical work on the first version of bias is well developed and crucially important, and argues for the importance of considering the consequences of the choice to use automated systems in the first place (within a particular context). The book's goal is to examine the logical tendencies that flow from this choice, describing them as “biases” in the way that we might describe, for example, market transactions as “biased” toward the assessment of value in ways that can be quantified. Such transactions may also be biased in the first sense as when, for example, they result in discriminatory outcomes for particular groups. I take these two levels of bias to be distinct, but they can certainly overlap -- as in practice they so often do.

The review overlooks this distinction, proceeding as if all mentions of bias refer to the first version, and faults the book for not engaging in more depth with the relevant literature on this. I strongly agree with Professor Walker Rettberg regarding the importance of this work, and I do think there is room for further development of the connection between these two forms of bias. There is also an interesting discussion to be had about what happens to the first sense of “bias” when we concede its irreducibility. However, neither of these discussions would justify the review’s wholesale assimilation of one meaning of bias to the other. Perhaps she thinks the distinction is untenable -- an interesting claim -- but this is not the argument advanced in the review.

The most confounding misreading, however, is the attempt to attribute to the book the very perspective it critiques: that automation can somehow escape the constraints of finitude and representation. Professor Walker Rettberg accuses the book of not recognizing that "the fantasy of total knowledge, of there being no gap between data and reality, is just that, a fantasy" (2). However, this is, almost verbatim, the core repeated argument of the book.

The chapter on "framelessness," for example, refers to the ambition of digitally capturing and reproducing the world in its entirety as an impossible fantasy (see, for example, p. 114: "The fantasy of automation is that in the breadth of its reach, in the ambition of its scope, it can approach the post-subjective perspective of the view from everywhere -- or nowhere: the purely objective representation that leaves nothing out"; p. 122: "Conspiracy theory defaults to a choice between an impossible standard of completeness (framelessness) and...gut instinct..."; p. 126: "There is a seemingly 'democratic' cast to the fantasy of framelessness").

To drive the point home, the book summarizes the examples it critiques as representing, “tendencies and trajectories – many of which, I want to emphasize, head in logically impossible directions such as, for example, the attempt to dispense with a frame entirely, to envision the possibility of a purely objective decision-making system, and to capture all information about everything, all the time” (160). It is no accident that the book uses the language of fantasy to describe the logics of pre-emption and framelessness: these are only conceivable from an impossible, infinite perspective -- as the book repeatedly argues.

Something similar takes place in the review with respect to Professor Walker Rettberg’s attribution to me of, “the idea that there is no gap between data and reality.” The book takes this very gap as one of its defining themes, as illustrated from the opening pages and in a number of passages, including the following: “Critiquing the post-political bias of automation means engaging with the possibility that the world does not work this way: that it cannot be captured and measured ‘all the way down,’ because there are irreducible uncertainties and gaps in reality” (101).

The book argues repeatedly that the fantasy of total information collection -- of overcoming the gap between representation and reality -- is both a structural tendency of automated technologies (“if this system is inaccurate, all it needs is more data, so that it can be better trained”) and an impossibility. To treat fantasies as if they have real consequences is not the same thing as saying they are real, true, or accurate. The book’s concern is directed toward these consequences.

Consider, for example, Professor Walker Rettberg’s accurate claim that emotion detection algorithms do not measure actual emotion -- that the data do not capture the supposed referent. The book points out that from an instrumental and operational perspective, the referent drops out. Imagine (as many tech companies have), a system that links “emotion detection” to a marketing campaign: a designated “emotional state” of some kind is associated with the increased likelihood of someone clicking on an ad and purchasing a product. Whether the machine has correctly identified the user’s state (the “referent” of the identified emotion) is immaterial to this correlational system: the “emotional state” becomes a vanishing mediator. What matters is the robustness of the correlation between one set of variables (facial expression, for example) and another (purchasing behavior).

Prof. Walker Rettberg attributes the supposed inability of the book to recognize the fantasy as such (despite its repeated explanations of precisely why each of the fantasies it describes is incoherent and self-contradictory) to its failure to engage with feminist and intersectional theory. This criticism overlooks the fact that much of the book's argument, including the entire final chapter, is influenced by the work of Alenka Zupancic (2017), a theorist who does groundbreaking work at the intersection of feminism, critical theory, and psychoanalytic theory. The chapter's argument draws heavily on Zupancic's 2017 book, What is Sex?, which develops an original, psychoanalytically inflected argument to ground the very claim that Walker Rettberg accuses the book of ignoring: the non-identity of data and the world, sign and referent. As Zupancic puts it, "feminism (as a political movement) puts in question, and breaks precisely this unity of the world, based on massive suppression, subordination, and exclusion" (36).

The conclusion develops an extended interpretation of Zupancic's discussion of the impossibility of the perfected "relation" as a way of highlighting the fantastical biases of automation. That the review misconstrues this argument to the point of getting it backward is perhaps testimony to the fact that Zupancic has not received the attention in the field she deserves.

Professor Walker Rettberg’s review brings together interesting and important literature to make arguments that, in many cases, align with the book's key concerns. I find myself agreeing with most of the points she makes -- with the caveat that they do not apply to the book in the way she imagines. The review does an excellent job of demonstrating her familiarity with an important set of theories, arguments, and academics, but it does so at the expense of misreading and mischaracterizing the book's defining themes.


Andrejevic, M (2019) Automated Media. New York and London: Routledge.

Innis, HA (2008) The Bias of Communication. Toronto: University of Toronto Press.

Rettberg, JW (2020) Book review. Convergence, first published online at:

Zupancic, A (2017) What is Sex? Cambridge: The MIT Press.

Thursday, June 23, 2016

This is what ideology looks like

Lev Manovich had some interesting takeaway points from his recent visit to Facebook Korea that highlight familiar tendencies in contemporary media studies. The scare quotes serve as signposts for where he's headed in his post, as they designate the terms he deems obsolete in the Facebook era: "ideology," "control," "dominant logic," and, of course, "global capitalism" (as in: "There is no 'master plan' or 'global capitalism'"). This is a short step away from the familiar Thatcherite observation about "society." All there are, in the end, are particularities combining in assemblages whose activities are spontaneous, emergent, and unpredictable -- irreducible to the crude terminology of critical theory and free of any discernible structuring logics. Ideology is dead: long live the new (?) ideology of new materialist pluralism.

I suppose there are two ways to take these claims: the more reasonable one (that ideology is complex and multi-faceted but still exists; that abstractions always leave something out but retain a certain utility) or the wholesale ingestion of the Kool-Aid (once upon a time people may have been duped, propaganda existed, and capitalism was a thing, but now everything is so complex and particularized that abstractions no longer have any use at all; everything is up in the air and free -- and because of that wonderfully liberating). There is certainly plenty to be said in support of the first interpretation, but the second one seems to fit better with the conclusion of Manovich's post:

"The future is open and not determined. We are all hacking it together. There is no "master plan," or "global capitalism," or "algorithms that control us" out there. There are only hundreds of millions of people in "developing world" who now have more chances thanks to social media and the web. And there are millions of creative people worldwide adapting platforms to their needs, and using them in hundreds of different ways. To connect, exchange, find support, do things together, to fall in love and to support friends. Facebook and other social media made their lifes more rich, more meaningful, more multi-dimensional. Thank you, Facebook!" 

Wow -- this is a veritable paean to Facebook. Clearly there are interesting things taking place on Facebook, and there are plenty of constructive uses for it, but it seems a bit extreme to portray it as the savior of love, support, and the meaning of life. Not long ago, it seemed to me that the moment for emphasizing a critique of the flip side of the benefits and conveniences of the online commercial world had passed, because unquestioning cyber-utopianism was on the wane -- but it is apparently alive and well.

To paraphrase Adorno:
Just as the ruled have always taken the morality dispensed to them by the rulers more seriously than the rulers themselves, the defrauded new media enthusiasts today cling to the myth of success still more ardently than the successful. They, too, have their aspirations. They insist unwaveringly on the ideology by which they are enslaved. Their pernicious love for the harm done to them outstrips even the cunning of the authorities.

Saturday, May 16, 2015

The Fate of Art

It was very strange to see BoingBoing promoting this hackneyed critique of contemporary art by "artist" and illustrator Robert Florczak for Prager "University." More on the scare quotes in a second. Why strange? Maybe it's the blender effect of Twitter, which constantly recirculates the old as if it's new and the new as if it's already been around the block so often that, by the time you get to it, it's old news. Maybe it's because former WIRED editor Chris Anderson retweeted it with the following observation: "Well argued and brave. Plus fun prank on his grad students." Really? Let's start with the last bit first. The prank that Anderson thought was so fun: giving his students a close-up photo of a painting he claims to be by Jackson Pollock (but that is actually a close-up of his studio smock) and making them explain why it's so great, so that he can then humiliate them by revealing the true source of the image. This raises some interesting questions about his grad students (at Prager University?), who seemed to think that this:
was pretty much indistinguishable from this:
OK, I get it, squiggles are squiggles, but these are supposed to be graduate students in art (history? studio art?) of some kind. Which makes one wonder what kind of university this is. Apparently it's the online creation of conservative talk show host Dennis Prager -- a venue for right-wing, low-budget TED-type talks devoted to topics like "Feminism vs. Truth" and "The War on Boys," and why Christians are the "Most Persecuted Minority." Maybe the inability to tell the difference between these two images helps explain why Florczak, who paints things like this:

seems to think that he's working in the tradition forged by the painters of images like this: 

and this: 

Rather than the tradition forged by the creators of images like this: 

and this:
Florczak's claims seem to have something to do with technique and skill -- things that, for example, both Kenny G. and John Coltrane have mastered, but that don't make them the same type of artist. That this distinction is lost on the likes of Anderson and BoingBoing's Mark Frauenfelder (another former WIRED editor) is an indication of the cultural confidence of the tech world, in which expertise becomes fungible and the perpetual vindication of financial success a kind of all-purpose cultural qualifier.

Wednesday, February 12, 2014

Post-Critical Theory: Desire and New Materialism

What to make of the recurring claim that matter "desires," articulated perhaps most passionately by Karen Barad: "Matter feels, converses, suffers, desires, yearns and remembers"? I suppose the real question here is what one might mean by "desire" in this context (or "converse," for that matter). I suggest that these are metaphorical uses of the terms -- matter (except for that which takes the form of human sociality) does not have recourse to language, even though it may "communicate" in the archaic sense of a physical transfer (heat can be communicated, so too electrical signals -- even quantum states). Without access to language, matter can no more desire, in a psychoanalytic sense, than it can converse. Surely it can be entangled, embedded, or otherwise caught up in some form of relations with other entities and with itself -- indeed it cannot not be.

But that is something altogether different from the dimension opened up by language (as might be demonstrated in negative fashion by, for example, Ian Bogost's dismissal of linguistic forms of production as not being on a par with more properly material ones. For more on this point, see my critique of Alien Phenomenology). This is perhaps where the pendulum swing away from discourse represented by "new materialism" goes a bit too far: in conserving notions like desire while simultaneously setting aside any engagement with the dimension of language (and, consequently, that of the subject).

This setting aside has ramifications for the fate of critique, as suggested by Barad's vociferous dismissal of critical approaches: "I am not interested in critique. In my opinion, critique is over-rated, over-emphasized, and over-utilized...Critique is all too often not a deconstructive practice, that is, a practice of reading for the constitutive exclusions of those ideas we can not do without, but a destructive practice meant to dismiss, to turn aside, to put someone or something down." This is a response that reveals much about the stakes of critique in contemporary academic (primarily literary-theoretic) circles. Critique has become a game of one-upmanship and can have unconstructive rather than deconstructive results. If, once upon a time, the point of critique was to address human suffering, reflexive critique can, apparently, exacerbate it -- at least in certain circles. Someone's (or something's?) feelings might get hurt.

For Bogost, the concern is somewhat different: overly humanistic thinking -- even of the ostensibly critical kind -- can get a tad boring: “Just as eating only oysters becomes gastronomically monotonous, so talking only about human behavior becomes intellectually monotonous.”

It is hard not to read such observations as registering the level of contemporary academic alienation. I'm worried that these are the types of concern ("I'm bored" or "If you critique my argument, then you're putting me down") that come to the fore when you've lost any urgent sense of the point of what you're doing beyond constructing an argument for argument's sake -- what Adorno might call the wholesale aestheticization of theory. It seems absurd to even say this in the current conjuncture, but what if social theory were, on some level, actually about working toward making the world a better place for humans? OK, that might bore some people who've eaten too many oysters, but presumably they have the luxury not to worry about where the next oyster is coming from, and perhaps the lack of imagination to consider the fate of those who do not.

It is this alienation that, I think, characterizes the critical inertness (or refusal) of what passes for "new" materialism these days. I put "new" in scare quotes, since there is a strong affinity between this version of materialism and what Zizek describes as "Althusser’s materialist nominalism of exceptions (or 'clinamina'): what actually exists are only exceptions, they are all the reality there is. (This is the motif of historicist nominalism endlessly repeated in cultural studies...) However, what nominalism does not see is the Real of a certain impossibility or antagonism which is the virtual cause generating multiple realities." This structuring or generative antagonism -- and for Zizek it is, of course, the constitutive rift of capitalism -- is what falls by the wayside in such materialist nominalisms. One symptom of this loss is the sidestep away from the register of language and its deadlocks -- and thus, of course, from an engagement with the question of desire. Matter may desire -- in some reconfigured, alinguistic conception of the notion -- but desire does not matter.

Monday, August 26, 2013

Drone Theory and Goldfish Crap

Generally I like the idea of dividing academic labor up so I can read the theory I like and apply it to things I don’t (“symptoms” of a damaged world). But these days, that division is breaking down, and some of the hip “new” theories are creeping disconcertingly into the symptomatic realm. In particular, some recent work on “object oriented ontology” and new materialism leaves me trying to figure out why those whose critical commitments I share might find them interesting or useful. 

The problem is not so much how to work out the theory, but to make sense of its uptake. The more I engage with this work – and, I’m not sure how much more time I really want to spend on it – the more it looks to me like a close relative of the enthusiasm over data mining and the forms of “knowledge” it generates. The logics align with one another – post-narratival, post-subjective, post-human – even though the sensibilities are ostensibly opposed. The following is a bit of a rant that emerged as a by-product of an offer to collaborate on a review of Ian Bogost’s Alien Phenomenology, a symptomatic book if ever there was one. The invitation meant having to read the book, which I found largely a frustrating endeavor, as evidenced by the following observations (all citations are from the book, which I read on Kindle without pagination):

Ian Bogost’s paean to the pleasures of the great outdoors – the “grassy meadows of the material world” – casts poor old Immanuel Kant in the role of the stereotypical video gamer tethered to the tube. It is hard not to hear in Bogost’s call to flee the “rot of Kant” seeping from the “dank halls of the mind’s prison” the all-too-familiar admonition to video game geeks to “get out of the house.” Perhaps this is a call Bogost has heard so frequently that he has internalized it sufficiently to wield it against others: the call of the great outdoors is a recurring refrain in his celebration of the mysteries of the object world – primarily and paradoxically incarnated for him in the form of high-tech electronics: digital cameras, computer games, and cathode ray tubes. “Let’s go outside and dig in the dirt,” he enjoins us, but only metaphorically, really.

In a sense, the entire book is a rejoinder to the call to get out of the house: “I’m already outside -- that’s where I’ve been all along.” Bogost’s interpretation of what, following Meillassoux, he calls “correlationism” (which he equates with seeing things through the lens of how they impact humans) pits him firmly against any attempt at developing an analysis that “still serves the interest of human politics” (a charge he levels at Latour for not being anti-correlationalist enough). But this opposition runs headlong into the repeated theme of his urgent (though largely unexplained) claim that “to proceed as a philosopher today demands the rejection of correlationalism”: we need to get outside and romp in the “grassy meadows” so we can collect the “iridescent shells” of realism and so on. If we chose to do so because it turned out to be good for us, of course, we would have succumbed to the trap of correlationism. Even animal studies is too anthropocentric for Bogost’s tastes because “we find a focus on creatures from the vantage point of human intersubjectivity, rather than from the weird, murky, mists of the really real” – what we might otherwise describe as “the view from nowhere.” Much the same goes for Michael Pollan’s attempt at a “plant’s eye view of the world” – for “he too seeks to valorize the apple or the potato only to mobilize them in critiques of the human practices of horticulture, nutrition, and industrialism.”

We get the message: any perspective that is in any way articulated to a human interest is ruled out in advance. There is something disconcertingly incoherent about the Bogost two-step: step one is the unquestioned assumption that we might “wish to understand a microcomputer or a mountain range or a radio astronomy observatory or a thermonuclear weapon or a capsaicinoid [he apparently loves peppers] on its own terms.” Step two rules out the appeal to a subject who might wish to do something like this. He writes off science studies, for example, for retaining “some human agent at the center of the analysis.” OK, we get the point: Bogost wants to think about really thingy things and not those other things called human scientists or engineers. But it’s pretty clear that what’s driving the whole show is the desire on the part of humans to experience things as things (other than human things) – even if this desire is anthropomorphically projected upon (non-human) things.

And so we are left with the thorny question of why such a perspective might be interesting. The philosopher Theodor Adorno neatly described the dialectic of autonomy: a fantasy of independence combined with the utterly irrational form this had taken. For Adorno, the autonomous artwork rehearsed capitalism’s crazy (aestheticized) embrace of production for production’s sake. What is left but to read Bogost’s injunction along the same lines: theory for the sake of everything and thus for nothing. It is a pure position, perhaps too pure, insofar as it does little to interrogate the goal of purity itself. The result is that the argument’s normative framing takes the form of recurring and somewhat mysterious demands on the reader: “the heroin spoon demands as much intrigue as the institutional dysfunctions that intersect it.” Why? To whom? These are questions that go unanswered – or perhaps such demands are only available to those who hear them, which poses a challenge for any attempt to impose them on the rest of us. 

In the book’s conclusion, Bogost briefly nods towards Levi Bryant’s claim that Object Oriented Ontology envisions “a new sort of humanism” in which “humans will be liberated from the crushing correlational system.” But after the wholesale dismissal of any attempt to frame his approach in terms that serve human interests, it’s difficult to buy into this meta-correlational gesture: the claim that we should surpass the attempt to relate knowledge to human interests, because it might be in our interest to do so (!?). Bogost slips this in so close to the final downhill run toward the blissful prospect of his argument’s end that the reader’s tendency is to just coast through it rather than give it the double-take it deserves. He follows with an explanation that sounds a bit more like the one that characterizes his own affinity for the extra-human – the “bored consumer” rationale: “Just as eating only oysters becomes gastronomically monotonous, so talking only about human behavior becomes intellectually monotonous.” This is not a particularly rare claim in some circles of the humanities, although one wonders just how widely distributed is the subject position that would take it as the most compelling reason to embrace a shiny new, if somewhat nonsensical, perspective: a kind of intellectual ennui in search of the next big thing. Such a stance is surely associated with the somewhat sheltered subject position of gastronomic satiety, or surfeit. There is a certain luxury or self-anesthesia associated with the charge that thinking about humans and their problems is just a tad dreary. (“Why is it that one’s disregard for laundry, blogs, or elliptical trainers entails only metaphorical negligence,” Bogost asks, “while one’s neglect of cats, vagrants, or herb gardens is allowed the full burden of general disregard?”)

It is telling that Bogost’s ostensibly random lists of beings in the object world so often emphasize interesting sounding objects and words, both technical and natural. He lures the reader with bright, shiny, and mysteriously magical objects: “the obsidian fragment, the gypsum crystal, and the propane flame” (these are a few of his favorite things: musket buckshot,  gypsum, and space shuttles, redwoods, lichen and salamanders, Erlenmeyer flasks, rubber tired Metro rolling stock, the unicorn and the combine harvester, the color red and methyl alcohol, mountain summits and gypsum beds, chile roasters and buckshot, microprocessors, Harry Potter, keynote speeches, single-malt scotch, Land Rovers, lychee fruit, love affairs, asphalt sealcoat, and appletinis). We don’t hear much about toxic waste or shit stains (surely, the shit stain, too, demands to be understood on its own terms). The object world is by definition an intricately rich and edifying one compared to that nasty, dank world of our own mind – an object still, to be sure -- but not so salubrious or interesting  as the grassy meadows, iridescent shores and scoria cones. If “everything exists equally” for Bogost, some things clearly exist more equally than others.

Conspicuously absent from Bogost’s account is any explanation as to why being a philosopher today demands the rejection of what he terms correlationism. From what position is this demand made? Surely, given his round denunciation of “correlationalist” tendencies, it cannot be made on the basis of anything having to do with us humans (despite the supposed benefits of escaping the dank prison of our minds). Such a perspective is ruled out in advance by the hubris-slaying, egotism-deflating thrust of anti-correlationalism. Is the demand, then, made from the perspective of truth, based on the claim that this way of thinking accurately reflects the way things are for everything, everywhere, forever, and that we must therefore adjust our own way of thinking to match the world (damn you, correlationism! Back again!)? Well, why? What claim does reality have on us in Bogost’s universe? Perhaps the claim is less a normative one (we should adopt the stance of anti-correlationalism) than a descriptive one: inevitably we will come to think this way thanks to the predictable and inexorable flow of certain types of entities called thoughts (and the claim exerted upon them by other beings). Such a perspective would embrace not a “new” materialism but the very oldest. The use of the word “must” would imply not an injunction but an inevitability: we must embrace object oriented ontology the way a stone in the earth’s gravitational field must, absent any obstruction, fall to the ground. Such a formulation would certainly obviate the need for any kind of manifesto (“a specter is haunting the object world: the specter of gravity!”).

After ditching this book several times for failing to pass the basic coherence-of-thought test, I came to the realization that it is modeled much like the things it describes: unable to truly interact with other beings (like me), it simply recedes infinitely into itself. How else to understand statements like, “The construction and behavior of a computer system might interest engineers who wish to optimize or improve it, but rarely for the sake of understanding the machine itself, as if it were a buttercup or a soufflé.” He seems to be making an aesthetic point (along the lines of Kant, that dankest of thinkers): that his proposed way of understanding a buttercup is different from figuring out how a computer works because it is an apparently disinterested understanding – and emphatically not one that reflects what Kant described (realizing a certain logical necessity) as a disinterested interest. Once again, any sign of interest on our part runs the risk of channeling us back into a retrograde correlationism.

It is not surprising that one of the paradigmatic examples of the wonders of “the list” world invoked by Bogost is that of Roland Barthes’s likes and dislikes, taken from his autobiography. Ontography, Bogost style, takes the form of the database, and what is more characteristic of the database in its current market-driven configuration than the preference list? Facebookers with their endless “likes” rehearse this list-building activity, as do databases of purchases, search terms and so on. IBM tells us that various digital sensors of all kinds gather the equivalent of 4,000 Libraries of Congress worth of data a day. But these are not books, poems, maps, plays, biographies, etc. Rather the data comes in the form of list-like collections in which the human and nonhuman mingle with the promiscuous abandon celebrated by Bogost: credit card purchases, airline seating preferences, underground tremors, EZ Pass records, atmospheric pressure, geo-locational data, levels of particulate matter in the air, stock market fluctuations, and so on. Such data collection rehearses the “virtue” espoused by Bogost: “the abandonment of anthropocentric narrative coherence in favor of worldly detail.” And, of course, experiencing this data flow becomes, necessarily, the job of various kinds of high-tech objects. Perhaps this is the appeal of Bogost’s theory in the digital era: the celebration of the very forms of post-human experience that characterize automated data collection (and the simultaneous de-valuation of narrowly human experiential and narrative alternatives).

Suggestive in this regard is Bogost’s explicit rejection of the pursuit of knowledge as “metaphysically undesirable” because it violates the adherence to “A fundamental separation between objects…the irreconcilable separation between all objects, chasms we have no desire or hope of bridging – not by way of philosophy, not through theism, not thanks to science.” With a tweak to include information about things as well as humans, this formulation readily recalls Chris Anderson’s manifesto on “the end of theory” in the big database era: “Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people [and things] do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity.” 

Of course, “we” are not really doing the tracking here but are offloading it onto machines that do the work for us, offering up their experience of the endless litanies of information captured by a proliferating array of sensors. We might describe this reliance on the prosthetic extension of sensing -- combined with its offloading onto the sensor array -- as a process of dronification: we oversee seemingly endless databases of information collected by remote sensing devices about everything from the online activity of consumers, to tweets, volcanic activity, carbon monoxide levels, ocean currents, subway locations, factory emissions, sales records, and on and on. The experience of our sensors can be summed up in terms of Bogost’s broadened definition: “The experience of things can be characterized only by tracing the exhaust of their effects on the surrounding world.” Such a formulation has been explicitly embraced by the data mining world in the term “data exhaust” – which does the added work of treating data as something cast off, an almost passive byproduct (but something that can be captured and recycled by those with the resources). 

Bogost goes on to suggest that the tracings of thing-exhaust can serve as the basis for speculation, “about the coupling between that black noise and the experiences internal to an object.” This is the part that is lopped off by Anderson’s formulation, which in its fascination with instrumental efficacy has little interest in such speculations. Rather the interest in capturing all available data embraces what Bogost describes as “a general inscriptive strategy, one that uncovers the repleteness of units and their interobjectivity.” He calls this process one of ontography: the writing of being, which “involves the revelation of object relationships without necessarily offering clarifying description of any kind.” Isn’t this the logic of big data mining, which unearths patterns of relationship without explanation?  

Clearly, Bogost would differentiate his goal of pure philosophical reflection from those of data mining, insofar as the latter (as outlined by Anderson) are crassly correlationist since the generated patterns are only of interest to the extent that they serve human interests (epidemiology, earthquake modeling, threat detection, marketing, etc.). And yet, the form of “knowledge” on offer, reduced to an object-agnostic tracing of the impact of objects “on the surrounding ether” models the “knowledge” generated by the database. Indeed, if we could imagine a data mining operation devoted to simply generating patterns independent of their utility to humans, we would come quite close to the process of ontography described by Bogost. He calls it alien experience, but given the ongoing development of new forms of object sensors (which preoccupy Bogost in his discussion of digital photography), we might call it simply drone experience.    

One of the more baffling – and perhaps telling – moments in the book is Bogost’s diatribe against academic writing. In tone, his critique takes the familiar form of charges against pedantry, obscure writing, and, predictably, a cloistered reluctance to pry one’s head out of the books and “visit the great outdoors” (that again!). Academics, he tells us, are relentlessly crappy writers who, even in public, insist on, “reading esoteric and inscrutable prose aloud before an audience struggling to follow, heads in hands.” He implicitly embraces the ready rejoinder that such critiques rehearse a familiar and fatigued set of clichés with the observation that, “Clichés also bear truth, after all.” Fair enough, but not ones that are interesting enough to warrant a multi-page chapter introduction.

Things start to get a bit dicier when he proposes his alternative: we need to start relating to the world not only through language, but through the things that we make, through our practice in the world (as if language, writing, etc. are not really real practices): “If a physician is someone who practices medicine, perhaps a metaphysician ought to be someone who practices ontology.” Academics, he suggests in a distant echo of Thesis Eleven, spend too much time writing, and not enough time doing. He notes in passing that it seems “ironic” to even suggest such a thing in a book (rather than simply doing it, perhaps). We might take this as a call for diversity – let’s not limit ourselves to just one mode of object production (books); rather let’s make other kinds of objects (computer programs, motorcycles, maybe even some sturdy walnut shelves for all those books).

But the argument does not stop at the call for diversity – it actively disparages writing (as a form of doing that doesn’t quite count as one) by comparison with other forms of doing. At this point a somewhat confounding binarism slips into the argument. Why might it be “ironic” to advocate the making of things in a book? Isn’t making a book just as much a form of doing as other forms of doing? For Bogost, a book (or at least its ideational content – as opposed to, say, its binding) turns out not really to be a thing in the way that other things (tables, motorcycles, computer programs, unicorns?) are. Why not? According to Bogost, “carpentry” (by which he apparently means making something out of anything other than words), “might offer a more rigorous kind of philosophical creativity, precisely because it rejects the correlationist agenda by definition, refusing to address only the human reader’s ability to pass eyeballs over words and intellect over notions they contain.”

Unlike really thingy things, moreover, “philosophical works generally do not perpetrate their philosophical positions through their forms as books” (that is, their more material attributes: page texture, shape, binding glue, etc. For a book to really perpetrate its position this way, you’d have to be literally struck by it). By contrast the maker of material things (like software?), “must contend with the material resistance of his or her chosen form, making the object itself become the philosophy.” We might describe this set of oppositions as “the separate but equal” clause of Bogost’s book. He puts it this way, “all things equally exist, yet they do not exist equally,” by which he means, from what I can gather, that although things do not exist in precisely the same way, one group cannot be privileged over another – or, more specifically, human beings ought not to be privileged over other entities from a philosophical perspective, and vice versa.

And yet, why are those beings called books less “philosophical” in their construction than objects (like bookshelves and computer applications) crafted by philosophical “carpenters”? What makes the “immaterial” object less philosophical than the material? It is hard to extract any answer from Bogost’s argument other than that ideas are less philosophical than things precisely because their significance emerges through their relationship to humans (whereas material things relate not just to humans but to other things as well). In other words, humans are less equal than other things from a philosophical perspective, because their form of relating (as opposed to that between, say a stone and a stream) invokes a particular relation in which the mental capacity of humans is involved.

Perhaps the thrust of the argument here is corrective: we spend too much time thinking of beings for humans and not enough considering the ways in which beings of all kinds exist for one another. But the substance outstrips the tone of the argument, suggesting that as soon as humans enter the equation in their ideational (as opposed to material) form of relating, a relationship becomes necessarily less philosophical. Software (Bogost’s chosen form of “carpentry”) escapes the fate of writing because it is more “material” – that is, there is apparently more resistance in the symbolic substrate of machine language than that of human language. As in the case of, say, truing a bike wheel, or building a bridge, it is harder to make things work at a basic level when writing code than when writing theory. And yet, Bogost’s own book provides a compelling example of how, even in the realm of ideas (as in that of more material things), “simply getting something to work at the most basic level is nearly impossible.” It turns out that arguments and words can be just as recalcitrant as more material things.
If, in Bogost's account, human cognitive experience gets devalued vis-à-vis that associated with the objects of “carpentry,” philosophers’ products come in for a special degree of scorn: “For too long, philosophers have spun waste like a goldfish’s sphincter, rather than spinning yarn like a charka.”

Crap, it turns out, is less equal than yarn in the court of flat ontology, although this valuation reeks of an allegedly surpassed anthropocentrism: by what measure other than some presumably surpassed correlationism is yarn more desirable as a product than goldfish waste? What does that comparison even mean from the viewpoint of flat ontology – is there a ready-made imperative that differentiates spinning yarn from spouting crap? If Bogost imagines he’s doing the latter when he writes books, it would have helped to warn the reader at the outset.