Friday, December 18, 2020

"Springs of Action"




I've long been a fan of the thoughtful work of Louise Amoore, whose interests touch on many of the themes at the heart of my own research, though from quite a different direction. So, I approached her recent, influential book Cloud Ethics with great interest. However, there is an inversion in one of the book's linchpin concepts that masks a symptomatic omission characteristic of the book's theoretical alignment with recent strands of so-called "new" materialism. It is hard to miss this concept, because it is one of the book's key conceptual refrains: the notion of "springs of action" invoked by the philosopher Richard Braithwaite in a well-documented exchange with Alan Turing on machine learning aired on the BBC. For Braithwaite, the concept seems relatively straightforward: for some entity to think about the world, that is, to focus on and isolate aspects of the environment in order to understand or make sense of them, there must be an underlying desire or motivation. The question of desire, of course, is crucial to any discussion of machine cognition, AI, or machine learning. A computer program may learn how to beat a chess grandmaster, but does it even want to play chess in the first place? Does it want to win? It may compose music and recognize cats, but does it care about music or cats? Does it have any meaningful concept of music or cats and why they might be of interest? Does it care, one way or another, about the variation in the different tasks that might be set it? Does it matter to the machine whether it is learning how to identify cats or anomalies in budget reports? I take this set of questions to cluster around Braithwaite's point about desire (which Amoore quotes, in part, as a chapter epigraph):

A machine can easily be constructed with a feed-back device so that the programming of the machine is controlled by the relation of its output to some feature in its external environment—so that the working of the machine in relation to the environment is self-corrective. But this requires that it should be some particular feature of the environment to which the machine has to adjust itself. The peculiarity of men and animals is that they have the power of adjusting themselves to almost all the features. The feature to which adjustment is made on a particular occasion is the one the man is attending to and he attends to what he is interested in. His interests are determined, by and large, by his appetites, desires, drives, instincts—all the things that together make up his ‘springs of action’. If we want to construct a machine which will vary its attention to things in its environment so that it will sometimes adjust itself to one and sometimes to another, it would seem to be necessary to equip the machine with something corresponding to a set of appetites.
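
Braithwaite's distinction lends itself to a schematic rendering. The toy sketch below is my own illustration -- it is not drawn from Braithwaite's broadcast or from Amoore's text -- and it simply contrasts a feedback device locked onto a single designated feature with one whose attention is selected by a set of "appetites." The point is where the appetites sit: upstream of any adjustment, as the thing that determines what gets attended to at all.

```python
# Toy illustration (my own, not from Braithwaite or Amoore): a self-corrective
# "machine" that adjusts itself to one fixed feature of its environment,
# versus one whose attention is selected by a set of "appetites."

def fixed_feedback(state, environment, feature="temperature", rate=0.1):
    """Classic feedback loop: adjust toward one designated feature."""
    error = environment[feature] - state[feature]
    state[feature] += rate * error
    return state

def appetitive_feedback(state, environment, appetites, rate=0.1):
    """Braithwaite's point: which feature gets attended to is determined by
    the machine's 'springs of action' -- here, a dict of appetite weights."""
    attended = max(appetites, key=appetites.get)  # attention follows interest
    error = environment[attended] - state[attended]
    state[attended] += rate * error
    return state, attended

if __name__ == "__main__":
    env = {"temperature": 20.0, "light": 0.8}
    state = {"temperature": 15.0, "light": 0.2}
    appetites = {"temperature": 0.3, "light": 0.9}  # the "desire" precedes the act
    state, attended = appetitive_feedback(state, env, appetites)
    print(f"attended to {attended}; new state: {state}")
```

A weighting scheme is obviously not desire, and nothing in the sketch pretends otherwise; it only makes visible where, in Braithwaite's account, the motivating term has to sit -- before the adjustment, not after it.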

The somewhat surprising aspect of Amoore's uptake of the term "springs of action" is that she transforms it from a motivating force into an outcome: the moment at which an action takes place. So, whereas Braithwaite sees appetite/desire as setting the whole process of learning in motion, Amoore takes the term "spring of action" to refer to the precipitation of desire into an action of some sort -- it manifests only when something happens. For example, in her description of how surgeons work together with a surgical robot, she frames the "spring of action" as the moment when a particular action results from the human/machine assemblage:

The spring to action of surgical machine learning is not an action that can be definitively located in the body of human or machine but is lodged within a more adaptive form of collaborative cognitive learning. Intimately bound together by machine learning algorithms acting on a cloud database of medical [...], the we of surgeon and robot restlessly seeks an optimal spring of action — the optimal incision, the optimal target or tumor or diseased organ, the optimal trajectory of movement.

The shift from "spring of action" to "spring to action" is perhaps significant in this formulation. She is interested in the moment when learning occurs: when the human and machine conspire to make an "optimal" move: an incision or some other (optimal, again) trajectory of movement. The "spring of action" is the result: something happens; a cut is made. Of course, what gets glossed over in this moving description of human-machine collaboration in the name of optimization is what motivates the machine (or the human, for that matter). It turns out that the "spring of action" as framed by Amoore requires a prior desire -- whatever it is that makes the human-machine assemblage "seek" in the first place. This is Braithwaite's point -- desire gets the whole assemblage moving. It is perhaps telling, again, that in this particular formulation the "optimal" result is a cut -- we might describe it, drawing on Karen Barad's work, as an "agential cut." What looks like a failure to distinguish between cause and effect, motive and outcome, desire as motive force and desire as a product of the assemblage, is characteristic of the fate of causality in recent versions of "new" materialism -- and its related, Deleuze-inflected ontology of desire. In such formulations, causality is emergent -- there is no meaningful distinction between cause and effect, which means the version of desire invoked by Braithwaite is elided. The fact that a cut occurs retroactively constitutes the fact of the "seeking" -- indeed, this is perhaps the only meaningful way, in such a framework, that we might approach the notion of desire. It is hard to imagine that Braithwaite would endorse this reconfiguration of his "springs" of action, which is what makes its repeated, inverted invocation come across as so jarring in Amoore's account -- not least because she never acknowledges the inversion, taking it as read. Perhaps the assumption is that whenever we talk about desire, we are only ever talking about it as a post-subjective, post-human formulation: something that is co-extensive with its effects and manifestations; the mark of desire is the optimal cut.


Sunday, July 5, 2020




Pessimism of the intellect...

A couple of thoughts in response to Alex Burns's meditations on a somewhat anguished Tweet I launched on a Friday afternoon. In most cases, the various responses to the Tweet took it in the spirit in which it was offered: as concern about the seemingly inevitable colonization of higher education by the tech industry, exacerbated by the restrictions ushered in by the current pandemic.

Some read it, a bit more hostilely, as a selfish focus on cashing out before succumbing to a system that is antithetical to my academic values and commitments. 

Such is the destiny of Tweets. 

For the record, I intend to do what I can to resist the commercial platforming of higher education, in keeping with an academic career that has been devoted to identifying and critiquing the pathologies of digital capitalism. That does not mean I'm particularly optimistic with respect to the outcome. There is too much administrative support for such a move -- and, as Burns's post indicates, significant buy-in among academics, at least in certain contexts. At the same time, I've had the good fortune to work and be trained in academic institutions that will likely be among the holdouts, and for that I applaud them and will continue to support them however I can. 

I don't know Alex, though I take him to be a person of good will, and I suspect the future belongs to him and other like-minded people. Maybe that's good news for them. I don't think I share their particular vision for the social role of higher education, and I worry about the consequences of such a vision for the character of our social world.

I am just going by the one post -- so I am likely missing some very important context -- but there are some moments in the post that prompted this response. The first is a conspicuous absence in the definitions of education on offer. The choices Burns provides are higher education as contributing to "knowledge creation," serving as a form of "elite reproduction," or, finally, one more version of capitalist alchemy: a way of turning student fees into land grabs and retirement savings (a dig at the original Tweet).

None of these really speak to the core mission of the University, as I understand it: education. Missing, for me, in this list, is the process itself: fostering a community and culture of thought (for both researchers and students) informed by the achievements of humankind in ways that contribute to a critical and constructive engagement with the world. 

I realize the definition of "achievements" is a contested one, and the field for enabling and recognizing these has long been warped -- but this contestation and an ongoing reckoning with the forms of power that shape knowledge production seem part of the culture of thought promoted by such an education. 

I imagine the reference to elitism in Burns's post is meant to encompass this view of the role of education. The charge of elitism is often deflected toward the content of particular forms of thought and the knowledge with which these are associated, when perhaps the actual target is the conditions of its production and reproduction. To restrict the forms of understanding and knowledge to which I'm referring to a select, privileged group (through, for example, making a liberal arts degree prohibitively expensive) is elitist. The way to challenge this form of elitism is not to do away with such an education altogether, but to make it available to all who are interested and, in so doing, to transform and enrich it (to reconfigure the content by addressing the deformations associated with the elite conditions of access that shaped it).

By way of response, I would press against the ready equation of technology with the commercial tech sector. I realize that the latter is at the forefront of technological development, but I think there is still a meaningful difference between imagining constructive uses for new technological affordances and merging higher ed with the commercial tech sector.

What worries me about the spectre of a tech-sector takeover is that the result may well be regressively elitist: reserving the type of education I associate with the University to a few pricey holdouts. Perhaps this is simply a function of my being woefully out of touch with my time. However, I would resist the accusation of nostalgia: the version of higher education to which I remain wedded is one that has only ever appeared as a potentiality. The commercial, corporate capture of the University would most likely extinguish it altogether. 

It's hard for me to get enthusiastic about the platform commercialization of research metrics. Burns refers to the prospect of commercial platforms showing us the "20% of your research outputs that are having the 80% readership impacts." I suppose this is meant to shape our research the way audience ratings might shape the development of a TV show, or the way market research might contribute to product development. Who wants to spend their time on research that doesn't have "impact"?

Nonetheless, I don't think we should take for granted the currently fashionable term "impact" and its relation to the various proxy measures that represent it. In the highly metricised research environment in which we operate, it means how many times an article gets cited, shared, or mentioned (not necessarily read). It is a quantitative measure that doesn't tell me whether or how the piece changed how someone might see or think about or act in the world. It doesn't tell me how this research might influence the classroom experience of my students and their understanding of the world.

It is, for all practical purposes, a metric that, through a series of transformations, can be monetized (citations = impact = rankings = student fees). Platform capitalism, natch. That doesn't mean important qualities are necessarily excluded from "impact," or that citation numbers bear no relation to what they're meant to serve as a proxy for. We all want our work to enter into the conversation.

However, it does underwrite a tendency for the proxy to displace the real goal, and we know how that plays out. The notion -- imported from marketing -- that the proxy has some kind of predictive value is, I suspect, a deeply problematic one. I've got a couple of friends who, very early on in the era of digital media studies, started working on copyright issues. At the time, very few in the field were working on the topic, so who else would cite them, anyway? 

It turned out they saw something others did not, and they built successful careers on the foundations of this work. By contrast, platform algorithms give us the kind of endless repetition that has come to characterize so much of our data-driven culture. I doubt they're much good for guiding original, ground-breaking research. They can tell us, after the fact, that the research got attention -- which is fine -- but that's about it.

The other provocative moment in the post, for me, is the reference to the increasing cost and allegedly diminishing productivity of academic labor. I'm not sure what the reference point here is, but the stats I've seen show some measures of productivity on the rise. Research outputs have been increasing. Although this varies across fields, student-faculty ratios have also been increasing. I suppose this speaks to productivity in some way, but I don't greet either of these as positive developments -- they are driven by the same economic logics that have given us another trend: the increase in administrators per student (perhaps this speaks to the issue of diminished productivity?).

None of this should be read as a blanket critique of technological innovation. My target is the commercialization of higher education. I have yet to see evidence that commercial platforms are equipped to support the type of intellectual engagement and culture that is crucial to higher education as I understand and experience it. There is certainly a version of higher education that tech companies will be able to implement, and they will likely do it much more efficiently and profitably than universities can. However, I worry it will be unable to provide the type of thought and understanding we will need to address the pathologies of the current moment -- many of which are associated with those companies most likely to take a lead in "disrupting" higher education. I'm wary of the recurring tech industry promise that only the spear that inflicted the wound can heal it.

Tuesday, February 25, 2020

A Response to Jill Walker Rettberg




Note: Upon first receiving a link to Jill Walker Rettberg's review of Automated Media in the journal Convergence from its author, I asked her if she would support my request to the journal to publish a response. Professor Walker Rettberg graciously agreed to this, so I approached the editors with this request. They replied that current editorial policy does not provide them with the latitude to publish my response, but agreed to promote it via social media.

Mark Andrejevic
Monash University


For the record, I don’t believe journals or reviewers have any obligation to promote new books in the field or to be positive about them in the interest of collegiality, solidarity, or politics. I do think, however, that reviewers have the fundamental obligation to be roughly accurate in their description of the book under review. It is this belief that prompted me to ask the editors of Convergence -- and the author of its recently released review of Automated Media (2019) -- to support the publication of a response to the review.

This decision was bolstered by the fact that I first learned of Jill Walker Rettberg’s (2020) review from several tweets she directed toward me on the occasion of the review’s online publication. This social media flurry felt like a direct invitation to respond in some way. I replied that I thought her review misconstrued the book’s main arguments, but that I didn't feel Twitter was suited to productive academic discussion, especially when there are substantive misunderstandings to be sorted out. My goal in this response is not to take issue with Professor Walker Rettberg’s core arguments, but to suggest that they miss their target. The strange thing to me about reading the review is how much I agree with the arguments she arrays against what she takes to be my own -- precisely because she gets the book's central claims exactly the wrong way around. There may be the makings of a debate here, but it cannot get off the ground until the mischaracterizations in the review are addressed.

The main ones center upon what the book describes as “the bias of automation” and also upon the notion that automated data collection might live up to the promise of “total coverage” or what the book describes as “framelessness” (that is, the fantasy of digitizing information about the world “in its entirety”).

The book starts off by differentiating between two meanings of the term “bias”: the first is a familiar one that refers to the fact that automated processes can systematically privilege or disadvantage particular groups. The examples here are myriad, ongoing, and alarming, warranting the robust critical attention they receive. The second meaning of bias invoked in the book is less common and draws on the work of Harold Innis -- discussed in some detail -- to suggest that the very choice to use a specific media technology (in a particular social context) can privilege particular logics and tendencies. The book notes that critical work on the first version of bias is well developed and crucially important, and argues for the importance of considering the consequences of the choice to use automated systems in the first place (within a particular context). The book's goal is to examine the logical tendencies that flow from this choice, describing them as “biases” in the way that we might describe, for example, market transactions as “biased” toward the assessment of value in ways that can be quantified. Such transactions may also be biased in the first sense as when, for example, they result in discriminatory outcomes for particular groups. I take these two levels of bias to be distinct, but they can certainly overlap -- as in practice they so often do.

The review overlooks this distinction, proceeding as if all mentions of bias refer to the first version, and faults the book for not engaging in more depth with the relevant literature on this. I strongly agree with Professor Walker Rettberg regarding the importance of this work, and I do think there is room for further development of the connection between these two forms of bias. There is also an interesting discussion to be had about what happens to the first sense of “bias” when we concede its irreducibility. However, neither of these discussions would justify the review’s wholesale assimilation of one meaning of bias to the other. Perhaps she thinks the distinction is untenable -- an interesting claim -- but this is not the argument advanced in the review.

The most confounding misreading, however, is the attempt to attribute to the book the very perspective it critiques: that automation can somehow escape the constraints of finitude and representation. Professor Walker Rettberg accuses the book of not recognizing that “the fantasy of total knowledge, of there being no gap between data and reality, is just that, a fantasy” (2). However, this is, almost verbatim, the core repeated argument of the book.

The chapter on “framelessness,” for example, refers to the ambition of digitally capturing and reproducing the world in its entirety as an impossible fantasy (see, for example, p. 114: "The fantasy of automation is that in the breadth of its reach, in the ambition of its scope, it can approach the post-subjective perspective of the view from everywhere -- or nowhere: the purely objective representation that leaves nothing out"; p. 122: "Conspiracy theory defaults to a choice between an impossible standard of completeness (framelessness) and...gut instinct..."; p. 126: "There is a seemingly 'democratic' cast to the fantasy of framelessness").

To drive the point home, the book summarizes the examples it critiques as representing, “tendencies and trajectories – many of which, I want to emphasize, head in logically impossible directions such as, for example, the attempt to dispense with a frame entirely, to envision the possibility of a purely objective decision-making system, and to capture all information about everything, all the time” (160). It is no accident that the book uses the language of fantasy to describe the logics of pre-emption and framelessness: these are only conceivable from an impossible, infinite perspective -- as the book repeatedly argues.

Something similar takes place in the review with respect to Professor Walker Rettberg’s attribution to me of, “the idea that there is no gap between data and reality.” The book takes this very gap as one of its defining themes, as illustrated from the opening pages and in a number of passages, including the following: “Critiquing the post-political bias of automation means engaging with the possibility that the world does not work this way: that it cannot be captured and measured ‘all the way down,’ because there are irreducible uncertainties and gaps in reality” (101).

The book argues repeatedly that the fantasy of total information collection -- of overcoming the gap between representation and reality -- is both a structural tendency of automated technologies (“if this system is inaccurate, all it needs is more data, so that it can be better trained”) and an impossibility. To treat fantasies as if they have real consequences is not the same thing as saying they are real, true, or accurate. The book’s concern is directed toward these consequences.

Consider, for example, Professor Walker Rettberg’s accurate claim that emotion detection algorithms do not measure actual emotion -- that the data do not capture the supposed referent. The book points out that, from an instrumental and operational perspective, the referent drops out. Imagine (as many tech companies have) a system that links “emotion detection” to a marketing campaign: a designated “emotional state” of some kind is associated with an increased likelihood of someone clicking on an ad and purchasing a product. Whether the machine has correctly identified the user’s state (the “referent” of the identified emotion) is immaterial to this correlational system: the “emotional state” becomes a vanishing mediator. What matters is the robustness of the correlation between one set of variables (facial expression, for example) and another (purchasing behavior).
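
A minimal sketch may make the point concrete. The pipeline below is a hypothetical illustration of my own (the function names, features, and weights are invented, not taken from the book or from any actual vendor system): an "emotion" label is computed along the way, but nothing downstream depends on it, while the purchase prediction runs directly from expression features to behavior.

```python
# Hypothetical sketch (names and numbers invented for illustration): an
# "emotion detection" pipeline in which the named emotional state is a
# vanishing mediator. Operationally, the system only needs a correlation
# between facial-expression features and purchasing behavior; whether the
# label correctly describes the user's actual state never enters the loop.

import math
import random

def extract_features(face_image_id):
    # Stand-in for a real feature extractor (e.g., facial landmark distances).
    rng = random.Random(face_image_id)
    return [rng.random() for _ in range(4)]

def predict_purchase(features, weights):
    # Direct mapping from expression features to purchase likelihood.
    score = sum(f * w for f, w in zip(features, weights))
    return 1 / (1 + math.exp(-score))  # logistic squashing

def label_emotion(features):
    # The "emotion" is just a name hung on a region of the feature space;
    # nothing downstream depends on its accuracy as a description of the user.
    return "happy" if features[0] > 0.5 else "neutral"

if __name__ == "__main__":
    weights = [0.9, -0.2, 0.4, 0.1]  # tuned against clicks, not against feelings
    feats = extract_features("user_042.png")
    print(label_emotion(feats), round(predict_purchase(feats, weights), 3))
```

The label could be renamed or dropped entirely without changing the system's operational output, which is what it means for the referent to become a vanishing mediator.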

Professor Walker Rettberg attributes the book's supposed inability to recognize the fantasy as such (despite repeated explanations of precisely why each of the fantasies it describes is incoherent and self-contradictory) to its failure to engage with feminist and intersectional theory. This criticism overlooks the fact that much of the book's argument, including the entire final chapter, is influenced by the work of Alenka Zupancic (2017), a theorist who does groundbreaking work at the intersection of feminism, critical theory, and psychoanalytic theory. The chapter's argument draws heavily on Zupancic's 2017 book, What is Sex?, which develops an original, psychoanalytically inflected argument that grounds the very claim the review accuses the book of ignoring: the non-identity of data and the world, sign and referent. As Zupancic puts it, "feminism (as a political movement) puts in question, and breaks precisely this unity of the world, based on massive suppression, subordination, and exclusion" (36).

The conclusion develops an extended interpretation of Zupancic's discussion of the impossibility of the perfected "relation" as a way of highlighting the fantastical biases of automation. That the review misconstrues this argument to the point of getting it backward is perhaps testimony to the fact that Zupancic has not received the attention in the field she deserves.

Professor Walker Rettberg’s review brings together interesting and important literature to make arguments that, in many cases, align with the book's key concerns. I find myself agreeing with most of the points she makes -- with the caveat that they do not apply to the book in the way she imagines. The review does an excellent job of demonstrating her familiarity with an important set of theories, arguments, and academics, but it does so at the expense of misreading and mischaracterizing the book's defining themes.


References:

Andrejevic, M (2019) Automated Media. New York, London: Routledge.

Innis, HA (2008) The Bias of Communication. Toronto: University of Toronto Press.

Rettberg, JW (2020) Book review. Convergence, first published online at: https://journals.sagepub.com/doi/abs/10.1177/1354856520906610.

Zupancic, A (2017) What is Sex? Cambridge: The MIT Press.