Friday, December 18, 2020

"Springs of Action"

I've long been a fan of the thoughtful work of Louise Amoore, whose interests touch on many of the themes at the heart of my own research, though from quite a different direction. So I approached her recent, influential book, Cloud Ethics, with great interest. However, there is an inversion in one of the book's linchpin concepts that masks a symptomatic omission, one characteristic of the book's theoretical alignment with recent strands of so-called "new" materialism. It is hard to miss this concept, because it is one of the book's key conceptual refrains: the notion of "springs of action" invoked by the philosopher Richard Braithwaite in a well-documented exchange with Alan Turing on machine learning aired on the BBC. For Braithwaite, the concept seems relatively straightforward: for some entity to think about the world, that is, to focus on and isolate aspects of the environment in order to understand or make sense of them, there must be an underlying desire or motivation. The question of desire, of course, is crucial to any discussion of machine cognition, AI, or machine learning. A computer program may learn how to beat a chess grandmaster, but does it even want to play chess in the first place? Does it want to win? It may compose music and recognize cats, but does it care about music or cats? Does it have any meaningful concept of music or cats and why they might be of interest? Does it care at all about the variation among the different tasks it might be set? Does it matter to the machine whether it is learning to identify cats or anomalies in budget reports? I take this set of questions to cluster around Braithwaite's point about desire (which Amoore quotes, in part, as a chapter epigraph):

A machine can easily be constructed with a feed-back device so that the programming of the machine is controlled by the relation of its output to some feature in its external environment—so that the working of the machine in relation to the environment is self-corrective. But this requires that it should be some particular feature of the environment to which the machine has to adjust itself. The peculiarity of men and animals is that they have the power of adjusting themselves to almost all the features. The feature to which adjustment is made on a particular occasion is the one the man is attending to and he attends to what he is interested in. His interests are determined, by and large, by his appetites, desires, drives, instincts—all the things that together make up his ‘springs of action’. If we want to construct a machine which will vary its attention to things in its environment so that it will sometimes adjust itself to one and sometimes to another, it would seem to be necessary to equip the machine with something corresponding to a set of appetites.
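
Braithwaite's distinction translates readily into computational terms: a simple feedback device corrects toward one pre-specified feature of its environment, whereas a machine that varies its attention needs some internal weighting of what matters -- the role his "appetites" play. A minimal sketch in Python, purely illustrative (the feature names, setpoints, and appetite weights are my own invention, not anything drawn from the broadcast or from Amoore), might look like this:

```python
# Illustrative sketch only: a single-feature feedback loop versus a loop whose
# focus of "attention" is selected by a set of internal appetites.
# All feature names, setpoints, and weights below are hypothetical.

def correct_single_feature(reading: float, setpoint: float, gain: float = 0.1) -> float:
    """Braithwaite's 'self-corrective' machine: it adjusts itself to one
    particular, pre-specified feature of the environment (e.g. temperature)."""
    return gain * (setpoint - reading)

def attend_and_correct(environment: dict[str, float],
                       setpoints: dict[str, float],
                       appetites: dict[str, float],
                       gain: float = 0.1) -> tuple[str, float]:
    """A machine that can vary its attention: the feature it adjusts to on a
    given occasion is the one its 'appetites' currently weight most heavily."""
    focus = max(appetites, key=appetites.get)          # what it is "interested in"
    error = setpoints[focus] - environment[focus]      # adjust to that feature only
    return focus, gain * error

if __name__ == "__main__":
    env = {"temperature": 18.0, "light": 0.2, "noise": 0.7}
    targets = {"temperature": 21.0, "light": 0.5, "noise": 0.1}

    # The single-feature device never has to choose what to attend to.
    print(correct_single_feature(env["temperature"], targets["temperature"]))

    # The multi-feature device does, and the choice comes entirely from the
    # appetites we hand it.
    print(attend_and_correct(env, targets,
                             appetites={"temperature": 0.9, "light": 0.4, "noise": 0.2}))
```

The point of the toy example is simply that the appetites have to be handed to the machine from outside the corrective loop; nothing in the loop itself generates the wanting.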

The somewhat surprising aspect of Amoore's uptake of the term "springs of action" is that she transforms it from a motivating force into an outcome: the moment at which an action takes place. So, whereas Braithwaite sees appetite and desire as setting the whole process of learning in motion, Amoore takes the term "spring of action" to refer to the precipitation of desire into an action of some sort -- it manifests only when something happens. For example, in her description of how surgeons work together with a surgical robot, she frames the "spring of action" as the moment when a particular action results from the human/machine assemblage:

The spring to action of surgical machine learning is not an action that can be definitively located in the body of human or machine but is lodged within a more adaptive form of collaborative cognitive learning. Intimately bound together by machine learning algorithms acting on a cloud database of medical [data], the we of surgeon and robot restlessly seeks an optimal spring of action — the optimal incision, the optimal target or tumor or diseased organ, the optimal trajectory of movement.

The shift from "spring of action" to "spring to action" is perhaps significant in this formulation. She is interested in the moment when learning occurs: when the human and machine conspire to make an "optimal" move, an incision or some other (optimal, again) trajectory of movement. The "spring of action" is the result: something happens, a cut is made. Of course, what gets glossed over in this moving description of human-machine collaboration in the name of optimization is what motivates the machine (or the human, for that matter). It turns out the "spring of action" as framed by Amoore requires a prior desire -- whatever it is that makes the human-machine assemblage "seek" in the first place. This is Braithwaite's point -- desire gets the whole assemblage moving. It is perhaps telling, again, that in this particular formulation the "optimal" result is a cut -- we might describe it, drawing on Karen Barad's work, as an "agential cut." What looks like a failure to distinguish between cause and effect, motive and outcome, desire as motive force and desire as a product of the assemblage, is characteristic of the fate of causality in recent versions of "new" materialism -- and of its related Deleuze-inflected ontology of desire. In such formulations, causality is emergent -- there is no meaningful distinction between cause and effect, which means the version of desire invoked by Braithwaite is elided. The fact that a cut occurs retroactively constitutes the "seeking" -- indeed, this is perhaps the only meaningful way, within such a framework, to approach the notion of desire. It is hard to imagine that Braithwaite would endorse this reconfiguration of his "springs of action," which is what makes its repeated, inverted invocation come across as so jarring -- not least because Amoore never acknowledges the inversion, taking it as read. Perhaps the assumption is that whenever we talk about desire, we are only ever talking about it as a post-subjective, post-human formulation: something co-extensive with its effects and manifestations, so that the mark of desire is the optimal cut.

