Thursday, September 5, 2024

 


Democracy and Its Discontents



Whether intentionally or not, this observation touches on something deep in the current GOP psyche. In so many ways, the current political right does see civil spaces as war zones. This is apparent in the way they talk about cities famous for their pedestrian culture -- like NYC and San Francisco -- but also in the way they treat the virtual spaces of political interaction. Their imaginary is deeply embedded in some romanticized version of small-town middle America -- an imaginary in which cities, with their diverse concentrations of population, are disaster scenes. Somewhat more metaphorically, civil spaces become war zones for the right because it has strategically rejected established practices and institutions for adjudicating disagreements. Without recourse to standards of reasonable argument and evidence, civil space does, indeed, become a war zone. Moreover, this new right has a distinct aversion to public schools precisely because they are both civil and civic spaces. The notion that a society would need to instill in its members the civic values that allow for the meaningful exercise of democracy is treated by the right as a form of illegitimate indoctrination. The very notion of civic life is viewed as an imposition upon individual autonomy. 



Saturday, October 29, 2022

The Media Are Not Off the Hook: Silver Bullets, Hypodermic Needles, and Straw Men

 

In the wake of growing concern over the role played by social media and other pathological internet formations in shaping the tenor of contemporary political discourse, the discussion of so-called "media effects" keeps bubbling up to the surface. 

The term serves primarily as an epithet (except in somewhat rarefied and self-enclosed realms of experimental research), sometimes shorthanded by the dismissive nomenclature of the "hypodermic needle model." 

It is worth noting, in this regard, that the "hypodermic model" and the "silver bullet theory" are not actual theoretical approaches to the media. Rather, these terms emerged as dismissive insults: foils for the so-called "limited effects" model. 

Recently, when these models are invoked, it is with the goal of disparaging ostensibly overblown claims about the impact of social media on political polarization, shortened attention spans, and so on. 

This is not to say that there have not been attempts to demonstrate media effects that embrace the media model implied by the charge of "hypodermicism." 

The typical problem with all such experiments, and with more general claims about the "direct effects" of media artifacts, is that it is difficult, if not impossible, to isolate the media as a causal factor in a real-world setting. When experiments are conducted in a laboratory setting, there is the related problem of generalizing the findings to the world outside the lab. There is a very good reason for these difficulties: the "media" do not stand outside and separate from the social realm -- they are an integral part of society. This is the substance of the fundamental claim against "direct effects" models: that they fetishize and abstract "the media" from their broader social context. 

But this same claim applies to the attempt to absolve the media through the dismissive charge of "silver bulletism" or "hypodermicism." When the accusation of naive "direct effects" is used to imply that the media are not to blame -- that the problem lies "somewhere else" in society -- the same fetishization is at work. If the media cannot be abstracted from society, the converse is also true: society cannot be abstracted from the media. In other words, the pathologies of society are also the pathologies of the media, which play an important role in reproducing them, just as other communicative and representational practices do. To say, then, that polarization is caused "elsewhere" and that the media merely "reflect" it is to inadvertently align oneself with the debunked hypodermic model, by underwriting the abstraction of media from society. 



Friday, December 18, 2020

"Springs of Action"




I've long been a fan of the thoughtful work of Louise Amoore, whose interests touch on many of the themes that are at the heart of my own research, though from quite a different direction. So I approached her recent, influential work on Cloud Ethics with great interest. However, there is an inversion in one of the book's linchpin concepts that masks a symptomatic omission characteristic of the book's theoretical alignment with recent strands of so-called "new" materialism. It is hard to miss this concept, because it is one of the book's key conceptual refrains: the notion of "springs of action" invoked by the philosopher Richard Braithwaite in a well-documented exchange with Alan Turing on machine learning aired on the BBC. For Braithwaite, the concept seems relatively straightforward: for some entity to think about the world, that is, to focus on and isolate aspects of the environment in order to understand or make sense of them, there must be an underlying desire or motivation. The question of desire, of course, is crucial to any discussion of machine cognition, AI, or machine learning. A computer program may learn how to beat a chess grandmaster, but does it even want to play chess in the first place? Does it want to win? It may compose music and recognize cats, but does it care about music or cats? Does it have any meaningful concept of music or cats and why they might be of interest? Does it care at all about the variation in the tasks it might be set? Does it matter to the machine whether it is learning to identify cats or anomalies in budget reports? I take this set of questions to cluster around Braithwaite's point about desire (which Amoore quotes, in part, as a chapter epigraph):

A machine can easily be constructed with a feed-back device so that the programming of the machine is controlled by the relation of its output to some feature in its external environment—so that the working of the machine in relation to the environment is self-corrective. But this requires that it should be some particular feature of the environment to which the machine has to adjust itself. The peculiarity of men and animals is that they have the power of adjusting themselves to almost all the features. The feature to which adjustment is made on a particular occasion is the one the man is attending to and he attends to what he is interested in. His interests are determined, by and large, by his appetites, desires, drives, instincts—all the things that together make up his ‘springs of action’. If we want to construct a machine which will vary its attention to things in its environment so that it will sometimes adjust itself to one and sometimes to another, it would seem to be necessary to equip the machine with something corresponding to a set of appetites.
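
Braithwaite's distinction lends itself to a schematic illustration. What follows is a minimal sketch of my own, not anything proposed by Braithwaite or Amoore (all names and numbers are invented): a simple feedback device is pinned to a single pre-specified feature of its environment, while an "appetitive" agent first selects which feature to attend to according to an internal set of drives -- a crude stand-in for the "springs of action."

```python
# A Braithwaite-style feedback device: self-corrective, but only with
# respect to ONE pre-specified feature of its environment.
def thermostat_step(read_temp, set_heater, target=20.0):
    set_heater(read_temp() < target)  # adjusts to temperature, and nothing else

# A hypothetical "appetitive" agent: which feature it adjusts to on a
# given occasion depends on its current drives (its "springs of action").
class AppetitiveAgent:
    def __init__(self, drives):
        # drives: feature name -> current urgency, e.g.
        # {"temperature": 0.2, "light": 0.7, "noise": 0.1}
        self.drives = drives

    def step(self, sensors, actuators):
        # Attention goes to whatever the agent currently "wants" most...
        feature = max(self.drives, key=self.drives.get)
        # ...and only the attended feature gets adjusted to on this occasion.
        actuators[feature](sensors[feature]())
        # Satisfying a drive dampens it; neglected drives grow.
        for f in self.drives:
            self.drives[f] = (0.5 * self.drives[f] if f == feature
                              else min(1.0, self.drives[f] + 0.1))
```

The point of the sketch is only that the second machine's behavior is unintelligible without the set of drives: something has to do the work of wanting before any adjusting can occur.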

The somewhat surprising aspect of Amoore's uptake of the term "springs of action" is that she transforms it from a motivating force into an outcome: the moment at which an action takes place. So, whereas Braithwaite sees appetite/desire as setting the whole process of learning in motion, Amoore takes the term "spring of action" to refer to the precipitation of desire into an action of some sort -- it manifests only when something happens. For example, in her description of how surgeons work together with a surgical robot, she frames the "spring of action" as the moment when a particular action results from the human/machine assemblage:

The spring to action of surgical machine learning is not an action that can be definitively located in the body of human or machine but is lodged within a more adaptive form of collaborative cognitive learning. Intimately bound together by machine learning algorithms acting on a cloud database of medical images, the we of surgeon and robot restlessly seeks an optimal spring of action — the optimal incision, the optimal target or tumor or diseased organ, the optimal trajectory of movement.

The shift from "spring of action" to "spring to action" is perhaps significant in this formulation. She is interested in the moment when learning occurs: when the human and machine conspire to make an "optimal" move: an incision or some other (optimal, again) trajectory of movement. The "spring of action" is the result: something happens: a cut is made. What gets glossed over in this moving description of human-machine collaboration in the name of optimization, of course, is what motivates the machine (or the human, for that matter). It turns out that the "spring of action," as framed by Amoore, requires a prior desire -- whatever it is that makes the human-machine assemblage "seek" in the first place. This is Braithwaite's point -- desire gets the whole assemblage moving. It is perhaps telling, again, that in this particular formulation the "optimal" result is a cut -- we might describe it, drawing on Karen Barad's work, as an "agential cut." What looks like a failure to distinguish between cause and effect, motive and outcome, desire as motive force and desire as a product of the assemblage, is characteristic of the fate of causality in recent versions of "new" materialism -- and its related Deleuze-inflected ontology of desire. In such formulations, causality is emergent -- there is no meaningful distinction between cause and effect, which means the version of desire invoked by Braithwaite is elided. The fact that a cut occurs retroactively constitutes the fact of the "seeking" -- indeed, this is perhaps the only meaningful way, within such a framework, to approach the notion of desire. It is hard to imagine that Braithwaite would endorse this reconfiguration of his "springs of action," which is what makes its repeated, inverted invocation so jarring -- not least because Amoore fails to acknowledge the inversion, taking it as read. Perhaps the assumption is that whenever we talk about desire, we are only talking about it as a post-subjective, post-human formulation: something that is co-extensive with its effects and manifestations: the mark of desire is the optimal cut. 


Sunday, July 5, 2020




Pessimism of the intellect...

A couple of thoughts in response to Alex Burns's meditations on a somewhat anguished Tweet I launched on a Friday afternoon. In most cases, the various responses to the Tweet took it in the spirit in which it was offered: as concern about the seemingly inevitable colonization of higher education by the tech industry, exacerbated by the restrictions ushered in by the current pandemic. 

Some read it, a bit more hostilely, as a selfish focus on cashing out before succumbing to a system that is antithetical to my academic values and commitments. 

Such is the destiny of Tweets. 

For the record, I intend to do what I can to resist the commercial platforming of higher education, in keeping with an academic career that has been devoted to identifying and critiquing the pathologies of digital capitalism. That does not mean I'm particularly optimistic with respect to the outcome. There is too much administrative support for such a move -- and, as Burns's post indicates, significant buy-in among academics, at least in certain contexts. At the same time, I've had the good fortune to work and be trained in academic institutions that will likely be among the holdouts, and for that I applaud them and will continue to support them however I can. 

I don't know Alex, though I take him to be a person of good will, and I suspect the future belongs to him and other like-minded people. Maybe that's good news for them. I don't think I share their particular vision for the social role of higher education, and I worry about the consequences of such a vision for the character of our social world. 

I am just going by the one post -- so I am likely missing some very important context -- but there are some moments in the post that prompted this response. The first is a conspicuous absence in the definitions of education on offer. The choices Burns provides are higher education as contributing to "knowledge creation," serving as a form of "elite reproduction," or, finally, one more version of capitalist alchemy: a way of turning student fees into land grabs and retirement savings (a dig at the original Tweet). 

None of these really speak to the core mission of the University, as I understand it: education. Missing, for me, in this list, is the process itself: fostering a community and culture of thought (for both researchers and students) informed by the achievements of humankind in ways that contribute to a critical and constructive engagement with the world. 

I realize the definition of "achievements" is a contested one, and the field for enabling and recognizing these has long been warped -- but this contestation and an ongoing reckoning with the forms of power that shape knowledge production seem part of the culture of thought promoted by such an education. 

I imagine the reference to elitism in Burns's post is meant to encompass this view of the role of education. The charge of elitism is often deflected toward the content of particular forms of thought and the knowledge with which these are associated, when perhaps the actual target is the conditions of their production and reproduction. To restrict the forms of understanding and knowledge to which I'm referring to a select, privileged group (through, for example, making a liberal arts degree prohibitively expensive) is elitist. The way to challenge this form of elitism is not to do away with such an education altogether, but to make it available to all who are interested, and, in so doing, to transform and enrich it (to reconfigure the content by addressing the deformations associated with the elite conditions of access that shaped it).

By way of response, I would press against the ready equation of technology with the commercial tech sector. I realize that the latter is at the forefront of technological development, but I think there is still a meaningful difference between imagining constructive uses for new technological affordances and merging higher ed with the commercial tech sector.

What worries me about the spectre of a tech-sector takeover is that the result may well be regressively elitist: reserving the type of education I associate with the University to a few pricey holdouts. Perhaps this is simply a function of my being woefully out of touch with my time. However, I would resist the accusation of nostalgia: the version of higher education to which I remain wedded is one that has only ever appeared as a potentiality. The commercial, corporate capture of the University would most likely extinguish it altogether. 

It's hard for me to get enthusiastic about the platform commercialization of research metrics. Burns refers to the prospect of commercial platforms showing us the "20% of your research outputs that are having the 80% readership impacts." I suppose this is meant to shape our research the way audience ratings might shape the development of a TV show, or the way market research might contribute to product development. Who wants to spend their time on research that doesn't have "impact"? 
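
As a purely illustrative aside, the dashboard arithmetic behind such a promise is trivial. Here is a minimal sketch with invented numbers, in which citation counts stand in for "readership impact" (that substitution of proxy for referent being precisely what is at issue):

```python
# Invented data: citation counts standing in for "readership impact."
citations = {"paper_a": 310, "paper_b": 95, "paper_c": 40,
             "paper_d": 22, "paper_e": 8}

total = sum(citations.values())
ranked = sorted(citations.items(), key=lambda kv: kv[1], reverse=True)

running = 0
for i, (paper, count) in enumerate(ranked, start=1):
    running += count
    if running / total >= 0.8:  # smallest top slice covering 80% of "impact"
        print(f"Top {i / len(ranked):.0%} of outputs account for "
              f"{running / total:.0%} of citations.")
        break
```

The computation is easy; everything contestable is packed into the choice of proxy.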

Nonetheless, I don't think we should take for granted the currently fashionable term "impact" and its relation to the various proxy measures that represent it. In the highly metricised research environment in which we operate, it means how many times an article gets cited, shared, or mentioned (not necessarily read). It is a quantitative measure that doesn't tell me whether or how the piece changed how someone might see or think about or act in the world. It doesn't tell me how this research might influence the classroom experience of my students and their understanding of the world.

It is, for all practical purposes, a metric that, through a series of transformations, can be monetized (citations = impact = rankings = student fees). Platform capitalism, natch. That doesn't mean important qualities are necessarily excluded from "impact," or that citation numbers bear no relation to what they're meant to serve as a proxy for. We all want our work to enter into the conversation. 

However, it does underwrite a tendency for the proxy to displace the real goal, and we know how that plays out. The notion -- imported from marketing -- that the proxy has some kind of predictive value is, I suspect, a deeply problematic one. I've got a couple of friends who, very early on in the era of digital media studies, started working on copyright issues. At the time, very few in the field were working on the topic, so who else would cite them, anyway? 

It turned out they saw something others did not, and they built successful careers on the foundations of this work. By contrast, platform algorithms give us the kind of endless repetition that has come to characterize so much of our data-driven culture. I doubt they're much good for guiding original, ground-breaking research. They can tell us after the fact, that the research got attention -- which is fine -- but that's about it. 

The other provocative moment in the post, for me, is the reference to the increasing cost and allegedly diminishing productivity of academic labor. I'm not sure what the reference point here is, but the stats I've seen show some measures of productivity on the rise. Research outputs have been increasing. Although this varies across fields, student-faculty ratios have also been increasing. I suppose this speaks to productivity in some way, but I don't greet either of these as positive developments -- they are driven by economic logics that have accompanied another trend: the increase in administrators per student (perhaps this speaks to the issue of diminished productivity?). 

None of this should be read as a blanket critique of technological innovation. My target is the commercialization of higher education. I have yet to see evidence that commercial platforms are equipped to support the type of intellectual engagement and culture that is crucial to higher education as I understand and experience it. There is certainly a version of higher education that tech companies will be able to implement, and they will likely do it much more efficiently and profitably than universities can. However, I worry it will be unable to provide the type of thought and understanding we will need to address the pathologies of the current moment -- many of which are associated with those companies most likely to take a lead in "disrupting" higher education. I'm wary of the recurring tech industry promise that only the spear that inflicted the wound can heal it. 

Tuesday, February 25, 2020

A Response to Jill Walker Rettberg




Note: Upon first receiving a link to Jill Walker Rettberg's review of Automated Media in the journal Convergence from its author, I asked her if she would support my request to the journal to publish a response. Professor Walker Rettberg graciously agreed to this, so I approached the editors with this request. They replied that current editorial policy does not provide them with the latitude to publish my response, but agreed to promote it via social media.

Mark Andrejevic
Monash University


For the record, I don’t believe journals or reviewers have any obligation to promote new books in the field or to be positive about them in the interest of collegiality, solidarity, or politics. I do think, however, that reviewers have the fundamental obligation to be roughly accurate in their description of the book under review. It is this belief that prompted me to ask the editors of Convergence -- and the author of its recently released review of Automated Media (2019) -- to support the publication of a response to the review.

This decision was bolstered by the fact that I first learned of Jill Walker Rettberg’s (2020) review from several tweets she directed toward me on the occasion of the review’s online publication. This social media flurry felt like a direct invitation to respond in some way. I replied that I thought her review misconstrued the book’s main arguments, but that I didn't feel Twitter was suited to productive academic discussion, especially when there are substantive misunderstandings to be sorted out. My goal in this response is not to take issue with Professor Walker Rettberg’s core arguments, but to suggest that they miss their target. The strange thing to me about reading the review is how much I agree with the arguments she arrays against what she takes to be my own -- precisely because she gets the book's central claims exactly the wrong way around. There may be the makings of a debate here, but it cannot get off the ground until the mischaracterizations in the review are addressed.

The main ones center upon what the book describes as “the bias of automation” and also upon the notion that automated data collection might live up to the promise of “total coverage” or what the book describes as “framelessness” (that is, the fantasy of digitizing information about the world “in its entirety”).

The book starts off by differentiating between two meanings of the term “bias”: the first is a familiar one that refers to the fact that automated processes can systematically privilege or disadvantage particular groups. The examples here are myriad, ongoing, and alarming, warranting the robust critical attention they receive. The second meaning of bias invoked in the book is less common and draws on the work of Harold Innis -- discussed in some detail -- to suggest that the very choice to use a specific media technology (in a particular social context) can privilege particular logics and tendencies. The book notes that critical work on the first version of bias is well developed and crucially important, and argues for the importance of considering the consequences of the choice to use automated systems in the first place (within a particular context). The book's goal is to examine the logical tendencies that flow from this choice, describing them as “biases” in the way that we might describe, for example, market transactions as “biased” toward the assessment of value in ways that can be quantified. Such transactions may also be biased in the first sense as when, for example, they result in discriminatory outcomes for particular groups. I take these two levels of bias to be distinct, but they can certainly overlap -- as in practice they so often do.

The review overlooks this distinction, proceeding as if all mentions of bias refer to the first version, and faults the book for not engaging in more depth with the relevant literature on this. I strongly agree with Professor Walker Rettberg regarding the importance of this work, and I do think there is room for further development of the connection between these two forms of bias. There is also an interesting discussion to be had about what happens to the first sense of “bias” when we concede its irreducibility. However, neither of these discussions would justify the review’s wholesale assimilation of one meaning of bias to the other. Perhaps she thinks the distinction is untenable -- an interesting claim -- but this is not the argument advanced in the review.

The most confounding misreading, however, is the attempt to attribute to the book the very perspective it critiques: that automation can somehow escape the constraints of finitude and representation. Professor Walker Rettberg accuses the book of not recognizing that "the fantasy of total knowledge, of there being no gap between data and reality, is just that, a fantasy" (2). However, this is, almost verbatim, the core repeated argument of the book.

The chapter on "framelessness," for example, refers to the ambition of digitally capturing and reproducing the world in its entirety as an impossible fantasy (see, for example, p. 114: "The fantasy of automation is that in the breadth of its reach, in the ambition of its scope, it can approach the post-subjective perspective of the view from everywhere -- or nowhere: the purely objective representation that leaves nothing out"; p. 122: "Conspiracy theory defaults to a choice between an impossible standard of completeness (framelessness) and...gut instinct..."; p. 126: "There is a seemingly 'democratic' cast to the fantasy of framelessness").

To drive the point home, the book summarizes the examples it critiques as representing, “tendencies and trajectories – many of which, I want to emphasize, head in logically impossible directions such as, for example, the attempt to dispense with a frame entirely, to envision the possibility of a purely objective decision-making system, and to capture all information about everything, all the time” (160). It is no accident that the book uses the language of fantasy to describe the logics of pre-emption and framelessness: these are only conceivable from an impossible, infinite perspective -- as the book repeatedly argues.

Something similar takes place in the review with respect to Professor Walker Rettberg’s attribution to me of, “the idea that there is no gap between data and reality.” The book takes this very gap as one of its defining themes, as illustrated from the opening pages and in a number of passages, including the following: “Critiquing the post-political bias of automation means engaging with the possibility that the world does not work this way: that it cannot be captured and measured ‘all the way down,’ because there are irreducible uncertainties and gaps in reality” (101).

The book argues repeatedly that the fantasy of total information collection -- of overcoming the gap between representation and reality -- is both a structural tendency of automated technologies (“if this system is inaccurate, all it needs is more data, so that it can be better trained”) and an impossibility. To treat fantasies as if they have real consequences is not the same thing as saying they are real, true, or accurate. The book’s concern is directed toward these consequences.

Consider, for example, Professor Walker Rettberg’s accurate claim that emotion detection algorithms do not measure actual emotion -- that the data do not capture the supposed referent. The book points out that from an instrumental and operational perspective, the referent drops out. Imagine (as many tech companies have), a system that links “emotion detection” to a marketing campaign: a designated “emotional state” of some kind is associated with the increased likelihood of someone clicking on an ad and purchasing a product. Whether the machine has correctly identified the user’s state (the “referent” of the identified emotion) is immaterial to this correlational system: the “emotional state” becomes a vanishing mediator. What matters is the robustness of the correlation between one set of variables (facial expression, for example) and another (purchasing behavior).
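
To make the vanishing-mediator point concrete, here is a deliberately schematic sketch of my own (not drawn from the book; the data and variable names are invented): a model that maps facial-expression features directly onto purchasing behavior. An "emotion label" could be wedged between the two stages, but deleting it would change nothing about what the system does or how it is used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented training data: facial-expression feature vectors (e.g. facial
# action-unit scores) and whether the user went on to click and purchase.
# Note that no "emotion" appears anywhere in the pipeline.
X = rng.normal(size=(1000, 16))
w_true = rng.normal(size=16)
purchased = (X @ w_true + rng.normal(size=1000) > 0).astype(float)

# Fit a linear model from expression features straight to purchase
# propensity. The supposed referent -- the user's actual emotional
# state -- is immaterial to the correlation being optimized.
w = np.linalg.lstsq(X, purchased, rcond=None)[0]
predicted_propensity = X @ w  # all the system needs in order to target ads
```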

Prof. Walker Rettberg attributes the book's supposed inability to recognize the fantasy as such (despite its repeated explanations of precisely why each of the fantasies it describes is incoherent and self-contradictory) to its failure to engage with feminist and intersectional theory. This criticism overlooks the fact that much of the book's argument, including the entire final chapter, is influenced by the work of Alenka Zupancic (2017), a theorist who does groundbreaking work at the intersection of feminism, critical theory, and psychoanalytic theory. The chapter's argument draws heavily on Zupancic's 2017 book, What is Sex?, which develops an original, psychoanalytically inflected argument to ground the very claim that the review accuses the book of ignoring: the non-identity of data and the world, sign and referent. As Zupancic puts it, "feminism (as a political movement) puts in question, and breaks precisely this unity of the world, based on massive suppression, subordination, and exclusion" (36).

The conclusion develops an extended interpretation of Zupancic's discussion of the impossibility of the perfected "relation" as a way of highlighting the fantastical biases of automation. That the review misconstrues this argument to the point of getting it backward is perhaps testimony to the fact that Zupancic has not received the attention in the field she deserves.

Professor Walker Rettberg’s review brings together interesting and important literature to make arguments that, in many cases, align with the book's key concerns. I find myself agreeing with most of the points she makes -- with the caveat that they do not apply to the book in the way she imagines. The review does an excellent job of demonstrating her familiarity with an important set of theories, arguments, and academics, but it does so at the expense of misreading and mischaracterizing the book's defining themes.


References:

Andrejevic, M (2019) Automated Media. New York and London: Routledge.

Innis, HA (2008) The Bias of Communication. Toronto: University of Toronto Press.

Rettberg, JW (2020) Book review. Convergence, first published online at: https://journals.sagepub.com/doi/abs/10.1177/1354856520906610.

Zupancic, A (2017) What is Sex? Cambridge, MA: The MIT Press.




Thursday, June 23, 2016

This is what ideology looks like




Lev Manovich had some interesting takeaway points from his recent visit to Facebook Korea that highlight familiar tendencies in contemporary media studies. The scare quotes serve as signposts for where he's headed in his post, as they designate the terms he deems obsolete in the Facebook era: "ideology," "control," "dominant logic," and, of course, "global capitalism" (as in: "There is no 'master plan' or 'global capitalism'"). This is a short step away from the familiar Thatcherite observation about "society." All there are, in the end, are particularities combining in assemblages whose activities are spontaneous, emergent, and unpredictable -- irreducible to the crude terminology of critical theory and free of any discernible structuring logics. Ideology is dead: long live the new (?) ideology of new materialist pluralism.
I suppose there are two ways to take these claims: the more reasonable (that ideology is complex and multi-faceted but still exists; that abstractions always leave something out but retain a certain utility) or the wholesale ingestion of the Kool-Aid (once upon a time people may have been duped, propaganda existed, and capitalism was a thing, but now everything is so complex and particularized that abstractions no longer have any use at all; everything is up in the air and free -- and because of that, wonderfully liberating). There is certainly plenty to be said in support of the first interpretation, but the second one seems to fit better with the conclusion of Manovich's post:

"The future is open and not determined. We are all hacking it together. There is no "master plan," or "global capitalism," or "algorithms that control us" out there. There are only hundreds of millions of people in "developing world" who now have more chances thanks to social media and the web. And there are millions of creative people worldwide adapting platforms to their needs, and using them in hundreds of different ways. To connect, exchange, find support, do things together, to fall in love and to support friends. Facebook and other social media made their lifes more rich, more meaningful, more multi-dimensional. Thank you, Facebook!" 

Wow -- this is a veritable paean to Facebook. Clearly there are interesting things taking place on Facebook, and there are plenty of constructive uses for it, but it seems a bit extreme to portray it as the savior of love, support, and the meaning of life. Not long ago, it seemed to me that the moment for emphasizing a critique of the flip side of the benefits and conveniences of the online commercial world had perhaps passed, on the assumption that unquestioning cyber-utopianism was behind us. Apparently it is alive and well. 

To paraphrase Adorno:
Just as the ruled have always taken the morality dispensed to them by the rulers more seriously than the rulers themselves, the defrauded new media enthusiasts today cling to the myth of success still more ardently than the successful. They, too, have their aspirations. They insist unwaveringly on the ideology by which they are enslaved. Their pernicious love for the harm done to them outstrips even the cunning of the authorities.




Saturday, May 16, 2015

The Fate of Art

It was very strange to see BoingBoing promoting this hackneyed critique of contemporary art by "artist" and illustrator Robert Florczak for Prager "University." More on the scare quotes in a second. Why strange? Maybe it's the blender effect of Twitter, which constantly recirculates the old as if it's new and the new as if it's already been around the block so often that by the time you get to it, it's old news. Maybe it's because former WIRED editor Chris Anderson retweeted it with the following observation: "Well argued and brave. Plus fun prank on his grad students." Really? Let's start with the last bit first. The prank that Anderson thought was so fun: giving his students a close-up photo of a painting Florczak claims is by Jackson Pollock (but is actually a close-up of his studio smock) and making them explain why it's so great, so that he can then humiliate them by revealing the true source of the image. This raises some interesting questions about his grad students (at Prager University?), who seemed to think that this:
was pretty much indistinguishable from this:
OK, I get it, squiggles are squiggles, but these are supposed to be graduate students in art (history? studio art?) of some kind. Which makes one wonder what kind of university this is. Apparently it's the online creation of conservative talk show host Dennis Prager -- a venue for right-wing, low-budget TED-type talks devoted to topics like "Feminism vs. Truth" and "The War on Boys," and why Christians are the "Most Persecuted Minority." Maybe the inability to tell the difference between these two images helps explain why Florczak, who paints things like this:

seems to think that he's working in the tradition forged by the painters of images like this: 

and this: 


Rather than the tradition forged by the creators of images like this: 


and this:
Florczak's claims seem to have something to do with technique and skill -- things that, for example, both Kenny G and John Coltrane have mastered, but that don't make them the same type of artist. That this distinction is lost on the likes of Anderson and BoingBoing's Mark Frauenfelder (another former WIRED editor) is an indication of the cultural confidence of the tech world, in which expertise becomes fungible and perpetual vindication by financial success serves as a kind of all-purpose cultural qualifier.