When art is commoditized

I've had the commoditization cycle stuck in my head for the past several days. (Being sick at home with a fever that was making my linguistic centers run rampant while the rest of my brain was saying "sleep, please, sleep" probably helped.) This article on an otherwise unrelated topic struck a chord:

The Irascible Professor - The teaching of English may not be dead after all
In a nutshell: Lots of stuff gets written, and most of it is crap. But predicting which stuff will stand the test of time is beyond the powers of the contemporary critics.

Now that's not a novel insight. But it made me think about the differences between the hard sciences (math, physics, etc.), the soft sciences (sociology, linguistics, etc.), and that other stuff they teach in school that I could never take seriously.

Simply put, science is about predictability. If you can't come up with a testable hypothesis, it's not science. So if you could come up with a testable hypothesis for "what makes literature good?", then suddenly literary criticism would be a respectable science!

But the consequence is that once something information-based is repeatable, it's a commodity. The first example that came to mind is cel-shading.

Any artistic style fundamentally follows a set of rules - otherwise it wouldn't be identifiable as a style. It may take a long time to articulate or codify the style as an algorithm, but it must be doable, or it wouldn't be identifiable. (Unless there's some non-real metadata floating around which our brains process, but any dissection of dualism will lay that notion to rest quickly.) What happens when the algorithm is codified is that the style becomes a commodity.

Cel-shading is one example - it's the style of pen-and-ink animation used for most cartoons through the 20th century. Simply put: for each frame of an animation, one artist draws the outline and key edges of a character or object in black ink, then another artist fills it in with colors from a limited palette. This algorithm results in a distinctive look - shading is very limited, with each surface having a "lit" and an "unlit" color; differences in depth are explicitly called out, such that an arm in front of a body is delineated, but the body->shoulder->arm region is unbroken. I'm sure the original animators were not consciously aware of this algorithm, but it is an algorithm. And in the 1990s it was codified - it's now possible to take any 3D description of a scene and - in real time - render it using cel shading. This is most popular right now in console games based on properties or genres traditionally animated this way, such as Japanese RPGs.
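
And "codified" really does mean just a couple of comparisons per point on a surface. Here's a toy sketch in Python of that decision - the two-color palette and the silhouette threshold are numbers I made up for illustration, and a real renderer would do this per pixel on the GPU rather than in a function call:

    # Toy cel-shading decision for one point on a surface.
    # All vectors are assumed to be unit length; colors/threshold are arbitrary.
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def cel_shade(normal, light_dir, view_dir,
                  lit=(255, 200, 150), unlit=(128, 90, 60)):
        # Surfaces nearly edge-on to the camera become the black outline.
        if abs(dot(normal, view_dir)) < 0.3:
            return (0, 0, 0)
        # No smooth gradient: snap to either the "lit" or the "unlit" flat color.
        return lit if dot(normal, light_dir) > 0.0 else unlit

    # A surface facing the light, viewed head-on, gets the lit color.
    print(cel_shade((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))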

Nowadays, doing pen-and-ink drawing by hand for a big-budget movie would seem foolish. Even if you didn't want the glossy Pixar look, you could use the same pipeline for creating the content and just slap a different rendering shader on at the end. You'd get the same look as the Mouse's films, but with the benefit 3D gives you of being able to "re-shoot" scenes instantly.

To recap - if you did "reduce" literary criticism to a science, suddenly producing literature would be commoditized. An even more pointed example is postmodern writing: if you can get a computer to generate it, then it's a commodity.
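
As far as I can tell, the trick those generators use is nothing fancier than recursive grammar expansion: sentence templates with blanks filled from lists of jargon, some of which expand into further templates. A toy sketch in Python - this grammar is invented for illustration, and the real generators just carry a much bigger one:

    import random

    GRAMMAR = {
        "SENTENCE": ["The CONCEPT of NOUN is a ADJ construct.",
                     "NOUN, as CONCEPT, interrogates ADJ NOUN."],
        "CONCEPT": ["discourse", "narrative", "paradigm"],
        "NOUN": ["the text", "the reader", "capital", "desire"],
        "ADJ": ["post-structural", "hegemonic", "performative"],
    }

    def expand(symbol):
        # Strip trailing punctuation so "NOUN," still matches the grammar key.
        core = symbol.strip(".,")
        if core not in GRAMMAR:
            return symbol
        words = random.choice(GRAMMAR[core]).split()
        return " ".join(expand(w) for w in words) + symbol[len(core):]

    print(expand("SENTENCE"))

Run it a few times and you get a different plausible-sounding sentence each time - which is rather the point.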

Note that this doesn't mean the product is worthless. Luddites resisted the commoditization of physical work by automation. Computers can commoditize lots of things we previously thought were the exclusive domain of humans - chess, algorithmic proofs, facial recognition, and so on.

In the case of postmodern writing, the jury was still out on whether the field had any merit whatsoever even before computers started spewing out publishable writing within it. Now I think it's agreed that the jurors' heads were hurting and they took off for some time in the tropics. But what happens when computers can commoditize things which aren't just intellectual masturbation?

How about criticism itself? Computers are already getting better at basic things like speech recognition (slowly!), natural language processing (also slowly!), grammar checking (go Word!), and algorithmic proofs. It doesn't seem too much of a stretch to feed in either a written assertion or a criticism and have software deconstruct it: strip out the noise, identify the core claims, and try to sort out the unstated assumptions and what can logically follow from them.

Or, within a domain (say, physics), act as an expert system and look for possible flaws in a conclusion (what variables weren't accounted for in a gravitational simulation of galactic motion?).
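
I have no idea what that software would really look like, but the skeleton of the deconstruction step is easy to caricature: strip the rhetorical filler, then sort sentences into candidate claims versus things leaning on unstated assumptions. A deliberately dumb keyword sketch in Python - every word list here is made up:

    import re

    FILLER = ("clearly", "obviously", "of course")
    ASSUMPTION_CUES = ("everyone knows", "it stands to reason", "naturally")

    def deconstruct(text):
        claims, assumptions = [], []
        for sentence in re.split(r"[.!?]\s*", text):
            if not sentence:
                continue
            cleaned = sentence.lower()
            for phrase in FILLER:
                cleaned = cleaned.replace(phrase, "")  # strip rhetorical noise
            # Sentences leaning on a cue phrase get flagged as hidden assumptions;
            # everything else is kept as a candidate core claim.
            if any(cue in cleaned for cue in ASSUMPTION_CUES):
                assumptions.append(sentence.strip())
            else:
                claims.append(cleaned.strip(" ,"))
        return claims, assumptions

    claims, assumptions = deconstruct(
        "Obviously, style is just a set of rules. "
        "Everyone knows critics can't predict greatness.")
    print("claims:", claims)
    print("assumptions:", assumptions)

The real thing would need actual natural language understanding rather than keyword matching, which is exactly the part that's still slow going.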
