Feedback and Folly
My employer’s Chief Takes Officer is on a kick about feedback loops, so I’m gonna write about them too.
The benefits of feedback are central to the cybernetic age we inhabit. Building into a machine a channel for information about its own operation, alongside but distinct from the channel for energy output, can permit the technical artifact to self-regulate. Self-regulation permits the performance of tasks that would fail with the output channel alone or with a mere appendage like Watt’s governor. [Gilbert Simondon illustrates this in On the Mode of Existence of Technical Objects (pp. 142-145) in a comparison between thermal engines and information processors.]
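To make the mechanism concrete, here is a minimal sketch of such a loop: a toy proportional thermostat in Python. The setpoint, gain, and heat-loss constants are invented for illustration, not drawn from Simondon; the point is only that the information channel (the temperature reading) regulates the energy channel (the heater output).

```python
# Toy thermostat: a proportional controller regulating room temperature.
# All constants here are illustrative assumptions, not from any source.

SETPOINT = 20.0   # desired temperature (deg C)
GAIN = 0.5        # how aggressively heat output responds to error
HEAT_LOSS = 0.1   # fraction of excess temperature lost per step
OUTSIDE = 5.0     # ambient temperature (deg C)

temp = 5.0        # room starts at ambient
for step in range(10):
    error = SETPOINT - temp                      # information channel: measured gap
    heat = max(0.0, GAIN * error)                # energy channel: heater output
    temp += heat - HEAT_LOSS * (temp - OUTSIDE)  # room dynamics
    print(f"step {step:2d}: temp={temp:5.2f}, heat={heat:4.2f}")
```

Remove the error measurement and feed the heater a fixed output instead, and the machine can no longer correct for disturbances; that is the difference an information channel makes.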
The issue with feedback, which Simondon pointed out in that book (written in the 1950s!) and which others like Raymond Ruyer also noted, is that it assumes that what the processor is doing is worth doing. Feedback is inherently about improving whatever is happening along a certain dimension. For example, one may take coaching advice in order to get better at one’s job. Such feedback aims to shift how a person behaves by amplifying certain aspects of the behavior and dampening others, treating one part as signal and the rest as noise. The distinction between signal and noise, however, is a value judgement and a matter of practical utility.
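As a small illustration of that point, consider the same moving-average filter run with two different window sizes (both the data and the windows below are invented): each window declares a different part of the series to be “noise.”

```python
# The same moving-average filter, with the signal/noise boundary set by
# a chosen window size. The readings here are invented for illustration.

def smooth(series, window):
    """Everything the window averages away is, by fiat, 'noise'."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1) : i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

readings = [10, 12, 9, 30, 11, 10, 13, 9, 28, 12]

# A narrow window preserves the spikes at 30 and 28 as signal...
print(smooth(readings, 2))
# ...a wide window treats them as noise to be damped.
print(smooth(readings, 5))
```

Neither window is “correct.” Whether the spikes are anomalies to suppress or events to attend to is a judgement about what the data is for, made before any filtering begins.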
In business, feedback loops are built, in theory, to let operators know whether what they’re doing is benefiting the organization. If it is, keep doing it; in fact, double down. If not, knock it off. If there are questions about what’s important, then it’s up to managers up and down the org chart to determine what’s valuable and to rank those values according to their assessment. In today’s capitalist system, the bottom line (the highest rank) has to be monetary profit. If the business isn’t growing at a quick enough clip, it’s dying.
The question (and it is a question, not something to be assumed) is whether feedback actually leads to doing what’s best for the organization. Because feedback is necessarily about known values. Improvement (“progress”) is necessarily aimed towards some ideal, and that purpose justifies the activity or makes it meaningful. There is a teleology here, though a rather flimsy one: flimsy because once the ideal is reached, it becomes the means, via feedback, to another cycle towards a further ideal. The whole cycle becomes endless, and therefore ultimately meaningless. (So reasons Hannah Arendt in her criticism of utilitarianism in The Human Condition, pp. 153-159.)
Even if one resists succumbing to anguish and pessimism at that nihilistic state, feedback can still cause problems. The great risk is “hypertelic” behavior, i.e. over-fitting or over-indexing. This risk is inherent in the concept of feedback: in the progress towards the goal, one relies on information that gets parsed into signal and noise, and pursuing that goal itself produces more information. Setting aside Ruyer’s question of where that initial information comes from, one may ask what to do if values are themselves re-ranked, or if whole new values are created. Isn’t that the whole point of innovation? David Stark, in his book The Sense of Dissonance, explores this avenue and argues that incorporating what is presently deemed “noise” is actually beneficial to organizations. (See also Krakauer and Wolpert’s argument here.) So if the goal is to innovate, i.e. to call into question what one is told is best, then perhaps it’s wisest to avoid feedback.
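Since hypertely maps neatly onto over-fitting, the standard toy demonstration may help. Everything here (the quadratic ground truth, the noise level, the polynomial degrees) is an assumption chosen for illustration: the high-degree model chases the noise in its training data and degrades everywhere else, which is the feedback failure in miniature.

```python
# Over-fitting as hypertely: a model tuned too tightly to its feedback signal.
# Ground truth, noise level, and polynomial degrees are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = x_train**2 + rng.normal(0, 0.05, size=x_train.shape)  # noisy quadratic

x_test = np.linspace(0, 1, 100)
y_test = x_test**2

for degree in (2, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit to the feedback signal
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.2e}, test error {test_err:.2e}")
```

The degree-9 polynomial threads through all ten training points, driving its training error to nearly zero while its test error blows up: perfect optimization against the available information, at the cost of the situation one is actually in.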
Finally, I want to comment on AI, because we all know we can’t avoid it in this day and age. A study on the impact of AI in scientific research has found that AI improves individual scientists’ results and careers but undermines the overall scientific enterprise. As noted by one of its authors, the underlying dynamic is that individual scientists point AI, a feedback machine if there ever was one, at areas of research where there’s already lots of data. The AI-backed research produces even more data, leading scientists and their AI tools to return to the same place until eventually, maybe, they’ve strip-mined it. That proves to be a Pyrrhic victory, however, as it leaves other areas unexplored, just like the proverbial drunk searching for his keys under the streetlight. What’s happening is that places where more data is more easily (‘efficiently’) obtained are valued above more difficult ones, because individual scientists are optimizing for their own careers rather than for science per se. This is really just to impress upon the reader the seriousness of the risk of hypertely and how AI will make things worse for everyone, because organizations under capitalism are, of course, using AI for exactly this same purpose.
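To see how little machinery that dynamic needs, here is a toy model (my construction, not the study’s methodology): research areas are arms whose apparent promise is just their existing stock of data, career-optimizing agents greedily pick the most data-rich area each year, and working an area produces yet more data there.

```python
# Toy model of the streetlight dynamic: effort flows to whichever research
# area already has the most data, and working there generates more data.
# The setup is my illustration, not the cited study's methodology.

data = {"area_a": 10.0, "area_b": 8.0, "area_c": 9.0}  # initial data stocks

for year in range(20):
    # Each agent greedily picks the most data-rich area (best for the career)...
    choice = max(data, key=data.get)
    # ...and the work there produces yet more data, closing the loop.
    data[choice] *= 1.2

print(data)
```

After twenty rounds all the growth has gone to area_a while area_b and area_c sit untouched: the streetlight effect in a dozen lines.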