How Technosolutionism Works
Technosolutionism is the idea that one can solve a problem using techne. This Greek term is the root of English words like “technique,” “technology,” and “technical.” What it signifies, and brings to each of those derivatives, is a more or less well-defined or delineated artificial method. Techne originates from living beings and their activity; it does not come before living beings are on the scene, so to speak. Which living beings can perform techne is an open question. The term is often scoped to just humans and their ability to come up with processes for creating and wielding tools; it is unclear whether instinctual activities like cellular mitosis, vegetable photosynthesis, slime molds’ “computational” capacities, salmon runs, or birdsong are themselves techne.
For my purposes, I use the term “technosolutionism” here to refer to the idea, in a capitalist economy, that some given techne now available for purchase will solve its buyers’ problem(s). In other words, a given commodity has a use value. (This is Marxist terminology, for those unfamiliar with that literature.) My experience has led me to be very critical of this idea, because I’ve seen that there are no ready-made problems that a novel techne can simply slot in and solve. In fact, what I’ve experienced is that new techne must first create problems, and only then can the techne be used to solve them.
My most recent perception of this comes from so-called AI, especially generative AI in the form of Large Language Models. These techne are touted as having general-purpose application, or at least as being applicable in a wide variety of situations as solutions to local problems. But I’ve found this not to be the case. In my workplace, we’ve felt a lot of pressure to adapt our product roadmap and strategy to incorporate this techne. That pressure has come from investors, industry analysts, current customers, prospective customers, and even our fellow employees. There is a large chorus voicing questions and otherwise putting pressure on the organization to have “an AI strategy.” Notably, this has not been an explicit demand for some specific use of the techne; it’s more like feeling an expectation that we will make use of it. Coincident with that is the feeling that if we don’t, then we will “lose” in the capitalist marketplace.
This felt expectation has led us to create a team dedicated to exploring possible ways to make use of AI in our product. They do a lot of experimenting, make lots of requests of people like myself who regularly interact with customers for problems which we think AI might help with, and otherwise gather information and opinions about what AI can or could do in the hopes that it will spark an idea of how we could use it profitably. It’s this process that, I think, undermines the idea of technosolutionism. We could call what this team is doing “scoping.” They are organizing the data in such a way that it creates a sense of the situation, in this case with the goal of creating new problems which AI happens to be able to solve. The problems aren’t there first; they come later.
This is, I think, how all capitalist markets are made. The world and its people are there, and some speculators decide to invest money in a techne. In order to get a return on that investment, they need to create a sense amongst the people that they lack something (have a problem) which the techne will solve. Marketing, whether paid or “earned” from media sources like newsmedia, industry analysts, or other “influencers” like “thought leaders,” creates this sense of a lack. The implications carried by marketing material lead people to contort their sense of the world until it seems to them that they have a lack the techne will address. In this way, the problem of privation is created. That problem is itself a techne, and I doubt that even more techne will solve it.
——
In the middle of drafting this blog post I spoke with my comrade, Beau. He pointed out another way that the discourse around AI is operating, which I thought would be good to include. This is the discussion of the complexity of wicked problems, like climate change. As he put it, the interrelated and systemic effects of human activities have created a sense of the massive scale and scope of the challenges humanity faces. Solutions in one locality can intensify existing problems in others: buying electric cars, for example, can reduce a household’s carbon emissions while at the same time increasing reliance on the fossil fuels burned by the electricity-generating facilities that power those cars.
This situation has induced people to call for adopting AI as the only techne sufficiently complex and comprehensive to address such wicked problems. The reasoning goes, essentially, that humans can’t adequately address the problems we’ve created and are perpetuating, so we need to let ourselves be subsumed by an AI overlord that will make us do what is necessary to address them. This seems to me to be quite stupid. No techne that we’ve created can ever become independent of us, let alone govern us without our going along with it. We can always refuse, and so its power is only what we project onto it.