This month's Frame: a model for how technology influences social change
A framework for making sense of techno-moral change.
In his recent screed The Techno-Optimist Manifesto, Netscape co-founder and venture capitalist Marc Andreessen asserted not only that “there is no material problem—whether created by nature or by technology—that cannot be solved with more technology” but also that humans “are, have been, and will always be the masters of technology, not mastered by technology.”
In its starry-eyed optimism about the unalloyed benefits of technology, the piece doubles as a master class in technological determinism—the idea that changes in technology are the primary influence on human social relations and structure.
The accusation that an account of social or cultural change is guilty of technological determinism is often enough for scholars to reject it tout court. When a politician announces that data is “the driving force of the world’s modern economies”, the claim can be dismissed as sociologically naive. Yet the organising narrative of Western historiography is, to a large degree, that of technological innovation: the “Ages of Man” are those of progressive technical development, moving humans from the Stone Age to the Iron, Steam and Information Ages. Technology is, in this estimation, the driving force of history.
Satisfying though it is to dismiss a historical account or a future prediction as technologically determinist, what are our alternatives? If, as Robert Frost is reputed to have quipped, “a critic is someone who pisses into a river and says look what a big stream I made”, how can we be more than critics? What other means of theorising the ways in which technology enables change might we lean on?
This is not just a question of academic interest. Any framework that can help us understand the mechanisms through which technology shapes society, values and ultimately morality—what people perceive and understand to be worth promoting, pursuing or valuing—has worth. A recent taxonomy developed by John Danaher and Henrik Skaug Sætra—Mechanisms of Techno-Moral Change: A Taxonomy and Overview—is a good place to start.
The framework
Danaher and Sætra’s framework explores six “mechanisms” through which technology shapes people’s moral beliefs and practices. Their core idea is that “technologies-in-use help to establish relations between human beings and their environment. In these relations, technologies are not merely silent ‘intermediaries’ but active ‘mediators’. By organising relations between humans and the world, technologies play an active, though not a final, role in morality.”
These six mechanisms can be divided into the Decisional, the Relational and the Perceptual.
Decisional
Technology changes option sets
Technologies present us with new options. They force new decisions on us, and those decisions involve new moral questions. Before work communications were available on our phones, being away from the work computer meant being out of reach. The smartphone forces upon us decisions about how available to make ourselves, and how to navigate the boundary between work and personal life.
Technology changes decision-making costs
Technology changes the availability and ease of access to various options. Social media has made it easier for us to seek out views that reinforce our pre-existing attitudes or beliefs. Equally, AI-generated fakes have made the “Truth” harder to access. By changing costs, technology can make it both easier and harder to act on certain values and to do the “right” thing.
Relational
Technology enables new relationships
Information, communication and transportation technologies enable relationships that were previously not possible, both between humans and between humans and non-humans (e.g. robots or AI). For example, when we shift from face-to-face to remotely mediated relationships, the embodied dimension of those relationships is lost, but we gain the ability to connect with people easily and cheaply. New relationships, and the ways in which we pursue them, are potential sources of both value and harm.
Technology changes the burdens and expectations within relationships
Human morality is fundamentally relational. Relationships come with roles, and thus with expectations about what we, and others, will or should do. The introduction of “labour-saving devices” into the home changed ideas about who should perform household chores. It also shifted ideas about the standards to be expected in their execution. While these technologies didn’t always change who did what in the home, they did provoke new conversations about domestic relationships and standards.
Technology changes the balance of power within relationships
Relationships are rarely equal, and power inequalities have moral effects; technology can either reduce or entrench those imbalances. Smartphone cameras paired with social media can give previously powerless publics new powers to record and share footage. AI-powered facial recognition can give authorities new powers of surveillance over the public.
Perceptual
Technology changes moral perception
Technology must change how we perceive the world in order to have any effect on morality. It changes what information we receive about the world, information that is relevant to our decisions and actions. In doing so, it provides us with frameworks and models for understanding the world. For example, the emergence of computers gave rise to the computational theory of mind—the idea that the human mind is an information-processing system and that cognition (and consciousness) are forms of computation.
Using the framework
Danaher and Sætra’s mechanisms provide a useful means of understanding how the introduction of a new technology changes the ways people understand, interact with, and act in and on, the world. In doing so, they help us sidestep the temptation to fall into technologically determinist accounts of the world.
Although focused on morality, the framework can usefully be applied to deepen our understanding of the mechanisms through which technology informs change more broadly.
While the six mechanisms can help us look back to understand the ways technology has shaped society, it’s also fruitful to consider how they might be deployed in a forecasting mode: to understand what first-order effects a technology might have. First-order effects are often the easiest to predict, but since these mechanisms are likely to layer onto each other, it is possible to use them to speculate about second- and third-order effects.
For example, Generative AI technology like ChatGPT gives us new options for seeking knowledge, with different costs, which in turn change relationships. Take the educational context: Gen AI enables new forms of information gathering and analysis. It changes the ways in which we understand information, moving it from something to be sourced, consumed and made sense of to something that is ready-to-hand in a pre-processed and pre-packaged form. Mental models of knowledge, of what it is and how it is evaluated, will shift, and in the process the respective roles of teacher and pupil will change.
The framework has utility as a way to disentangle effects at different levels. A new educational technology might change how a teacher perceives their relationships with those they instruct. Moving up the scale, educational Gen AI might start to reconfigure the role of schools as instructional centres. At an even wider scale, these mechanisms provide a useful window into how Gen AI may shift our broader relationships with pedagogical technologies, both low-tech and high-tech.
A product team exploring Gen AI in this context might want to consider the decisional, relational and perceptual shifts that Gen AI solutions may kick-start, the better to understand how their adoption and use might destabilise existing educational equilibria, as well as the new opportunities they might open up.
Technological determinism is an intuitive but reductive way of accounting for the ways in which technology shapes the world. Danaher and Sætra’s mechanisms provide a fresh lens, bringing some specificity to bear on the mutually constitutive relationship between technology and humans.
News
We were delighted to participate in the EPIC conference in Chicago last week. You can read a summary of one paper we presented, showcasing how our social scientists are working with our data science team and Large Language Models to develop engineering- and product-focused insights.
Frames is a monthly newsletter that sheds light on the most important issues concerning business and technology.
Stripe Partners helps technology-led businesses invent better futures. To discuss how we can help you, get in touch.