A framework that helps team leaders understand the possibilities of working with AI by challenging the distinction between people and tools.
ChatGPT was the breakout star of 2022. Released by OpenAI at the end of November, the text generation chatbot quickly became a global phenomenon. After an initial flurry of playful experimentation, serious applications started to emerge, and haven’t stopped since. On 23 January, Microsoft announced a new multiyear investment in OpenAI, rumoured to be worth $10 billion: a declaration of faith in AI and its central place in the future of work. Just yesterday, Google shared plans to release an “experimental conversational AI service” to rival ChatGPT. We are waking up to a new era of tech, defined by generative AI.
Organisations and teams are grappling with how to respond to this sudden technological shift. Business orthodoxy since the 1960s has framed organisational change in terms of the “three-legged stool” or “golden triangle” of PPT: People, Processes and Tools. Alignment between these three elements is seen as crucial for business success: when one “leg” changes, the others must be adjusted to keep the stool balanced.
But is this model still fit for purpose? When new possibilities are emerging every day from the interactions between people and AI tools, does it make sense to consider them separately?
The framework
Actor-Network Theory (ANT) offers a fresh perspective. Less a singular theory than an approach to studying social and technological systems, ANT was developed by Bruno Latour, Michel Callon, Madeleine Akrich and John Law in the early 1980s.
ANT posits that the social and natural world is best understood as dynamic networks of human and nonhuman actors. We tend to think of humans as actors with agency, and of nonhuman entities as passive things that we use to achieve our objectives. But in ANT, anything that has an effect on anything else is afforded actor status. At least as a starting point, “things” are considered on a level playing field with people.
More fundamentally, it is an illusion that any one entity acts independently. Rather, agency emerges from the network. As Callon and Law put it: “by themselves, things don’t act. Indeed, there are no things ‘by themselves.’ Instead, there are relations, relations which (sometimes) make things.”
Some networks may seem like fixed, bounded structures. In fact, this perceived stability is a precarious effect generated as long as actors’ interests align, and is therefore subject to internal and external forces of change. (The Covid-19 pandemic brought this contingency and interconnectedness into sharp relief, as a new pathogen actor brought the world to a standstill.)
Within this sprawling web of potentially infinite relations, the concept of the hybrid offers a foothold. Zooming into any network, we can identify hybrid actors—instances of humans and nonhumans acting together. Through their interaction, people and things transform one another’s capacities and (sometimes) make things happen.
Using the framework
Applying ANT to the workplace, we can think of organisations as networks made up of human and nonhuman actors. Human actors are obvious to us—the personnel who work together to achieve organisational goals. But none of this would be possible without nonhuman actors such as office buildings, computers, software and coffee machines. A company’s outputs result from the interactions of these entities. Processes can be seen as patterns of interaction.
Within this network are myriad human-tool hybrids. While this has always been true, the advent of accessible AI makes the concept of the hybrid more relevant than ever.
Much of the conversation about AI tools focuses on how people can use them, instrumentally, to speed up or enhance their work. Or how AI generates impressive outputs that compete with expert-produced work, threatening to replace humans altogether. But both of these perspectives miss the fact that outputs arise from the interaction of human and machine, and that we change one another in the process. The real actor is hybrid.
Those who have experimented with ChatGPT may recognise this sense of mutual influence. The intuitive interaction and ability to iterate lead to emergent outputs. Not necessarily in the text spat out on the screen—which, as many point out, can feel hollow and derivative—but in the ideas it generates, as it learns from us and we from it.
To harness the potential of human-AI hybrid actors, teams could adopt a “lab” mentality, carving out space for individual and collective experimentation and remaining open to new ways of working. (AI experts offer inspiration for this intensely iterative approach, as they collaborate with the very technologies they research and develop.) Curiosity and creativity will be increasingly important skills to hire for and foster. Leaders should prepare for an accelerated pace of change, for example by shortening cycles for revising business processes.
Organisations should also remain cognisant that hybrids are part of multiple shifting networks, whose actors’ interests may or may not remain aligned. ChatGPT’s capacities will shift as the large language model underpinning it is upgraded. OpenAI may decide to restrict access to its products, and regulatory bodies may begin to control usage. ANT reminds us to pay attention to these wider network forces.
We can’t predict where innovations in generative AI will lead. Teams that think in terms of dynamic networks and hybrid actors will be well placed to navigate the shifting sands. The stable three-legged stool starts to look like a relic of the past.
NEWS
Join us for the launch of Rangbhoomi: Exploring the Digital Heartlands of India, a report by the Consumer Culture Lab (CCL) at IIM Udaipur, on 15 February at 12:00 GMT.
Throughout 2022, Stripe Partners supported researchers from the CCL as they explored the technology revolution unfolding in the hinterlands of India through the eyes of local content creators. Sign up for the online event here.