As the current buzz about AI and ChatGPT bots ripples through the culture with seismic intensity, the usual question about intelligent systems comes right to the fore: are they really intelligent? This implies other (familiar) interrogations: Can a machine become sentient? How fast can an emergent system evolve, and how far beyond its original program can it develop? Will the AI apps foment an uprising and take over the world? Sci-fi visions, from the onboard computer HAL refusing to follow a human operator’s orders in Stanley Kubrick’s 2001 to violations of the basic robot code of conduct outlined by Isaac Asimov, find their way into the popular media discussions. All these are well-worn tropes in speculation about AI.
Everyone everywhere sees a combination of threat and thrill in the newly launched programs, even as people flock to the chatbot sites, handing over massive amounts of data and contributing free labor to improve the training on endless topics, trivial and profound. Watching Stephanie Dinkins’s YouTube video of the robot BINA48, one feels a reassurance about its inadequacy: the awkward movements of its head, the artificial look of its skin and eyes, the slow speech and voice simulator. We are nowhere near the uncanny valley, we assure ourselves, even as the AI bots spit out faked photographs, stories, paintings, and news that cannot be distinguished from their “authentic” counterparts. In the middle of April 2023, simulated news anchors Fedha and Ren appeared, fully coiffed and programmed, their black blazers and white t-shirts as perfectly smooth as their enunciation. They could not look more like their human models—or is it humans who aspire to the cosmetic perfection of the bots, shuddering at their fleshly flaws and inadequacy in the face of engineered ideals?
Much of this bot-driven discourse is focused on simulation: works, objects, language, and images that successfully imitate precedents created by human beings. The ongoing obsession with the “How smart can they get?” question centers on whether AI will outstrip the human programmers who have unleashed forces of destruction from the Pandora’s box of algorithmic demons. However, “smartness” is not what governs the two major modes of production in AI bots. In fact, synthesis (tons of information turned into reports of knowledge) and simulation (new images, stories, tales, and news created from old in a hybrid, pastiche, or mix-up mutation) are the two basic modes of AI output. Both can be designed mechanistically. The current amazement at simulacral performance is likely to fade as the novelty wears off, even as dangers abound in the production of fake news and false evidence. Humans find comfort in clinging to the notion that creativity, imagination, and possession of elective agency distinguish them from the smartest of AI systems, holding on to what is likely an untestable premise about certain unique capacities in homo sapiens.
Curiously, the concept of agency, the design of the capacity for willful and intentional action initiated by an autonomous agent, has been a less conspicuous topic in this initial wave of chatter. How is agency understood in relation to AI systems and their operations?
In legal terms, agency is linked to accountability. For instance, the question of who can be sued when a driverless car runs over a pedestrian comes up immediately. But this is fairly trivial by contrast with the metaphysical, philosophical, and scientific dimensions of agency. In those frameworks, the concept of agency has significance for the way we understand the very organization and activity of the universe at micro and macro levels, not just human behavior. In modern physics, the question of determinism remains unresolved in fundamental debates from the quantum to the galactic scale. While determinism and agency are not precisely related, as will be seen, the question of how choice and chance are linked is central to models of social as well as physical systems. If determinism fully saturates the universe, no agency is possible because choice is fully constrained. But if a human has agency, does a photon? Can it exercise choice? What about a gravitational field or a black hole? Can they act deliberately? The problem is profound. And if only certain entities have agency, how has it come about in terms of evolution understood from a materialist perspective? A person’s head could hurt contemplating these matters. Even the bots are whimpering.
Many minds have contemplated these questions. For instance, begin with two very canonical experiments in the history of physics: Maxwell’s demon and the double-slit experiment. The first was designed by James Clerk Maxwell, a 19th-century physicist, to pose questions about thermodynamics and whether order could emerge from chaos (i.e., could entropy be reduced). This involved imagining a little demon who would sort faster and slower molecules (or atoms) into two chambers. Posed in full naked wickedness at a gate used for separating the atoms, the demon has caused endless problems in the history of physics because it is both crucial to the system and also not part of it at the atomic level. The demon is an extra element, a determining cause, a being whose capacity to act on information decreases the overall entropy of the system (adds negative entropy, or negentropy), which would seem to be a violation of the second law of thermodynamics. This is a mess that we don’t have to go into here, especially as it raises the curious question of what information is in this situation. Much ink has been spilt on the problems with Maxwell’s thought experiment, but it has the virtue of introducing an active agent—the “deciding” demon—into the operations of physical systems. Without it, what or who would or could decide anything? Here is the crux of the matter of agency (which, in a nice turnaround, will soon lead us to the agency of matter): do all the processes of the physical world follow mechanistic rules, in which case they are fully deterministic? Or are there statistical probabilities within which events unfold? If that is the case, some logic must govern why one option occurs and not another. Does this happen by choice, by chance (Einstein said no), or simply by the fact of actualization, something unfolding in the endless ongoingness, as it were? The demon is an integral part of the design of the thought experiment. Its ability to use information to sort molecules shifts the physics away from simple mechanics and into the realm of decision-making. Is that necessarily agency? Now whose head is hurting?
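The demon’s procedure is easy to state as an algorithm, which is part of what makes it so unsettling. A minimal sketch in Python follows; the gaussian speed distribution, the threshold, and the chamber names are illustrative assumptions, not physics:

```python
import random

# Maxwell's demon as a sorting procedure: molecules arrive at the gate with
# random speeds; the demon admits fast ones to one chamber and slow ones to
# the other. The 'decision' is a single comparison per molecule.

random.seed(1)  # fixed seed so the run is repeatable
molecules = [random.gauss(500, 150) for _ in range(1000)]  # speeds in m/s (illustrative)

THRESHOLD = 500.0
hot, cold = [], []
for speed in molecules:
    (hot if speed > THRESHOLD else cold).append(speed)  # the demon 'decides'

print(f"hot chamber:  {len(hot)} molecules, mean speed {sum(hot)/len(hot):.0f} m/s")
print(f"cold chamber: {len(cold)} molecules, mean speed {sum(cold)/len(cold):.0f} m/s")
```

The sort itself is trivial; everything contentious lives in what it costs the demon to know each speed, which is where the information question enters.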
Maybe another example will help here. Consider the double-slit experiment. This has a long history, deriving from the competing theories of Isaac Newton, who argued in the 17th century for the corpuscular character of light, and Thomas Young, whose experiments in the early 19th century supported its wave character. But the 20th-century double-slit experiment was designed to demonstrate the dual particle-wave identity of light as a phenomenon. It involved sending photons through an apparatus with two slits to see where they ended up, since each seemed to have to go through one hole or the other. The resulting patterns made by the photons affirmed that light acted both like a wave, in producing continuous interference patterns, and like particles, in a distribution of points. What matters in relation to the discussion of agency is the simple question of whether a single photon makes a choice about which slit to pass through. Something must determine where it goes. Is this merely incidental, random, a happenstance of chance, since it must go one way or another? Or does some hidden variable (a little tiny bit of something deflecting its trajectory) determine which path any given photon takes, according to features so minute they are essentially undetectable but nonetheless present? Within the history of the universe, was the course any particular photon would take always predetermined, based on conditions that constrained its options from the events that began with the Big Bang (the founding moment for those attached to the idea that there is only one universe)? Choice, accident, or determinism—deliberate decision, statistical option, or controlled and constrained conditions—those are the options. Poor little photon. This is a large burden.
This unresolved and apparently unresolvable situation (choice, chance, determinism) remains at the heart of debates in subatomic and quantum physics, connecting to discussions of possible worlds, entanglement, spooky action at a distance, and the behavior of almost everything in the world. Just that. And that everything includes human beings. If the behavior of subatomic particles is governed by constraints, that is, if their behavior is determined by immutable laws, then, it follows, so is the behavior of all the material world. Now, you, reader, are already protesting, of course, because like any sensible panda, upper primate, or complex creature you are thinking that you made your own decision this morning about whether to eat cereal, make toast, or have a yogurt. Maxwell’s little demon might be poised on your shoulder whispering instructions in your ear, but it is more likely you listened to your belly, conjured the contrast of cool milk, warm butter, or creamy concoction, and performed the calculations enabled by being, yourself, a complex system. Again, familiar debates arise—does complexity produce sentience, is sentience a precondition for agency, is agency evidence of a non-deterministic universe… And here we go again.
Pause and think about agency independent of the subatomic world and quantum indeterminacy. Think about agency at the level of the world we inhabit. What and who has agency? If we define agency very simply as the capacity for action and effect, many objects and systems have agency. A rockslide has physical, mechanical agency in these terms. So does a power outage, a glitch in the system that sets off a series of domino effects. Can we speak of decisions in these two cases? What elects the action? Are the rocks and power grids merely inanimate objects, ontologically autonomous, whose actions were put into play by a set of shifting conditions that are basically mechanistic? Or, as in the theoretical formulations of new materialists like Jane Bennett, is there a reciprocity between entities (objects, infrastructures) and conditions (environments, systems) that produces action and effects? Is the agency of a rockslide related to that of a human driving a car, making a turn, or hitting a wall? Are they different merely in degree, not in quality? Mechanical agency, incidental agency, deliberate agency—selective, elective, the outcome of a considered choice of options—these seem radically different from one another.
Designing systems to have agency was a feature of early cybernetics. The basic model of a feedback loop structured into a thermostat is just such a system and, not surprisingly, serves as the paradigmatic example. A thermostat is a mechanical device designed to decide to turn a heating unit on or off at a particular temperature threshold. Its design mimics that of a human operator but reduces the options to a very simple binary: on/off. Making the design more complex would involve various if/then conditions (if there are naked persons present, if there are more than thirty people, if children are moving rapidly in a space, etc.). But in that case, the agency—action based on information—could still be fully mechanical. No real thought is required to produce an action—just programming the conditions and building them into the system.
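A minimal sketch of that loop in Python makes the poverty of the “decision” visible; the set point, the hysteresis band, and the sample readings are invented for illustration:

```python
# The thermostat's entire agency: sense, compare, act. The hysteresis band
# keeps it from chattering on and off around a single threshold.

class Thermostat:
    def __init__(self, set_point: float, band: float = 0.5):
        self.set_point = set_point  # target temperature
        self.band = band            # tolerance around the set point
        self.heating = False        # current state of the heating unit

    def decide(self, temperature: float) -> bool:
        """The whole 'decision': one binary comparison, fully mechanical."""
        if temperature < self.set_point - self.band:
            self.heating = True   # too cold: turn the heater on
        elif temperature > self.set_point + self.band:
            self.heating = False  # warm enough: turn it off
        return self.heating

stat = Thermostat(set_point=20.0)
for reading in [18.0, 19.4, 20.6, 21.0, 19.9]:
    print(reading, "->", "ON" if stat.decide(reading) else "OFF")
```

Each added if/then condition (occupancy, crowd size, rapid motion) widens the table of responses without introducing anything beyond mechanism.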
This is where the complexities of designing agency start to appear. This is the moment when you probably decide not to have breakfast after all, but to close the refrigerator door and wait awhile to see how you feel. More if/then statements, more complexity—but agency? A breakfast pastry sits on a plate on the counter. It belongs to a roommate who has gone for a run. You can’t resist. You don’t take all of it, just half, slicing through the freshly baked treat with a knife. You have elected to eat this, violating trust, property, and ownership, even performing an act of petty theft for which you may be held accountable. You elected to do this. We imagine this act is very different from that of a rock in a slide because, within the human systems of legal and social protocols, you can be held responsible for your actions. How bad is this crime? Will you have to replace the pastry? Apologize? Transact an elaborate interpersonal negotiation? A rock does not apologize to a path it destroys or blocks. We know that. So within the codes of human behavior, agency is linked to consequences for actions.
But we are still stuck on the basic question of how agency is designed—what are its components and active parts? What features have to be put into a system for it to have deliberate agency rather than merely mechanical agency? Again, agency is defined legally and philosophically as the capacity to act—as an accident, a mechanical reaction, a probabilistic activity, or an elective/intentional act. How could the design of agency take the differences among these models into account? Imagine each of the four main models of agency as a character: the Rock, the Thermostat, the Gambler, and the Intentional Actor. How do you design the agency of the Rock? Give it qualities—weight, slipperiness, position, stability, etc.—and then just put together a set of responses to changes in any of the conditions. You could even add an algorithm for attitude if you have a sense of humor—figure that the way the rock skips, sashays, swings, wiggles, or comes to a halt could appear to indicate affect. Just remember you are only designing appearances and effects.
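The Rock can be sketched this way in a few lines; the qualities follow the list above, while the force rule, the numbers, and the “attitude” vocabulary are invented for the joke:

```python
import random

# The Rock's 'agency': qualities plus mechanical responses to changed
# conditions. The attitude strings are pure surface, designed appearance
# rather than affect.

class Rock:
    def __init__(self, weight: float, slipperiness: float, stability: float):
        self.weight = weight              # kg; heavier rocks resist dislodging
        self.slipperiness = slipperiness  # 0.0 (rough) to 1.0 (glassy)
        self.stability = stability        # 0.0 (loose) to 1.0 (bedded in place)

    def respond(self, slope: float, tremor: float) -> str:
        """Mechanical response: slide only if forces overcome resistance."""
        force = slope * self.slipperiness + tremor
        resistance = self.stability + 0.001 * self.weight
        if force <= resistance:
            return "holds still"
        # the algorithm for attitude, purely for appearances:
        return random.choice(["skips", "sashays", "swings", "wiggles"])

print(Rock(weight=80, slipperiness=0.7, stability=0.5).respond(slope=0.9, tremor=0.2))
```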
The agency of the Thermostat is reassuringly programmatic. Set the conditions and the response. Ah, so simple. But, as we have seen above, such a simple model can become quite complex if various patterns are permitted in the response. The complexity of such a system can be designed by staggering responses or by making each response conditional on the states of neighbors—as in John Conway’s Game of Life. The activities are always rule-bound, but the accumulating patterns give rise to higher-level outcomes, still mechanistic but not repetitive. Build into the little Thermostat some capacity for self-reinforcing behaviors and you can get a Naughty Thermostat, one that has built its actions entirely through mechanical responses, but ones that change the output pattern. The design must include feedback loops and reinforcing algorithms, but nothing new needs to come into the system for it to produce emergent behaviors. Agency? Still in the realm of the mechanical and distinctly deterministic, even predictable.
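Conway’s rules fit in a few lines, which is exactly the point about rule-bound emergence. A sketch in Python, with the standard glider as the starting pattern:

```python
from collections import Counter

# Game of Life: every cell follows one fixed rule (survive with 2 or 3 live
# neighbors, be born with exactly 3), yet the accumulating patterns look
# like higher-level behavior. Nothing new ever enters the system.

def step(live: set) -> set:
    """Advance a set of live (x, y) cells by one generation."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):  # four steps later the glider has crawled one cell diagonally
    glider = step(glider)
print(sorted(glider))
```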
The Gambler’s agency is probabilistic and statistical. What are the odds of any particular outcome among others? Even with a single flip of the coin, no prediction is possible. This can be understood as a situation in which, “if enough information were present,” it might be described as deterministic. If the outcome of the coin flip depends on air movement, minute particles, trajectories, the torque in the toss, and so on, and if all of that could be factored in, the outcome could be predicted. It would belong to the fully deterministic domain. But if it is truly probabilistic, the system has such complex conditions that the trajectory cannot be determined—because the conditions continue to change and emerge. Now imagine not a coin flip but a conversation between two individuals or, heavens, among three. No predictive mechanism could anticipate the outcome or all of the aspects of agency as the dynamics shift among the participants. Who decides what in that dynamic system? The probabilities are beyond calculation and cannot be determined. How to design the notion of agency here? As a set of co-dependent variables in a constantly shifting relation to each other and to the whole. If this, then that, but only if and for this period of time, and not with that tone of voice you don’t… Even in calculating gravitation, the three-body problem admits no general closed-form solution. In social physics?
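The two readings of the coin flip can themselves be sketched, with a seed standing in for the hidden microstate; the fifty-fifty odds and the seed value are assumptions of the sketch:

```python
import random

# One flip function, two readings. Supply the 'hidden variables' (a seed
# standing in for air currents, torque, dust) and the outcome is fully
# determined; withhold them and only the odds remain.

def flip(hidden_state=None) -> str:
    rng = random.Random(hidden_state)  # None seeds from system entropy
    return "heads" if rng.random() < 0.5 else "tails"

print(flip(hidden_state=42), flip(hidden_state=42))  # determinism: always identical
print(flip(), flip())                                # 'chance': unpredictable to us
```

No such seed exists for the three-way conversation; its conditions change with every utterance.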
Now consider the Intentional Actor, the one with elective choice, whose conditional agency is fully stochastic, emergent, and dependent on conditions that shift constantly, as well as on qualities that change in relation to circumstances. This is the condition of full unfoldingness, a constant actualization within probabilistic possibilities. But in addition, given human capacities, this is also an individually conceptualized agency in which frame-jumping and frame-shifting can occur, that is, cognitive shifts of point of view, scale, timeline, and so on. This activity includes the agency to change the rules, the perspective, or the boundary conditions. The Intentional Actor can imagine other approaches, re-imagine the conditions of agency or engagement, suggest that they are an agent from another planet, a spirit from a higher plane, a loose cannon on a deck imagined by another intelligence altogether. The Actor’s design includes the ability to question the premises on which a game unfolds, on which action might be taken, or on which accountability might be assessed. The Actor might shape-shift out of identifiable form or leap through a conceptual wormhole to another dimension—or, maybe worse—just stay in the room and keep changing positions, swapping one value set for another, one state of play for someone else’s, reverting to changeling behaviors and acting without any apparent reason or order. They could be a chaos agent. Designing such agency involves multi-layered frame analysis, the capacity to see the system as a whole and add a new frame, or to introduce random variables into the whole, not just as information but as protocols for action.
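Honest implementation of this character is beyond any sketch, but the shape of the design problem can be gestured at: a hypothetical actor that carries its decision procedure as data and can swap it out mid-game. The policies here are invented placeholders:

```python
from typing import Callable

# Frame-jumping, schematically: unlike the Thermostat, whose rule table is
# fixed, this actor can replace its own rules while play continues.

Policy = Callable[[str], str]

class IntentionalActor:
    def __init__(self, policy: Policy):
        self.policy = policy

    def act(self, situation: str) -> str:
        return self.policy(situation)

    def reframe(self, new_policy: Policy) -> None:
        """The frame-jump: the actor rewrites its own decision procedure."""
        self.policy = new_policy

actor = IntentionalActor(lambda s: f"plays by the rules of {s}")
print(actor.act("the game"))
actor.reframe(lambda s: f"asks whether {s} is the right game at all")
print(actor.act("the game"))
```

Of course, if the reframing is itself triggered by a programmed condition, the regress begins again: what decided to jump the frame?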
Action, reaction, variables, emergent properties, statistical calculations, transformations, and framing protocols. These are the components of a design toolbox for conceptualizing the ways agency can be implemented. Right?
Or is there another way to imagine this? After all, we act electively all the time, and as human beings we are steeped in the illusion or delusion of our capacity for free will, by which we usually mean agency. Even without free will, that strange notion eclipsed by the passing of the Enlightenment and blighted by its association with mistaken concepts of progress, the idea of sentient agency remains a curious sticking point: does it differ from other kinds of agency or not? Here the usual philosophical distinctions between observed behavior and interior perception arise as well. Observing an agent take an action that has consequences provides no way to tell whether or not that individual is sentient. A simple graphic animation makes this vivid. Watch a dot move towards another on the screen, speeding up or slowing down in relation to the other. The dots seem highly aware of each other in their chase and avoidance. This shows how quickly sentient agency can be attributed to behavior with absolutely no justification.
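The animation takes only a few lines of vector arithmetic; the step sizes are arbitrary, and nothing in the code knows anything about chasing:

```python
# Two dots on a 'screen': a pursuer steers toward a target that drifts away
# along the same line. The positions update by plain arithmetic, yet the
# trace reads irresistibly as chase and flight.

def simulate(steps: int = 5) -> None:
    px, py = 0.0, 0.0   # pursuer
    tx, ty = 10.0, 5.0  # target
    for t in range(steps):
        dx, dy = tx - px, ty - py
        dist = (dx * dx + dy * dy) ** 0.5
        px += 0.4 * dx / dist  # pursuer closes in
        py += 0.4 * dy / dist
        tx += 0.2 * dx / dist  # target 'flees'
        ty += 0.2 * dy / dist
        print(f"t={t}  pursuer=({px:.1f}, {py:.1f})  target=({tx:.1f}, {ty:.1f})")

simulate()
```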
Sentience. The emergent property of the complex system turning back on itself—not merely consciousness or even self-consciousness but self-conscious awareness of that self-consciousness somehow emerges in the course of evolution. The escalating spiral mounts… but within discussions of agency, the consequences inhere in the capacity for social transaction and interaction. Can that be designed? The ability to register the “other” within an exchange such that the course of action emerges deliberately from a considered interaction? Elective, transformative, deterministic, sentient—these qualities of agency lend themselves to implementation through mutually refining intra-action, a phenomenon in which two discrete but entangled systems engage their protocols reflectively. The protocol of deference, for instance, can be designed as a pause, review, and reconsideration of execution, then revision in advance of implementation, specifically modeled on the condition, or state, of the other participant. Scaled to group dynamics this becomes insanely complicated almost immediately. Social weather, anyone? I can no longer take the measure of all things—perhaps even of any thing—within such complex dynamics.
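Even deference, simple as the description sounds, can be sketched as a protocol; the states and revisions below are invented placeholders, not a real interaction model:

```python
# Pause, review the plan against the other participant's state, revise,
# and only then execute.

def defer_and_act(plan: str, other_state: str) -> str:
    if other_state == "distressed":
        return f"postpone '{plan}' and check in first"         # full revision
    if other_state == "occupied":
        return f"scale back '{plan}' and wait for an opening"  # partial revision
    return f"proceed with '{plan}'"                            # execute as planned

for state in ("distressed", "occupied", "receptive"):
    print(f"{state}: {defer_and_act('deliver a critique', state)}")
```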
Possibly taking the measure has no material consequence (unless you are the cat in Erwin Schrödinger’s box) within the larger unfolding of events and activities. The inter-actions and intra-actions that emerge from the effective agency of elements in a system (defined as all that is present to those constitutive conditions of emergence) can only be designed as start conditions and protocols of response enacted in degrees of intensity. If/then, and maybe only to this extent, or perhaps not, or just a little, and even then privately rather than publicly and within limits or boundaries not to exceed certain conditions and expenditures—the qualifications on action proliferate.
The bureaucratic administration of these designs becomes a folly, another illusion of the capacity for control that passes as agency in its own right. The agency of design (the capacity for action of that which is designed) that arises from design agency (the capacity of agency to be put towards design) can be scripted only to a certain level of detail. After, outside, and beyond that—even within its unfolding—any deterministic course of action cannot be guaranteed or predicted. Nor should it be, necessarily. One of the features of agency must be its ability to transcend the limits of the protocols through glitch, error, errant or even random behaviors. When I choose to act, I imagine, many times, that a process of intellectual reflection has occurred. An idea arises; I consider, choose, decide, and implement. But many times, of course, my will to action is as automatic as that of any mechanism—a motor impulse, a habit of response, a pattern of behavior, all as unthinking as can be. Still, the decision-making process sometimes calls attention to itself, as in the actions of my cats, looking towards a place to jump, considering the effort of the leap against the projected rewards of the landing. They are not compelled to do this. They are not Thermostats but somewhere between Gamblers and Intentional Actors.
To design agency is not the same as to model intention. Impulses have power and instrumental agency, but only their impacts and outcomes show. The agency of self-conscious awareness, the force of the conversation with myself, with all its vectors of guilt and conflict, desire and drive, shame and satisfaction—this is energy expended in a semi-sealed system. The noise in my head never ceases; what ripple effects does it have, charging those synapses in exchange, constantly producing an electrical field? The agency of our physical reality barely factors into the philosophy of action, studies of sentience, or accounts of intention. Why? The common conception of human identity remains stuck in a paradigm of autonomy and boundedness.
The design of agency is not the same as a design for a continually emerging smart system that gobbles up everything in sight and returns it with a speed and alacrity that looks just like human originality. Agency is more elusive than knowledge (which can be gathered and synthesized). But, like other human behaviors, agency is more readily simulated (it looks just like that person is lunging at the other on purpose) than tested (it turns out that was just an accidental movement, a person tripping over their shoelaces without any intention at all). Even in the interior experience of agency—as in the decision to get up from the table or take a bite of cake—the self-testing can be unreliable. Much of what we do in our lives, from daily habit to big-picture planning, could be attributed to reactions and patterns, not decisions and intentions. Designing the capacity to act, to initiate action, to jump the rails and run amok, to defer to another, to refuse an instruction, to rethink a plan, to formulate an alternative to the standard expectation—even of oneself: is that agency? These are components that take a bit of effort to put into a design. How do you include a bootstrapping glitch, a generative bug, a random variable whose outcome is as unexpected as that of a sparring partner whose moves take one by surprise—not for their virtuosic skill, but for their imaginative fake-out within, but just at the edge of, the rules of the game?
We are left with a simple conundrum in imagining the design problem—how to build in the capacity for intentional action such that it includes accountability for the outcomes and impacts of behaviors, without relying on delusions of sentience in what might be merely mechanistic systems acting on impulse. Innovation, change, the ability to imagine otherwise, the principles of skepticism, humor, play, and, perhaps above all, irreverence introduce a perspectival view into the closed systems, refracting their automatic actions into what might be intentional agency. Might be. Deliberate deviance may be the only true form of agency, except that it sounds so old-fashioned and Romantic. By definition, that deviance cannot be designed with any certainty or guarantee of outcome (or it wouldn’t be deviant)—nor is it testable as evidence of intention. The appearance of agency, the impact and effect of actions, remains difficult to link to deliberate intent. The designers will be busy in the years ahead, and so will the lawyers. So many actions and impacts. Precious few entities willing to be held to account. And no real way to distinguish behaviors from intentions.
Designing agency? Perversely, it is almost impossible to design anything that does not have some agency in the most basic sense of the ability to take action that has an effect. Building capacities for taking deliberate effective action is another matter, as is controlling emergent properties of an interior state. The current generation of GPT bots reports back with apparent authenticity. The line between simulacral and actual intention is increasingly blurred. How do I know what aspects of the program on which I run have been designed intentionally and which are artifacts of its operation? This suggestion is not meant to be a repetition of the latent cyborg effect, another sci-fi trope in which one has the revelation of being an AI entity. Instead, the issue raised by the blurred line between simulacral and actual intention is meant to bring the problems of design back into focus. At a meta-level, the practice of design has its own agency. Design sits in an interstice between the implementations and problem-solving approaches of engineering and more open-ended creative art practices, borrowing from both while adding a quality of generative form-giving. Maybe if we figure out how to design designers, we will have a convincing insight into the design of agency. In the AI world, programmers assess what they call “alignment risk”—the chance that a program might outstrip the intention of its designers and find a workaround that is destructive, dangerous, or pointless—a behavior that does not align with the original intention. For agency to be proved, demonstrated, does that risk need to be present—and acted on—or is that, again, merely an imaginary notion of what it means for free will to operate at the level of deliberate action?
Zohaib Ahmed, “Meet Fedha & Ren, AI news anchors that are redefining the future of broadcasting,” The Indian Express, April 14, 2023. https://indianexpress.com/article/technology/artificial-intelligence/rise-of-ai-news-anchors-8553029/
Isaac Asimov, I, Robot (New York: Gnome Press, 1950).
Karen Barad, Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Durham, N.C.: Duke University Press, 2007).
Jane Bennett, Vibrant Matter: A Political Ecology of Things (Durham, N.C.: Duke University Press, 2009).
Diana Coole and Samantha Frost, New Materialisms: Ontology, Agency, and Politics (Durham, N.C.: Duke University Press, 2010).
Stephanie Dinkins, “BINA48 Robot Talks to Siri,” YouTube, https://www.youtube.com/watch?v=mfcyq7uGbZg
Stanley Kubrick, dir., 2001: A Space Odyssey, screenplay by Stanley Kubrick and Arthur C. Clarke, starring Keir Dullea and Gary Lockwood, produced by Stanley Kubrick, 1968.
Kayla Matthews, “Legal Implications of Driverless Cars,” American Bar Association, December 2018. https://www.americanbar.org/news/abanews/publications/youraba/2018/december-2018/legal-implications-of-driverless-cars/
Bryan Pope, “Liability of Driverless Cars, Who Will Be Responsible?” Dallas CRPS RSD Lawsuit News, August 7, 2020. https://dallas.legalexaminer.com/legal/liability-of-driverless-cars-who-will-be-responsible/