
From Choice Architecture to Agent Architecture

  • Writer: The fyi Lab Team
  • 11 min read

Designing AI That Shapes Decisions.


1. The Invisible Architect In Your Pocket


You open your phone to "just check one thing." A couple of taps later you have a new show in your queue, a pair of shoes in your cart, and a news feed tuned to your worst fears and favorite villains.


You never saw a policy memo or a design meeting. You saw tiles, thumbnails, and friendly prompts like "Because you watched..." and "People like you also bought...". Behind those casual suggestions is something that used to have a very human name: choice architecture.


For years, behavioral economists talked about nudges and defaults in terms of cafeteria lines and pension forms. Put the fruit at eye level. Make retirement saving the default. Respect freedom of choice, but gently steer people toward better options.


Now the architect is not a civil servant or a UX team. It is an AI system.


Instead of a single nudge at one moment, you get thousands of micro-suggestions, personalized and updated in real time by models that learn from your every tap. The system does not just lay out choices. It actively predicts, filters, sequences, and sometimes acts on your behalf.


In this world, "choice architecture" is starting to look too small. We are living inside what you could call "agent architecture": the design of AI systems that shape, mediate, and sometimes automate human decisions.


2. What Choice Architecture Was Meant To Be


The original idea behind choice architecture was simple and, in theory, gentle.


Behavioral economics showed that people are predictably irrational. We procrastinate, fear losses more than we value gains, and get overwhelmed when there are too many options. Good design could help.


Classic tools included:


  • Defaults: Preselecting a sensible option, such as automatic enrollment in retirement plans, while leaving opt-out open.


  • Framing: Presenting the same information in different ways, such as "90 percent survival" vs "10 percent mortality."


  • Feedback and reminders: Nudging people with timely prompts so they do not forget important tasks.


  • Structuring choices: Bundling complex options so people are not paralyzed by choice overload.


Thaler and Sunstein called this approach "libertarian paternalism": you keep all the options on the table, but you set the table in a way that helps people avoid known biases and make better decisions.


Over time, the evidence piled up that these interventions worked, at least modestly. A 2022 meta-analysis covering hundreds of choice architecture experiments found small to medium average effect sizes. On average, a well-designed nudge shifts behavior, but it does not fully control it.


The key assumption: there is still a stable "environment" in which choices happen. You might get nudged once at signup, or when you see a form, or when you walk past a cafeteria display. But the architect is outside the system, tweaking it from the edges.


That assumption is now broken.


3. When The Interface Became An Algorithm


Once decisions moved online, the "environment" stopped being static.


Every feed, store, and dashboard is now a live experiment. Recommender systems decide what you see, in what order, and with what framing. Those systems are not neutral pipes. They are themselves a form of choice architecture, often called "digital nudging."


Digital nudges:


  • Can be personalized to each user, based on history and context.


  • Operate continuously, not just at a single decision point.


  • Are often evaluated by engagement metrics such as clicks, watch time, and conversion.


Recent work on digital nudging and recommender systems maps dozens of mechanisms beyond simple defaults: how frequently recommendations appear, where they are placed on a page, how social proof is shown, and which options are framed as "recommended" or "trending."


These ideas have moved from theory to infrastructure. Recommender systems now dominate what people watch, read, and buy:


  • Streaming services report that the majority of viewing comes from recommendations, not search.


  • Social feeds prioritize content based on predicted engagement.


  • Marketplaces decide which sellers and products get surfaced and which sink.


Research on digital nudges shows they can also push people toward healthier behavior or diversified content, not just commercial goals. Controlled studies, for example, have found that well-placed nudges encourage users to explore more "off-profile" content or adopt healthier habits in apps and recommendation environments.


But there is a quiet shift here. The "choice architect" is no longer a person deciding where to place the fruit. It is an algorithm deciding which fruit to show you at all, and when, and tied to an objective function you never see.


That is the pivot point from choice architecture to agent architecture.


4. From Static Nudges To Agent Architecture


Think of agent architecture as the next layer up: not just designing the choice environment, but designing the AI agent that continuously reshapes that environment and sometimes acts inside it.


An AI system becomes a "choice agent" when it:

  • Observes your behavior in detail and in real time.

  • Predicts your preferences and responses.

  • Selects, sequences, and frames options for you.

  • Sometimes takes actions on your behalf by default.


In practice, this might look like:


  • A shopping assistant that automatically reorders items when it predicts you are running low, with "smart" substitutions.

  • A news recommender that amplifies or suppresses topics based on predicted dwell time and churn risk.

  • A personal finance bot that proposes or executes portfolio changes based on risk models and nudged savings goals.


This moves us from a one-off nudge to an ongoing relationship. The system learns from you, and you adapt to the system.


Ethics researchers argue that this shift has serious implications for human autonomy. It does not just influence individual choices; it can reshape a person's ability to form their own preferences and build skill in a domain, such as learning to judge news quality or manage money.


One ethicist put it this way in a recent paper: autonomy is not just about having formal options. It is about having the competence and space to decide what matters to you. If an AI agent repeatedly handles the hard parts, users can lose both.


In other words: agent architecture does not just shape the menu. It can shape the diner.


5. The New Power Asymmetry


When people worry about "dark patterns," they usually think of trick buttons and confusing consent forms.


Regulators are starting to look higher up the stack.


Under the EU's emerging AI rules, certain forms of manipulative AI are treated as "unacceptable risk," especially systems that exploit vulnerabilities based on age, disability, or social situation through subliminal or deceptive techniques.


The EU's Digital Services Act adds another layer for large platforms. Recommender systems have to be explained in plain language, and users must be given meaningful ways to change how content is ranked and filtered for them.


These laws are, in effect, the first rough attempt to regulate agent architecture:

  • They assume that ranking and recommendation are not neutral.

  • They assign responsibility to whoever designs and deploys the ranking logic.

  • They try to restore some control to end users.


But the underlying power imbalance is not just legal. It is technical and economic.


On one side, you have systems trained on millions or billions of data points, tuned to optimize very specific metrics such as engagement, click-through, or conversion. On the other side, you have a person glancing at their phone between meetings.


The system can run A/B tests at scale, refine its nudges, and spot which emotional triggers get the most response. Users are rarely told what objective the system is actually optimizing for.


In that context, it is hard to argue that choice architecture is a neutral "nudge" anymore. It starts to look like a form of behavioral extraction: using cognitive biases as a resource to harvest attention, data, and money.



6. What Changes When The Architect Is An AI


The move from choice architecture to agent architecture changes several things at once.


1. From one-shot to continuous influence


Nudges used to be tied to single moments: a form, a sign, a one-time choice. Agentic systems operate continuously, adjusting their behavior session by session and even moment by moment.


That means the system can:

  • Detect when you are tired, stressed, or bored and change how it presents options.

  • Escalate or dial back the intensity of prompts based on how you respond.

  • Learn where your resistance is lowest and push harder there.


2. From aggregate to ultra-personal


Classic choice architecture designs were often one-size-fits-many: the same default for every employee, the same cafeteria layout for every diner.


Agent architecture thrives on micro-segmentation and individual profiles. Research on digital nudges shows they can be "highly personalised and interconnected" and can provide instant feedback on choices across many contexts.


That is powerful for good. It is also a perfect tool for exploitation.


3. From environment tweaks to goal-optimizing agents


In old-school nudging, the goal was usually explicit and narrow: higher savings rates, better vaccination uptake, more organ donation. In AI systems, the goal is often hidden in a loss function buried inside a model.


If the target is "maximize watch time" or "minimize churn," the agent will explore whatever tactics achieve that, including leveraging outrage, fear, or compulsive scrolling if the environment and constraints allow it.


4. From human judgment to machine optimization


Human choice architects could be questioned, replaced, or persuaded that a certain nudge was unfair. AI agents are pipelines of data, models, and code, often shipped by teams spread across countries and vendors.


Even internally, it can be difficult to fully trace why the agent behaves the way it does. That creates a blurred chain of accountability when something goes wrong.


7. Designing Agent Architecture For Autonomy, Not Addiction


The question is not whether agent architecture will exist. It already does.


The question is whether it will be designed to support human autonomy and welfare, or to quietly harvest behavior at scale.


Practical design principles are starting to emerge from research on AI decision support, autonomy, and digital nudging.


Here are concrete patterns that builders can apply.


7.1 Make the system's goal legible


If an AI agent is shaping your choices, you should know what it is trying to optimize.


That means:

  • Plain language explanations of the system's primary objectives.

  • Clear disclosure when a recommendation is optimized for engagement or revenue, not user benefit.

  • Honest statements about tradeoffs, such as "You will see fewer new sources if you prioritize comfort and familiarity."


"The first step in ethical agent architecture is telling the user what game you are playing."

This sounds simple. It cuts against current practice, where optimization goals are often treated as internal, proprietary, and invisible.
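One way to make the objective legible is to attach a machine-readable disclosure to the recommender itself, so the interface can always answer "what is this optimizing, and for whom?". A minimal sketch; the fields and example values below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ObjectiveDisclosure:
    """Sketch: a plain-language statement of what the system optimizes,
    carried alongside its recommendations. All fields are illustrative."""
    primary_objective: str   # e.g. "predicted watch time"
    beneficiary: str         # who the objective primarily serves
    tradeoff_note: str       # honest statement of what the user gives up

# A hypothetical default feed, disclosed honestly:
DEFAULT_FEED = ObjectiveDisclosure(
    primary_objective="predicted watch time",
    beneficiary="platform",
    tradeoff_note="Prioritizing familiar content means you will see fewer new sources.",
)
```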


7.2 Build friction into high-risk decisions


Agent architecture can make some transitions dangerously smooth: "one-click buy", "trade now", "auto-apply to all", or defaulting to risky settings.


For decisions with long-term or hard-to-reverse consequences, designers should add friction, not remove it:


  • Second-step confirmations with plain language restatements of the impact.

  • Cooling-off periods where high-stakes actions can be undone.

  • Clear visibility of previously chosen defaults and a quick way to change them.


Research on AI decision support suggests that preserving users' "skilled competence" in a domain requires leaving space for reflection and learning, not simply handing control to the machine.
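Friction can be engineered directly. The sketch below, with invented names, gates a high-stakes action behind an explicit confirmation step and keeps it visible and undoable during a cooling-off window:

```python
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PendingAction:
    """A high-stakes action that has been proposed but not yet confirmed."""
    description: str                  # plain-language restatement of the impact
    execute: Callable[[], None]
    created_at: float = field(default_factory=time.time)

class FrictionGate:
    """Sketch: nothing runs without confirmation, and confirmed actions
    stay in an undo log for a cooling-off period. Names are illustrative."""

    def __init__(self, cooling_off_seconds: float = 3600.0):
        self.cooling_off = cooling_off_seconds
        self.undo_log: list[PendingAction] = []

    def request(self, description: str, execute: Callable[[], None]) -> PendingAction:
        # Second-step confirmation: wrap the action, do not run it.
        return PendingAction(description=description, execute=execute)

    def confirm(self, action: PendingAction) -> None:
        # Only an explicit confirmation executes the action.
        action.execute()
        self.undo_log.append(action)  # remains visible and reversible

    def undoable(self, action: PendingAction) -> bool:
        # Cooling-off period: the action can still be reversed.
        return (time.time() - action.created_at) < self.cooling_off
```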


7.3 Give users real levers, not cosmetic ones


Under the DSA and similar rules, platforms are being pushed to expose recommender settings to users. The key test is whether those settings do anything meaningful.


For agent architecture, that means:


  • Options that materially change ranking logic, not just visual themes.

  • Modes that prioritize diversity, safety, or challenge, not just "more like this".

  • Ability to turn off certain forms of personalization entirely.


A toggle that does not change the underlying objective is not user control. It is a decorative nudge.
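The difference between a real lever and a cosmetic one shows up in code: a real mode reaches the ranking function and changes its objective. A minimal sketch, with illustrative mode names and a made-up `relevance` score map:

```python
import random

def rank(items, relevance, mode="more_like_this", seed=0):
    """Sketch of a recommender whose user-facing modes change the actual
    ranking logic, not just the theme. All names are illustrative."""
    if mode == "more_like_this":
        # Pure engagement ranking: the usual default.
        return sorted(items, key=lambda i: relevance[i], reverse=True)
    if mode == "diverse":
        # Sample proportionally to score instead of always taking the top,
        # so lower-ranked items genuinely get exposure.
        rng = random.Random(seed)
        pool, out = list(items), []
        while pool:
            pick = rng.choices(pool, weights=[relevance[i] for i in pool])[0]
            out.append(pick)
            pool.remove(pick)
        return out
    if mode == "no_personalization":
        # Ignore the profile entirely: stable, non-personalized order.
        return sorted(items)
    raise ValueError(f"unknown mode: {mode}")
```

The test of a lever is behavioral: switching modes must produce a different ordering logic, not the same list with a different theme.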


7.4 Separate assistance from persuasion


Many AI agents operate right where assistance and persuasion blur. They help you choose, but they also have skin in the game.


A health app nudging you to walk more is one thing. A combined shopping and health agent nudging you toward sponsored supplements is another.


Agent architecture should enforce strong walls:

  • Clearly label when a recommendation is sponsored, affiliated, or influenced by business relationships.

  • Isolate core safety or welfare functions from monetization logic.

  • Avoid mixing "advisor" and "salesperson" roles in the same conversational flow.
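Keeping those walls up is easier when commercial provenance travels with every recommendation, so the interface can label it and the advisor path can exclude it. A sketch with invented field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Recommendation:
    """Sketch: every recommendation carries its commercial provenance."""
    item_id: str
    reason: str              # plain-language "why you are seeing this"
    sponsored: bool = False
    affiliate: bool = False

def advisor_feed(recs):
    # Advisor role: monetized items are excluded, not just demoted.
    return [r for r in recs if not (r.sponsored or r.affiliate)]

def storefront_feed(recs):
    # Salesperson role: monetized items allowed, but always labeled.
    return [(r, "Sponsored" if r.sponsored else "") for r in recs]
```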


7.5 Train for counterfactuals, not just engagement


If an agent is constantly optimizing on observed engagement, it will learn to lean on your biases instead of helping you overcome them.


Researchers in digital nudging and recommender systems point to designs that explicitly aim for exploration and exposure to off-profile content, not just predicted clicks.


In practice, that means:

  • Measuring long-term outcomes such as satisfaction, learning, and well-being, not just session metrics.

  • Using counterfactual testing: "What if the agent had recommended something less emotionally intense?"

  • Rewarding the system for helping users reach their stated goals, even when that reduces short-term engagement.
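A composite reward makes this concrete: pay the agent for stated-goal progress and reported satisfaction as well as engagement, then ask whether a calmer counterfactual recommendation would have scored higher. The field names and weights below are illustrative assumptions, not a tuned objective:

```python
def agent_reward(session, weights=None):
    """Sketch: a reward that values long-term outcomes, not just session
    engagement. Keys and weights are illustrative."""
    w = weights or {
        "engagement": 0.2,
        "goal_progress": 0.5,
        "reported_satisfaction": 0.3,
    }
    return sum(w[k] * session[k] for k in w)

def prefer_counterfactual(reward_fn, factual, counterfactual):
    """Counterfactual test: would the less emotionally intense option have
    scored better under the full reward, even with lower engagement?"""
    return reward_fn(counterfactual) >= reward_fn(factual)
```

Under a pure engagement reward, the high-intensity session always wins; under the composite reward, it can lose to the one that actually served the user's goal.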


"If your agent never risks short-term metrics, it is not serving the user. It is serving the dashboard."

7.6 Protect domains of human growth


Some domains are not just about picking the "best" option. They are about learning how to choose.


Education, relationships, civic life, and creative work all fall into this category.


Designers of agent architecture should mark these domains as "protected":


  • Use AI to structure information and reflect back tradeoffs, rather than make decisions.

  • Highlight uncertainty and disagreement instead of hiding it.

  • Provide tools for users to articulate their own values and apply them.


Ethics work on AI and autonomy stresses that authentic value formation is a core part of human freedom. If agents short-circuit that process, we may gain convenience but lose something harder to quantify.


8. Follow The Money: Who Benefits From Agent Architecture?


Pull back from the interface and ask a simple question: who does this architecture pay?


  1. In advertising-funded platforms, the agent is paid to maximize engagement and ad exposure.

  2. In transaction platforms, it is paid to maximize sales, order value, or take rate.

  3. In subscription models, it is paid to reduce churn and increase perceived value.


There is nothing automatically evil about any of these incentives. But if the system is only trained on those metrics, it will use every behavioral kink it can find.


That is why regulators are alarmed about so-called dark patterns in AI systems, and why laws like the EU AI Act are explicitly banning certain kinds of manipulative design.


Agent architecture will always be shaped by these upstream incentives. The more explicit you are about them, the easier it becomes to challenge and renegotiate them.


9. What A Better Agent Architecture Could Look Like


Imagine a future AI assistant that sits across your digital life: shopping, news, finances, health, education.


A naive design would turn this into a single optimization engine, tuned to one or two numbers such as monthly spend or hours of engagement.


A responsible design, grounded in behavioral science and autonomy research, would look different:


  • You start by setting your own goals in plain language: "I want to save more than last year", "I want a balanced news diet", "I want entertainment that relaxes me but does not ruin my sleep."


  • The system explains, in simple terms, how it will measure progress and what tradeoffs are involved.


  • For each domain, the agent surfaces options, highlights key tradeoffs, and shows you what it would do and why.


  • For high-impact decisions, it never acts without an explicit confirmation.


  • You get a log of past nudges and recommendations, including ones you did not see because the system filtered them out.


  • You can export or delete your data, and you can switch the agent off.
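The nudge log in that list is one of the simpler pieces to build. A sketch, with invented names, that records both what was shown and what was filtered out, and supports export and deletion:

```python
import time
from dataclasses import dataclass, field

@dataclass
class NudgeRecord:
    """One entry in the audit log of the agent's nudges."""
    item_id: str
    shown: bool        # False = filtered out before the user ever saw it
    reason: str
    timestamp: float = field(default_factory=time.time)

class NudgeLog:
    """Sketch: an auditable, exportable, deletable record of past nudges."""

    def __init__(self):
        self._records: list[NudgeRecord] = []

    def record(self, item_id: str, shown: bool, reason: str) -> None:
        self._records.append(NudgeRecord(item_id, shown, reason))

    def export(self) -> list[dict]:
        # User-facing export: plain dictionaries, nothing hidden.
        return [vars(r) for r in self._records]

    def delete_all(self) -> None:
        self._records.clear()
```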


This is not science fiction. Many technical pieces already exist. What is missing, often, is the will to put user autonomy on equal footing with short-term metrics.


Agent architecture is the new frontline. The stakes are not just about which video plays next. They are about who learns to steer whom.


10. Join The Watchlist


We are moving from a world of one-off nudges to one of always-on agents.


Some will be built to quietly exploit. Others can be built to protect, educate, and empower. The difference will not show up in marketing copy. It will live in model objectives, feedback loops, and design choices that most people never see.


That is why we need independent scrutiny of agent architecture: how it is built, how it is tested, and who it serves.


If you want to track the next generation of AI systems that shape attention, emotion, money, and power, join The fyi Lab newsletter below and get future investigations as they drop.
