Causal Loop Diagram

AI Adoption — Considering Power & Environment

A system-dynamics view of AI adoption that takes seriously what most adoption models leave out: the power consumption and environmental footprint of foundation-model training, the regulatory backpressure that follows from misuse, and the trust dynamics that quietly govern uptake. The diagram traces 20 variables through four interacting feedback loops.

R · Adoption / trust reinforcing
B1 · Regulation / liability balancing
B2 · Cost / power balancing
D · External drivers
[Interactive diagram — 20 variables: Fear of Lost Revenue by AI, Education and Training, Cost of Using AI Models, Availability of Cheap Renewable Energy, Motivation to Adopt AI in Dispute Resolution, Perceived Ease of Use & Cost, Power Consumption & Effect on Environment, Accessibility & Availability, Foundation Model Training & Improvement, Liability and Accountability, AI Development & Adoption, Algorithm Bias and Hallucinations, Privacy & Security Concerns, Abuse of AI, Reliability of the Systems, Semiconductor Technology Advances, Trustworthiness, Transparency and Explainability, Regulation and Policy, Availability of New Content for Training.]

20 variables, 36 causal links, four feedback loops. Hover an arrow to highlight it; double-bars (‖) on an arrow indicate a delay.

View original Stella® diagram

Original Stella® model — Bob Bergman, AZ Decision Science.

How to read it

Each arrow shows the direction of causal influence. A + means the variables move in the same direction; a − means they move in opposite directions. Loops are either reinforcing (they amplify behavior) or balancing (they push toward equilibrium). Double-bars (‖) on an arrow indicate a delay, used where cause and effect take real-world time to play out (semiconductor advances, energy infrastructure, environmental impact).
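A delayed link can be sketched numerically with a first-order exponential lag, the standard minimal model in system dynamics. The function and numbers below are illustrative only; they are not taken from the Stella model:

```python
def step_delay(effect, cause, tau, dt):
    """One Euler step of a first-order delay: `effect` chases `cause`
    with time constant `tau`. A step change in the cause takes roughly
    3 * tau to show up almost fully in the effect."""
    return effect + (cause - effect) * dt / tau

# Illustrative values: the cause jumps to 1.0, the effect starts at 0.0.
cause, effect, tau, dt = 1.0, 0.0, 5.0, 1.0
for _ in range(30):
    effect = step_delay(effect, cause, tau, dt)
# After 30 steps (6 time constants), the effect has closed all but a
# fraction of a percent of the gap toward the cause.
```

This is why the delayed arrows matter: a policy or infrastructure change on a ‖-marked link keeps influencing the system long after the cause has moved.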

The four feedback loops

  • R · Adoption / trust reinforcing: AI Development & Adoption → Reliability of the Systems → Trustworthiness → Motivation to Adopt → AI Development. The more reliable AI proves to be, the more it gets adopted; the more it gets adopted, the more new content trains the next generation of foundation models, which raises reliability further.
  • B1 · Regulation / liability balancing: AI Development → Abuse of AI → Regulation and Policy → Liability and Accountability → AI Development (down). Misuse, bias, hallucinations, and privacy concerns invite regulation, which raises the cost of adoption. The system pushes back on itself.
  • B2 · Cost / power balancing: AI Development → Foundation Model Training → Power Consumption → Cost of Using AI Models → Perceived Ease of Use & Cost (down) → Motivation (down). The energy and compute footprint of training shows up as cost, which throttles adoption — unless renewable energy and semiconductor advances move faster than demand.
  • D · External drivers: Education and Training, Fear of Lost Revenue, Renewable Energy availability, and Semiconductor Technology Advances all enter the system from outside. They don't form loops with AI Development directly — they shift the equilibrium of the loops that do.
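The interplay of the R and B2 loops can be sketched as a toy simulation. Every coefficient below is invented for illustration; none comes from the Stella model:

```python
# Toy sketch: the R loop multiplies adoption by a reinforcing gain,
# while the B2 loop subtracts a cost that rises with the training
# footprint of adoption itself.

def simulate(steps=400, dt=0.1, gain=0.5):
    adoption = 0.05                   # AI Development & Adoption, on a 0..1 scale
    trajectory = []
    for _ in range(steps):
        cost = adoption               # B2: more adoption -> more training power -> higher cost
        net_motivation = gain - cost  # R pushes up, B2 pushes back
        adoption += adoption * net_motivation * dt
        trajectory.append(adoption)
    return trajectory

path = simulate()
# Adoption grows while gain > cost, then levels off where the two balance
# (here, where adoption == gain): the B2 loop sets the ceiling.
```

Swapping in a cheaper cost curve (say, `cost = 0.5 * adoption`, standing in for renewable energy or semiconductor advances) raises the equilibrium, which is exactly the role the D drivers play in the diagram.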

Why it matters

"Adoption" debates often reduce to demand-side optimism (the R loop) versus regulation-side caution (the B1 loop). The B2 cost / power loop is what most stakeholders miss — and it's the one that quietly sets the ceiling on how fast adoption can scale. When you can see the structure, you can debate the policy lever, not just the headline.

Built during a system-dynamics modeling engagement focused on AI policy in dispute resolution. The same approach applies to any adoption-vs-regulation system — autonomous vehicles, healthcare AI, financial automation.

Have a system you'd like mapped?

Adoption dynamics, policy resistance, capacity loops, growth-vs-regulation tradeoffs — if it has feedback, we can model it.