
A Mathematical Framework for Ethics

What If Morality Had an Eigenvector?

Scott Aaronson proposed treating moral weight like Google's PageRank: you matter morally to the extent that moral agents value you.

Here's the puzzle: we want to say “good people value good things.” But that's circular. How do we know who the good people are?

Eigenmorality cuts through this by finding a fixed point. We look for moral weights that are self-consistent: if we calculate who the good people are based on these weights, we get the same weights back.

Your moral weight = how much moral agents value you,
weighted by their moral weights.

The principal eigenvector of the valuation matrix.
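In symbols (writing V_ij for how much agent i values agent j, and w for the vector of moral weights), this self-consistency condition is an eigenvalue equation:

```latex
w_j \;=\; \frac{1}{\lambda} \sum_i V_{ij}\, w_i
\qquad\Longleftrightarrow\qquad
V^{\mathsf{T}} w \;=\; \lambda\, w
```

For a nonnegative, irreducible valuation matrix, the Perron–Frobenius theorem guarantees that this principal eigenvector exists, is unique up to scale, and has nonnegative entries, so the moral weights are well defined.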

  1. Build a Graph: each agent assigns moral weight to the other agents, creating a directed graph.
  2. Find the Eigenvector: use power iteration to find the principal eigenvector of the valuation matrix.
  3. Get Moral Weights: the eigenvector entries are the “eigenmorality” scores for each agent.
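Step 1 can be sketched in a few lines of Python with NumPy. The names, the six valuations, and the direction of each edge are illustrative assumptions, not part of Aaronson's proposal:

```python
import numpy as np

# Step 1: encode the directed graph as a valuation matrix.
# V[i, j] = how much agent i values agent j.
agents = ["Alice", "Bob", "Carol"]
idx = {name: k for k, name in enumerate(agents)}

# Illustrative edge list: (source, target, valuation).
valuations = [
    ("Alice", "Bob", 0.8), ("Alice", "Carol", 0.6),
    ("Bob", "Alice", 0.7), ("Bob", "Carol", 0.5),
    ("Carol", "Alice", 0.9), ("Carol", "Bob", 0.4),
]

V = np.zeros((len(agents), len(agents)))
for src, dst, weight in valuations:
    V[idx[src], idx[dst]] = weight

print(V)  # rows: who is valuing; columns: who is valued
```

Steps 2 and 3 then operate on this matrix: its principal eigenvector is the score vector.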

PART I

Build Your Moral Graph

Create a network of moral agents and specify how much each values the others. Watch how eigenmorality scores emerge from the structure.

Moral Graph Editor

[Interactive demo: a three-agent network (Alice, Bob, Carol) with a slider for each pairwise valuation (snapshot values: 0.8, 0.6, 0.7, 0.5, 0.9, 0.4), a valuation-matrix readout, and live scores. Click nodes to remove them; drag sliders to change valuations.]

Eigenmorality Scores:

  • Alice: 38.9%
  • Bob: 31.4%
  • Carol: 29.7%

Notice how agents who are highly valued by other high-value agents get higher eigenmorality. It's recursive: your moral weight depends on who values you, weighted by their moral weight.

PART II

The PageRank Connection

Google's PageRank algorithm solved a similar circularity: a webpage is important if important pages link to it. How do you know which pages are important?

The answer: find the eigenvector. The same math that ranks websites can rank moral agents.

PageRank Analogy

[Interactive demo: a four-page web graph; hover over nodes to inspect them. Snapshot PageRank scores: News Site 28%, Blog 24%, Forum 18%, Wiki 30%.]

PageRank: A page is important if important pages link to it.
Eigenmorality: An agent is moral if moral agents value them.

PageRank

  • Nodes = web pages
  • Edges = hyperlinks
  • Score = importance for search
  • A page is important if important pages link to it

Eigenmorality

  • Nodes = moral agents
  • Edges = moral valuations
  • Score = moral weight
  • An agent matters if valued by agents who matter

The key insight: both problems have circular definitions that resolve into a well-defined eigenvector. The math doesn't care if you're ranking websites or moral status.
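To make the parallel concrete, here is a toy PageRank computation on a hypothetical four-page web. The link structure is invented for illustration; the damping factor 0.85 follows the classic formulation:

```python
import numpy as np

# links[i, j] = 1 if page i links to page j (invented structure).
pages = ["News", "Blog", "Forum", "Wiki"]
links = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
], dtype=float)

# Column-stochastic transition matrix: M[j, i] = links[i, j] / outdegree(i).
M = links.T / links.sum(axis=1)

d = 0.85                        # damping factor
n = len(pages)
r = np.full(n, 1.0 / n)         # start with equal importance
for _ in range(100):
    r = (1 - d) / n + d * M @ r  # same fixed-point iteration as eigenmorality

print(dict(zip(pages, r.round(3))))
```

Swap "pages" for "agents" and "links" for "valuations" and the computation is the eigenmorality one: in both cases the ranking is the stable fixed point of a circular definition.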

PART III

Watch It Converge

Power iteration is how we find the eigenvector. Start with equal weights, then repeatedly apply the valuation matrix. The scores converge to the eigenmorality values.

Power Iteration

[Interactive demo: power iteration animated on the three-agent graph, with a speed control and a convergence chart. All agents start at 33.3%; the scores for Alice, Bob, and Carol settle to their eigenmorality values over 33 iterations.]

How Power Iteration Works

  1. Start with equal weights: everyone has the same moral weight
  2. Each agent's new weight = the sum, over every other agent, of (that agent's valuation of them × that agent's current weight)
  3. Normalize so weights sum to 1
  4. Repeat until convergence (weights stop changing)
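The four steps above can be sketched directly in Python; the valuation matrix here is illustrative:

```python
import numpy as np

def eigenmorality(V, tol=1e-9, max_iter=1000):
    """Power iteration on valuation matrix V, where
    V[i, j] = how much agent i values agent j."""
    n = V.shape[0]
    w = np.full(n, 1.0 / n)                  # 1. start with equal weights
    for _ in range(max_iter):
        w_new = V.T @ w                      # 2. weight received from valuers
        w_new /= w_new.sum()                 # 3. normalize to sum to 1
        if np.abs(w_new - w).max() < tol:    # 4. stop when weights settle
            return w_new
        w = w_new
    return w

# Illustrative three-agent valuation matrix (rows: Alice, Bob, Carol).
V = np.array([
    [0.0, 0.8, 0.6],
    [0.7, 0.0, 0.5],
    [0.9, 0.4, 0.0],
])
print(eigenmorality(V))
```

At convergence the output is, by construction, a fixed point: applying the valuation matrix and renormalizing returns the same weights.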

The final weights are the eigenmorality scores - the fixed point of the “good people value good people” equation.

PART IV

Edge Cases and Pathologies

What happens in unusual situations? Eigenmorality has interesting behavior at the edges - which reveals both its strengths and limitations.

Edge Case Explorer

The Sociopath

An agent who values no one. They give zero moral weight to all others.

[Interactive demo: Alice and Bob value each other (0.8 and 0.7) and give the sociopath small valuations (0.3 and 0.2); the sociopath's outgoing valuations are all zero.]

Eigenmorality Scores:

  • Alice: 37.6%
  • Bob: 37.6%
  • Sociopath: 24.8%

The sociopath gets low eigenmorality because they contribute nothing to the system. Others may value them, but their zero valuations mean they cannot “pass on” moral weight.
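A minimal sketch of the sociopath scenario, assuming the demo's valuations (0.8 and 0.7 between Alice and Bob; 0.3 and 0.2 toward the sociopath). Here the scores are read off as the principal eigenvector via `numpy.linalg.eig` rather than animated power iteration, so the exact percentages may differ slightly from the demo's snapshot:

```python
import numpy as np

# V[i, j] = how much agent i values agent j (illustrative values).
agents = ["Alice", "Bob", "Sociopath"]
V = np.array([
    [0.0, 0.8, 0.3],   # Alice values Bob, and the sociopath a little
    [0.7, 0.0, 0.2],   # Bob values Alice, and the sociopath a little
    [0.0, 0.0, 0.0],   # the sociopath values no one: an all-zero row
])

# Moral weights are the principal eigenvector of V^T.
eigvals, eigvecs = np.linalg.eig(V.T)
w = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
w /= w.sum()

for name, score in zip(agents, w):
    print(f"{name}: {score:.1%}")
```

The all-zero row means no weight ever flows through the sociopath: their score comes only from the direct valuations they receive, which is why it lands below Alice's and Bob's.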

These edge cases aren't bugs - they're features that reveal what eigenmorality actually measures: recursive endorsement within a network of moral valuers.

PART V

Objections and Responses

Eigenmorality is provocative. Here are the main objections and how Aaronson and others have responded.

“Eigenmorality doesn't tell you what's morally true. It tells you what's consistent with a community's moral valuations.”

It's a tool for aggregating moral intuitions, not discovering moral facts.

PART VI

What It Means

Eigenmorality isn't meant to replace moral philosophy. It's a thought experiment that makes precise one intuition: moral status comes from being valued by morally serious agents.

For AI Alignment

How should we weight human preferences when training AI? Eigenmorality suggests: weight each person by how much morally serious people value them. This naturally downweights preferences from those the moral community considers untrustworthy.

For Moral Circle Expansion

As we extend moral consideration to animals, AI, future generations - eigenmorality provides a framework. Include them in the graph, let current agents value them, and the math handles the rest.

For Meta-Ethics

Eigenmorality sidesteps debates about moral realism. It doesn't claim moral facts exist - only that given any set of valuations, there's a consistent way to aggregate them into moral weights.

Key Takeaways:

  • Circular definitions can have well-defined fixed points (eigenvectors)
  • The same math that ranks web pages can formalize moral weight
  • Eigenmorality captures “good people value good people” precisely
  • It's a tool for aggregation, not a theory of moral truth
  • Edge cases reveal both strengths and limitations

Want More Explainers Like This?

We build interactive, intuition-first explanations of complex math and science concepts.


Reference: Aaronson, “Eigenmorality” (Shtetl-Optimized, 2014)