Friday, December 9, 2011

How do you balance rewards between current productivity and anticipated future productivity?

An interesting book review: "That Eternal Question of Fairness" by Nancy F. Koehn. The book under review is The Ajax Dilemma by Paul Woodruff.
The Ajax quandary arises after Achilles is slain in battle, and his armor is to be given to the army’s most valuable soldier. Ajax and Odysseus compete for it.

Ajax, a courageous, loyal and hard-working warrior, demands the armor on the grounds that he has saved the lives of many comrades on the battlefield. Odysseus is innovative and articulate but not completely trustworthy; his values seem to fluctuate to suit his interests. He claims the prize as a strategist who can outthink the enemy.

The men square off in a speaking contest in front of King Agamemnon and a panel of army jurors. It is, Mr. Woodruff writes, a conflict we all recognize — that of “loyalty and brawn versus brains and trickery.” Ajax loses and his anger explodes, damaging his position in the army and destroying his life, family and reputation.

The author argues that this myth revolves around the issue of rewards, which “mark the difference between winners and losers.” He adds: “Rewards are public recognition for contributions made. They express the values of a community.” But which, he asks, do we value more: “Cleverness or hard work? Strength or intelligence? Loyalty or inventiveness?”
An interesting argument, but I don't think it is the whole story, because it focuses only on rewards for past contributions. This is really an argument about how to reward productivity, both past and anticipated future - how does our system of governance and values distribute the rewards of community productivity among members of the community, not all of whom contributed to generating the increased productivity? Everyone contributes at least something to the context of communal productivity, but not everyone contributes equally, nor is everyone's contribution equally indispensable.

When put in terms of productivity, contra the review, you also have to look at anticipated future productivity. There are tactical actions to achieve present productivity - the actions we take today to meet our needs and purposes today. Then there are strategic actions to achieve future productivity. Strategic actions require trade-offs (I eat less seed corn today so I have a larger crop tomorrow) and risks (if the rains are gentle and the sun shines lightly), which are hard to evaluate compared to tactical actions.

The reason the distribution of rewards is so critical is that it is both an acknowledgement of past contributions and a collective bet on future probabilities. Who is most likely to contribute most critically to future increases in productivity? Those are the ones you want to reward. To increase anything, you have to reward it more (resources, status, mates, etc.), make it easier (reduce barriers), or make it cheaper (bring down the relative cost). If you want less of something, make it harder to do, make it more expensive, or punish it.

Ignoring the rules for a minute ("justice is much broader than a legal function and much messier than a set of rules or large principles"), there are different activities going on when we make these two different assessments.

When we try to determine the relative contribution (degree of participation, indispensability of participation, and non-fungibility of participation) to past productivity, we have to agree on rules and measurements and definitions. As hard as that is to do, once we have reached agreement, the actual calculation is relatively straightforward. We may disagree with the values reflected in the process of measuring, but it is usually relatively easy to do (as long as we have appropriate data).

In contrast, judging what someone's relative contribution to future productivity will be is a minefield not only of data but of estimations of risk, probability, and causation. Is this person, who looks as if they will be holed up in their room for the next ten years, a genius in the process of producing the next silicon chip or Moby Dick, or a misanthropic sociopath? Even if we are confident they are a genius, how likely is it that they will actually deliver the kind of innovation and value of which we think them capable, and how do we measure that value? Value, estimation, and risk all call heavily upon shared culture, worldview, and values. Hard as this is to do (estimate future value), it is moderately achievable in an environment of shared values. Where values are not shared, it becomes extraordinarily hard to arrive at a consensus on probable future value. If we cannot reach agreement on future productivity, we cannot agree on how to reward the actions now necessary for future value. If we don't agree, then it is likely we won't reward present actions aimed at future productivity. And if we don't reward them, then we are most likely locking ourselves into a future without increases in productivity.
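The collective bet described above can be sketched as a toy expected-value calculation. All of the numbers below are hypothetical; the point is only that two observers with different worldviews, assigning different probabilities of success to the same recluse, reach opposite conclusions about whether rewarding them is worthwhile.

```python
# Toy sketch: the same risky contributor, judged by observers whose values
# (and hence probability estimates) differ. All numbers are hypothetical.

def expected_future_value(p_success: float, value_if_success: float,
                          cost_of_reward: float) -> float:
    """Expected net payoff of betting a reward on a risky future contributor."""
    return p_success * value_if_success - cost_of_reward

# An observer who shares the recluse's values rates them a likely genius...
optimist = expected_future_value(p_success=0.3, value_if_success=100, cost_of_reward=10)
# ...while an observer with a different worldview rates the same person a bad bet.
skeptic = expected_future_value(p_success=0.02, value_if_success=100, cost_of_reward=10)

print("reward" if optimist > 0 else "withhold")  # the optimist funds the bet
print("reward" if skeptic > 0 else "withhold")   # the skeptic does not
```

Without shared values there is no way to agree on `p_success`, and so no way to agree on whether the reward should be paid at all.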

This is probably the greatest risk in a country that is governed by a creed (as reflected in the constitutional structure of governance) which allows for and encourages pluralism (diversity).

In a reasonably transparent, data-rich environment, we can probably get to some first-order approximation of an individual's historical contribution to productivity, regardless of how much or how little we like the outcome. Some groups, as defined by any of the common divisions - gender, orientation, race, class, religion, etc. - will benefit to a greater degree than others by having a better mix of epistemological preparedness and value alignment with the nature of the decisions that have to be made. With that knowledge of actual contribution, we can make some rough distribution of rewards that corresponds with those past contributions to productivity.

But that is only half the equation. We aren't interested only in the past; we are interested in the future. Those who contributed the most in the past may not be those most likely to contribute to increased productivity in the future. If we all believe in hard work, saving, moderate risk taking, and rewarding agency (e.g., those willing to take extreme risks for extreme rewards), then there is a framework for balancing the distribution of rewards. For example, we extend loans (a form of reward) to those who seem most likely to succeed with a risky venture that will yield high productivity in the future. So a person can have a low contribution to productivity in the past (and be commensurately rewarded) but may also be highly rewarded for their anticipated prospects in the future. But only if we share common cultural values.

However, if a significant part of the community does not believe in agency or in the connection between risk and reward, or is fatalistic (outcomes are random and not associated with effort), then there is no capacity to reach consensus on probable future outcomes. Without agreement, the element of rewarding future productivity disappears, destroying or substantially reducing the incentives for growth.

If we reward only the past, we subvert future productivity growth and are unfair to those who have low productivity but high potential. If we reward only future productivity growth (which opens up the issue of corruption and coercion), we punish those who got us to where we are.

The line between these estimable goals (rewarding past contributions to productivity and rewarding probable future contributions to productivity) is a fine one, a dynamic one, and a zero-sum one. You can only distribute the rewards that you currently have. What you give to those who produced the current productivity, you are not giving to those focused on the future - and vice versa.
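The zero-sum character of the trade-off can be made concrete with a minimal sketch: a fixed pool of rewards divided between past contributors and future-oriented bets. The pool size and weights below are hypothetical; the only real constraint is that the two shares always sum to the pool.

```python
# Toy sketch of the zero-sum trade-off: a fixed pool of rewards split
# between past contributors and future-oriented bets. Numbers are hypothetical.

def split_rewards(pool: float, weight_future: float) -> tuple[float, float]:
    """Divide a fixed reward pool between past and future contributions.

    Every unit given to the future is a unit withheld from past
    contributors, and vice versa.
    """
    assert 0.0 <= weight_future <= 1.0
    to_future = pool * weight_future
    to_past = pool - to_future
    return to_past, to_future

# Stable times: weight rewards toward the future.
print(split_rewards(100, weight_future=0.75))  # (25.0, 75.0)
# Crisis: focus on the here-and-now.
print(split_rewards(100, weight_future=0.25))  # (75.0, 25.0)
```

The design point is simply that `to_past + to_future == pool` always holds; there is no weighting that rewards both fully.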

In times of stability you weight the rewards to the future. In times of crisis, you focus on the here-and-now. The hungrier you are for increased productivity the more you focus on rewarding future efforts towards productivity. The more complacent you are about future productivity, the more likely you are to over-weight rewards to past contributions.

I think this is a pretty interesting way of looking at many of the issues with which we are struggling at the moment, and particularly the recurring question of what is just (complies with the rules) and what is Just (some cosmic assessment of what is right). Everything can be just (compliant with the rules) and highly productive, but there will be individuals who are poorly rewarded because they have contributed little to current productivity and have a low probability of contributing to future productivity. Even if everyone shares the same values, those individuals will be on the low end of the rewards curve, but everyone will see it as both just and Just.

However, in a heterogeneous population (in terms of culture and values), even though the distribution of rewards will be seen as just (in compliance with the rules), it will also be seen, by some, as not Just.

Pictorially, I think this model would look like this.
