Bengt Holmström Prize Lecture: Pay for Performance and Beyond




instant. Given this, the final position of the process at t = 1 is normally distributed, with the agent's constant choice of effort determining the mean and the variance being a constant. In other words, we are back to the Mirrlees example x = e + ε with ε normally distributed and the agent choosing e. The only difference is that we can limit the principal's choice to a linear incentive scheme with the optimal slope and constant term to be determined.

Remarkably, by enriching the agent's choice judiciously, we have arrived at a very simple solution to the Mirrlees problem.¹⁵

B. Extensions and Alternative Approaches

There is an emerging literature that studies the robustness of linear incentive schemes, where robustness is defined as the max–min outcome in a larger environment, which may or may not be known to the principal. Hurwicz and Shapiro (1978) is the first paper in this vein, showing that the linear 50–50 split widely observed in sharecropping can be rationalized along these lines. Diamond (1998), Edmans and Gabaix (2011), Chassang (2013) and Carroll (2015) are more recent variants on this theme. Carroll's paper is especially elegant and at heart simple enough to be applied in richer economic environments. The paper captures robustness in a strong sense (a guaranteed minimum payoff for the principal in a largely unknown environment), but it does not appear as tractable as the Holmström-Milgrom model, at least not yet.

Yuliy Sannikov has greatly advanced and popularized the use of continuous-time models to study incentive problems that are both tractable and relevant. Sannikov (2008) solves a general, nonstationary agency problem using powerful techniques from stochastic control theory. These techniques require a keen eye for finding just the right assumptions to make the analysis go through, but the payoff can be high, as illustrated in Edmans et al.'s (2012) dynamic model of CEO compensation. The model makes a compelling case for using dynamic incentive accounts that keep funds in escrow and adjust the ratio of debt and equity in response to incoming information. Intuitively, and also in practice, the solution makes a lot of sense. Another example of a continuous-time model that resonates with reality is DeMarzo and Sannikov (2006).

C. The Linear Model with One Task

It is straightforward to solve the Mirrlees example with a linear incentive scheme s(x) = αx + β. The agent's payoff is normally distributed with mean αe + β and variance α²σ². Given an exponential utility function, the agent's utility can be written in terms of his certain equivalent:

  CE_A = αe + β − c(e) − ½rα²σ².
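The certain-equivalent form is the standard mean–variance reduction of exponential (CARA) utility over a normally distributed payoff; a brief sketch of that step, with μ and v denoting the mean and variance of the agent's monetary payoff:

```latex
u(w) = -e^{-rw}, \qquad w \sim N(\mu, v)
\quad\Longrightarrow\quad
E[u(w)] = -\exp\!\left(-r\mu + \tfrac{1}{2}r^{2}v\right)
        = -\exp\!\left(-r\left(\mu - \tfrac{r}{2}v\right)\right),
\quad\text{so}\quad
CE = \mu - \tfrac{r}{2}\,v.
```

Substituting the mean and variance of the agent's pay under the linear scheme yields CE_A.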

The agent has mean-variance preferences, where r is the charge for risk bearing. The agent chooses e so that the marginal cost of effort equals the marginal return: c′(e) = α.
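To make the first-order condition concrete, here is a minimal numerical sketch. The quadratic cost c(e) = ½ce² and all parameter values are hypothetical illustrations, not taken from the lecture:

```python
# Agent's problem under a linear scheme s(x) = alpha*x + beta, assuming a
# hypothetical quadratic effort cost c(e) = 0.5 * c * e**2 (so c'(e) = c*e).

def best_response(alpha: float, c: float) -> float:
    """Effort solving the first-order condition c'(e) = alpha."""
    return alpha / c

def agent_ce(alpha: float, beta: float, r: float, sigma2: float, c: float) -> float:
    """Certain equivalent CE_A = alpha*e + beta - c(e) - 0.5*r*alpha^2*sigma^2."""
    e = best_response(alpha, c)
    return alpha * e + beta - 0.5 * c * e**2 - 0.5 * r * alpha**2 * sigma2

# A stronger incentive (higher alpha) elicits more effort:
assert best_response(0.8, c=2.0) > best_response(0.4, c=2.0)
```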

The principal is risk neutral so her certain equivalent is:

  CE_P = (1 − α)e − β.

Since the model is one with transferable utility, one can first solve for the optimal slope α by maximizing the sum of the two certain equivalents CE_A + CE_P. This gives the maximum total surplus, which is then divided between the parties using β. Using the first-order condition for the agent's choice of e, we then obtain the optimal value of α:

 

  α = [1 + rσ²c″]⁻¹  (6)

where the dependence of c″ on e has been suppressed (c″ is constant if the cost function is quadratic).

The logic of this model is refreshingly simple. The agent works harder the stronger the incentive (higher α). According to (6), the optimal incentive strength α always falls between zero (no incentive) and one (first-best incentive), because of the risk that the agent has to bear. If the agent is more risk-averse or the performance measurement is less precise, the financial incentive is weaker. What about c″? From the agent's first-order condition we get 1/c″ = de/dα. The derivative measures the agent's responsiveness to an increased incentive. The agent is more responsive to incentives if the cost function is flatter (smaller c″), resulting in a higher commission rate as seen in (6).
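Equation (6) is easy to explore numerically. A small sketch of its comparative statics (the parameter values are made up purely for illustration):

```python
# Optimal commission rate from equation (6): alpha = [1 + r*sigma2*cpp]^(-1),
# where r is the agent's risk aversion, sigma2 the variance of the noise,
# and cpp = c'' (a constant when the cost function is quadratic).

def optimal_alpha(r: float, sigma2: float, cpp: float) -> float:
    return 1.0 / (1.0 + r * sigma2 * cpp)

base = optimal_alpha(r=2.0, sigma2=1.0, cpp=1.0)
assert 0.0 < base < 1.0                  # alpha always lies between 0 and 1

# The incentive weakens with more risk aversion, noisier measurement,
# or a less responsive agent (steeper cost, larger c''):
assert optimal_alpha(4.0, 1.0, 1.0) < base
assert optimal_alpha(2.0, 2.0, 1.0) < base
assert optimal_alpha(2.0, 1.0, 2.0) < base

# With perfect measurement (sigma2 = 0) the first-best alpha = 1 is optimal:
assert optimal_alpha(2.0, 0.0, 1.0) == 1.0
```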

The one-dimensional action of the agent and the one-dimensional control of the principal are evenly matched, making this model very well-behaved. One can extend the model in many dimensions. One can study the costs and benefits of (jointly chosen) different projects, production technologies and monitoring systems, for instance, and get simple answers. The most interesting variation in such thought experiments concerns the opportunity cost function c(e). There are many ways in which the principal can vary the agent's opportunity cost function so that the cost of incentives is reduced. This insight was central in initiating my work on multitasking with Paul Milgrom (Holmström and Milgrom 1991, 1994).



IV. MULTITASKING

Multitasking—the reality that an agent's job consists of many tasks—led to a major change in mindset and focus. Instead of studying how to get the agent to work hard enough on a single task, attention turned to how the agent allocates his effort across tasks in a manner that aligns with the principal's objectives. When tasks are interdependent, the optimal design needs to consider the agent's incentives in totality. Knowing the agent's full portfolio of activities—what his authority and responsibilities are—is essential for designing a coherent, balanced solution that takes into account the interdependencies. This is challenging when easy-to-measure and hard-to-measure activities compete for the agent's attention or if the available performance measures are poorly aligned with the principal's objectives.



A. Easy versus Hard to Measure Tasks

Consider the case of an agent with two tasks. One task can be perfectly measured—think about quantity sold. The other task is very hard to measure—think about the reputation of the firm. There may be some measures available for the latter, for instance consumer feedback. But such information is selective and often biased. People with unhappy sales experiences are likely to complain more often than people with happy experiences. Some customers may just have had a bad day. And some important customers may not have the time for feedback. All this makes it hard to assess how consumer feedback genuinely translates into valuable reputation down the road.

To be a bit more formal about this, we can use a multitask extension of the linear model (see the Appendix for a general version). Let the performance in each task be separately measured, quantity by x_1 = e_1 and reputation by x_2 = e_2 + ε_2. There is no noise term ε_1 because I assume that quantity can be measured perfectly. The variance of ε_2 is larger the noisier the consumer feedback is.
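The measurement structure can be mimicked with a short simulation. The effort levels and noise scale below are hypothetical, chosen only to illustrate x_1 = e_1 versus x_2 = e_2 + ε_2:

```python
import random

random.seed(0)

e1, e2 = 1.0, 1.0   # hypothetical effort levels
sigma = 0.5         # standard deviation of the feedback noise (assumed)

# Quantity is observed perfectly; reputation only through noisy feedback.
x1 = e1
x2_samples = [e2 + random.gauss(0.0, sigma) for _ in range(10_000)]

mean_x2 = sum(x2_samples) / len(x2_samples)
var_x2 = sum((x - mean_x2) ** 2 for x in x2_samples) / len(x2_samples)

assert x1 == e1                        # x1 = e1 exactly: no noise term eps_1
assert abs(mean_x2 - e2) < 0.05        # noisy feedback averages out to e2
assert abs(var_x2 - sigma**2) < 0.05   # but carries variance sigma^2
```

Noisier feedback (larger sigma) raises the variance of x_2, which per equation (6) weakens the incentive that can profitably be attached to it.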



Let the principal's objective be B(e_1, e_2) = p_1e_1 + p_2e_2, where p_1 and p_2 measure the value of quantity and reputation, and suppose for a moment that the agent's cost function is separable: C(e_1, e_2) = c_1(e_1) + c_2(e_2). In this case the agent's incentives can also be analyzed separately and it is optimal to set each coefficient independently according to the formula for the single-task case. The commission rate on x_1 will be α_1 = p_1, giving first-best incentives for quantity choice, while



