How do you make such systems at least as great as the sum of their parts?
These are questions that I have been captivated by and struggled with this past year, and two major conferences signaled to me that more research and analytics are needed to identify the right incentives to make organizations succeed.
First, I attended John Birge's Omega Rho Distinguished Lecture, "ORMS and Risk Management Failures: What Are We Doing Wrong?", which took place at the INFORMS Annual Meeting in Austin in November.
Second, I was an invited panelist on financial networks at the Measuring Systemic Risk Conference hosted by the University of Chicago and the Federal Reserve Bank of Chicago in December, organized by Andrew Lo of MIT's Sloan School, Lars Hansen of the University of Chicago, and David Marshall of the Chicago Fed. Joining me on the financial networks panel were my colleague Mila Getmansky Sherman, Sujit Kapadia of the Bank of England, and Kimmo Soramaki of Financial Network Analytics. (And John Birge was in the audience.)
As these and accompanying presentations and the ensuing discussions vividly brought out, the events contributing to the global financial crisis and the Gulf of Mexico oil spill appear to represent cataclysmic failures of risk management within some of the most technologically capable organizations. In retrospect, even the most basic analysis should have averted these disasters and their enduring consequences. Why, then, did these catastrophes occur, and what can be done to prevent such (and other) disasters?
Clearly, these are system issues.
As someone who researches and teaches about network systems, I see a striking difference between the outcomes under user-optimization, in which individuals determine their best allocation of resources according to their own desires and goals, and those under system-optimization, in which resources are allocated so as to achieve the system's goals.
In transportation, we know how to identify the proper policies, in the form of prices such as tolls, so that, once imposed, individuals behave in a way that is optimal from a system (or societal) perspective.
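The gap between the two regimes, and how a toll closes it, can be sketched with a standard textbook example (Pigou's two-route network; the specific cost functions and numbers below are illustrative, not from this column): one unit of travel demand splits between route A, whose travel time grows with its flow, and route B, whose travel time is constant.

```python
# Pigou-style two-route example (illustrative assumption, not the column's data):
# a fraction x of one unit of demand takes route A with latency c_A(x) = x;
# the rest takes route B with constant latency c_B = 1.

def total_cost(x):
    """System-wide travel cost when a fraction x uses route A."""
    return x * x + (1.0 - x) * 1.0  # x * c_A(x) + (1 - x) * c_B

# User-optimization: travelers shift to A as long as it is faster,
# until c_A(x) = x reaches c_B = 1, so everyone ends up on A.
x_ue = 1.0

# System-optimization: minimize x^2 + (1 - x); the derivative 2x - 1 = 0
# gives x = 1/2, splitting the demand evenly.
x_so = 0.5

# A marginal-cost toll on A equal to x_so * c_A'(x_so) = 0.5 makes the
# perceived cost on A equal x + 0.5, which matches route B exactly at x = 0.5:
# self-interested travelers then reproduce the system-optimal split.
toll = x_so * 1.0

print(f"user-equilibrium cost: {total_cost(x_ue):.2f}")  # 1.00
print(f"system-optimal cost:   {total_cost(x_so):.2f}")  # 0.75
print(f"aligning toll on A:    {toll:.2f}")              # 0.50
```

The point of the sketch is the one made above: without the toll, individually rational choices leave everyone worse off (cost 1.00 versus 0.75), while the right price makes self-interested behavior coincide with the system's goal.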
When we look back on financial disasters and institutional failures (or underachievement), there is often a misalignment between the stated goals of the organization and those of the individuals who work for it. Shadow systems then evolve, in which individuals (stockbrokers, say, and even professors) maximize their own utility and, as a consequence, drive the system to states that do not reflect its goals.
Indeed, stockbrokers were rewarded not for maximizing shareholders' wealth or their firms' profits, but for maximizing their own wealth, thus helping to precipitate the financial collapse (the result, as Paul Krugman put it, of malign neglect).
Similarly, faculty at some universities are rewarded through additional financial compensation for teaching online courses over and above their regular teaching loads; in some cases, doubling or tripling their salaries. What then happens to the research output of such faculty?
There are only so many hours in a day and a week, and faculty may choose the risk-free option of teaching extra online courses. The stature of the school then suffers as research output declines.
Without the right incentives, or regulatory controls, shadow banks and shadow schools operate, rewarding those with the right personal and political connections internally and sacrificing the integrity and sustainability of the system.
As a graduate student who is very observant recently said to me, "Why is the personal becoming professional? Shouldn't professional accomplishments merit the promotions and rewards instead?"
Leaders of corporations and universities must identify the right incentives or face failure.