A model of the system; this model may in
turn have its own info-gaps.
Performance requirements. These are usually
satisficing requirements, i.e., a statement of
exactly what level of performance is good
enough, rather than a metric whose value
ought to be maximized.
The three elements listed previously are
combined to create two decision functions:
a robustness function that characterizes
a candidate solution’s robustness as how
wrong we can be in our assumptions and
still meet performance requirements, and an
opportuneness function that measures how
wrong we must be to reap windfall benefits.
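These two decision functions can be made concrete with a toy sketch. The model, uncertainty set, and numbers below are illustrative assumptions, not from the article: a single uncertain parameter `u` with nominal estimate `u_nom`, a fractional-error uncertainty set U(alpha) = {u : |u - u_nom| <= alpha * u_nom}, and a performance model assumed monotone in `u`, so the extremes of U(alpha) bound best and worst cases.

```python
def robustness(perform, u_nom, r_critical, alphas):
    """Robustness: the largest horizon of uncertainty alpha such that
    even the worst-case u in U(alpha) still meets r_critical -- i.e.,
    how wrong we can be and still satisfy the requirement.
    Assumes perform() is monotone, so checking the endpoints of
    U(alpha) suffices (an illustrative simplification)."""
    best = 0.0
    for a in alphas:  # alphas scanned in increasing order
        worst = min(perform(u_nom * (1 - a)), perform(u_nom * (1 + a)))
        if worst >= r_critical:
            best = a
        else:
            break
    return best

def opportuneness(perform, u_nom, r_windfall, alphas):
    """Opportuneness: the smallest alpha at which some u in U(alpha)
    could deliver the windfall level r_windfall -- i.e., how wrong we
    must be for a windfall to become possible."""
    for a in alphas:
        best_case = max(perform(u_nom * (1 - a)), perform(u_nom * (1 + a)))
        if best_case >= r_windfall:
            return a
    return float("inf")
```

For example, with `perform = lambda u: 100 * u`, a nominal `u_nom = 2.0`, a critical requirement of 150, and a windfall level of 250, scanning `alphas = [i / 100 for i in range(101)]` gives a robustness of 0.25 (the estimate can be 25% too optimistic before the requirement is missed) and an opportuneness of 0.25 (the estimate must be at least 25% too pessimistic before the windfall is reachable).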
The decision functions of info-gap are
performance measurement functions and,
thus, are subject to the fundamental performance
measurement problem discussed
earlier. Info-gap, consistent with its goal of
robust solutions, frames this performance
measurement in terms of how wrong the
assumptions can be while still meeting
some stipulated baseline performance.
22 SOURCE spring 2012
Like RDM, info-gap relies on human judgment
and cannot create information out of
thin air. Two key differences are the order in
which the various judgments and calculations
are made, and the way in which the
key questions are framed (Hall, 2011). For
example, RDM defers its judgments on how
important a scenario is until after it’s been
analyzed, in the hope that many scenarios
turn out to be benign and thus don’t need to
be judged by humans. Info-gap, by contrast,
might frame the needed human judgments
as part of the modeling, without knowing
which of the various judgments might turn
out to have far-reaching consequences.
Each approach has potential benefits and
downsides in terms of how the needed
judgments might be biased.
In choosing which model to use for situations
of deep uncertainty, we should recall
the proverbial hiker, lost in the Alps, who
would rather rely on a map of the Pyrenees
than admit he is lost. Like that proverbial
hiker, we may find that admitting the depth
of our ignorance does not come easily. Even
when we acknowledge uncertainty, we tend
to understate both its magnitude and its
potential consequences (Kahneman and
Tversky, 1983; Taleb, 2010). As a result, we
may draw false comfort from approaches
that eliminate uncertainty, either by ignoring
its existence altogether or by making
unjustified simplifying assumptions so that
classical methods can deliver an optimized solution.
Robustness methods take a different
approach and explicitly acknowledge the
depths of our ignorance. At their best, the
robustness-centered methods don’t seek to
eliminate uncertainties, but rather to help
the analyst zero in on key uncertainties and
explore their significance. Instead of optimizing
our strategy based on a single future
scenario that is unlikely to occur, robustness
methods drive at solutions that make sense
even if our models, and our predictions,
turn out to be imperfect. Along the way,
they may improve stakeholders' understanding
of what the key uncertainties are
and what their consequences might be.
For these reasons, robustness methods
should be part of our toolkit even as they
continue to mature.
Kahneman, D., and Tversky, A. (1983).
Choices, Values and Frames. Cambridge,
England: Cambridge University Press.