Re: [Swarm-Modelling] foundation of ABMs
From: Joshua O'Madadhain
Subject: Re: [Swarm-Modelling] foundation of ABMs
Date: Tue, 5 Apr 2005 16:12:08 -0700
A couple of brief responses...
On 5 Apr 2005, at 13:11, Darren Schreiber wrote:
1) There are many different ways to evaluate a model. (A paper
that I read from the engineering literature on validation catalogues
23, but there are many more, I'm sure.)
2) There are many different reasons that you want to evaluate a model.
3) Items 1 & 2 are, or at least should be, highly interrelated.
You should choose the methods (note that I use the plural, because you
probably want multiple methods) for evaluation (1) based upon your
reasons for evaluating the model (2).
This is similar to the evaluation of models in the context of machine
learning: in order to compare models' performance, you have to choose
an evaluation function (often called an "error function" in this
context)--and the choice of function is, or should be, based on what
you want the evaluation to tell you.
"Convergence to some solution" does not make sense for many of the
problems that I am interested in as a political scientist. It looks
like progress is being made in Iraq right now, but I wouldn't contend
that this real-world phenomenon will "converge" or that there is "some
solution." The social world just isn't like that. And there are
deep problems with an ontology that constructs the world as having
point solutions, equilibrium, etc. For instance, economics wanders
into moral quagmires when it suggests that everything will reach
equilibrium. Empirically, there are reasons to believe that this is
not true. Normatively, lots of people may suffer while we wait for a
social system to converge.
I saw an interesting talk on this by Brian Skyrms recently, on some
work he's done with Robin Pemantle (a mathematician friend of mine).
They gave an example of the stag hunt problem that can be demonstrated
to converge mathematically. However, when simulated over extremely
long time periods (millions and millions of iterations), the problem
doesn't converge.
So what kind of conclusions would we draw from a mathematical
convergence and a lack of computational convergence? For problems
where people might suffer and die due to policy choices that are made
based upon our models, this actually matters a lot.
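For readers unfamiliar with the setup being discussed: the stag hunt is a coordination game, and the Pemantle-Skyrms work uses urn-style reinforcement dynamics. The following is a minimal sketch of that kind of dynamic, not their actual model; the payoffs (4 for a joint stag hunt, 3 for hunting hare, 0 for hunting stag alone) and the Roth-Erev-style update rule are illustrative assumptions.

```python
import random

# Illustrative stag hunt payoffs: (row player, column player).
PAYOFF = {("stag", "stag"): (4, 4),
          ("stag", "hare"): (0, 3),
          ("hare", "stag"): (3, 0),
          ("hare", "hare"): (3, 3)}

def play(iterations=10000, seed=0):
    """Simple reinforcement dynamic: each player chooses an action with
    probability proportional to its accumulated weight, then adds the
    payoff received to that action's weight (like adding balls to an urn)."""
    rng = random.Random(seed)
    w = [{"stag": 1.0, "hare": 1.0}, {"stag": 1.0, "hare": 1.0}]
    for _ in range(iterations):
        acts = []
        for p in (0, 1):
            total = w[p]["stag"] + w[p]["hare"]
            acts.append("stag" if rng.random() < w[p]["stag"] / total
                        else "hare")
        pay = PAYOFF[(acts[0], acts[1])]
        for p in (0, 1):
            w[p][acts[p]] += pay[p]
    # Each player's final probability of hunting stag.
    return [w[p]["stag"] / (w[p]["stag"] + w[p]["hare"]) for p in (0, 1)]

probs = play()
```

Whether, how fast, and to which equilibrium such a process settles is exactly the kind of question where an analytic convergence proof and a finite simulation run can appear to disagree.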
If the model has been shown to converge mathematically, but a
simulation of it doesn't converge if you iterate for long enough, then
it seems quite likely to me that the problem is numerical instability,
caused by roundoff error, rather than anything particularly mysterious
or interesting.
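A textbook example of this effect, offered here only as an illustration of the general point (it is not the stag hunt model itself): the integrals I_n = ∫₀¹ xⁿ e^(x-1) dx satisfy the recurrence I_n = 1 - n·I_{n-1} and mathematically converge monotonically to 0, with 0 < I_n < 1/(n+1). But running the recurrence forward in floating point multiplies the initial roundoff error (~1e-16) by n! after n steps, so the computed sequence blows up instead of converging. Running the same recurrence backward shrinks the error at each step and recovers the true values.

```python
import math

def forward_float(n):
    """Forward recurrence I_k = 1 - k * I_{k-1}: numerically unstable,
    because each step multiplies the accumulated error by k."""
    i = 1.0 - 1.0 / math.e      # I_0; roundoff of ~1e-16 enters here
    vals = [i]
    for k in range(1, n + 1):
        i = 1.0 - k * i
        vals.append(i)
    return vals

def backward_stable(n, start=60):
    """Backward recurrence I_{k-1} = (1 - I_k) / k: each step divides
    the error by k, so even a crude starting guess of 0 works."""
    i = 0.0
    for k in range(start, n, -1):
        i = (1.0 - i) / k
    return i

vals = forward_float(25)
# vals[25] is enormous in magnitude, though the true I_25 < 1/26;
# backward_stable(20) stays in the mathematically required range.
```

The mathematics and the simulation disagree here, and the discrepancy tells you something about the numerics, not about the underlying model.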
"Rigor" means very different things to different people. I dare you
to fly on a plane that has only been evaluated with analytic proof.
Or, to take a drug that only passes the face validity test. Or, to
forecast your return on investment using only historic data.
Unless I'm missing something, forecasts are either based on (models
that are informed by) historic data, or on models that are constructed
solely from intuition.
Joshua O'Madadhain
address@hidden Per Obscurius...www.ics.uci.edu/~jmadden
Joshua O'Madadhain: Information Scientist, Musician, and
Philosopher-At-Tall
It's that moment of dawning comprehension that I live for--Bill
Watterson
My opinions are too rational and insightful to be those of any
organization.