Gavin Schmidt, in On mismatches between models and observations, on data modeling in general and on modeling for climate science in particular:
Just as no map can capture the real landscape and no portrait the true self, numerical models by necessity have to contain approximations to the complexity of the real world and so can never be perfect replications of reality. [Note: I had a stat mech professor who liked to say that “The difference between physicists and physical chemists is that physical chemists know when to approximate.” I am a physical chemist.] Similarly, any specific observations are only partial reflections of what is actually happening and have multiple sources of error. It is therefore to be expected that there will be discrepancies between models and observations. However, why these arise and what one should conclude from them are interesting and more subtle than most people realise. Indeed, such discrepancies are the classic way we learn something new – and it often isn’t what people first thought of.
The first thing to note is that any climate model-observation mismatch can have multiple (non-exclusive) causes which (simply put) are:
1. The observations are in error
2. The models are in error
3. The comparison is flawed
He then touches on observational error, modeling error, and flawed comparison and suggests implications:
The implications of any specific discrepancy [between model and observation often] aren’t immediately obvious (for those who like their philosophy a little more academic, this is basically a rephrasing of the Quine/Duhem position [on] scientific underdetermination). Since any actual model prediction depends on a collection of hypotheses together, as do the ‘observation’ and the comparison, there are multiple chances for errors to creep in. It takes work to figure out where, though.
The alternative ‘Popperian’ view – well encapsulated by Richard Feynman:
… we compare the result of the computation to nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong.
actually doesn’t work except in the purest of circumstances…
My work has tended to sample “the purest of circumstances” but I think his point is well-taken.
Also on the theme of data modeling is Andrew Gelman’s Do you ever have that I-just-fit-a-model feeling?:
Didier Ruedin writes:
Here’s something I’ve been wondering for a while, and I thought your blog might be the right place to get the views from a wider group, too. How would you describe that feeling when—after going through the theory, collecting data, specifying the model, perhaps debugging the code—you hit enter and get the first results (of a new study) on your screen?
I find this quite an exciting moment in research, somehow akin to making a (silly) bet with a friend, but at the same time more serious as I’m wagering (part of) my view of how the world functions.
Anyhow, I thought it could be interesting to hear from others how they feel in that moment.
For me it’s often an anticlimax, in that once I’ve gone through all the effort to successfully fit a model, then I have to go through another long set of steps to understand what I have in front of me. Every once in a while the results just jump out and are exciting, but usually it takes a lot of work to see what I’m looking for. Then when I finally find it, I can often step back and reformulate the problem more directly.
Yes, on several occasions I’ve experienced the excitement Ruedin refers to. The two occasions which immediately come to mind were here when I got my Gauss-Newton algorithm to work and here when I got the effective Hamiltonian right. In both cases the models captured non-linear behavior to a degree which obviously wasn’t dumb luck.
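For anyone who hasn’t implemented one, here’s a minimal sketch of the Gauss-Newton idea. It isn’t the code referred to above, just an illustration with a made-up exponential-decay model: compute the residuals and the Jacobian, solve for the step (J^T J)^-1 J^T r, and repeat until the step is negligible.

```python
# Minimal Gauss-Newton sketch for nonlinear least squares.
# The exponential-decay model and data below are invented for illustration.
import numpy as np

def model(x, beta):
    a, k = beta
    return a * np.exp(-k * x)

def jacobian(x, beta):
    a, k = beta
    da = np.exp(-k * x)            # d(model)/da
    dk = -a * x * np.exp(-k * x)   # d(model)/dk
    return np.column_stack([da, dk])

def gauss_newton(x, y, beta0, n_iter=20, tol=1e-10):
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        r = y - model(x, beta)                        # residuals
        J = jacobian(x, beta)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)  # least-squares solve of J @ step ~ r
        beta += step
        if np.linalg.norm(step) < tol:
            break
    return beta

# Recover (a, k) from noisy synthetic data.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 50)
y = model(x, [2.0, 0.7]) + 0.02 * rng.standard_normal(x.size)
print(gauss_newton(x, y, beta0=[1.0, 1.0]))   # should land near [2.0, 0.7]
```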
(It’s worth noting that Multichannel Quantum Defect Theory (MQDT) does a much better job of accounting for observations than my effective Hamiltonian approach did. My method worked great for data involving one symmetry but clearly showed systematic errors for the other, whereas MQDT works great for both symmetries. That noted, the details of the MQDT analysis hadn’t been worked out yet, so I applied the method I had available to me. “When you don’t know anything, do what you know.” That’s not a suggestion to do nothing. When you apply what you know to make a prediction, you give yourself the opportunity to make a constructive mistake. The discrepancies between predictions and observations can provide insight into why your model is failing and possibly how to correct it so that it more accurately captures cause and effect.)
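As a toy illustration of that last point, here’s a hypothetical sketch (the “symmetry” labels and numbers are invented, not my actual data) of grouping fit residuals by a categorical label and checking each group for a systematic offset. A nonzero mean residual confined to one group is roughly how a model failure for one symmetry would announce itself.

```python
# Hypothetical residual diagnostics: group residuals by a label and
# look for a systematic offset in one group versus random scatter.
import numpy as np

def residual_summary(observed, predicted, labels):
    """Return {label: (mean residual, RMS residual)} for each label."""
    resid = np.asarray(observed) - np.asarray(predicted)
    labels = np.asarray(labels)
    return {lab: (resid[labels == lab].mean(),
                  np.sqrt(np.mean(resid[labels == lab] ** 2)))
            for lab in np.unique(labels)}

# Toy numbers: group "B" carries a systematic offset the model misses.
obs    = np.array([1.02, 0.98, 1.01, 1.56, 1.54, 1.57])
pred   = np.array([1.00, 1.00, 1.00, 1.50, 1.50, 1.50])
labels = ["A", "A", "A", "B", "B", "B"]
print(residual_summary(obs, pred, labels))
```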
With respect to Andrew’s comment about initial results often being an anticlimax and needing a lot more work to see what he’s looking for, I’ll speculate that’s a consequence of working on problems where he’s trying to estimate the values of many parameters. I generally work with low-dimensionality models, i.e., the data may have high dimension but there are only a handful of parameters I’m interested in. There may be lots of nuisance parameters, but the values of the parameters I’m interested in (and the associated uncertainties) are generally insensitive to them. When I’ve got my code debugged and fit results start rolling out, there’s often a “Yes!” moment because it’s pretty obvious whether or not the data is telling me something interesting. If I were working with high-d models instead of low-d ones, I imagine that would be less likely to be the case.
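To make the low-d point concrete, here’s a hedged sketch with an invented model and invented data: one parameter of interest (a decay rate k) plus a couple of nuisance parameters (amplitude and baseline), where the “Yes!” check is simply whether k comes out well constrained.

```python
# Sketch of a low-dimensionality fit: one parameter of interest plus nuisance
# parameters, with the uncertainty read off the covariance matrix.
import numpy as np
from scipy.optimize import curve_fit

def model(x, k, amplitude, baseline):
    # k is the parameter of interest; amplitude and baseline are nuisance parameters.
    return amplitude * np.exp(-k * x) + baseline

rng = np.random.default_rng(1)
x = np.linspace(0.0, 4.0, 80)
y = model(x, 1.3, 2.0, 0.1) + 0.03 * rng.standard_normal(x.size)

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0, 0.0])
k, k_err = popt[0], np.sqrt(pcov[0, 0])
print(f"k = {k:.3f} +/- {k_err:.3f}")   # well constrained -> the data is telling me something
```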