Saturday, April 28, 2012

3616.txt

cc: Martin Juckes <m.n.juckesatXYZxyzac.uk>, anders@misu.su.se, Eduardo.Zorita@gkss.de, hegerl@duke.edu, esper@wsl.ch, k.briffa@uea.ac.uk, m.allen1atXYZxyzsics.ox.ac.uk
date: Thu, 01 Feb 2007 13:59:11 +0000
from: Nanne Weber <weberatXYZxyzi.nl>
subject: Re: mitrie: revision
to: Tim Osborn <t.osbornatXYZxyz.ac.uk>


Hi Martin,

I have started to look at your replies. Below are my suggestions for the
reply to referee 1. I think the reply could be more precise and could
stress more strongly that we do present new results (this person is
overly presumptuous about the Jones/Mann and Mann papers, which are
purely reviews and repeat each other).
I was also wondering about Tim's point: just one referee is below CPD
standards. What did the editor say?
That is all for now,
Nanne


Tim Osborn wrote:
> Hi Martin -- I will look at this in detail soon. I looked on COPD and
> could find comments from only one solicited reviewer and no feedback
> from the editor, plus a number of unsolicited comments. Did you get
> more reviews and/or editor's comments sent directly to you? Apologies if
> you did and have already circulated them, but I couldn't find them on
> COPD or in my email. Regarding climate2003.com, I suspect this may be
> just temporary downtime, as it has disappeared at times in the past
> only to subsequently reappear. I don't think it's worth making much
> of an issue over it. Cheers, Tim
>
==============================================================================
REPLY to referee #1.
The format is: a point of criticism, followed by our reply to that point.

1) First of all, in my opinion this paper does not provide many new results

Sections 2 and 3 are purely intended as reviews of recent
reconstructions and of the criticism of the IPCC consensus, respectively.
As such, they do not contain new results. We do, however, take an approach
which differs from the earlier reviews by Jones and Mann (2004) and
Mann (2007), and for this reason our review is included in the paper.

In section 4 we compare the impact of varying the reconstruction method
with that of varying the data collection. We are not aware that this issue
is covered by the review papers of Jones and Mann (2004) and Mann (2007),
or by any other published study.

2) Actually the introduction up to page 20 is quite interesting and gives a
nice overview; however, here too there is not much new. It possibly does
not need to be, since it is intended as a review.

See reply to 1)

3) The authors selected two reconstruction techniques and compared them
with each other. That choice is arbitrary, and the results and
interpretations are not convincing. What are the arguments that one method
should be used in favor of the other?

The two selected methods (inverse regression and scaled composites), or
variants thereof, are used in all reconstructions considered except those
based on low-resolution records alone (HPS2000 and OER2005). Scaled
composites are used by JBB, ECS, MSH and HCA (?), and inverse regression
by MBH (?). A trawl through a sample of textbooks on statistical theory
will reveal a huge range of potential techniques, some of which are listed
by the reviewer. However, they have not been used in NH temperature
reconstructions to date, and therefore we do not discuss (or evaluate)
them.

The two reconstruction techniques represent different assumptions
about the quality of the data. This needs to be explained
more clearly in the manuscript (change [1] below).
In our comparison it appears that, for millennial reconstructions, the
simplest technique, which makes the smallest number of assumptions about
the data, works best.
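
To make the distinction concrete, a minimal univariate sketch of the two
approaches is given below. The synthetic series and parameter values are
illustrative assumptions only, not the data or the actual implementations
used in the manuscript.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in series for the calibration period: 'temp' is the
# instrumental target, 'composite' is the mean of standardised proxies.
years = np.arange(1856, 1981)
temp = (0.3 * np.sin(2 * np.pi * (years - 1856) / 60.0)
        + 0.1 * rng.standard_normal(years.size))
composite = 0.8 * temp + 0.15 * rng.standard_normal(years.size)

# (a) Scaled composite: rescale the composite so that its calibration-period
#     mean and variance match those of the instrumental record.
cps = ((composite - composite.mean()) / composite.std() * temp.std()
       + temp.mean())

# (b) Inverse-regression-style estimate, here in its simplest univariate
#     form: a least-squares fit of temperature on the composite, whose
#     amplitude is damped when the calibration correlation r < 1.
slope, intercept = np.polyfit(composite, temp, 1)
invr = intercept + slope * composite

r = np.corrcoef(composite, temp)[0, 1]
print("calibration correlation r   =", round(r, 2))
print("variance ratio CPS / instr  =", round(cps.var() / temp.var(), 2))
print("variance ratio INVR / instr =", round(invr.var() / temp.var(), 2))

The variance ratios illustrate the point made above: the regression-based
estimate loses amplitude (roughly in proportion to r squared), while the
scaled composite preserves the calibration-period variance by construction.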

What about filtered uncertainties???

4) A second, related way to check whether a specific method
seems to perform 'better' would be to use coupled paleo runs. There are
two 1000-year-long runs which can be used.

The difficulty with using model paleo runs to test the methods is that
we do not have a comprehensive predictive model of how the proxies respond
to temperature. Such tests typically model the proxies by prescribing a
linear dependence on temperature and adding random (white or red) noise.
Reality is clearly messier. There are, however, important issues which can
be addressed with such paleo runs (see, e.g., Mann, 2007). As explained in
the Appendix, it is possible to construct a situation in which one or the
other method is optimal.
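
For reference, one common way such pseudo-proxy tests are set up is
sketched below. This is a hedged illustration only: the function name, the
parameter values, and the random-walk stand-in for a model temperature
series are assumptions for illustration, not any published experiment.

import numpy as np

rng = np.random.default_rng(1)

def make_pseudoproxy(model_temp, snr=0.5, ar1=0.0):
    # Prescribe a linear dependence on the model temperature and add noise.
    # snr : ratio of signal standard deviation to noise standard deviation.
    # ar1 : lag-1 autocorrelation of the noise (0 -> white, > 0 -> red).
    n = model_temp.size
    white = rng.standard_normal(n)
    noise = np.empty(n)
    noise[0] = white[0]
    for t in range(1, n):
        noise[t] = ar1 * noise[t - 1] + np.sqrt(1.0 - ar1 ** 2) * white[t]
    noise *= model_temp.std() / (snr * noise.std())
    return model_temp + noise

# Illustrative stand-in for a 1000-year coupled-model temperature series.
model_temp = np.cumsum(0.02 * rng.standard_normal(1000))
proxy_white = make_pseudoproxy(model_temp, snr=0.5, ar1=0.0)   # white noise
proxy_red = make_pseudoproxy(model_temp, snr=0.5, ar1=0.7)     # red noise

Any reconstruction method can then be applied to such pseudo-proxies and
compared against the known model temperature; the caveat above still
applies, since real proxies are unlikely to follow the linear-plus-noise
assumption.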

5) I suppose the authors show annually averaged temperature data? That is
not clear. Further, I have my doubts about the choice of the predictor data
in the new union reconstructions.

Yes, we show annual averages (now clarified in the introduction).
What about the choice of predictor data???

6) Other issues that might be worth addressing are: sensitivity to the
calibration period, whether to detrend or not, as well as the color of the
noise related to the model data.

We do not aim to address all issues involved in millennial temperature
reconstructions. That would indeed repeat work that has been done by
others (references are given in our paper). Regarding sensitivity to the
calibration period: we refer to one sensitivity test, in which the
calibration period is extended to 1985. This issue is dealt with in more
detail in a paper by Zorita, Gonzalez-Rouco and von Storch, which is now
accepted for publication in the Journal of Climate.

7) It would also help if the authors could come up with some
recommendation concerning the use of those methods for different
applications. Also, how do those methods perform if, as in the case of
Moberg et al. (2005, Nature), proxies with different temporal resolutions
are combined? Could the authors say anything about methods that aim at
sub-hemispheric reconstructions, or that resolve seasons and other climate
parameters such as rainfall?

We do give a clear recommendation on the choice of method (e.g. in the
abstract).
What about the combination of high/low resolution proxies???
We do not intend to address sub-hemispheric reconstructions or
rainfall reconstructions in this study. There is clearly valuable work
being done in that direction.


