
Science Friday: Real Climate


  •  Interview glosses over some ongoing problems (none)
    One thing that readers of this interview are missing is that Dr. Mann's dismissal of the criticism of the "Hockey Stick" science papers over some serious, ongoing concerns about data analysis and methodology in the paleoclimate proxy studies. I say this as a strong advocate of the science of climate change and the related problems: I understand the science, and I know the problems we're facing. However, having read and followed the Hockey Stick debate since it began, and having tried to be as fair as possible to all parties, I can now tell that Mann's responses are defensive and that his public posturing is at odds with the reality of deficient scientific practice. McIntyre (of McIntyre and McKitrick) has continued to address the statistical problems of paleoclimate data analysis critically, facing an uphill struggle and some unfair commentary, and his criticism is valid. If the debate in this arena is to be resolved, Dr. Mann and his seconds are going to have to improve their data handling significantly. Recent publications in the peer-reviewed press have indicated that the certainty of the "Hockey Stick" portrayal is not what it seems. In order to advance the science properly, it must be practiced properly.
    •  your assertions are out of date and incorrect. (none)
      A very similar comment was made on RealClimate.

      Here was the response given:

      [Response: You need to distinguish the criticism of the scientific results (which have been debunked - see here, or here), from more general criticisms of 'scientific practice'. There is not yet a perfect system for  archiving raw data - something that has very little to do with the hockeystick issue since MBH only used publically available data themselves, and it is only recently that advances in multi-proxy methodologies have made the issue more relevant in the field. Valid scientific criticism is always to be welcomed; inappropriate personalisation and baseless accusations are not. - gavin]

      In addition, readers may want to read the RealClimate posting New Analysis Reproduces Late 20th Century Temperature Rise, which describes an independent analysis by NCAR scientists.

      •  Response to Burger and Cubasch 2005 needed (none)
        Thanks for the response here and the direct response on RealClimate!  I have just posted a reply on RealClimate that is summarized in the Subject line of this post.  Outdated?  Not when a critical (i.e., important) paper was published two months ago.  

        As for the link supplied in your reply here, note that the paper cited in the press release was rejected and has not yet been published (but, as I understand it, has been reconsidered and is still pending publication). So I don't think that the issue is fully resolved. I hope that it will be, particularly in time for the next IPCC report.

      •  Second reply (none)
        I should also state that my characterization of Dr. Mann's public posturing is based on his public statements, which seem needlessly personal at times and strike me as defensive (notwithstanding the fact that the skeptical side has certainly not always addressed Dr. Mann appropriately). Regarding deficient scientific practice, this refers to all of the uncertainties surrounding data such as these and the analysis thereof; it was not a baseless accusation, but an observation. Perhaps I should have said "scientific imprecision" rather than "deficient scientific practice".

        I also understand why it is at times useful to argue with a strong sense of positivism and certainty, and that scientists do have a duty to defend their results; but when there is uncertainty, it should at least be acknowledged.

        •  you are still out of date, and in error (none)
          While legitimate scientific criticism is healthy, as was said before, "inappropriate personalisation and baseless accusations are not." Your comments remain somewhat ad hominem. But let's get to the substance of them.

          1. Burger and Cubasch: This would have been a useful contribution to the literature about 10 years ago, when Mann et al ("MBH98") and other groups were using simple EOF-based approaches. The primary criticism of Burger and Cubasch is that such approaches lack regularization or an explicit model of the error covariance structure of the data. This is fair enough. However, the method used by Mann and colleagues for roughly the past 6 years now, Regularized Expectation-Maximization (RegEM), is subject to neither criticism. This method yields essentially the same reconstruction when applied to the same proxy data [Rutherford, S., Mann, M.E., Osborn, T.J., Bradley, R.S., Briffa, K.R., Hughes, M.K., Jones, P.D., Proxy-based Northern Hemisphere Surface Temperature Reconstructions: Sensitivity to Methodology, Predictor Network, Target Season and Target Domain, Journal of Climate, 18, 2308-2329, 2005], indicating that the original approach was robust in practice, despite the legitimate theoretical limitations of using a truncated EOF basis. This method has furthermore been demonstrated to accurately reconstruct multi-century timescale variability in applications to model simulation data (which rebuts another criticism that has been leveled against the MBH98 method).

          Mann, M.E., Rutherford, S., Wahl, E., Ammann, C., Testing the Fidelity of Methods Used in Proxy-based Reconstructions of Past Climate, Journal of Climate, 18, 4097-4107, 2005.

          So Burger and Cubasch is effectively pre-empted by this more recent work, which was highlighted by Science last November and in the forthcoming February issue of the Bulletin of the American Meteorological Society. More information can be found here.
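          For readers who want a concrete sense of the distinction drawn in point 1, here is a minimal, purely illustrative Python sketch (synthetic data and made-up parameters throughout; this is not the MBH98 code, the Rutherford et al code, or RegEM itself) contrasting regression on a truncated EOF (principal component) basis with a ridge-type regularized regression, which is the flavor of regularization RegEM relies on:

          # Illustration only: truncated-EOF regression vs. ridge-regularized regression
          # on synthetic data. Not the MBH98 or RegEM code.
          import numpy as np

          rng = np.random.default_rng(0)

          # Synthetic calibration period: n years, p proxy series, one temperature target
          n, p = 100, 30
          X = rng.normal(size=(n, p))                # standardized proxy predictors
          beta_true = rng.normal(scale=0.5, size=p)  # hypothetical true weights
          y = X @ beta_true + rng.normal(size=n)     # target temperature index

          # (a) Truncated-EOF approach: regress the target on the leading k principal components
          k = 5
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          pcs = U[:, :k] * s[:k]                     # leading k PC time series
          gamma = np.linalg.lstsq(pcs, y, rcond=None)[0]
          beta_eof = Vt[:k].T @ gamma                # weights mapped back to proxy space

          # (b) Ridge-regularized regression (the kind of regularization RegEM uses)
          lam = 10.0                                 # regularization parameter
          beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

          print("truncated-EOF calibration fit:", round(float(np.corrcoef(X @ beta_eof, y)[0, 1]), 3))
          print("ridge-regularized calibration fit:", round(float(np.corrcoef(X @ beta_ridge, y)[0, 1]), 3))

          Both routes shrink the effective number of fitted parameters; the point above is that, applied to the same proxy network, they yield essentially the same reconstruction.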

          2. You incorrectly referred to a GRL paper by the NCAR group as "rejected". Actually, that original decision (made by the same editor who presided over the publication of McIntyre and McKitrick and of Burger and Cubasch) was overruled by the new GRL editor-in-chief Jay Famiglietti, which is itself quite telling. Famiglietti's comments on the ordeal, along with those of other leading scientists, can be found here.

          However, the GRL paper only dealt with some technical side issues. The content of the NCAR press release was almost entirely based on a far more substantial paper in press in Climatic Change that dismantles each of the criticisms of the MBH98 approach made by McIntyre and McKitrick.

          3. There are several more papers "in the mill" which we are not at liberty to discuss right now, which ensure that the weight of peer-reviewed studies available for consideration in the next IPCC report will point towards a strengthening, not a weakening, of the IPCC '01 conclusions regarding the anomalous nature of recent hemispheric and global warmth in a long-term context.

          "Skepticism" is a good thing, but corrupted by innuendo with an agenda behind it (which you, unwittingly I believe, have fallen victim to), it can be twisted into a tool for disinformation.  At RealClimate, we like to distinguish between healthy "skepticism" (which is good for science) and "contrarianism" (which can be misguided).

          I believe your efforts are honest ones, that your arguments are in good faith, and that you are willing to be persuaded by the available evidence. For this, I thank you.

          •  I appreciate your reply (none)
            Thank you for the response, which is enlightening.  I have never been fond of the "contrarian" position or tactics, and my efforts to understand the data, the science, and the controversy are hopefully not misguided.  Furthermore, I don't believe that I've been taken in, though I do apologize for comments that were interpreted as ad hominem.  Reading this, you might be surprised at the strongly pro-"Hockey Stick" comments I have made in the past in other venues.

            Nonetheless, I will remain concerned (though a bit quieter about it) about our full understanding of the complexities of paleoclimate and our ability to understand it statistically.

          •  MBH98 vs. RegEM (none)
            1. I understand that MBH98 is out of date. Assuming now that RegEM is still up to date, it is, like MBH98, a parameter-intensive scheme with no defined error model (cf. Schneider 2001). Why should it be immune from the data processing and extrapolation issues raised by Bürger and Cubasch?

            2. Which proxy study proves that RegEM is superior to EOF-based approaches? Certainly not Schneider (2001) himself: "Hence, any claim that the regularized EM algorithm or any other technique for the imputation of missing values in climate data is 'optimal' in some general sense would be unjustified. The performance of the regularized EM algorithm must be assessed in practice." I am unaware of such an assessment.

            3. What is the agenda behind Bürger and Cubasch?
            •  The Shills Have Arrived! (none)
              Where to start? A common tactic of shills (a good example is the stunt pulled by shill for hire Steve Milloy) is to truncate a quote so as to completely distort its original meaning.

              Let us consider what the Schneider (2001) paper actually said (emphasis added):


              ...there are no general, problem-independent criteria according to which the optimality of a method for ill-posed problems can be established (Linz 1984). Hence, any claim that the regularized EM algorithm or any other technique for the imputation of missing values in climate data is 'optimal' in some general sense would be unjustified.

              In other words, Schneider (2001) was making a very general, trivially true point: no method can be claimed to be a priori superior to any other in some context-independent generality; it depends on the application at hand. The commenter is probably fully aware that the Mann et al (2005) paper cited above specifically tested the performance of the RegEM method in the context of paleoclimate reconstruction, by applying it to a long-term model simulation where the answer is known beforehand and where the performance of the method can be precisely tested using synthetically designed proxy data with a range of possible signal-to-noise ratios. In this context, the RegEM method was shown to give the correct result within the estimated uncertainties.
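              For illustration only, the kind of test described above can be sketched as a toy pseudoproxy experiment in Python (invented numbers, not the actual Mann et al 2005 design): a known "true" temperature history is degraded with noise to mimic proxies, the proxies are calibrated against the recent portion, and the reconstruction is then checked against the known history before the calibration period.

              # Toy pseudoproxy experiment; purely illustrative, not the Mann et al (2005) setup.
              import numpy as np

              rng = np.random.default_rng(1)

              # A known "true" temperature history (a model simulation stands in for reality)
              years = np.arange(1000, 2001)
              true_temp = 0.3 * np.sin(2 * np.pi * (years - 1000) / 400) + 0.002 * np.maximum(years - 1900, 0)

              # Pseudoproxies: the true history plus noise at a chosen signal-to-noise ratio
              snr = 0.5
              n_proxies = 20
              noise = rng.normal(scale=true_temp.std() / snr, size=(n_proxies, years.size))
              proxies = true_temp + noise

              # Calibrate each pseudoproxy against the "instrumental" era, then composite
              calib = years >= 1900
              fits = [np.polyfit(proxies[i, calib], true_temp[calib], 1) for i in range(n_proxies)]
              recon = np.mean([a * proxies[i] + b for i, (a, b) in enumerate(fits)], axis=0)

              # How well is the pre-calibration (out-of-sample) history recovered?
              rmse = np.sqrt(np.mean((recon[~calib] - true_temp[~calib]) ** 2))
              print("pre-calibration RMSE:", round(float(rmse), 3))

              If the reconstruction tracks the known series within its estimated uncertainties, the method passes; with synthetic proxies the exercise can be repeated across a range of signal-to-noise ratios.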

              The commenter is also either extremely misinformed (i.e., didn't actually read Schneider 2001) or just plain dishonest when he/she claims that the Schneider (2001) "RegEM" algorithm provides "no defined error model". In fact, Schneider (2001) spends a good deal of the paper describing the iterative procedure (based on the statistical principle of Generalized Cross Validation) by which the estimated data matrix is explicitly and objectively separated into a signal component and a residual error component.

              So, the "RegEM" method is both "regularized" (obviously) and explicitly models the imputation error, pre-empting the two central criticisms of Burger and Cubasch (2005).
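              For readers curious what Generalized Cross Validation looks like in practice, here is a bare-bones sketch (my own illustration, not Schneider's code) of using a GCV score to choose a ridge regularization parameter, which is the role GCV plays inside RegEM:

              # Minimal GCV example: pick a ridge parameter by minimizing the GCV score.
              # Illustration only, not the Schneider (2001) RegEM implementation.
              import numpy as np

              rng = np.random.default_rng(2)
              n, p = 80, 40
              X = rng.normal(size=(n, p))
              y = X @ rng.normal(scale=0.3, size=p) + rng.normal(size=n)

              def gcv_score(lam):
                  # Hat matrix for ridge regression: H = X (X'X + lam*I)^(-1) X'
                  H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
                  resid = y - H @ y
                  edf = np.trace(H)  # effective degrees of freedom
                  return (np.sum(resid ** 2) / n) / (1.0 - edf / n) ** 2

              # Choose the regularization parameter that minimizes the GCV score
              lams = np.logspace(-2, 3, 50)
              best_lam = min(lams, key=gcv_score)
              print("GCV-selected regularization parameter:", round(float(best_lam), 2))

              The selected parameter controls how strongly the imputed values are shrunk, and the residual left over is the explicit error component referred to above.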

              But why trust either of us? The Schneider (2001) paper (and algorithm) is publicly available anyway. Or didn't the commenter know that? Readers (warning: some background in statistics required) ought to take a look themselves, and decide who is giving them the straight story, and who might simply be lying to them.

              As for the agenda behind Burger and Cubasch, I'll leave it to others to speculate. But the agenda of the commenter--to disinform the readers of this thread--seems quite obvious.

              •  Keep cool (none)
                Does your comment exemplify what you understand as being "ad hominem"?

                Now to the point.

                1.

                ...that the Mann et al (2005) paper cited above specifically tested the performance of the RegEM method...

                Yes, but didn't I ask for a paleo study that compares RegEM with the original MBH98 (EOF) approach?

                2.

                ...he/she claims that the Schneider (2001) "RegEM" algorithm provides "no defined error model".

                The extrapolative error described by Bürger and Cubasch depends on the error of the model coefficients (of the regression), not on the error of the data. There is no such thing for RegEM, as seen here:

                [Schneider 2001]...The uncertainties about the adequacy of the regression model (1), of the regularization method, and of the regularization parameter all contribute to the imputation error, but the error estimate [...] does not account for these uncertainties.

                Consequently,

                Covariance matrices estimated with the regularized EM algorithm and statistics derived from them must therefore be interpreted cautiously, particularly when the fraction of missing values in an incomplete dataset is large.

                In Schneider, that fraction is 3%. You (you?) apply the method to 2000+ unknown grid points times 1000+ years back in time and never even mention the need for caution. That is really strong!

                •  the last time we're going to discredit your claims (none)
                  This is getting tiring, and we won't encourage you any further than this.

                  We will discredit your main new point: you now ask for a study that compares the MBH98 and RegEM approaches. Why don't you take another look at the Mann et al (2005) paper provided above, and actually read it? What does figure 2 show? Ah yes, a comparison of applications of the RegEM and MBH98 approaches to the same precise data set, showing that the two methods give very nearly the same result, well within the mutual uncertainties.
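                  (As a purely illustrative aside, with invented numbers rather than the figure 2 data, "agreement well within the mutual uncertainties" is checked along these lines:)

                  # Illustration only: do two reconstructions agree within their combined
                  # 2-sigma uncertainties at each point in time? Invented numbers, not Figure 2.
                  import numpy as np

                  rng = np.random.default_rng(3)
                  years = np.arange(1400, 1981)

                  # Two hypothetical reconstructions with per-year standard errors
                  recon_a = 0.1 * np.sin(2 * np.pi * (years - 1400) / 300) + rng.normal(scale=0.05, size=years.size)
                  recon_b = recon_a + rng.normal(scale=0.05, size=years.size)
                  sigma_a = np.full(years.size, 0.1)
                  sigma_b = np.full(years.size, 0.1)

                  # Fraction of years where the difference lies within the combined 2-sigma range
                  within = np.abs(recon_a - recon_b) <= 2 * np.sqrt(sigma_a ** 2 + sigma_b ** 2)
                  print(f"{100 * within.mean():.1f}% of years agree within the mutual 2-sigma uncertainties")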

                  Now, we could pick apart everything else you just said, just as we did in the first round (especially your comment about the multiple contributions to the estimated error term, which is a strength of RegEM, not a weakness as you seem to imply). But this is now just getting semantic and boring.

                  We'd rather spend our time helping to educate the thousands of visitors a day to RealClimate, who are often genuinely interested in learning about the science.

                  Your cherry-picking and deceptive quotation are probably not welcomed by the DailyKos readers, and we're not going to encourage you by responding further.

                  •  Ah no (none)
                    I'm sorry, but I really can't see how Figure 2 could prove that RegEM is superior to MBH98 - which was the outstanding issue, wasn't it?

                    It is becoming confusing now, I agree.

                    Good luck with RealClimate! I hope not too many people have been turned off that site by the general tone of this discussion.
