
The Climate Science reference they don’t want Judges to read

Mon, 02/09/2026 - 23:16

For the first time, the Federal Judicial Center (FJC) commissioned a chapter on climate science for the manual it puts out (with the NASEM) for judges, the Reference Manual on Scientific Evidence (4th Edition). This week, a month after it was published, they pulled the chapter after being pressured by 27 Republican Attorneys General. You can nonetheless read it here.

Some background. The FJC is “the research and education agency of the judicial branch of the United States Government”. Among its roles, it is tasked with providing educational materials to judges and other court workers about issues that might come up in court, and in particular, on scientific matters that one might not expect judges or lawyers to be expert in. They have codified this information in the Reference Manual on Scientific Evidence, which is now in its Fourth Edition. (Previous editions were issued in 1994, 2000, and 2011).

The 4th Edition had its genesis in a workshop in 2021, and was finally published (after extensive peer review) on Dec 31st 2025. It covers legal scholarship on the use of expert testimony in court cases (noting the Supreme Court’s Daubert standard), as well as primers on the current state of the science across multiple fields (forensics, DNA evidence, mental health, neurology, epidemiology, exposure, statistics, regression, eyewitnesses, engineering, computer science, AI, etc.). Notably, it included a chapter on climate science, covering topics such as the greenhouse effect, atmospheric circulation, detection and attribution, and the issues being raised in an increasing number of climate-related cases in the courts. The authors, Jessica Wentz and Radley Horton, are a respected and mainstream lawyer/scientist team, and the resulting chapter is a clear and concise summary of the topic. So far so good.

Of course, there are groups that would rather not have climate change discussed knowledgeably in the courts, and after the publication of the 4th Edition of the manual, the Republican-led House Judiciary Committee started sending threatening letters to all involved (FN – sorry!) (Jan 16th). Then a group of 27 Republican Attorneys General (led by West Virginia) sent a letter (Jan 29) to the FJC claiming that Wentz and Horton were biased because they have (correctly) stated that the “political sphere in the United States continues to be clouded with false debates over the validity of climate change”. They were also upset that there are no references to the recent DOE CWG report (Lol).

The real target of the AGs’ ire is the discussion of attribution, and the notion that there is an emerging consensus that partial attribution of climate damages can be assessed against emitters. This line of thinking is exemplified by recent papers (such as Callahan and Mankin (2025)), but is based on more than a decade of work on this topic, and of course is a direct threat to the fossil fuel companies that the WV AG is trying to protect.

The Republican AGs demanded that the FJC remove the chapter, arguing that any official acknowledgement of the science in the Manual would prejudice their cases, which are based on, let’s say, “contrary” interpretations of the scientific evidence (or no evidence at all). And without much ado, or even consultation, the FJC did exactly that, putting out an amended Manual on Feb 6th. The only note marking the deletion is a terse removal notice; no explanation or excuse was given.

As stated above, this chapter is actually well-written, appropriately peer-reviewed, and deserves a far better fate than being cravenly disappeared into a memory hole for being inconvenient, so you can download it here. The nice thing about science is that it doesn’t change based on whether a report is published here or there, so feel free to share.

References
  1. C.W. Callahan, and J.S. Mankin, "Carbon majors and the scientific case for climate liability", Nature, vol. 640, pp. 893-901, 2025. http://dx.doi.org/10.1038/s41586-025-08751-3


Koonin’s Continuing Calumnies

Sat, 02/07/2026 - 19:37

At a public event debating the DOE CWG report, Steve Koonin embarrasses himself further.

This week there was a bit of a peculiar event at the Civitas Institute at UT Austin, with three of the CWG authors (John Christy, Steve Koonin and Ross McKitrick) being rebutted by Andy Dessler (working solo).

The event itself was a rehash of the CWG report’s ‘findings’ (or rather, a repeat of its cherry picks, uncontextualized statements, and ignoring of the literature), with Dessler somewhat successfully pointing this out. The event seemed a bit rushed (too much content crammed into too short a time) and is a great example of the applicability of Brandolini’s Law.

There would be a lot to criticise in the presentations if one wanted (most of this was gone over in the scientists’ response to the DOE report that Andy helped organise), but the presentation by Koonin went even further into nonsense territory than the CWG report itself. Apparently, “internal variability” (something noticeably ignored in many claims by the CWG) is the “last refuge of fools and scoundrels” (at least according to Koonin)!

What this stems from is Koonin’s reliance on Nicola Scafetta’s work on evaluating climate models – readers here will know that is a very bad idea, and we went through a lot of this with respect to a GRL paper that Scafetta published in 2022. That led to a whole saga, which took so long that, while we were trying to get the 2022 paper retracted on the grounds of being totally wrong, Scafetta basically published the same analysis again (with almost all the same errors and some new ones) in another journal. Our enthusiasm to go another round pointing out his mistakes was limited, and so the second paper still stands nominally unrebutted in the literature despite having been pre-rebutted by our comment on the first paper (Schmidt et al., 2023). This came up in the ‘internal review’ of the CWG report, where one of the reviewers said that the CWG should deal with our criticism of Scafetta’s work (pointing to the published comment), and was blown off by the CWG, who claimed that because they cited the second paper (not the first), our comment was moot. Classic dissembling.

Anyway, Koonin’s presentation at the Civitas event (starts around 20:20 in the video) repeats the errors, but goes even further. First, he notes that some CMIP6 models have climate sensitivities that are too high. That’s fine – I have made the same point here, and in Nature (Hausfather et al., 2022). But then he slides from ‘some models’ to ‘the models’ without even taking a breath (Hmm…). He doubles down on Spencer’s cherry-picking (itself not peer-reviewed of course), and claims that people pointing out that something has been cherry-picked are trying to “change the topic”. Yes, that metric that no-one had ever mentioned before Spencer did this analysis is *the* topic that the assessment was designed to address /sarc.

Koonin additionally claims that mainstream scientists have been blaming model-observation discrepancies on internal variability for the last twenty years, while having ignored it for the previous twenty. Of course, he provides no citation or evidence that anyone has ever done such a thing. Worse, in response to a suggestion that they utilise the uncertainty in the modeling (esp. the internal variability), he makes an incredible statement (starting at 30:47):

Well, if you do that, it effectively broadens the uncertainty so much as to be almost essentially useless.

Let’s parse this out. He isn’t claiming that the internal variability isn’t real (it is of course). He is claiming that his model-observation comparison doesn’t show any discrepancy if you include the uncertainties and that therefore it’s useless! To repeat, Koonin is stating that he isn’t including the uncertainties because it would undermine the conclusion he is trying to draw.

This is as clear an admission of scientific misconduct as I’ve heard.

He then illustrates this with reference to Tokarska et al. (2020) (Fig 3, Panel A) which is not really trying to do the same thing, but fine. [I think there must be a second half to that slide showing individual runs – but I’m not sure where that would have been from]. However, we addressed this exact issue with the comment on the first Scafetta paper:

Multi-decade temperature differences in ERA5 and CMIP6, showing individual simulations and ensemble means, plotted against Climate Sensitivity.

The question being asked is whether there is a discrepancy between any specific model and the observations. An initial condition (IC) ensemble starts the model with different weather patterns, but each run has the same forcing. The standard deviation of the IC ensemble is a reasonable measure of the internal variability (i.e. the spread that could occur purely as a function of the (unpredictable) weather). The real world can be considered a single realization of the real-world climate, so the standard way to assess whether a model is consistent with the real world is to estimate the probability that the real-world result could be part of the model distribution. In practice, one can calculate the 95% confidence interval for the model (based on its ensemble) and ask whether the real-world data fall within that range. Wherever it falls, you can calculate the probability of getting that result, assuming that model distribution. The further away the observation is from the model spread, the less likely it is that it could have been generated by that distribution.
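To make this concrete, here is a minimal sketch in Python of the consistency check described above. The ensemble trends and observed trend are invented placeholder numbers (not real CMIP6 or ERA5 values), and for a small ensemble a t-interval would be more careful than the normal approximation used here:

```python
import numpy as np
from scipy import stats

# Hypothetical trends (degC/decade) from an initial-condition (IC) ensemble
# of a single model, plus the single observed trend. Placeholder numbers only.
ensemble_trends = np.array([0.18, 0.22, 0.25, 0.20, 0.27, 0.23, 0.19, 0.24])
observed_trend = 0.21

# The IC ensemble samples the model's internal variability: same forcing,
# different weather. Its standard deviation estimates the weather-driven spread.
mean = ensemble_trends.mean()
sd = ensemble_trends.std(ddof=1)

# 95% confidence interval for a single realization drawn from the model
# distribution (normal approximation).
lo, hi = mean - 1.96 * sd, mean + 1.96 * sd

# Two-sided probability of a result at least as far from the ensemble mean
# as the observation: the further out, the less consistent.
z = (observed_trend - mean) / sd
p = 2 * stats.norm.sf(abs(z))

print(f"ensemble 95% CI: [{lo:.2f}, {hi:.2f}] degC/decade")
verdict = "consistent" if lo <= observed_trend <= hi else "inconsistent"
print(f"observation {observed_trend:.2f} degC/decade: {verdict} (p = {p:.2f})")
```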

So if the real world falls inside the 95% CI, it is clearly consistent with the distribution, even if the ensemble mean is different from the observations. As the forced signal grows relative to the spread from internal variability, discrepancies might emerge more clearly. But no-one is arguing that internal variability should be ignored for one period, and used in another. Rather, it should be used consistently at all times. If that prevents Steve Koonin from trashing the models, so be it.

To go back to the claim though, there are multiple models with sensitivities up to about 5ºC that have surface temperature trends that are compatible with the observations. A few models don’t have sufficient simulations to say, and a few are clearly incompatible. This is what Koonin says:

They say that the fact I can find one starting point that agrees with the data is enough to validate that model. In fact that doesn’t sound right at all. I don’t think that would pass peer review – at least among my peers.

This is not quite an accurate reflection of the mainstream position, nor do his feelings on the issue make sense. The mainstream position is, first, more nuanced (as explained above): it is not that seeing observations fall within the spread validates the model, but rather that if this happens you should not reject that model (a much less onerous claim). But why does this sound strange to Koonin? Is he in the habit of rejecting models that are consistent with observations? And of course, this position has passed peer review many times, though I will accept that his peers might not agree (which is a statement about his peers, not the claims).
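To illustrate the “fail to reject” logic across several models, including the case where an ensemble is too small to judge, here is a hedged sketch along the same lines (the model names and trend values are made up for illustration, not taken from the actual analysis):

```python
import numpy as np

# Hypothetical per-model IC-ensemble trends (degC/decade); names and values
# are placeholders, not real CMIP6 output.
models = {
    "MODEL-A": [0.18, 0.22, 0.25, 0.20, 0.27],  # spread covers the observation
    "MODEL-B": [0.35, 0.39, 0.37, 0.41, 0.36],  # runs consistently too warm
    "MODEL-C": [0.24],                          # one run: spread unknown
}
observed = 0.21

for name, trends in models.items():
    t = np.asarray(trends, dtype=float)
    if t.size < 2:
        # Can't estimate internal variability from a single realization.
        print(f"{name}: insufficient simulations to say")
        continue
    mean, sd = t.mean(), t.std(ddof=1)
    lo, hi = mean - 1.96 * sd, mean + 1.96 * sd
    verdict = "cannot reject" if lo <= observed <= hi else "inconsistent"
    print(f"{name}: 95% CI [{lo:.2f}, {hi:.2f}] -> {verdict}")
```

Note that falling inside the interval only means the model survives the test; it does not “validate” the model in any stronger sense.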

To wrap this up, I updated the figure above to look at a slightly longer period (extending to 2015-2025) using the latest observations from ERA5.

As above, but for a period extending to 2025.

It is still clear that some models are not consistent with ERA5 (notably the five models with the highest sensitivity), but it is also clear that many of them are – and that Koonin’s claims (like Scafetta’s before him) are hogwash. His implicit claim that you should ignore uncertainty if that gets in the way of your preferred conclusion is simply embarrassing for someone who likes to think of himself as an “eminent” scientist.
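For completeness, the trend behind each point in a figure like this is just an ordinary least-squares slope over the chosen period. A minimal sketch, with synthetic data standing in for ERA5 (real data would come from the Copernicus Climate Data Store):

```python
import numpy as np

# Synthetic monthly temperature anomalies standing in for ERA5 (2015-2025);
# a small warming trend plus random "weather" noise.
rng = np.random.default_rng(42)
months = np.arange(11 * 12)                  # Jan 2015 .. Dec 2025
time_yr = 2015 + months / 12.0
anoms = 0.03 * (time_yr - 2015) + rng.normal(0.0, 0.1, months.size)

# Ordinary least-squares fit; slope is in degC/yr, scaled to degC/decade.
slope, intercept = np.polyfit(time_yr, anoms, 1)
print(f"trend: {10 * slope:.2f} degC/decade")
```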

References
  1. G.A. Schmidt, G.S. Jones, and J.J. Kennedy, "Comment on “Advanced Testing of Low, Medium, and High ECS CMIP6 GCM Simulations Versus ERA5‐T2m” by N. Scafetta (2022)", Geophysical Research Letters, vol. 50, 2023. http://dx.doi.org/10.1029/2022GL102530
  2. Z. Hausfather, K. Marvel, G.A. Schmidt, J.W. Nielsen-Gammon, and M. Zelinka, "Climate simulations: recognize the ‘hot model’ problem", Nature, vol. 605, pp. 26-29, 2022. http://dx.doi.org/10.1038/d41586-022-01192-2
  3. K.B. Tokarska, M.B. Stolpe, S. Sippel, E.M. Fischer, C.J. Smith, F. Lehner, and R. Knutti, "Past warming trend constrains future warming in CMIP6 models", Science Advances, vol. 6, 2020. http://dx.doi.org/10.1126/sciadv.aaz9549
