Thursday, October 8, 2015

Rapid rise in NCEP/NCAR index

The local NCEP/NCAR index has risen rapidly in recent days. Not too much can be made of this, because it is a volatile index. But on 5 October it reached 0.792°C, on an anomaly base of 1994-2013. That is about 0.2°C higher than anything in 2014. I've put a CSV file of daily values from the start of 2014 here.
Update: I have replaced the CSV at that link with a zipfile that contains the 2014/5 csv, a 1994-2013 csv, and a readme.

I see the associated WebGL map noticed the heat in Southern Australia. In Melbourne, we had two days at 35°C, which is very high for just two weeks after the equinox. And bad bushfires, also very unusual for early October.

Early results from TempLS mesh for Sept show a fall relative to August; TempLS grid is little changed. I'll post on that when more data is in.

Update. Today it reached 0.865°C. Extraordinary. I naturally wonder if something is going wrong with my program, but Joe Bastardi is noticing too.

Update. Ned W has made a histogram (see comments) showing how unusual these readings are.

Wednesday, October 7, 2015

On partial derivatives

People have been arguing about partial derivatives (ATTP, Stoat, Lucia). It arises from a series of posts by David Evans. He is someone who has a tendency to find that climate science is all wrong, and he has discovered the right way. See Stoat for the usual unvarnished unbiased account. Anyway, DE has been saying that there is some fundamental issue with partial derivatives. This can resonate, because a lot of people, like DE, do not understand them.

I don't want to spend much time on DE's whole series. The reason is that, as noted by many, he creates hopeless confusion about the actual models he is talking about. He expounds the "basic model" of climate science, with no reference to a location where the reader can find out who advances such a model or what they say about it. It is a straw man. It may well be a reasonable model; that seems to be his defence. But there is no use setting up a model, justifying it as reasonable, and then criticising it for flaws, unless you relate it to what someone else is actually saying. And of course his sympathetic readers think he's talking about GCMs. When challenged on this, he just says that GCMs inherit the same faulty structure, or some such, with no justification. He actually writes nothing on how a real GCM works, and I don't think he knows.

So I'll focus on the partial derivatives issue, which has attracted discussion. Episode 4 is headlined "Error 1: partial derivatives". His wife says, in the intro:
"The big problem here is that a model built on the misuse of a basic maths technique that cannot be tested, should not ever, as in never, be described as 95% certain. Resting a theory on unverifiable and hypothetical quantities is asking for trouble. "
Sounds bad, and was duly written up in ominous fashion by WUWT and Bishop Hill, and even echoed in the Murdoch press. The main text says:
The partial derivatives of dependent variables are strictly hypothetical and not empirically verifiable
He expands:
When a quantity depends on dependent variables (variables that depend on or affect one another), a partial derivative of the quantity “has no definite meaning” (from Auroux 2010, who gives a worked example), because of ambiguity over which variables are truly held constant and which change because they depend on the variable allowed to change.

So even if a mathematical expression for the net TOA downward flux G as a function of surface temperature and the other climate variables somehow existed, and a technical application of the partial differentiation rules produced something, we would not be sure what that something was — so it would be of little use in a model, let alone for determining something as vital as climate sensitivity.
So I looked up Auroux. The story is here. DE has taken an elementary introduction, which pointed out the ambiguity of the initial notation and explained what more was required (a suffix) to specify it properly, and, apparently because he did not read to the bottom of the page, assumed that it was describing an inadequacy of the PD concept itself.
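The concrete point is worth spelling out. Auroux's remedy for the ambiguity is simply the subscript notation, which states which variable is held constant. A minimal illustration of my own (not DE's or Auroux's worked example), for a function of two linked variables:

```latex
% f(x,y) = x^2 + y, with the variables linked by a constraint y = 2x.
% The bare "df/dx" is ambiguous; the subscript removes the ambiguity:
\left(\frac{\partial f}{\partial x}\right)_{y} = 2x
  \quad \text{(y held constant)},
\qquad
\frac{d}{dx}\, f\bigl(x, 2x\bigr) = 2x + 2
  \quad \text{(y allowed to follow the constraint)}.
```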

Saturday, October 3, 2015

NCEP/NCAR index up 0.06°C in September

The Moyhu NCEP/NCAR index from the reanalysis data was up from 0.306°C to 0.368°C in September. That makes September warmer, by a large margin (0.05°C), than any other month in that index in recent years. It looked likely to be even warmer, but cooled off a bit at the end.

A similar rise in GISS would bring it to 0.87°C. Putting the NCEP index on the 1951-1980 base (using GISS) would make it 0.95°C. I'd expect something in between. GISS' hottest month anomaly was Jan 2007 at 0.97°C. Hottest September (GISS) was in 2014, at 0.90°C. It was the hottest month of 2014.
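For readers wondering how the base shift works: the conversion just adds the difference between the GISS means of the two base periods. The offset below is inferred from the figures quoted in this post, not an official number.

```latex
% Shifting the September NCEP/NCAR anomaly from its 1994-2013 base to the
% GISS 1951-1980 base: add GISS's mean 1994-2013 anomaly on that base
% (roughly 0.58 deg C, inferred from the figures above).
T_{1951\text{--}1980} \approx T_{1994\text{--}2013} + 0.58
  \approx 0.368 + 0.58 \approx 0.95\ ^{\circ}\mathrm{C}
```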

The global map shows something unusual - warmth in the US and Eastern Canada. And a huge warm patch in the E Pacific. Mostly cold in Antarctica and Australia, but very warm in E Europe up to the Urals, and in the Middle East.

Thursday, October 1, 2015

Optimised gridding for temperature

In a previous post I showed how a grid based on projecting a gridded cube onto a sphere could improve on a lat/lon grid, with a much less extreme singularity and sensible neighbor relations between cells, which I could use for diffusion infilling. Victor Venema suggested that an icosahedron would be better. That is because when you project a face onto the sphere, element distortion gets worse away from the center, and projected icosahedron faces have 2/5 the area of projected cube faces.

I have been revising my thinking and coding to have enough generality to make icosahedrons easy. But I also thought of a way to fix most of the distortion in a cube mapping. But first I'll just review why we want that uniformity.

Grid criteria

The main reason why uniformity is good is that the error in integrating is determined by the largest cells. So with size variation, you need more cells in total. This becomes more significant when using a grid to integrate scattered points, because we expect that there is an optimum cell size: too big and you have to worry about the sample distribution within a cell; too small and there are too many empty cells. Even though I'm not sure where the optimum is, it's clear that you need reasonable uniformity to implement it.

I wrote a while ago about tessellation that created equal-area cells, but did not have the grid property of each cell exactly adjoining four others. That is not so useful for my diffusion infill, where I need to recognise neighbors. Diffusion also creates sensitivity to uniformity, since stepping forward should spread over equal distances.

Optimised grid

I'll jump ahead at this stage to show the new grid. I'll explain below the fold how it is derived, and of course there will be a WebGL display. Both grids are based on a similarly placed cube. The left is the direct projection; you can see better detail in the previous post. The top row is just the geometry (16x16); the bottom shows the effect of varying data (as before, 24x24, April 2015 TempLS). I've kept the coloring convention of a different checkerboard on each face, with drab colors for empty cells, and white lines showing the neighbor connections that re-weight for empty cells.

The right is the same with the new mapping. You can see that near the cube corner (SW) in the left pic the cells get small, and a lot become empty. On the right, the corner cells actually have larger area than those at the face centre, and there is a minimum size in between. Area is within ±15% of the centre value. In the old grid, corner cells had about 20% of the area of central cells. So there are no longer a lot of empty cells near the corner. Instead, there are a few more in the interior (where cell size is minimum).
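The mapping itself is derived below the fold. For readers who want a feel for how face distortion can be reduced, here is a sketch of the standard equiangular cubed-sphere remapping, a common technique for the same problem; it is illustrative only, and not necessarily the optimisation used here.

```python
import numpy as np

def face_to_sphere(u, v, equiangular=True):
    """Map face coordinates u, v in [-1, 1] on the +z cube face to points
    on the unit sphere.  Plain gnomonic projection shrinks cells toward the
    face corners; the equiangular variant spaces grid lines by equal angle,
    which makes the projected cells much more uniform.
    (Illustrative sketch only, not the mapping derived in the post.)"""
    if equiangular:
        u, v = np.tan(u * np.pi / 4), np.tan(v * np.pi / 4)
    p = np.stack([u, v, np.ones_like(u)])
    return p / np.linalg.norm(p, axis=0)      # radial projection onto the sphere

# Rough comparison of corner vs centre cell size for a 16x16 face grid,
# using the squared diagonal of each cell as a crude area proxy.
n = 16
edges = np.linspace(-1, 1, n + 1)
uu, vv = np.meshgrid(edges, edges)
for eq in (False, True):
    pts = face_to_sphere(uu, vv, equiangular=eq)
    diag2 = np.sum((pts[:, 1:, 1:] - pts[:, :-1, :-1]) ** 2, axis=0)
    print("equiangular" if eq else "gnomonic   ",
          "corner/centre ratio ~", round(float(diag2[0, 0] / diag2[n // 2, n // 2]), 2))
```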

In that previous post, I showed a table of discrepancies in integrating a set of spherical harmonics over the irregularly distributed stations:
L              1          2          3          4          5
Full grid      0          0          0          0          1e-06
Infilled grid  8.8e-05    0.00029    0.001045   0.002015   0.003635
No infill      0.007632   0.027335   0.049327   0.064493   0.075291

In the new grid, the corresponding results are:
L              1          2          3          4          5
Full grid      0          0          0          0          1e-06
Infilled grid  8e-06      0.00019    0.000645   0.001348   0.002529
No infill      0.004758   0.02558    0.047794   0.0601     0.069348

Simply integrating the SH on the grid (top row) works very well with either grid. With empty cells just omitted (bottom row), the new grid gives a modest improvement. But for the case of interest, with the infilling scheme, the result is considerably better than with the old grid.
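The logic of the test is easy to reproduce: every non-constant spherical harmonic integrates to exactly zero over the sphere, so the area-weighted sum over cells of a harmonic sampled at the station points is itself the integration error. Here is a minimal sketch of that check, with simplifying assumptions of my own (a plain lat/lon grid, random "stations", and scipy's sph_harm), not the cube grid or station set used above:

```python
import numpy as np
from scipy.special import sph_harm

# Random "stations", uniform on the sphere (the real test uses actual
# station locations and the cube grid).
rng = np.random.default_rng(0)
nstat = 5000
lon = rng.uniform(0, 2 * np.pi, nstat)            # azimuth
colat = np.arccos(rng.uniform(-1, 1, nstat))      # colatitude

nlat, nlon = 24, 48
ilat = np.minimum((colat / np.pi * nlat).astype(int), nlat - 1)
ilon = np.minimum((lon / (2 * np.pi) * nlon).astype(int), nlon - 1)

for L in range(1, 6):
    y = sph_harm(1, L, lon, colat).real           # one harmonic of degree L
    err = 0.0
    for i in range(nlat):
        # fraction of the sphere's area in each cell of this latitude band
        w = (np.cos(i * np.pi / nlat) - np.cos((i + 1) * np.pi / nlat)) / (2 * nlon)
        for j in range(nlon):
            vals = y[(ilat == i) & (ilon == j)]
            if vals.size:                         # empty cells simply omitted ("no infill")
                err += w * vals.mean()
    print(f"L={L}: discrepancy {err:.2e}")        # exact answer is 0
```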

Friday, September 25, 2015

GWPF wimps out

In April, there was a big story summarized by a headline in the Telegraph: "Top Scientists Start To Examine Fiddled Global Warming Figures". I wrote about it here. The GWPF announced an inquiry into global temperature adjustments. They had assembled a panel of apparently reasonably well qualified scientists. They promulgated terms of reference (which seemed to borrow heavily from the terminology of Paul Homewood). They put out a call for submissions, with a deadline of 30 June. They said "After review by the panel, all submissions will be published and can be examined and commented upon by anyone who is interested." and set up a page for this purpose here. All this was promptly echoed, e.g. here and here.

I immediately thought about putting in a submission, and did in fact write one. I sent it to the prescribed email address on 2 June. No response. So I emailed again to ask if it had been received, on 14 June. Still nothing. So then I wrote to the GWPF general email, and got a prompt and courteous response from none other than Andrew Montford. He said he couldn't find my submission there, so I sent him a copy, which he received. Encouraging.

Still no response from the actual panel though. I kept an eye on the site, especially the submissions page. I thought they might say that they had received (with thanks) x submissions, or some such. But AFAICS, the site didn't change at all.

There was one link on the page that said "news", which was an obvious place to try, but it didn't seem to connect to anything. Three months later, still wondering, I got a helpful direct link from a correspondent, to this page. And there I find, dated July 22, this information:
"The team has decided that its principal output will be peer-reviewed papers rather than a report.
Further announcements will follow in due course."
No report! So what happens to the terms of reference? The submissions? How do they interact with "peer-reviewed papers"?

And of course one may ask who (if anyone) will ever write those papers? And about what?

I wonder what changed their minds?

BTW, here is a Wayback snapshot from June. I don't think anything has changed, except for the "news".

Tuesday, September 22, 2015

Better gridding for global temperature

Computing global temperature is an exercise in spatial integration with scattered data. I have written a lot about it previously, e.g. here, or earlier here. A spatial integral is a weighted average, so it comes down to calculating the weights. With TempLS, I first used a grid method, as is traditional. Then, to overcome the problem of empty cells, I used an irregular triangular mesh, as in finite element integration. I have also developed, and will soon describe, a method using spherical harmonics. I think the later methods are better. But grids also have some advantages, and I have long wanted a rational infilling basis.
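Whatever the method, the final step is the same: a weighted mean of the station anomalies, with the weights carrying all the spatial information. A minimal sketch of my own (not the TempLS code):

```python
import numpy as np

def global_mean(anomaly, weights):
    """Spatial integral as a weighted average of station anomalies.
    The weights come from the chosen method (grid cell areas, mesh
    triangle areas, ...); stations missing this month are skipped."""
    ok = ~np.isnan(anomaly)
    return np.sum(weights[ok] * anomaly[ok]) / np.sum(weights[ok])

# e.g. three stations, the second with no reading this month
print(global_mean(np.array([0.4, np.nan, 0.9]),
                  np.array([1.0, 2.0, 0.5])))      # -> 0.5666...
```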

Numerical integration

Integration is usually defined as a limiting process, whereby the region is subdivided into finer and finer pieces, each of which can then be evaluated with some local estimate of the integrand. There is theory about whether that converges. With a finite amount of numerical data, you can't go to a limit, but the same idea applies: you can subdivide until you get a result that seems to depend little on changing the subdivision. Sometimes that won't happen before you run out of data to meaningfully estimate the many subdivisions. That's one reason why temperature anomalies are important. With absolute temperature, you would have to divide very finely indeed to be independent of topographic variation, and there just aren't enough reading locations to do that. But anomalies take out a lot of that variation, making practical convergence possible.

You might ask - why bother with different methods, rather than finding just one good one? The answer is with this idea of reaching an invariant zone. If we can find an integral estimate that agrees over several different methods, that will give the greatest confidence in the result.

Grid considerations

With gridding, you can choose a coarse grid so that every cell has data. But then the data may not be a good estimate of the whole cell area; you lose resolution. A finer grid will start to have cells with no data. Traditionally, these are just omitted, which in effect assumes they have the global average value. This was improved by Cowtan and Way 2013, using kriging. I proposed a simpler approach using latitude band averaging, which gave some of the same benefit. In this post I'll look at upgrading the infill process, using numerics similar to solving the diffusion equation. It tries to find a local average to use for each missing cell.
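The idea is easiest to see on a flat rectangular grid. In this sketch of my own (the real grid is on the sphere, with neighbor weights adjusted accordingly), cells with data are held fixed and empty cells are repeatedly replaced by the mean of their neighbors, a Jacobi-style iteration that converges to a discrete solution of the diffusion (Laplace) equation with the data cells as boundary values:

```python
import numpy as np

def diffusion_infill(grid, iters=500):
    """Fill NaN cells of a 2D grid with a local average by Jacobi iteration.
    Cells with data are held fixed; empty cells relax to the mean of their
    four neighbors, i.e. a discrete solution of Laplace's (steady diffusion)
    equation with the data cells as boundary values.  Sketch only."""
    empty = np.isnan(grid)
    filled = np.where(empty, np.nanmean(grid), grid)   # crude starting guess
    for _ in range(iters):
        padded = np.pad(filled, 1, mode="edge")
        nbr_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                    padded[1:-1, :-2] + padded[1:-1, 2:]) / 4
        filled[empty] = nbr_mean[empty]
    return filled

# toy example: anomalies known on the border and at one interior cell
g = np.full((5, 5), np.nan)
g[0, :] = g[-1, :] = g[:, 0] = g[:, -1] = 0.0
g[2, 2] = 1.0
print(np.round(diffusion_infill(g), 2))
```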

Improving on lat/lon grids

To do this, I need a better grid system than lat/lon, which creates a big problem at the poles, where cells become very small and skewed. The essential requirements of a grid are that you can quickly allocate a set of scattered data points to the right cells, and that you know the area of the cells. There are many other ways of doing this. Lat/lon is based on gridding the sphere as if it were a flat surface, which it very much isn't. You can do much better using a somewhat similar 3D object. Regular (Platonic) polyhedra are attractive, and an icosahedron would be the best of these. But a cube is more familiar, and good enough, so that is what I'll use here. The cube is gridded in the normal way, with a square grid on each face, and the sphere surface is radially projected onto the cube.
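Allocation of a point to a cell is then just a few lines. Here is a minimal sketch of my own (the post's implementation may number faces and cells differently): convert lat/lon to a unit vector, pick the cube face from the largest-magnitude component, project radially onto that face, and read off the square cell indices.

```python
import numpy as np

def cube_cell(lat_deg, lon_deg, n=16):
    """Assign a lat/lon point to a cell of an n x n grid on a cube face.
    Returns (face, i, j) with face in 0..5.  Minimal sketch of the idea;
    not the post's actual code."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    p = np.array([np.cos(lat) * np.cos(lon),
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)])                       # unit vector on the sphere
    axis = int(np.argmax(np.abs(p)))                  # dominant axis picks the face pair
    face = 2 * axis + (0 if p[axis] > 0 else 1)
    q = p / abs(p[axis])                              # radial projection onto the face plane
    u, v = np.delete(q, axis)                         # the two in-face coordinates, in [-1, 1]
    i = min(int((u + 1) / 2 * n), n - 1)
    j = min(int((v + 1) / 2 * n), n - 1)
    return face, i, j

print(cube_cell(-37.8, 145.0))    # e.g. Melbourne
```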

I'll give details, with the infill process, and tests of the improvement of the results, using spherical harmonics, below the fold. And of course there will be the usual WebGL active picture. It will show the cube grid projected on the sphere, and infill for a typical month, with lines to show the infill dependency.

Thursday, September 17, 2015

Land Sea interface in global temperature averaging

Inhomogeneity is a problem when estimating an average. You have to sample carefully. In political polling, for example, men and women tend to think differently. So you need to get the proportions right in the sample (or re-weight).

In a global temperature average, a big inhomogeneity is the land/SST difference. For grid-based estimates, a land mask is often used. This tells how much of each cell is land and how much sea.

I haven't used a land mask with TempLS grid, because I think grid weighting has bigger problems. And with mesh weighting, there isn't any clear way to mask, especially as there is typically a new mesh for each monthly set of stations.

The mitigation is that the effects largely cancel. Some land areas may in effect be represented by sea, but also vice versa. Island temperatures tend to influence the surrounding sea, but then again, they really should. If they were only representative of their own land area, it would generally not be worth including them.

I may still try something more elaborate. But in the meantime, I thought I would test what the current algorithm does, using what mathematicians call a color function. This is 1 (red) for land stations and 0 (blue) for SST, and I plot it as if it were temperature. I hope to see land areas uniformly red, sea areas blue, and an in-between color tracking the shore. Insofar as it fails to track, I hope the failure is balanced, so that neither sea nor land is over-represented on average. The result is an active WebGL plot below the fold.
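In code, the test is just the temperature machinery applied to an indicator. A minimal sketch under my own assumptions (not the actual TempLS code): the global average of the color function under the same weights should come out near the true land fraction (about 0.29) if neither surface type is over-represented.

```python
import numpy as np

def land_influence(is_land, weights):
    """Run the land/sea indicator (1 = land station, 0 = SST) through the
    same weighted average used for temperature.  If the weighting treats
    land and sea fairly, the result should be close to the true land
    fraction of the globe (~0.29).  Sketch only."""
    c = np.asarray(is_land, dtype=float)
    w = np.asarray(weights, dtype=float)
    return np.sum(w * c) / np.sum(w)

# toy check with made-up weights for two land and three SST "stations"
print(land_influence([1, 1, 0, 0, 0], [0.8, 0.6, 1.5, 1.2, 0.9]))   # -> 0.28
```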