Friday, February 16, 2018

GISS January global down 0.11°C from December.

GISS cooled, going from 0.89°C in December to 0.78°C in January (GISS report here). That is a smaller drop than TempLS mesh; I originally reported a 0.2°C fall, but later data changed that to a 0.16°C fall. GISS says that January 2018 was the fifth warmest in the record, and was cooler due to La Niña.
Update. I wrote this post based on the GISS report, as the data file was not posted for some time. It is now there, and I see that the December average has been adjusted up from 0.89°C to 0.91°C. That means the drop Dec-Jan is now 0.13°C. This brings it close to the TempLS change of 0.16°C. TempLS also increased in December due to later data, so in both months GISS and TempLS now track well.

The overall pattern was similar to that in TempLS. Cool in eastern N America and cold in far-eastern Siberia. Very warm in the west of Russia, and in the Arctic. A cool La Niña-ish plume, but warm in the Tasman Sea and nearby land. The W US was warm, more so than TempLS showed. Also the W Russia hotspot extended well into central Europe.

As usual here, I will compare the GISS and previous TempLS plots below the jump.

Wednesday, February 7, 2018

January global surface TempLS down 0.2°C from December.

TempLS mesh anomaly (1961-90 base) fell from 0.762°C in December to 0.565°C in January. This compares with the fall of 0.097°C in the NCEP/NCAR index, and a similar fall (0.16°C) in the RSS LT satellite index.
According to the reanalysis, the cause was a deep dip late in the month, which seems to be extending into February.

The main cool areas were central Asia into Mongolia, and eastern N America. Also central Africa. Warm areas were NW Russia, extending into Europe, and N Canada/Alaska. Also a warm band right across the temperate SH, while the Tasman Sea and surrounds remained very warm.

Here is the temperature map:


Tuesday, February 6, 2018

Weirdness from Armstrong/Green and conservative media.

This was new to me when it popped up at WUWT. But apparently it has already been around for a week or so, at Fox News and The Australian. Economists Scott Armstrong and Kesten Green, who regard themselves (pompously) as authorities on forecasting, made a splash ten years ago when they challenged Al Gore to a bet: that their forecast of global temperature, based on no change, would be better over the next ten years than his. They even set up a website, to track his response, and presumably track the bet. And they got a good run in the conservative media at the time.

Gore never showed any interest - he just said that he doesn't bet. So it was empty noise. However, for some strange reason, there is now a volley of fantasy articles, attributing a bet to Gore (in which he had no say) and declaring him the loser. The terms of the bet are exceedingly arcane. Since they have to make up some warming forecast, they picked an "IPCC" forecast of 0.3°C warming for the decade.

The first thing to say is that the IPCC made no such forecast. They did, in the AR4 SPM that came out at about that time, say this:
For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios.
And it was for surface temperature, not the troposphere measure that Armstrong/Green decided on.

But even that turns out oddly. They nominated UAH as the measure, which actually has a slightly higher trend over the period than the surface measures. Here is the plot:

And they are betting on zero trend (blue). The actual trend was more than double the "IPCC trend". They lose by a mile. But in the WUWT article, at least, they say the OLS trend for the period is 1.53°C/Century. But they prefer another measure, Least Absolute Deviations (LAD).

Well, they would, wouldn't they, because they say that comes out to 1.14°C/Century, and on that basis they declare themselves the winner (since they have pinned Gore with the "IPCC" 3°C/century). Of course it is 2°C/century, not 3, so even on that basis they lose. But since they seem to have miscalculated the OLS trend by a factor of 3, I have no faith in their LAD calculation.
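
To make the two trend measures concrete, here is a minimal JavaScript sketch, with invented numbers rather than their data. OLS minimizes the sum of squared residuals and has a closed form; LAD minimizes the sum of absolute residuals, done here by scanning trial slopes, using the fact that for a fixed slope the best intercept is the median of the residuals.

```js
// Sketch: OLS vs LAD trend fitting. Data below is invented
// for illustration, not the UAH series.

function olsSlope(x, y) {
  const n = x.length;
  const mx = x.reduce((s, v) => s + v, 0) / n;
  const my = y.reduce((s, v) => s + v, 0) / n;
  let sxy = 0, sxx = 0;
  for (let i = 0; i < n; i++) {
    sxy += (x[i] - mx) * (y[i] - my);
    sxx += (x[i] - mx) ** 2;
  }
  return sxy / sxx;
}

function median(a) {
  const s = [...a].sort((p, q) => p - q);
  const m = s.length >> 1;
  return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2;
}

// LAD: for a trial slope b, the best intercept is median(y - b*x);
// scan slopes on a grid and keep the least absolute deviation.
function ladSlope(x, y, bLo, bHi, steps = 2000) {
  let best = bLo, bestSum = Infinity;
  for (let k = 0; k <= steps; k++) {
    const b = bLo + (k * (bHi - bLo)) / steps;
    const a = median(y.map((yi, i) => yi - b * x[i]));
    const sum = y.reduce((s, yi, i) => s + Math.abs(yi - a - b * x[i]), 0);
    if (sum < bestSum) { bestSum = sum; best = b; }
  }
  return best;
}

// time in years, anomalies in °C (invented numbers)
const t = [0, 1, 2, 3, 4, 5];
const anom = [0.10, 0.25, 0.15, 0.30, 0.45, 0.40];
console.log(olsSlope(t, anom), ladSlope(t, anom, -1, 1));
```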

On the plot, I've marked a line with slope 1.14°C/century in green, and the Armstrong/Green zero forecast in blue. They all, as fitted lines should, pass through the mean of x and y. See if you think the "LAD" line (1.14) is a better fit. Or whether the no trend forecast is the winner.

Scott Armstrong is a professor of marketing. Kesten Green is a lecturer in Commerce. Neither seems to know much about what the IPCC actually says. And they seem pretty weak in statistics.

Saturday, February 3, 2018

January NCEP/NCAR global anomaly down by 0.097°C from December

In the Moyhu NCEP/NCAR index, the monthly reanalysis anomaly average dropped from 0.328°C in December to 0.231°C in January, 2018. There was a big dip late in the month; there are some signs that it is ending. The month was close to November and June, 2017, but a little lower than both. You have to go back to July 2015 (0.164°C) to find a colder month.

As we have heard quite a lot, it was cold in eastern N America (but warm in the west). Also central Asia and the Sahara/Sahel. It was warm in the Arctic and much of Europe, and the sea around New Zealand was still warm. Cool in the tropical E Pacific (ENSO).


Housekeeping post.

A couple of housekeeping issues. One is an apparent problem with Blogger. I have posted the January results for NCEP/NCAR, but they don't show on the actual blog page, or in the archive. The post does have a place on the web here, but you can't get to it in the normal way, and it hasn't generated the usual RSS notifications. It may be that Blogger is working on this, because for some time I could see and edit the post on the dashboard where all posts are listed, but now it has disappeared.

I'm posting this partly as an experiment, to see if it suffers the same fate.
Update - it worked. I'll try reposting the earlier post.


The other happening, which I have been spending a bit of time on recently, is the HiRes SST page, and its companion movie page. This stopped updating at the end of 2017. End-of-year glitches in my automated system are not uncommon, but this one turned out to be at the source. NOAA has reorganised its system, using NetCDF 4 and different directories. They also have a new data set which I'll look at. But anyway, the program is over five years old now, and a bit slow, so I tried to improve it. That is never smooth, but I think it is OK now.

Friday, January 26, 2018

A new gallery of interactive graphics at Moyhu.

There was an old page, linked at the right, on interactive graphics at Moyhu. I have replaced it with a new gallery, which is much more comprehensive. Maybe too comprehensive - I have tried to include every instance to end 2017. It is a tableau of images, and I have placed at the top what I think is a representative selection. These are marked with red borders. Each cell has a passive image, a date, a link which will take you to the original post or page containing the graphic, and a button imploring you to Try it!. This takes you to a rearranged version of the original post with the graphic at the head; it is an active version of what you see in the image.

The radio buttons on the left allow you to choose categories, generally based on the technology used. They are explained below the tableau, and I will expand on them here.

In this post, I want to review the overall progress of interactive graphics here. Interactivity requires at least the use of JavaScript, so that pressing buttons, dragging with the mouse and so on will modify what you see. This can be augmented with two main technologies: the drawing canvas, which was formally standardised with HTML 5 in 2014 but usable earlier, and WebGL, which is based on the old Silicon Graphics GL of about thirty years ago, which became OpenGL and is now built into browsers as WebGL.

It is important sometimes to remember that JavaScript activity is embedded in the HTML scripts that a browser downloads, and is entirely implemented on the user's machine. There is no Moyhu server support. This means all the data is downloaded too. There are some security restrictions on this privilege, which adds to the interest of JavaScript programming. An underlying technology is the programming language R. I typically sort out the data for plotting or whatever at this level, and the output is a JavaScript file, often defining masses of data. JavaScript itself does not have file i/o; you have to express input data as code.
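
So, for example, the R step might write out a file of roughly this shape (names and numbers here are invented for illustration), which the page then loads with an ordinary script tag:

```js
// moyhu_data.js -- a data file of roughly this shape is what the
// R processing step writes out (names and numbers invented):
var tempData = {
  months: ["2017-11", "2017-12", "2018-01"],
  anomaly: [0.673, 0.762, 0.565]
};
// The page pulls it in with an ordinary script tag,
//   <script src="moyhu_data.js"></script>
// after which tempData is a global that the plotting code can use.
```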

Much of my experimenting with graphics has been motivated by a wish to present data plotted on a sphere, to avoid projection distortion. JS makes it possible to do this and allow the user to view the sphere from various directions.

I have recently revamped the Moyhu topic index, and some of the topics give a more comprehensive list of links.

JavaScript and interactivity

My first active graphic was part of a discussion on how to make spaghetti graphs more readable. My first idea was an animated GIF, which overlaid black outlines of each strand in sequence. But a reader TheFordPrefect recommended Javascript and sent me an example made by Dreamweaver.So I learnt some JS, and made a plot of proxy reconstructions here. There was a legend where you could roll the cursor over a name in the legend, and a black overlay of the strand would appear. This is the basic idea that I have set as a separate category - active viewers - described further below.
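
The original overlaid pre-drawn images, since the graphics of that era were supplied as images. But the rollover idea is easy to sketch in today's terms with a canvas. Here proxies is an assumed array of {name, color, points} objects, and the element ids are hypothetical:

```js
// Sketch of the rollover idea: each legend entry highlights one strand.
const canvas = document.getElementById("plot");   // assumed canvas element
const ctx = canvas.getContext("2d");

function drawStrand(series, color, width) {
  ctx.strokeStyle = color;
  ctx.lineWidth = width;
  ctx.beginPath();
  series.points.forEach(([x, y], i) =>
    i ? ctx.lineTo(x, y) : ctx.moveTo(x, y));
  ctx.stroke();
}

function drawAll(highlight) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  proxies.forEach(p => drawStrand(p, p.color, 1));   // the spaghetti
  if (highlight) drawStrand(highlight, "black", 2);  // black overlay
}

// One legend entry per proxy, wired to the redraw:
proxies.forEach(p => {
  const item = document.createElement("span");
  item.textContent = p.name;
  item.onmouseover = () => drawAll(p);
  item.onmouseout = () => drawAll(null);
  document.getElementById("legend").appendChild(item);
});
```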

A common use of JS and buttons was simply to compress information. A lot of images could be accessed at one location in the page, with a choosing mechanism. Alternatively a huge number of links can be sorted into manageable pages with button clicks.
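
A minimal sketch of that choosing mechanism (element ids and image names invented):

```js
// Sketch: one <img> plus buttons that swap its src attribute.
const views = { GISS: "giss_jan.png", TempLS: "templs_jan.png" };
const img = document.getElementById("mapImage");
for (const key of Object.keys(views)) {
  const b = document.createElement("button");
  b.textContent = key;
  b.onclick = () => { img.src = views[key]; };  // swap the displayed image
  document.getElementById("chooser").appendChild(b);
}
```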

JS Globe

That set me up for the first presentation of a globe plot, as here. In R I made 2D projections from a number of viewpoints of a shaded plot of some variable, usually temperature. The viewpoints were usually from either the 8 corners of a cube containing the sphere, or the 6 face centres. There was a panel of squares you could click to switch views. A lot of my JS graphics involves locating a click point and responding in some way. Dragging is a variant of this.
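
A minimal sketch of that click-to-switch mechanism (element ids, file names and the grid layout are hypothetical):

```js
// Sketch: locate a click on a panel of squares and switch to the
// matching pre-rendered projection image.
const panel = document.getElementById("viewPanel"); // e.g. a 150x100 px grid
const cell = 50;                                    // square size in px
const viewImgs = ["view0.png", "view1.png", "view2.png",
                  "view3.png", "view4.png", "view5.png"];

panel.addEventListener("click", (e) => {
  const r = panel.getBoundingClientRect();
  const col = Math.floor((e.clientX - r.left) / cell);
  const row = Math.floor((e.clientY - r.top) / cell);
  const k = row * 3 + col;                          // 2 rows x 3 cols here
  if (k >= 0 && k < viewImgs.length)
    document.getElementById("globe").src = viewImgs[k];
});
```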

Google Earth and KML/KMZ

This was rather a dead end, but I did quite a lot of things. KML is a control language for Google Earth. Here is a typical application. I haven't shown the graphics here because, well, they aren't really mine. GE is good for detailed location, which is relevant to individual stations, but not to temperature plots etc. I found that the control capabilities, based on folders, were rather limited, so I switched to Google Maps, which offers JS control.

Google Maps

The general idea and working environment is described here. Google provides an API which allows you to embed a map in your page, and gives ample facilities to control it with JS. Again the main use is for showing land stations, where the lat/lon are fairly precisely known. I typically select subsets of the stations, colored according to some criterion. Clicking on them brings up information including the name and often links and history. The selection table is on the right, and can allow quite complex logic. Recent cases show the total in each color. There is, for example, a maintained page which is described here.
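
A minimal sketch with the classic Google Maps JavaScript API (the page must load the API script with a key, which conventionally calls initMap; the stations array and its fields are hypothetical):

```js
// Sketch: station markers on an embedded map, with click-for-info.
function initMap() {
  const map = new google.maps.Map(document.getElementById("map"), {
    center: { lat: 0, lng: 0 },
    zoom: 2,
  });
  stations.forEach((s) => {       // s: {name, lat, lon, info} assumed
    const marker = new google.maps.Marker({
      position: { lat: s.lat, lng: s.lon },
      map: map,
      title: s.name,
    });
    const iw = new google.maps.InfoWindow({ content: s.info });
    marker.addListener("click", () => iw.open(map, marker));
  });
}
```

Selection by color then amounts to filtering the stations array by the chosen criterion and recreating the markers.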

Active viewers

This is just a subset of JS-active graphics, in which a spaghetti plot is shown, with a large range of strands, usually proxies. More use of JS is made in that when a strand is marked, a table of information is shown, and a marker shows where it is on a map. A typical example is here.

Trend viewers

This is another JS subset. A colorful triangle (prepared in R) is shown in which each dot represents the trend over some period of months. There is a coupled time series graph on the right, with two markers representing the beginning and end of the trend period. You can choose a period either by clicking on the triangle or by moving the markers directly. For each chosen period, numerical data is displayed. Buttons allow you to choose the overall display period, the dataset (from one of many), and possibly a different kind of display which says something about the significance or confidence intervals. There is a maintained page here.

XMLHttpRequest

Normally with JS you have to load the data in advance, which is a nuisance if you want to accommodate a range of user wishes. XMLHttpRequest is a workaround that lets you download data files when the user asks for them. There are security restrictions. But it vastly increases the amount of data that can be supplied. It isn't itself a graphics technology, but it enables some of my larger apps.
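
A minimal sketch of on-demand loading (the real pages fetch JS data files; here JSON is used for simplicity, and the URL and redraw function are hypothetical):

```js
// Sketch: fetch a data file only when the user asks for it.
// Same-origin security restrictions apply.
function loadData(url, onReady) {
  const req = new XMLHttpRequest();
  req.open("GET", url);
  req.onload = () => {
    if (req.status === 200) onReady(JSON.parse(req.responseText));
    else console.error("load failed:", req.status);
  };
  req.send();
}

// e.g. only pull a decade of data when that decade is chosen
loadData("sst_1990s.json", (data) => redraw(data));
```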

HTML 5

HTML 5, when it came out, included a lot of new elements, but the one particularly useful to me was the canvas element. All the graphics described so far had to be pre-drawn using R and supplied as images. With the canvas, we can draw from numerical data, in response to user input. Further, there is a clunky but not bad capability for shading triangles in response to vertex values.

One liberation is that graphs no longer have to have a fixed range of x or y. The user can zoom or extend, if the numerical information is there. My first big use of this was in the climate plotter. This is still kept as a page, although the information update has been spotty. You can choose from a large number of annual sets of climate data. Combinations can be displayed and regressions (including polynomial) performed. Various kinds of arithmetic can be done, but most importantly, the axes are under user control. You can translate curves independently, and stretch in the x or y directions (with axes adapting).
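
A sketch of the underlying mechanism: keep the axis limits as state in data coordinates, and redraw the canvas through a data-to-pixel mapping whenever they change (names here are illustrative, not the climate plotter's actual code):

```js
// Sketch: redraw a series under user-controlled axis limits.
const cv = document.getElementById("plot");
const ctx = cv.getContext("2d");
let xlim = [1880, 2018], ylim = [-0.5, 1.0];  // data-space window

function toPx(x, y) {            // data coordinates -> pixels
  return [
    ((x - xlim[0]) / (xlim[1] - xlim[0])) * cv.width,
    cv.height - ((y - ylim[0]) / (ylim[1] - ylim[0])) * cv.height,
  ];
}

function redraw(series) {        // series: [[year, anomaly], ...]
  ctx.clearRect(0, 0, cv.width, cv.height);
  ctx.beginPath();
  series.forEach(([x, y], i) => {
    const [px, py] = toPx(x, y);
    i ? ctx.lineTo(px, py) : ctx.moveTo(px, py);
  });
  ctx.stroke();
}

// zooming is then just a change of window, followed by redraw():
function zoomX(factor) {
  const mid = (xlim[0] + xlim[1]) / 2, half = (xlim[1] - xlim[0]) / 2;
  xlim = [mid - half * factor, mid + half * factor];
}
```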

A similar application is superimposing on an image. I rather frequently review the progress of Hansen's 1988 predictions, eg here in 2016. This makes a canvas image from his paper, and the user can choose various datasets to superimpose, and even vary offsets if desired.

Another liberation was in viewing Earth plots. No longer need there be fixed views pre-calculated. The canvas can show shading in response to arbitrary control (there is maths involved). An early version is here. The use of shading improved over time.
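
The maths involved is essentially a change of basis. Here is a sketch of the projection step, assuming an orthographic view; the function name and conventions are mine for illustration, not from the original posts:

```js
// Sketch: orthographic projection of a surface point at (lat, lon)
// as seen from a view direction (viewLat, viewLon), all in degrees.
function project(lat, lon, viewLat, viewLon) {
  const d = Math.PI / 180;
  const p = [Math.cos(lat * d) * Math.cos(lon * d),
             Math.cos(lat * d) * Math.sin(lon * d),
             Math.sin(lat * d)];                   // unit vector of point
  const w = [Math.cos(viewLat * d) * Math.cos(viewLon * d),
             Math.cos(viewLat * d) * Math.sin(viewLon * d),
             Math.sin(viewLat * d)];               // towards the eye
  const u = [-Math.sin(viewLon * d), Math.cos(viewLon * d), 0]; // east
  const v = [-Math.sin(viewLat * d) * Math.cos(viewLon * d),
             -Math.sin(viewLat * d) * Math.sin(viewLon * d),
             Math.cos(viewLat * d)];               // north on screen
  const dot = (a) => a[0] * p[0] + a[1] * p[1] + a[2] * p[2];
  if (dot(w) < 0) return null;    // far side of the globe; not drawn
  return [dot(u), dot(v)];        // screen coordinates in [-1, 1]
}
```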

Drag plots

This is an extension, using HTML 5, of plotting with variable axes. You can translate just by dragging, or, by dragging just behind each axis, you can shrink or expand it. And there can be the usual selection facilities etc. I maintain such a graph for surface indices in the latest data page.
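
A sketch of the dragging part, building on the same idea of axis limits as state (illustrative names again; redraw() is assumed to be a canvas routine like the one sketched above):

```js
// Sketch: dragging the plot area pans it by shifting the x window.
let xlim = [1979, 2018];          // data-space window
let dragging = false, lastX = 0;
const area = document.getElementById("plotArea");

area.onmousedown = (e) => { dragging = true; lastX = e.clientX; };
window.onmouseup = () => { dragging = false; };
window.onmousemove = (e) => {
  if (!dragging) return;
  const dx = e.clientX - lastX;   // pixels moved since last event
  lastX = e.clientX;
  const perPx = (xlim[1] - xlim[0]) / area.clientWidth;
  xlim = [xlim[0] - dx * perPx, xlim[1] - dx * perPx];  // translate
  redraw();                       // repaint with the new window
};
```

Rescaling an axis is the same idea, except the drag distance changes the width of the window about a fixed point instead of translating it.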

WebGL

Most recent graphics here have been done with WebGL, the origins of which are described above. WebGL is the staple of gaming, and so ample resources are provided. For fast-moving graphics on screen, it provides access to the GPU, for highly parallel operation. It is fully 3D, so it keeps track of what is obscuring what. The shading is excellent. It has elaborate capabilities for lighting and perspective, but I don't use those much. And of course it is under full JS and mouse control. In my applications there is a fixed centre point about which everything can be revolved, and a fixed viewpoint at infinity.
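
For readers unfamiliar with WebGL, here is a minimal self-contained sketch of the core idea behind the shaded plots: per-vertex values interpolated across a triangle by the GPU. This is bare WebGL, not the Moyhu facility's own interface, and shader error checking is omitted for brevity:

```js
// Minimal WebGL sketch: one triangle with per-vertex values,
// shaded by interpolation (the basis of value-shaded meshes).
const gl = document.getElementById("gl").getContext("webgl");

const vsSrc = `
  attribute vec2 pos;
  attribute float val;
  varying float v;
  void main() {
    v = val;
    gl_Position = vec4(pos, 0.0, 1.0);
  }`;
const fsSrc = `
  precision mediump float;
  varying float v;
  void main() {                   // blue (low) to red (high)
    gl_FragColor = vec4(v, 0.0, 1.0 - v, 1.0);
  }`;

function compile(type, src) {     // error checks omitted
  const s = gl.createShader(type);
  gl.shaderSource(s, src);
  gl.compileShader(s);
  return s;
}
const prog = gl.createProgram();
gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSrc));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSrc));
gl.linkProgram(prog);
gl.useProgram(prog);

// x, y, value for each of three vertices
const verts = new Float32Array([
  -0.8, -0.8, 0.0,
   0.8, -0.8, 0.5,
   0.0,  0.8, 1.0,
]);
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW);

const posLoc = gl.getAttribLocation(prog, "pos");
const valLoc = gl.getAttribLocation(prog, "val");
gl.enableVertexAttribArray(posLoc);
gl.enableVertexAttribArray(valLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 12, 0); // 12-byte stride
gl.vertexAttribPointer(valLoc, 1, gl.FLOAT, false, 12, 8);

gl.drawArrays(gl.TRIANGLES, 0, 3);
```

A real globe plot is the same machinery with hundreds of thousands of triangles and a rotation matrix applied in the vertex shader.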

A great thing about WebGL is that it deals in objects. In HTML5, you can't unravel a 2D canvas, and it is hard to selectively erase. But in WebGL you can just ask for an object to disappear, or move it, and you see what is underneath.

My first effort is here. But it got better. One that is still one of my favourites is the maintained high resolution SST page. At the max resolution of 1/4°, that is a lot of triangles (2,073,600). But WebGL handles it fairly well, and it really can tell you more if you zoom. The app draws together a lot of technology: interactive downloads are essential, and since there are about 25 years of data, a lot of it daily, just organising this is a stretch. Like all maintained pages, it downloads data and processes it (in R) every night.

But the recent outpouring of WebGL graphics which dominates the gallery is due to the Moyhu WebGL facility. This hides all the parallel programming etc, and just takes numeric data as input: usually a vector of nodes, linkages (for triangles etc) and values which will become shading. Mass production. A non-spherical example is the Lorenz butterfly.

Animation

This isn't a single technology; in fact I started out with animated GIFs. I now tend to use it where video compression is available, as with MPEG. I typically use FFmpeg with R to string together sequences of PNG or JPEG images. The classic example is a maintained page which is an offshoot of the HiRes SST page. It shows daily or 4-day sequences for regions like the ENSO Pacific plumes or the poles (where it is good for tracking sea ice). Another interesting experiment was the 2012 hurricane season, where I show the hurricanes moving against an SST background. In many cases it clearly shows the tracks of cooling.

An aspiration is to provide a 3D movie, so you can rotate a world as it goes through a temperature or whatever sequence. The problem is that you lose video compression, so it is hard to maintain speed. Here I went back to the JS globe idea above, using stored projections of 6-hour relative humidity plots. It's interesting, but hard work.

Another kind of animation is just zipping through WebGL plots. It's fast enough. Here is a movie display of the various spherical harmonics which I use a lot for Fourier-like representation on the sphere, with various controls. Fun version here.

Wednesday, January 24, 2018

Satellite temperatures are adjusted much more than surface.

I continually come across claims that surface temperatures should be ignored in favour of satellite troposphere temperatures, because the surface temperatures are adjusted. It's an odd argument to conduct, because while at least there is a recognised surface temperature reading that can be adjusted, satellite temperatures are the product of a long and complex calculation sequence, in the course of which many judgement calls are made. Here, for example, is Roy Spencer's (+Christy + Braswell) explanation of the changes that were made in going to UAH version 6. He describes the need for it thus:
One might ask, Why do the satellite data have to be adjusted at all? If we had satellite instruments that (1) had rock-stable calibration, (2) lasted for many decades without any channel failures, and (3) were carried on satellites whose orbits did not change over time, then the satellite data could be processed without adjustment. But none of these things are true.
...
After 25 years of producing the UAH datasets, the reasons for reprocessing are many. For example, years ago we could use certain AMSU-carrying satellites which minimized the effect of diurnal drift, which we did not explicitly correct for. That is no longer possible, and an explicit correction for diurnal drift is now necessary. The correction for diurnal drift is difficult to do well, and we have been committed to it being empirically–based, partly to provide an alternative to the RSS satellite dataset which uses a climate model for the diurnal drift adjustment.
...
So instead of continually making small adjustments, as in the surface dataset, they produce new versions in which these decisions are revisited and often radically revised. The changes are much larger in overall effect than the changes to individual surface station averages.

Two years ago, I wrote a post about the changes that happened when Version 5.6 of the UAH index went to version 6. This decreased trends a lot, and so was popular with contrarians. I was prompted to write by Roy Spencer's claim:
"Of course, everyone has their opinions regarding how good the thermometer temperature trends are, with periodic adjustments that almost always make the present warmer or the past colder."
So I compared the change in TLT (lower troposphere) going from V5.6 to V6.0 with the cumulative effect of changes in GISS, using archived time series from 2005 and 2011 against the then-current 2015 GISS. GISS was far more stable than UAH, even though the period of changes was much longer.

Meanwhile, RSS also updated their troposphere data, going from V3.3 to V4. RSS had been a favourite of contrarians, because it had a much lower trend than UAH. Roy Spencer noted this, saying:
"But, until the discrepancy [in trend with UAH higher]] is resolved to everyone’s satisfaction, those of you who REALLY REALLY need the global temperature record to show as little warming as possible might want to consider jumping ship, and switch from the UAH to RSS dataset."
They needed little persuasion. Lord Monckton wrote a monthly series at WUWT about the length of the "Pause", which he defined as the maximal period of zero gradient of RSS TLT, starting about 1997. He scorned UAH then, as it was similar to the surface data. But RSS V4 turned that around too, showing much greater trends historically, and severely damaging the "Pause". I commented on some of this here, before any of the new versions.

Lord Monckton did not like it. His tamper tantrum is here. Any change which increases the trend is "tampering". Why going from V3.3 to V4 is tampering, but going from 3.2 to 3.3 or the earlier steps is not, was never explained.

Anyway, I thought it would be worth updating my graphs of Dec 2015 to include the changes to RSS. In fact, the two indices neatly changed places, so that RSS V4 is close to UAH V5.6, and UAH V6 is close to RSS V3.3. So in both cases the change is large.

An amusing sideshow of the more satisfactory UAH V6 is that surface datasets were being accused of fraud for differing from it - eg NOAA’s Fake SST’s Not Supported By Atmospheric Data. But the reviled discrepancies were not there with V5.6, which was far closer to the surface data than to V6. So was V5.6 also "fake"?

Anyway, here are the plots. I'm using the same old versions of GISS as in the previous post, sourced as described there. They can be got at the Wayback Machine. I convert everything to the same anomaly base, which this time is 1979-2008. I chose that because there isn't quite a 30-year span common to GISS2005 and the satellite sets, but this reduces the gap to three years. So I set the other sets to zero average on this span; then I make GISS2005 match the rebased GISS_current on its range.
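
The rebasing step is simple enough to sketch (in JavaScript for consistency with the other examples here, though my actual processing is in R; names are illustrative):

```js
// Sketch: shift a series so it averages zero over a common base
// period (here 1979-2008), before plotting or differencing.
function rebase(series, startYear, endYear) {
  // series: array of {year, value}, annual for simplicity
  const base = series.filter(p => p.year >= startYear && p.year <= endYear);
  const mean = base.reduce((s, p) => s + p.value, 0) / base.length;
  return series.map(p => ({ year: p.year, value: p.value - mean }));
}

// e.g. put each satellite variant on the 1979-2008 base
const uah6 = rebase(uah6Raw, 1979, 2008);   // uah6Raw assumed loaded
```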

First, as before, I just plot the time series. I use reddish colors for RSS versions, bluish for UAH, and greenish for GISS. Because the curves are tangled, there are four different color views of the same plot, which you can access with the buttons below. The text and content are the same for each, but transparency is used so that only one group stands out. Here is the plot:



This plot is good for a general appreciation of the deviations. The GISS variants bunch together, and the upper sat variants, UAH V5.6 and RSS V4.0, tend to follow them. The other pairing, RSS V3.3 and UAH V6, is the outlier, deviating rather markedly below from about 2008 onwards.

The values relative to each other are easier to see if they are expressed as differences from a common value, and for this I chose current GISS. In principle any value will do, but because the satellites respond with big spikes for El Niño, this would be inverted into a negative spike for GISS, which would be confusing. So I'm using the same colors, and choice of variants - GISS shows as the zero line:



Next I plot the difference from one version to the next - ie the "adjustment". In each case, it is new minus old. Again you can use the buttons to cycle through different colors.



This shows most clearly what happened in the recent changes. The trend of UAH went way down, and the trend of RSS went way up. These changes dwarf the minor and fairly trend-free changes to GISS. Interestingly, especially for RSS, most of the change happens post-2000.

Of course, GISS has more changes going further back. But satellites do not have an advantage there. They have no data at all.