
Saturday, November 30, 2019

ClimateGate's "Harry_Read_Me.txt" file is a smoking gun of science fraud

" … [My attempted corrections] will allow bad databases to pass unnoticed, and good databases to become bad, but I really don’t think people care enough to fix ’em, and it’s the main reason the project is nearly a year late.”
-- Dr. Ian (Harry) Harris, of the CRU


NOTE:
This report is based 
on Harry's direct quotes,
and how other people 
-- often computer experts -- 
interpreted the Harry 
Read Me file in 2009.

Harry's frustrations at work
are expressed in his own words,
never intended to be read
by anyone else.

His words are an indirect 
way of judging the quality 
of temperature data, and 
one temperature database
that he worked on.

The honesty of his notes 
is hard to judge, since no one 
officially interviewed Harry. 

But I really
liked the fact 
that his notes were 
meant to be private
-- they seem totally 
uncensored.

Harry writes things 
an employee might 
be thinking that would
be valuable for his
superiors to know,
but would never be
told to them. 

(That also happened 
in the large corporation
I worked for, for over 
27 years -- it took
anonymous paper
questionnaires
to get that kind of
raw information
from engineers, and 
even then, they very
rarely left a phone 
number for follow-up
questions.)

Harry's 
"private" notes 
were hacked, 
meaning there was 
absolutely no time 
for him to delete words 
he didn't want others 
to read ("sanitize" them).

This Harry Read Me article 
contains some of the same
quotes as my first article,
plus likely explanations 
of what Harry's words 
actually meant.

I've borrowed the 
explanations from bloggers
who had analyzed the notes 
back in 2009.

Mainly from people 
who said they were 
computer programmers. 



CONCLUSION:
We have uncensored
documentation of a 
three-year effort by
a Climatic Research 
Unit (CRU) scientist /
computer programmer.

He had complete
access to all the data,
access to all the code, 
access to all the people 
who developed the code, 
and access to the 
climate models.

Yet after three 
years of very
frustrating work,
he could not fix 
one CRU database,
so that it would 
merely duplicate 
CRU’s previously 
published global
climate numbers. 

If Harry could not 
do it, that means 
CRU's published
climate data cannot 
be reproduced 
(even by CRU themselves), 
so there is no point 
in anyone else trying !

Or taking their global
average surface 
temperature seriously !

I would think a database
should be fixed BEFORE
global average temperatures
are calculated and reported, 
but perhaps I'm too logical 
for goobermint work ?


THE  BIG  PICTURE:
The Harry Read Me.txt
file is 274 pages of notes, 
mainly computer code.

It describes the efforts 
of a climatologist / 
computer programmer 
at the Climatic Research 
Unit (CRU) of the
University of East Anglia, 
in the UK. 

Harry was working on 
a huge statistical database 
              (11,000 files) 
of important climate data, 
between 2006 and 2009.

Historical temperature 
data are too important 
to be left in the hands 
of people who have 
been so sloppy 
with the collection 
and management
of the data.

It appears Harry's task 
was to "bend" data 
until they complied 
with previously published
global climate numbers.


It's possible Harry 
doesn't matter at all,
because historical 
temperature data 
get repeatedly 
"adjusted" to create 
more global warming 
out of thin air.

What difference does 
raw data quality make, 
if there are repeated
and arbitrary 
"adjustments"
almost always 
increasing the rate 
of global warming ?

"Inconvenient" data 
are arbitrarily changed, 
sometimes 50 to 100 
years after the fact.

For one example:
The global cooling 
from 1940 to 1975, 
as CO2 levels rose 
(rising CO2 should have 
caused global warming), 
convinced 
a few scientists 
in the mid-1970s 
that a new ice age 
was coming.   

Today, if the 
very same scientists 
looked back at 1940 
through 1975 data,
one dataset would show 
NO cooling, and others 
would show the previously 
reported cooling cut by 
half to two thirds !

Reason:
Because government 
bureaucrats can do 
anything they want 
with temperature data
to promote their coming 
global warming "crisis".

And they definitely don't 
want to show 35 years,
from 1940 through 1975,  
with global cooling, while 
CO2 levels increased -- 
that's an inconvenient 
relationship of CO2 and
the global average 
temperature.


SUMMARY:
The HARRY_READ_ME.txt file 
               (700kB) 
is a three-year journal 
of a programmer describing 
everything he tried to do
with data, and models, 
in an effort to reproduce 
results CRU 
had previously published. 

Dr. Ian (Harry) Harris, 
of the CRU, 
had the specialty 
of "dendrochronology" 
(and data manipulation !).

Comments in his file 
make it clear “Harry” 
tried for three years 
to recreate CRU’s 
published results, 
but he failed.

The hacked ClimateGate 
emails confirmed what 
we have known all along. 

"Climate" scientists are 
strongly biased to present
a "consistent" narrative of
a coming global warming
crisis, and will lie, mislead,
and generally cut corners 
in an effort to better support 
their computer model-based
global warming predictions.

No one wants
their predictions 
to be wrong !

Historical climate data 
appear to be a mess.

The coming climate 
change crisis is 
clearly not about 
good science.

It's about more
money to spend, 
and more power 
for leftist politicians, 
... and permanent
job security for 
their government 
bureaucrat 
"scientists".

Politicians want
to remake (blow up) 
the U.S. economy 
with a Green New Deal, 
lowering our 
standard of living, 
at great expense, 
to "fix" the climate !

Their scary predictions 
are based on historical 
surface temperature data 
of dubious quality, and a 
never-changing 
CO2-temperature 
theory from the 1970s
that produces
ALWAYS  WRONG 
predictions of the 
future climate.

Are they insane ?

No, they'd love to have 
more power to tell 
everyone else how to 
live their lives.

Leftists love to do that
 ... which they now justify
by (falsely) claiming 
they are trying to save 
the planet for the children 
(a planet that already has
a great climate, and 
doesn't need saving).

The only insane people 
are those who sit back
quietly, and let this 
science fraud continue !

This climate science blog, 
since 2014, is my attempt 
to NOT sit back.

Doing so would allow 
leftist politicians to seize 
more and more power, 
using their wild-guess,
always-wrong, scary
climate predictions 
(science fraud) 
to virtue signal.

Our U.S. government
is already too powerful.


DETAILS:
The consensus 
in 2009 was that
Harry was handed 
a mess to sort out, 
so his comments 
reveal
a lot about 
the state of the 
CRU's HadCRUT 
temperature data. 

CRU temperature data
are supposed to be the 
gold standard, yet earlier 
paper records are “missing”, 
and the computer records 
are not high quality. 

The Read Me file 
is a personal diary 
of Harry's frustrations
that doesn't inspire 
any confidence in
CRU data publications.

The computer 
coding, along with
the programmer's 
apparently 
unsuccessful 
efforts to 
complete the project, 
involves data that are 
the foundation of 
climate alarmism.

The database 
included the
temperature data 
from hundreds of 
weather stations 
around the world,  
precipitation 
measurements 
from 1901 to 2006, 
sun / cloud computer 
simulations, etc.

The CRU at the 
University of East Anglia 
had been considered 
by many to be 
the world's leading 
climate research agency. 

The CRU claimed
to have the world's 
largest temperature 
dataset.

CRU data were 
incorporated into 
the United Nations 
Intergovernmental 
Panel on Climate
Change's 2007 
report. 

The 2007 IPCC report 
is what the U.S.
Environmental 
Protection Agency 
acknowledged 
'relied on most heavily' 
when concluding 
that carbon dioxide 
emissions were 
a pollutant that
endangered public 
health, and should 
be regulated.


The programmer's quotes 
included here are only 
a fraction of what he wrote. 

What a "Harry 
Read Me" 
comment
REALLY meant 
can only be 
interpreted 
by others.

But we do know 
Harry was not happy 
with the quality of the CRU 
climate data, and the quality 
of the database used to store 
and manipulate the numbers.

Even if the source raw data 
had been "perfect", 
the quality of the database 
could cause serious problems, 
such as:

-- making individual 
raw data numbers 
impossible to track 
and verify, 

-- arbitrarily "adjusting" 
raw numbers, 

-- filling in missing data 
with inaccurate guesses, 

-- and combining numbers 
in a way that produces 
an inaccurate global 
average temperature 
(a sketch of this last 
point follows).
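
To illustrate that last point: grid cells near the poles cover far less area than cells near the equator, so a global average has to weight each cell by the cosine of its latitude. Here is a minimal Python sketch, with made-up numbers (not CRU's actual data or code):

import math

# Each grid cell is (latitude in degrees, temperature in C).
# Toy numbers, for illustration only.
cells = [(0.0, 25.0), (80.0, -20.0)]

def global_mean(cells):
    # Weight each lat/lon cell by cos(latitude), its relative area.
    weighted_sum = 0.0
    weight_total = 0.0
    for lat, temp in cells:
        w = math.cos(math.radians(lat))
        weighted_sum += w * temp
        weight_total += w
    return weighted_sum / weight_total

print(global_mean(cells))                      # about 18.3 C (area-weighted)
print(sum(t for _, t in cells) / len(cells))   # 2.5 C (naive mean, wrong)

A database that loses track of where each number came from makes even this simple weighting impossible to verify.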


QUOTES:
Here are some 
of the most 
popular quotes
 -- many were included 
in my first 
"Harry" article, 
but without the
page numbers
shown here:


- "But what are all those monthly files? DON'T KNOW, UNDOCUMENTED. Wherever I look, there are data files, no info about what they are other than their names. And that's useless ..." (Page 17)



- "It's botch after botch after botch." (18)



- "The biggest immediate problem was the loss of an hour's edits to the program, when the network died ... no explanation from anyone, I hope it's not a return to last year's troubles ... This surely is the worst project I've ever attempted. Eeeek." (31)



- "Oh, GOD, if I could start this project again and actually argue the case for junking the inherited program suite." (37)




- "... this should all have been rewritten from scratch a year ago!" (45)



- "Am I the first person to attempt to get the CRU databases in working order?!!" (47)



- "As far as I can see, this renders the (weather) station counts totally meaningless." (57)



- "COBAR AIRPORT AWS (data from an Australian weather station) cannot start in 1962, it didn't open until 1993!" (71)



- "What the hell is supposed to happen here? Oh yeah -- there is no 'supposed,' I can make it up. So I have : - )" (98)



- "You can't imagine what this has cost me -- to actually allow the operator to assign false WMO (World Meteorological Organization) codes!! But what else is there in such situations? Especially when dealing with a 'Master' database of dubious provenance ..." (98)



- "So with a somewhat cynical shrug, I added the nuclear option -- to match every WMO possible, and turn the rest into new stations ... In other words what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad ..." (98-9)



- "OH F--- THIS. It's Sunday evening, I've worked all weekend, and just when I thought it was done, I'm hitting yet another problem that's based on the hopeless state of our databases." (241).



- "This whole project is SUCH A MESS ..." (266)




Ian "Harry" Harris appeared 
to be creating a different data set 
of temperature readings, 
that excluded Alpha values 
-- eliminating data
that posed a problem
for the "consensus" 
global warming narrative. 

Ian "Harry" Harris claimed
Dr. Tim Osborne was using 
the wrong temperature values, 
when he was performing 
comparisons with temperature 
anomaly values. 

Harry was able to get 
the precipitation results 
to comply with Dr. Tim 
Osborne's program, 
but only after replacing 
questionable numbers 
with a default filler value 
of "-9999". 

Harry indicated
that he and Tim 
still had results 
that differed by 5% !
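
"-9999" is a common missing-value sentinel in climate data files. A minimal sketch, with invented numbers, of how two programs can get different answers from identical data if one of them forgets to filter the sentinel out first:

MISSING = -9999

def mean_ignoring_missing(values):
    # Drop the sentinel before averaging; None if nothing valid remains.
    valid = [v for v in values if v != MISSING]
    return sum(valid) / len(valid) if valid else None

readings = [12.1, MISSING, 13.4, 11.8, MISSING]
print(mean_ignoring_missing(readings))   # about 12.4, the intended mean
print(sum(readings) / len(readings))     # about -3992, sentinel leaked in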

Precipitation / temperature 
data file dates were altered, 
but new data were not actually 
entered on the modified dates. 

The final version of the
precipitation files, 
compiled by Dr. Tim Osborne, 
could not have been using 
the latest precipitation 
database (Harry said so). 

The synthetic (made up) 
cloud precipitation values 
were missing from 1996-2000, 
after having been lost by 
someone named "Mark". 



Not able to find  
a good database 
with precipitation values
(because everything 
was undocumented), 
Harry decided he 
would just pick one 
he thought would be
good enough to compile 
precipitation results 
into a standard 
grid model.
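
For readers unfamiliar with gridding, here is a minimal sketch of compiling scattered station values into a regular latitude / longitude grid. The 5-degree cell size and the record layout are my own assumptions, not necessarily what CRU used:

from collections import defaultdict

CELL = 5.0  # assumed cell size, in degrees

def grid_stations(stations):
    # stations: iterable of (lat, lon, value) tuples
    cells = defaultdict(list)
    for lat, lon, value in stations:
        key = (int(lat // CELL), int(lon // CELL))  # which cell it falls in
        cells[key].append(value)
    # Each cell's value is the average of the stations inside it.
    return {key: sum(v) / len(v) for key, v in cells.items()}

# Toy data: two nearby stations share a cell, the third gets its own.
print(grid_stations([(52.6, 1.3, 9.8), (51.2, 0.4, 10.4), (40.1, -3.7, 14.2)]))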

There had been 6,003 
missing precipitation / 
temperature values, 
out of a possible 
15,942 readings, 
that were never 
recovered. 

There were over 200 
weather stations with
a temperature reading 
of '0' for their grid cells, 
from 1901-1996
(mainly in North Africa 
and the west coast 
of South America). 
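
A literal '0' is a far worse missing-value marker than -9999, because zero is a physically plausible reading that slips silently into averages. A sketch of the kind of sanity check that would catch those stations -- the run-length threshold is my own assumption:

def suspicious_zero_run(series, max_run=120):
    # series: monthly values; flag implausibly long runs of exact zeros.
    run = longest = 0
    for v in series:
        run = run + 1 if v == 0 else 0
        longest = max(longest, run)
    return longest >= max_run   # e.g. 120 months = 10 straight years of 0

print(suspicious_zero_run([0.0] * 1152))   # True: 1901-1996, all zeros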


Synthetic (made up) 
data were invented
to "infill" early years
for large regions 
with few, if any, 
weather stations.

That was often done
to "extend" the record
for regions where the
number of weather 
stations had increased 
over the years, until
the coverage was 
finally satisfactory. 
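
One simple infill scheme -- and whether CRU used exactly this one is my assumption, the notes don't say -- is to replace a missing month with the station's own long-term average for that calendar month. A minimal sketch:

MISSING = -9999

def infill_with_climatology(monthly):
    # monthly: one value per month, MISSING where no reading exists.
    clim = []
    for m in range(12):  # long-term mean for each calendar month
        valid = [v for v in monthly[m::12] if v != MISSING]
        clim.append(sum(valid) / len(valid) if valid else MISSING)
    return [clim[i % 12] if v == MISSING else v
            for i, v in enumerate(monthly)]

The filled-in values look like data, but they were never measured.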



I've focused here 
mainly on land surface 
temperature data.

Ocean temperature 
data quality is 
much worse!

And oceans account for 
71% of Earth's surface !

The problems are 
insufficient coverage, 
arbitrary "adjustments",
and five or six changes
in ocean temperature
measurement methodologies.

Not once in my 22 years 
of climate science
reading have I ever come
across a logical comparison
of ALL the different ocean
measurement methodologies
used in the past 140 years,
tested at the same time,
at the same location
in the ocean !

The goal would be to determine
if changes in measurement
methodologies artificially 
created ocean warming,
or cooling.
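
The analysis for such a test would be simple: run two methods side by side at the same time and place, then look at the paired differences. A sketch with invented numbers:

from statistics import mean, stdev

# Invented co-located measurements: e.g. bucket vs. engine-intake.
bucket = [18.2, 17.9, 18.5, 18.1, 18.4]
intake = [18.4, 18.2, 18.6, 18.4, 18.6]

diffs = [b - a for a, b in zip(bucket, intake)]
print(f"mean offset {mean(diffs):+.2f} C, spread {stdev(diffs):.2f} C")
# A stable non-zero offset is exactly the artificial "warming" (or
# cooling) that splicing one method's record onto another's can inject.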

In real science,
you would want
to know that.

But government 
climate "science"
does not care -- 
and that's the mark 
of junk science !