Wednesday, 25 April 2012

Climate Modelling - [based on video notes from Earth: The Climate Wars]

Hello everyone! I hope the revision is going well; only like 8 more weeks of hard work and then it will all be over!!!

Climate modelling is a really, really complicated mathematical area, and I had the pleasure of spending some time with many of the world's best climate modellers whilst at the Met Office. During my time there, modelling was an area I really struggled to understand, and my inquisitive nature led to much confusion on my part (sometimes I really need to just accept things!). Fortunately it seems that we don't really need much knowledge of the modelling itself, but I thought I would summarise what we should have learnt from the documentary we watched - if anyone fancies learning more, this could be a good topic to look into over your long summer holidays (it is on my list!) and there is some stuff on the blog about it; word of warning though, the maths gets very confusing very fast!!!

Here are the links to the three episodes, in order - we only watched the third one in class....
- The world's first climate models were far from computer based! Instead, small-scale physical models were used, and these helped to formulate the basics of atmospheric circulation, allowing scientists to work out the basic laws that the atmosphere abides by. However, they failed to represent the complex interactions between the oceans and the atmosphere, or to predict weather patterns
- Computer modelling was first used to predict the weather on a 24-hour timescale. However, at first it was taking 24 hours just to produce the forecast, and it was not until the 1970s that computers became efficient enough for this to work reasonably well
- Early on, the models were not deemed that reliable, and consequently many climate sceptics used this as a point of attack, arguing that because the results were unreliable, it could not be said that climate change was happening in reality
- It was not until the 1991 Pinatubo eruption that the models could be properly tested to see if their predictions were accurate. Hansen, a world leader in climate modelling, used the eruption to see whether the models' prediction of the extent of cooling it would cause was accurate. This event was ideal due to the timing of the eruption and the duration of its impacts. The 1980 eruption of Mount St Helens unfortunately came too early to be used to test modelling accuracy, given the level of computer technology at that time
- Subsequently, by the late 1990s climate models were deemed reliable worldwide, and so their projections were viewed with confidence. At this time the models were suggesting that a doubling of CO2 would increase global temperatures by about 3°C (there is a rough calculation sketched just below). However, models will never be 100% accurate: the resolution still needs improving, and modellers still struggle to get computers to consider the factors that influence climate on a smaller, more local scale - this is quite a good explanation of why climate modelling has improved over the years, produced by the Met Office - Climate Modelling
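If you are wondering where a number like 3°C can even come from, here is a back-of-the-envelope sketch in Python (nothing to do with HadCM3 or any real model!). It uses the widely quoted simplified forcing formula for CO2, dF = 5.35 ln(C/C0) W/m², and the sensitivity value of 0.8 K per W/m² is just an assumed illustrative number chosen so the answer lands near the figure quoted above:

```python
import math

# Simplified expression for CO2 radiative forcing (Myhre et al., 1998):
# dF = 5.35 * ln(C / C0) in W/m^2, where C0 is the pre-industrial concentration.
def co2_forcing(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

# Assumed climate sensitivity parameter in K per (W/m^2) -- illustrative only
LAMBDA = 0.8

forcing = co2_forcing(560.0)   # doubling pre-industrial CO2: 280 -> 560 ppm
warming = LAMBDA * forcing     # very crude equilibrium temperature response

print(f"Forcing for 2xCO2: {forcing:.2f} W/m^2")  # ~3.71 W/m^2
print(f"Implied warming:   {warming:.1f} C")      # ~3.0 C
```

Real models obviously do nothing this simple - they solve the physics on a 3D grid - but it shows why the answer scales with the logarithm of CO2 rather than linearly.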


- There is also the complication of field observations, which are crucial in producing the models, and of the increasing level of understanding we gain over time. This is an inherent problem with climate modelling and will continue to be, as we keep discovering new feedbacks operating in the climate system. The example in the video was based around glacial movements: a glacier once believed to be 'dead' became 'alive' again and started moving at a faster rate as the planet warmed, which in turn increased the rate of sea level rise. Feedbacks control the extent of change, with negative feedbacks considered stabilising and positive feedbacks often detrimental due to their amplifying effect (there is a toy illustration of this amplification after this list). These feedbacks are very hard to model, so uncertainties will always exist with modelling, meaning that changes in sea level, for example, could happen a lot quicker than models predict
- 1961: Lorenz discovered 'chaos' in the climate system by changing the degree of rounding used in his model, and this went on to explain variations in modelling projections. Thousands of runs are carried out and the general trend is then taken, and different countries all use different models. The UK uses the Met Office's HadCM3, which was influential in both the IPCC's Third Assessment Report and AR4. Ensemble forecasting is being used to an increasing degree: all the different runs are started from slightly differing conditions, and comparing the results produces a much better idea of what weather events will occur at a given time (there is a toy Lorenz/ensemble sketch after this list too)
- The discovery of chaos theory made climate scientists realise that there were factors with an influence on climate that they had yet to discover, incorporate into the models, or quantify the significance of
- Proxy data such as Greenland ice cores, pollen and beetles help us formulate the temperature record going back to the Younger Dryas. Understanding how past climates have changed and incorporating this knowledge into modelling helps to make long-term predictions more accurate than, say, the 5-day forecast. The proxy record also indicates that abrupt climate change is possible
- So far the Arctic seems to have experienced the most rapid climate change, especially in terms of sea ice coverage. In 2007 the sea ice shrank by an area 10 times the size of the UK, leading some scientists to predict that within a decade no sea ice will remain in this area during summertime
- Models say that warming may be slow and steady, but history indicates it can be rapid, and we are now experiencing changes happening at a faster rate than the models predicted and faster than we originally thought
- Technology has allowed us to deal with some climate conditions, e.g. Las Vegas is built in a desert yet is full of water thanks to the Hoover Dam, which created the 100-mile-long Lake Mead. An 8-year drought has been experienced in this region, though, and models suggest that the drought will continue and that, as the population expands, water will become scarce and Lake Mead will become ineffective by the late 2020s
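Here is the feedback illustration promised above. This uses the standard textbook 'gain' formula - a no-feedback warming dT0 becomes dT0/(1 - f) once a feedback of strength f is included - rather than anything from the documentary, and the numbers are made up purely for illustration:

```python
# Toy illustration of feedback amplification (standard gain formula, not a
# real climate model): negative f damps the response, positive f amplifies it.
def total_warming(dT0, f):
    assert f < 1, "f >= 1 would be a runaway feedback with no stable answer"
    return dT0 / (1 - f)

dT0 = 1.2  # illustrative no-feedback warming in C (assumed number)

for f in (-0.5, 0.0, 0.4, 0.6):
    kind = "negative (damping)" if f < 0 else "positive (amplifying)" if f > 0 else "no feedback"
    print(f"f = {f:+.1f}  {kind:22s} -> {total_warming(dT0, f):.2f} C")
```

Notice how the answer blows up as f creeps towards 1 - that is exactly why hard-to-model positive feedbacks (like the glacier example) translate into such big uncertainties in the projections.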
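And here is the chaos/ensemble sketch. It uses Lorenz's famous three equations with the standard textbook parameters (again, my own toy demo, nothing from the Met Office): nudging the starting value by a rounding-sized amount, just like Lorenz's 1961 accident, produces completely different end states, which is why forecasters run ensembles of perturbed runs instead of trusting a single one:

```python
# Minimal Lorenz-63 demo of the rounding/chaos point: runs that differ only
# by a tiny nudge in the starting value end up far apart. A simple Euler
# step with a small dt is good enough for a toy demonstration.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(x0, steps=3000):
    state = (x0, 1.0, 1.0)
    for _ in range(steps):
        state = lorenz_step(state)
    return state

# A tiny 'ensemble': same model, starting x perturbed at the 4th decimal place
for i in range(5):
    x, y, z = run(1.0 + i * 1e-4)
    print(f"member {i}: x ends at {x:+7.2f}")  # wildly different end states
```

Real ensemble forecasting applies the same idea with a full weather model and many perturbed starting states; the spread of the members is what gives you the probabilities in a forecast.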

So, these are all the notes I managed to take! The basic knowledge we need for climate modelling, I think, is a bit about when the models were developed, how they started off etc., and then why there are uncertainties and why they are still considered unreliable. Within this, you need to be able to link in the use of Mount Pinatubo and proxy data to reduce modelling uncertainty, whilst realising that the models will arguably never be 100% certain. Hopefully I have covered all of this!
