Archives for category: background article

A while back, I was discussing the many ways that we can monitor greenhouse gases. One of these methods, the inventory method, involves estimating the greenhouse gas emissions from human activity by multiplying the amount of each activity by an emission factor. We can check that these estimates make sense in the context of the atmosphere by also taking measurements of greenhouse gases. If we want to do a really good job of checking the inventory estimates, we can feed both the inventories and some greenhouse gas measurements into a specialised transport model. This method is relatively new, and still under development.

In urban areas there are a number of obstacles to taking greenhouse gas measurements and to applying transport models. Firstly, cities are very rough, rather warm and have a lot of very concentrated emission sources. The first two issues cause problems both for accurately modelling how greenhouse gases are transported up into the atmosphere and for placing an instrument so that its measurements are representative of the whole city. We need both of these things to be done well if we are ever to produce results accurate enough to help us check our inventories. The third issue is the one of interest here, and I am going to focus particularly on CO2. Cities have a lot of fossil fuel CO2 sources such as traffic and local power generation, but they also have a lot of biological sources, such as plants and humans breathing. So the question is: how do we separate the part of the measured greenhouse gas that comes from fossil fuels from the background and from the biological sources?

The solution to this issue comes in the form of isotope analysis. In this case we can think of isotopes as a ‘tag’ for different types of CO2. You might not know this, but there are actually three slightly different types of carbon. Back to basic chemistry for a few moments. All atoms are made up of protons, electrons and neutrons. Protons have a positive charge and make up the atom’s nucleus along with the neutrons, which have no charge. Electrons (negative charge) then buzz around the outside of the nucleus. The protons and the neutrons give the atom most of its mass. Carbon always has 6 protons and 6 electrons to balance the charge. It usually has 6 neutrons too, and this form of carbon with 6 of each is very stable; it is common and you’d expect to see it everywhere you see carbon. But it is not the only type of carbon there is. Sometimes an atom has extra neutrons, and the heavier these make it, the more likely it is to be unstable and decay. Carbon can have 6, 7 or 8 neutrons. Add that to the 6 protons and you have carbon with a mass of 12, 13 or 14.



Now, because carbon 12 is the stable form and carbon 14 is unstable, you might expect that as time goes on, any carbon 14 that exists will gradually decay away (in fact it decays into nitrogen 14). The time taken for half of a sample to decay is called its radioactive half-life, and the half-life of carbon 14 is about 5,730 years. This is important because the reason fossil fuels are called fossil fuels is that they are old, far older than 5,730 years. That means that almost all of the carbon 14 contained in the fossil fuel will already have decayed away before the fuel is combusted and releases CO2 to the atmosphere. So when we measure the isotopic composition of the carbon in the atmosphere, we can get quite a good indication of the fossil fuel contribution.
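The decay arithmetic here is simple enough to sketch in a few lines of Python (standard radioactive decay maths; the sample ages are purely illustrative):

```python
# Fraction of the original carbon-14 remaining after a given time,
# assuming simple exponential decay with a 5,730-year half-life.
HALF_LIFE_YEARS = 5_730

def c14_fraction_remaining(age_years: float) -> float:
    """Return the fraction of the original C-14 still present."""
    return 0.5 ** (age_years / HALF_LIFE_YEARS)

# One half-life leaves exactly half; fossil fuels, at millions of
# years old, retain essentially no C-14 at all.
for age in (5_730, 57_300, 1_000_000):
    print(age, c14_fraction_remaining(age))
```

This is why a ‘dead’ carbon 14 signature in atmospheric CO2 points towards a fossil fuel origin.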

To take this a step further, we might want to attribute the fossil fuel CO2 we detect to one type of fossil fuel or another. We can do this using a tracer species; carbon monoxide (CO) is commonly used for this. CO is a tracer of ‘incomplete’ combustion, and usually the more incomplete the combustion, the ‘dirtier’ it is in terms of CO2 and air pollutants. We can use the ratio between the CO tracer and the isotopically derived ‘fossil fuel CO2’ to tell us something about how clean the combustion process is likely to have been. For example, a tar pit has a high CO/fossil fuel CO2 ratio of about 20 ppb/ppm, a car has a medium value of about 14, a clean modern car has a ratio of 8 or 9 ppb/ppm, and a clean power station has a ratio of about 3 ppb/ppm.

If we have an idea of what the CO/ fossil fuel CO2 ratio is in a sample of air, we can use measurements of CO to tell us how much of the total measured CO2 from the same place can be attributed to fossil fuels. This is important for modelling (which tries to estimate the fossil fuel CO2 from inventories) and for improving estimates from the inventories themselves.
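As a rough sketch of that attribution step (the ratio and the CO enhancement below are invented for illustration; real studies measure the enhancement above a carefully chosen background):

```python
def fossil_fuel_co2_ppm(co_enhancement_ppb: float,
                        ratio_ppb_per_ppm: float) -> float:
    """Estimate fossil fuel CO2 (ppm) from a CO enhancement above
    background (ppb), given an assumed CO/ffCO2 ratio (ppb/ppm)."""
    return co_enhancement_ppb / ratio_ppb_per_ppm

# Illustrative: a 120 ppb CO enhancement with a traffic-like ratio
# of 10 ppb/ppm implies about 12 ppm of fossil fuel CO2.
print(fossil_fuel_co2_ppm(120.0, 10.0))  # → 12.0
```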

I would like to talk about this topic again another time in more detail, as it is a very interesting area of science. Next time at Ground to Sky, I will discuss the balance of science between studying greenhouse gas emissions from natural and urban environments, a point of interest from my recent trip to the European Geosciences Union (EGU) General Assembly in Vienna.



After a long hiatus, I have returned to Ground to Sky! I have been very busy finalising two research publications and spending every lunchtime in the university music practice rooms, but now I am pleased to return to this blog. In this article I will provide a brief discussion of my latest research paper to be published. The (open access) paper is available online here.

The work that I will describe took place during my PhD at West Sedgemoor, in the (currently terribly flooded) Somerset Levels and Moors. This land is very low-lying and floods every winter, as it is part of the floodplain of the River Parrett. This seasonal cycle creates a unique habitat for wetland birds, and the site is managed by the RSPB for their conservation. West Sedgemoor is a system of small fields separated by a series of interconnected drainage ditches, which are managed to ensure that the conditions are always good for wetland birds. Part of the management of West Sedgemoor involves short-term grazing during the autumn months by young beef cattle. As part of my study into the greenhouse gas emissions from these seasonally waterlogged peatlands, I was interested to see how the cattle’s urine stimulated production of greenhouse gases inside the soil, and their emission, as the field went from dry to flooded.


West Sedgemoor

To measure the greenhouse gas emissions (I was looking at carbon dioxide, CO2; methane, CH4; and nitrous oxide, N2O), I used ‘flux chambers’. These are boxes that are dug into the soil. A lid is put on the box, you wait a while for the gas to accumulate inside, and you take samples during this time. You can then calculate the emission of the gas from the rate of change of its concentration inside the box. To measure greenhouse gases in the soil, I used ‘soil atmosphere collectors’. These are porous silicone tubes. Air from within the soil moves into these collectors as if they were a large soil pore, and you can then take samples of the air inside through a cap accessible from the surface. Dipwells were used to measure the depth of the water-table from the surface.
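The chamber calculation can be sketched as a straight-line fit of concentration against time, scaled by chamber volume over soil area. Everything below (the function name, chamber dimensions and sample values) is invented for illustration, and real work adds corrections, e.g. converting ppm to mass units via the ideal gas law, that are omitted here.

```python
def chamber_flux(times_h, conc_ppm, volume_m3, area_m2):
    """Gas flux (ppm·m/h) from the least-squares slope of
    concentration vs time, scaled by chamber volume per unit area."""
    n = len(times_h)
    t_mean = sum(times_h) / n
    c_mean = sum(conc_ppm) / n
    slope = (sum((t - t_mean) * (c - c_mean)
                 for t, c in zip(times_h, conc_ppm))
             / sum((t - t_mean) ** 2 for t in times_h))  # ppm per hour
    return slope * volume_m3 / area_m2

# Four samples over one hour from a 0.03 m3 chamber covering 0.1 m2:
flux = chamber_flux([0.0, 0.25, 0.5, 1.0],
                    [400.0, 405.0, 410.0, 420.0], 0.03, 0.1)
print(flux)  # ≈ 6.0 ppm·m/h
```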


Equipment in the field.

Before we could start sampling, we needed some cattle urine. Although cattle were to be loose in the field at the time of sampling, we did not want them getting close to the equipment – and we didn’t want them to pee near it either! For a controlled experiment, we needed to be sure that every plot (with box, soil atmosphere collector and dipwell) received the same amount of urine. This would be impossible with the cows loose in the area, so we used urine from cows at the University of Reading farms and fenced the equipment away from the cows in the field. There were ten plots: five to be treated with the urine and five to act as controls and be treated only with water. This allowed us to be sure that it was the urine that caused any changes in the soil, and not just the act of the soil getting wet. The experiment ran between September and November 2010.


This graph shows the effect of cattle urine and water application on CO2 emissions. There was a large emission of CO2 from the urine-treated plots on the day that the cattle urine was applied. This is due to ‘hydrolysis’ of the urea in the urine when it reaches the soil. Bacteria make an enzyme called ‘urease’, which is found very commonly in soils and catalyses the hydrolysis process. Despite this initial CO2 release, there was not much increase in CO2 due to the urine addition over the full period, and there was no significant difference at all in soil-atmosphere CO2 between urine-treated and water-treated plots.
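For the chemically curious, the hydrolysis step can be written out. This is the textbook urea hydrolysis reaction (general chemistry, not something taken from the paper itself); the CO2 on the right is what produces the burst of emission on the day of application, and the ammonia feeds the soil nitrogen processes discussed further on.

```text
CO(NH2)2 + H2O  --urease-->  2 NH3  +  CO2
 (urea)   (water)          (ammonia) (carbon dioxide)
```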


Here we look at the methane in the cattle urine plots and can see that there was a substantial reduction in the methane a few days after applying urine. We are not sure what caused this; it is a very unusual finding. It happened in 3 of the 5 urine-treated plots and none of the control plots. Overall, however, adding cattle urine increased the amount of methane that came out of the soil during the experiment. As you can see, the control plots remained sinks of methane – that is, soil bacteria were taking methane from the atmosphere and using it in their metabolism. This is called methane oxidation. In the plots treated with urine, this activity was prevented as a result of the urine contents. This supports other studies, as it is a known effect of adding urea to soils. Under the soil surface there was also evidence of increased methane in the urine-treated soil relative to the controls.

The most profound effect of adding cattle urine to the peat soil was shown for nitrous oxide, N2O. Here we see that throughout the experiment, the urine caused a large increase in N2O compared to the control. This peaked 12 days after application, following rainfall, which shows how important soil moisture and the water-table can be in determining what happens to added nutrients in soil. Under the soil surface, the differences between control and urine-treated plots were even more interesting.

When you look at the above figure, notice the numbers on the y axis. On day 2 after the urine was applied, you could already see the difference in production of N2O in the urine-treated soils. By the twelfth day, the production in the control soils was dwarfed entirely, with production at 20 cm depth dominating. By day 56, the field was entirely flooded and N2O concentrations were very high. We believe that the reason this happened so strongly at 20 cm is that the peat soil was covered by a layer of clay. Clay soils, when saturated, are not very good at letting air pass through them, so N2O produced at levels lower than 20 cm will also get trapped here.

For more information on this experiment, please see the full paper; this is only a short summary of all of the results presented there. But what does this mean for managing greenhouse gases in peat soils? Well, N2O and CH4 emissions will get worse after cattle have been on the field, and especially so if the field then floods. This implies that if you are concerned about the greenhouse gas balance of the field, grazing cattle earlier rather than later is likely to reduce the emissions after the field floods. However, there are far more things to balance than just the greenhouse gas emissions; for example, managing the feed supply for the cattle, managing the field grass level and, in the case of RSPB West Sedgemoor, managing the land for wetland birds. Balancing all of these demands and best practice is never an easy task and will require a carefully considered compromise.

I will write again at Ground to Sky shortly, and will attempt to reduce the long time span between blog entries. Next time, I will write about using trace gases to help us to understand where greenhouse gas emissions come from, in particular the use of carbon monoxide as a tracer for fossil-fuel carbon dioxide in cities.

It has been a while since my last blog article; apologies for that, but I have been working hard preparing a couple of papers for publication, and I have recommenced playing the piano after many years, so lunchtimes are often spent in the university practice rooms at the moment. Today, though, I’ll be taking a break from the literature and the keys to discuss the ways in which soil can be affected by the type of plant that grows within it. This topic is a bit of a fascination for me: although it is no longer my chosen research area, during my undergraduate years I studied the ecological and agricultural effects of plants on soil, and it is good to come back to it now.

As you might expect, plant and soil interaction is a two-way process. Farmers and gardeners will tell you that you can’t just put any plant anywhere: even if the climate is correct, the soil must also be correct for the plant to thrive. You can sometimes adjust the soil a little more easily and less expensively than you can the plant’s surrounding climate (unless you have access to greenhouses or other such structures). For example, a wheat crop will need a range of nutrients if it is to grow to its full potential, and therefore the farmer will add fertiliser to the field. Nowadays, with the advent of precision agriculture, it is no longer necessary to spread the same fertiliser over all your fields. This could be considered wasteful and it certainly can be expensive. Soils are known to be very different from each other even at small scales (heterogeneity), so why should all soil be given the same treatment? Precision agriculture aims to redress the balance between soil and additive, to ensure better uniformity of nutrient availability across the field and save the farmer time and money on fertiliser applications.

Agriculture and forestry have done much to change the landscape and the soils that form it. By selecting what plants will grow on a patch of soil, people have made long-standing changes to the chemical, biological and physical nature of the soils below. For example, let’s take a fictional pine plantation. Forty years ago, a patch of deciduous woodland was chopped down for timber and, because they are fast growing and easy to mill, tall pine trees were planted uniformly in its place. Quickly the acidic, slow-to-decompose pine needles get to work on changing the chemical balance of the soil. The efficient roots of the pine soon reduce the diversity of plants below the forest canopy, and therefore the addition of other types of decomposing plant matter to the topsoil diminishes. Eventually a new topsoil will develop that is shallow, acidic and slowly decomposing. The soil type will be very different from how it was when the naturally occurring trees occupied the soil forty years ago.

A reverse example is the reclamation of heathland by deciduous woodland in the UK. Heathlands are a man-made landscape, created by continuous grazing by livestock of previously wooded, often sandy, soils. Today, heathlands are less regularly grazed, and slowly they revert to their natural state as trees seek to reinstate their claim on the soil. Managers of heathland now have the perennial problem of preventing this reversion to woodland, and they do so by various methods: burning, manual removal of trees and reintroduction of grazing, to name the most common. Depending on the usage of the heathland site, each of these methods has its own pros and cons. It is important to remove the trees at an early stage, partly because of the effect that the trees have on the underlying soil type. A good heathland needs nutrient-poor, sandy and well-drained soils to establish long-lasting heather and other heathland vegetation. An example of this type of soil is shown in the picture below.


This was taken in 2008 on Chobham Common, an established heathland site in the South East of England. You can see there is a thin dark top layer and then a white-grey sandy layer. Below this is another shallow dark layer followed by gravelly sand.

The following graph shows a walk through a section of Chobham Common that is slowly reverting to birch woodland. The colour bars represent different ‘layers’ in the soil profile and the key thing to notice is that there are two different soil types here. The black-grey-black type (correctly called a ‘podzol’) as described above is represented by colour bars with light blue and yellow in them. A more uniform brown soil type with a thicker and lighter organic top layer is shown by bars that are missing the light blue and yellow.


The research showed that the birch saplings at 0-2m and 6-10m had already had a noticeable impact on the soil type despite being young trees (probably less than 5 years old). Established birch trees were positioned at 18 to 20m into the ‘transect’ walk. A later study by a consultancy company looked into this data alongside their own investigations and provided advice to managers of Chobham Common to help them manage their heathland effectively. Chobham Common is now in the midst of grazing trials and hopefully, reintroduction of animals to the site will help to keep the trees back and protect the man-made environment that has become a home for so many specialised plants and animals.

In my next article on Ground to Sky, I will talk a little about one of my current research articles which focuses on the effect of cattle on the greenhouse gas emissions from a peatland. I will discuss how the experiment was designed and some of the conclusions, which give you a little more insight into how land management can affect the soil and atmosphere.

Soil is quite probably the resource most taken for granted by modern society. How many people who aren’t farmers or gardeners really think about soil and how important it is to the human race? Soil provides the nutrients for crops to grow, it is the building block of forests, and it provides a home for countless animals and micro-organisms. It even has a part to play in regulating the climate. Soil is an invaluable resource that provides invaluable services, and if we want to preserve it, we need to know where it comes from and how to make more of it.

When I was an undergraduate environmental scientist, I was taught the following equation as a way to visualise the many things that contribute to the formation of soil and determine what type of soil you get. I like the simplicity of this little function, so have a look at it and I’ll introduce each of its terms.


Soil is a function of:

  • Climate
  • Parent Material
  • Topography
  • Biota
  • Time

Of these, I will begin with Parent Material. The parent material is the surface from which the soil will be formed. Soil doesn’t come out of nowhere, and all soil has a ‘mineral component’, even if this is very small in the case of peat (I will come back to peats later). Parent materials are usually rocks, though in a few cases they could be animal or man-made structures such as shells or concrete. The type of parent material is highly important in determining what the soil is going to be like. For example, soil that forms from a red sandstone is likely to be red and gritty, whereas soil that forms from a mudstone is likely to be dark and clammy. The sandstone soil probably drains better than the mudstone soil, but bear in mind that the soil you get is a function of all of the contributing factors, not just one or two.

So how much soil do you get for your parent material, and how do you get it? Climate is an important control on the rate of soil formation. Soil from rock is usually formed by weathering. Weathering can be physical (e.g. cracking from water seeping into cracks and freezing, or abrasion from water movement dragging pebbles over the parent material), chemical (e.g. reactions with air, water or some other chemical causing the parent material to break down into smaller components) or biological (e.g. ‘digestion’ of rock by lichens or pressure from plant roots). You can probably imagine how these weathering examples vary according to the climate.

The topography of the land also controls the rate of soil formation and what type of soil you’re going to get. Imagine three different landscapes: a mountain range, a hilly heathland and a flat plain. The mountain range has steep slopes with very few places for soil to collect. The hilly heathland has a range of hills where the soil could be shallow and valleys where soil can accumulate. The flat plain is more uniform in soil depth. Each of these landscapes collects and retains weathered material differently, and the soils that form there reflect this.

The biota present in the soil-forming environment are also key. Biota such as earthworms play an important part in moving established soil around and in aerating it (allowing oxygen to pass through it). Lichens have the remarkable ability to turn a rock into a soil and survive on bare rock alone. They are a symbiosis of two organisms: an alga (or cyanobacterium) that can photosynthesise and a fungus that clings to the rock and stores moisture. You can see soil-making in action on old dry stone walls, where lichens first create a thin layer of soil; mosses then colonise this and, as they eventually die away, become soil themselves through their decay. This, of course, is another major contribution of biota: not only do they help to make soil from rock, they also become soil themselves when they die. Peats are soils that have a very high percentage of ‘organic matter’ as opposed to minerals. Peats usually form in waterlogged environments where the usual process of decay cannot proceed due to a lack of oxygen; decay is therefore very slow and soil accumulates over the years. The type of plant that grows on a soil also controls the soil type. For example, an oak woodland soil will have a rich ‘humus’ (organic) layer on the surface that is contributed to every autumn, whereas a pine woodland will have a shallow organic layer that is highly acidic (because pine needles are acidic and because the trees don’t shed their leaves at a particular time of year).

Finally, time. The influence of time as a soil-forming factor cannot be ignored. The amount of time that it takes to form a soil will depend on all the factors I have already discussed. What is the parent material? Does it break down quickly? Is the weather likely to help it break down? Are lichens likely to colonise it? What else grows nearby? Is it on a slope or in a hollow? Peat can accumulate fairly quickly by soil’s standards: it can take about 10 years to form a centimetre, whereas from bare rock it could take between 200 and 500 years to accumulate the same amount.
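Taking the rates quoted above at face value, the contrast is easy to put in numbers (a back-of-the-envelope sketch, nothing more):

```python
def years_to_form(depth_cm: float, years_per_cm: float) -> float:
    """Time to accumulate a given soil depth at a constant rate."""
    return depth_cm * years_per_cm

# 30 cm of peat at ~10 years/cm versus 30 cm of soil from bare rock
# at the slow end of the 200-500 years/cm range.
print(years_to_form(30, 10))   # peat: 300 years
print(years_to_form(30, 500))  # bare rock: 15,000 years
```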

When we think about the rate of soil erosion in the areas most prone, we should be concerned. Soil is lost much faster than it is replenished, so soil conservation is of major importance worldwide. In my next article, I will continue to discuss soil and will write some more about the influence of plants on soil type. I will use heathlands as an example to describe how drastically plants can change the soil and the effect that this can have on land use (this was the topic of my undergraduate dissertation).

Today I’m going to tackle one of the most important topics relating to the Ground to Sky interface: how our planet stays warm, and what happens to heat and radiation at the Earth’s surface. I have touched on this before, explaining a little about the differences between vegetated and tarmacked surfaces. Today I will briefly describe the surface energy balance more or less in its entirety.

The most important concept I will introduce here is the surface energy balance equation: H = K + (Sd − Sr − Su) + (Ld − Lu) − (G + λE). This can be simplified as:

Heat = Storage + net (incoming minus reflected) solar radiation + net longwave radiation − surface heat losses.

Use the diagram below as a reference as I introduce each of these terms, with a small amount of discussion of what each one means.


Heat term (H).  This is the overall heat of the surface when all of the terms in the equation have been accounted for. This is the value we are trying to find out. How hot is our surface – how hot is it during the day? During the night? During cloudy or clear conditions? How hot the surface is affects our environment and is crucial for weather forecasting and other types of process modelling.

Storage term (K). This is how much heat is being stored by the surface. Different types of surface have a different ability to store heat. Man-made surfaces are often capable of storing more heat than natural surfaces, and this contributes to the urban heat island effect.

Net solar radiation (Sd − Sr − Su). Solar radiation (also called shortwave radiation) is the input of heat from the sun. Sd is the amount of sunlight coming down from the sun towards the patch of ground we’re interested in. Of Sd, some will be reflected by clouds and aerosols in the atmosphere before it even reaches the surface. This reflected solar radiation is called Sr, so we must subtract Sr from Sd. Some of the solar radiation does reach the surface, but not all of it is absorbed by the surface and contributes to its heat balance. Su is the solar radiation that is reflected by the surface, and it must also be subtracted from the total incoming solar radiation, Sd. Su is controlled by the reflectivity of the surface, which is called the surface’s ‘albedo’. White surfaces reflect more sunlight than black surfaces. Some surfaces reflect only particular wavelengths of light, and this gives them their colour; for example, an object that appears red to us reflects red light. The sky is blue because of the scattering of light in the ‘blue’ section of the visible light spectrum. But this is an aside. What matters here is that the amount of sunlight that reaches the surface, is absorbed and contributes to our heat term H is reduced by reflection in the atmosphere by clouds and aerosols (Sr) and at the surface, depending on surface albedo (Su).

Net longwave radiation (Ld − Lu). Longwave radiation is low frequency (infra-red) radiation. This is largely outgoing heat (Lu) from the Earth itself radiating into space. Ld is the term for outgoing longwave radiation that clouds and greenhouse gases absorb and re-emit back down to the surface. Lu is quite close to balancing Sd overall, keeping the Earth’s atmosphere at about the same average temperature, but the balance between the two varies from place to place (think of the difference between summer and winter). Ld is the term that has caused such debate around the world – the greenhouse effect. The potential changes that could occur to the energy balance because Ld has been artificially altered by human activity are the subject of worldwide scientific attention.

Surface heat loss (G + λE). These two terms represent the loss of heat from the surface itself. G is the sensible heat flux and λE is the latent heat flux. Sensible heat (G) means that heat, and only heat, is being exchanged between the surface and the atmosphere; that is, the only effect is a change in temperature. Negative G (−G) is loss of heat from surface to atmosphere (usually overnight) and positive G (+G) is heating of a cold surface by warm air. When you think of air, you probably think of wind. Wind can come along in big parcels (air masses) originating from far away from our surface. Say the wind comes from a desert in the early morning: we will expect a +G, as the air is hot but our surface is cool from the cold night. Interfaces between air masses and the ground can cause interesting weather patterns, such as localised thunderstorms when the temperature difference between the two is high. λE is the latent heat flux. Latent heat differs from sensible heat in that it brings about not only a change of temperature but also a change of state. This term describes the heat that is used to evaporate water from a surface. This may or may not be mediated by vegetation, depending on the surface. Plants can control evaporation using holes in their leaves called stomata; this way they can regulate how much water they have in their system to keep them alive. A lot of heat is required to change the state of water from liquid to gas, so energy used in this process is not available for heating the ground. This explains why conditions feel cooler after it has rained, or near a lake or the sea.

When all these terms are totalled up we have H. H might describe a hot surface or a cool surface depending on the balance of all the different terms. Is it a hot sunny day with no cloud? Then Sd will be high and Sr will be low. If we haven’t had much rain in a while, λE will be low too. What’s the difference between the energy balance during the day and during the night? What about a shaded bit of ground compared to an unshaded bit? A city compared to the countryside? Have a think about how H might change during the year. Understanding the surface energy balance gives us a lot of insight into our environment.
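As a toy illustration of how the terms combine (all numbers invented, in W/m2; I have written the two surface loss terms as a combined loss, since both the sensible and latent heat fluxes remove heat from the surface in this example):

```python
def surface_heat(K, Sd, Sr, Su, Ld, Lu, G, latent_E):
    """Toy surface energy balance:
    storage + net shortwave + net longwave - surface heat losses."""
    return K + (Sd - Sr - Su) + (Ld - Lu) - (G + latent_E)

# A clear summer day: strong sun, modest atmospheric reflection,
# moderate albedo, and some evaporative cooling from recent rain.
day = surface_heat(K=50, Sd=800, Sr=100, Su=120,
                   Ld=300, Lu=380, G=60, latent_E=150)
print(day)  # → 340
```

Try changing Sr (a cloudy day) or latent_E (a wet surface) and see how H responds.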

I hope that this brief overview of how energy behaves in the lower atmosphere and at the surface helps you to understand some of the everyday observations that you make whilst wandering about in the ground to sky interface. In my next article I will return to soil science, and will be discussing how soil forms with another simple equation that you can see in action every day.

Climate change is making the planet warmer, though there has been debate recently about just how quickly. The global balance of radiation (heat and light) has started to change as a result of increased greenhouse gases. Due to the urban heat island effect (good definition here), this temperature increase will be stronger and more noticeable in urban areas.

Sim City example City
Created using Sim City 4 (Electronic Arts, 2003)

There are two strands to addressing a problem such as future climate change; adaptation and mitigation. Adaptation means that we change our behaviours or manipulate our environment in order to be able to cope with the changes that occur. Mitigation means that we try and prevent the changes from happening, or we try to slow them down. Of course, for the best outcome we need to achieve both mitigation and adaptation; we must work hard to adapt to the new conditions that the changing climate brings but by doing so we should not worsen the initial problem. Instead we must try to bring adaptation measures in line with measures that will help to lessen the impact of the problem.

In the case of hot cities, it might seem simplest to take a ‘business as usual’ approach and focus on regulating the indoor temperature of buildings using air conditioning in the summer, whilst enjoying the possible reduced heating demand during the winter months. Air conditioning can be expensive, both in terms of energy demand and the monetary cost of energy provision. Reliance on air conditioning alone also does not address the underlying issue: it may be a valid adaptation technique, but if the extra energy demand is not met through renewable energy sources, the carbon burden of the business increases. Furthermore, reliance on air conditioning does not address heat pressure on vulnerable communities such as the poor, elderly or infirm. A recent study in Vienna (described here) showed that elderly residents outside of care facilities tend to stay inside during heatwave periods, unaware that their indoor environment may be warmer than it is outdoors. This highlights the importance of adaptive behavioural measures and of getting knowledge and instruction to those communities that need it.

Adaptive behavioural measures to cope in hot urban areas range from closing curtains/shutters during the hottest time of day to altering activity and working patterns to undertake the greatest activity during the cooler morning and evening hours. These changes to the working day would need to start at the highest levels of business and be rolled out in conjunction with an established heatwave early warning system in order to avoid loss of income from reduced workforce efficiency/availability. Information and social care for heat-stress vulnerable city residents must be made available in affected areas.

Though we can adapt to hot cities, the cost of insufficient or tardy adaptation could be high. This study into death risk for elderly city residents showed that where urban areas do not cool down sufficiently at night, elderly residents are twice as likely to die during heatwaves as those living in the suburbs. One of the distinguishing features of the urban heat island effect is that the temperature increase relative to the rural surroundings is stronger overnight than during the day. This means that whilst in rural areas there may be some reprieve from the heat overnight, in the city the heatwave is more likely to continue unabated. To reduce this risk, we need to focus on mitigation at the same time as adaptation. The direction for mitigation is twofold: lessen the urban heat island by cooling the city, and reduce the fossil fuel carbon emitted by the city as a unit.

If we want to cool down the city, we need to address what it is about the city that makes it hotter than the countryside in the first place. Previously on Ground to Sky, I wrote about the difference in moisture and heat balances between vegetated and urban surfaces. Urban surfaces are often darker than vegetated or soil surfaces and therefore absorb more sunlight. Not only this, but they can store that heat for a much longer period of time (accounting for the increased heat release overnight relative to the rural land surface). Vegetation also cools the surface by allowing the evaporation of water through its leaves (transpiration) and by shading the ground. Greening the city is widely accepted to improve the heat balance of urban areas and may also benefit air quality. Green areas also contribute to improved quality of life for nearby inhabitants and raise capital for the area (as evidenced quite well in this TED talk by Majora Carter, which also provides a strong case for addressing the inequity in urban planning policy between rich and poor city areas). Urban planning is a city-wide discipline but is in most cases conducted on a neighbourhood-by-neighbourhood basis. Similarly, building design is conducted building by building, yet individual buildings affect the surrounding neighbourhood. Mitigating the urban heat island and decarbonising and decentralising energy (bringing energy production closer to the centre of demand) are efforts that require careful consideration at the city scale.

Adaptation and mitigation of urban heat is a current area of research and a driver for engineering and technological development. Sustainable city, neighbourhood and building design is moving higher and higher up the political agenda in the West, so we are moving in the right direction. However, in the developing world urbanisation is occurring at an alarming rate (see this article discussing sustainable urbanisation in India), and monitoring of urban meteorology and air quality is limited or completely lacking in some developing cities. Research into methods to accommodate the growing city population in a way that will mitigate urban heat issues for the future is an urgent requirement in developing countries.

In my next article on Ground to Sky, I will discuss the surface energy balance (of radiation and heat) in more detail, hopefully providing an insight into the physics that gives rise to issues such as the greenhouse effect and the urban heat island.

Greenhouse gases are top of the worldwide green agenda and all around the world industrious groups of people are seeking to reduce emissions. Whether by legislating against irresponsible fossil fuel use, encouraging energy efficiency or creating traffic-free zones (and many, many other methods besides), the only way we’ll know whether such schemes are having an effect is by knowing the rate of emission of gases into the atmosphere.  We need to be able to do this to a fine enough level of detail to be able to tell if our reduction strategies are successful or not.

This is no mean feat. The atmosphere is notoriously complicated, as is the land surface, with sinks (which take up gases) and sources (which emit them) springing up all over the place, often faster than you can count, and varying wildly according to environmental conditions. In spite of these difficulties, scientists, industry and policymakers have created methodologies and taken steps towards putting some figures on greenhouse gas emissions. There are three main methods, which I will introduce in turn.

1. Emissions inventories.

I like to call this the “let’s just tot up what we know” method.  It relies on accurate figures for the extent of a certain activity (e.g. how many cars there are, how many power stations are running and for how long) and an accurate figure for how much greenhouse gas is emitted by each of these activities. These are called the activity data and the emissions factor.

Flux = sum over all known activities of (activity data × emissions factor).
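As a minimal sketch of that sum, with entirely invented activity data and emission factors (real inventories use thousands of categories and carefully sourced figures):

```python
# Minimal sketch of an emissions inventory: total the product of
# activity data and emission factor over every known activity.
# All figures below are invented for illustration only.

activities = {
    # activity: (activity data, emission factor in tonnes CO2 per unit per year)
    "cars":           (50_000, 4.5),       # number of cars
    "power_stations": (3, 2_000_000.0),    # number of stations
    "homes":          (20_000, 2.5),       # number of households
}

total_flux = sum(data * factor for data, factor in activities.values())
print(f"Inventory total: {total_flux:,.0f} tonnes CO2 per year")
# prints: Inventory total: 6,275,000 tonnes CO2 per year
```

The weakness is plain to see: anything missing from the dictionary is missing from the total.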


I call this the “tot up what we know” method because all that goes into an inventory is the emissions sources we know about.  They also usually only include human activities or have a subjective or modelled term for natural sources and sinks.  We have several uncertainties in the inventory method and they require extensive checking:

  • Is the activity data right? For example: is our traffic count up to date? Are the power companies telling the truth about what’s coming out of that chimney? Are we up to date with all of the new houses?
  • Are the emissions factors right, or are we simply estimating over a large range of possible values?
  • Are we summing up all of the sources we know about? What about the sources that we don’t know about? Could we be overestimating the total emissions by ignoring the effect of vegetation uptake?

Because of questions like these, inventories work best at a coarse resolution (over a large area, like a country).  They aren’t checked against measurements and are better suited to broad decisions (e.g. cutting back on fossil fuel energy in a certain country) than to narrow ones (e.g. implementing a neighbourhood road closure and cycling scheme).

2. Direct measurement.

Measurement of greenhouse gas fluxes can happen at all sorts of scales, from a chamber placed over a square metre of soil right up to an instrument mounted on a tall tower above a city.  For the purposes of this discussion I will focus on these tall-tower instruments.  The instrument on the tower in the drawing below is designed to capture fluxes from a wide area, so all of those terms that we put in our emissions inventory (plus or minus the influence of the sources/sinks that we ‘forgot’) contribute to the flux measurement.


This particular system measures the difference in greenhouse gas concentration between air sampled at two heights on the tower (I will come back to how this works some other time).  From this difference, the overall flux can be derived.
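Without pre-empting that future post, here is a toy illustration of one common way of turning a two-height concentration difference into a flux, the flux-gradient approach. The exchange coefficient K and all the numbers below are invented assumptions, not measurements from any real system:

```python
# Toy flux-gradient calculation: the vertical flux is taken as
# proportional to the concentration difference between two heights.
# K (the turbulent exchange coefficient) and every number below are
# invented assumptions for illustration.

def gradient_flux(c_lower, c_upper, z_lower, z_upper, k_eddy):
    """Flux F = -K * dC/dz.  A positive result means an upward flux:
    the gas is more concentrated nearer the ground, i.e. the surface
    below is emitting."""
    return -k_eddy * (c_upper - c_lower) / (z_upper - z_lower)

# CO2 is richer near the surface, so we expect an upward (positive) flux.
flux = gradient_flux(c_lower=420.0, c_upper=415.0,  # concentrations (ppm)
                     z_lower=30.0, z_upper=45.0,    # intake heights (m)
                     k_eddy=1.5)                    # assumed K (m^2/s)
print(flux)  # approximately 0.5 (ppm x m/s, upward)
```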

To be able to interpret this, we need to know where the fluxes measured by the instrument are coming from.  That is, we need to know the area of the ground that is sending gas up to be captured by the instrument.  We don’t want to consider sources that aren’t being included in the measurement, and likewise we don’t want to miss any.  To do this, we need a ‘source area’ or ‘footprint’ model.  This calculates the area of ground the fluxes can be expected to come from, so that we can investigate how the sources within that area influence the measurement.

Hm, you might think this is a little vague.  You’re right: when it comes to making decisions about changing the number of cars in your city or increasing the amount of green space, it makes sense to be able to tell more exactly where the sources really are and whether or not an action has made a difference to their extent.  But if you want to monitor the overall flux over the city, what it really is, through observations rather than approximations, then a direct measurement scheme is what you need.

How about a middle ground between the two?  A method that includes all the knowledge we have about what is out there generating greenhouse gases (inventories) plus real greenhouse gas measurements to tell us if we’re going wrong?  This brings me to the final method, which is newest to science and very much still in development.  This is one of the things that is keeping me busy during the working day right now.

3. Measurements + Model (the “inversion” method)

Inversion methodology is where we start to get really clever.  I will explain this step by step.

a. Take the inventories and make them as accurate as you possibly can for your chosen area.

b. Place an instrument on a tower that measures concentrations of greenhouse gases.  (Note: this is concentration, not flux.  The instrument tells us how much gas is in the air, not the rate at which it is being emitted.)

c. Use a meteorological forecast model to tell you what the winds are doing throughout the lowest kilometre of the atmosphere.

d. Place the inventories and the wind data (plus some other information such as the height of the boundary layer and the surface roughness) into a chemistry-transport model.  This model can give you an idea of what the concentration of the greenhouse gas in question is EXPECTED to be.

e. (And here’s the clever bit) Use the real data from the atmosphere, plus the expected concentrations and then run the model BACKWARDS.  This “inversion” of the method uses everything we know to ‘go back to source’ and tell us where we should be checking our inventories a little more closely.
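Steps a to e can be caricatured in a few lines of code. In this toy sketch (all numbers invented), a tiny “transport” matrix stands in for a real chemistry-transport model, and a simple least-squares fit stands in for a full statistical inversion:

```python
# Toy sketch of the inversion idea (steps a-e) with invented numbers.
# A small "transport" matrix H maps two source-area fluxes to three
# tower concentration readings (the forward model, step d).  We then
# solve backwards (step e) for the flux correction that best explains
# the mismatch between measured and modelled concentrations.

H = [[0.8, 0.2],   # influence of source areas 1 and 2 on reading 1
     [0.3, 0.7],   # ... on reading 2
     [0.5, 0.5]]   # ... on reading 3

prior_flux = [10.0, 5.0]       # inventory estimate (step a)
measured = [11.0, 7.5, 9.0]    # tower concentrations (step b)

# Forward model (step d): concentrations EXPECTED from the inventory.
modelled = [sum(h * f for h, f in zip(row, prior_flux)) for row in H]
residual = [m - e for m, e in zip(measured, modelled)]

# Least-squares "inversion" (step e) via the normal equations
# (H^T H) dx = H^T r, solved with Cramer's rule for two unknowns.
a = sum(row[0] * row[0] for row in H)
b = sum(row[0] * row[1] for row in H)
d = sum(row[1] * row[1] for row in H)
r0 = sum(row[0] * r for row, r in zip(H, residual))
r1 = sum(row[1] * r for row, r in zip(H, residual))
det = a * d - b * b
dx = [(d * r0 - b * r1) / det, (a * r1 - b * r0) / det]

updated_flux = [f + c for f, c in zip(prior_flux, dx)]
print(updated_flux)  # roughly [12.42, 5.45]
```

Here the tower sees more gas than the inventory predicts, and the inversion tells us which source areas are most likely under-reported, which is exactly where we should check our inventories more closely.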


This last method has a lot of potential, but right now it is in its pilot stages.  New research developing the method is published all the time, and industry is becoming aware of the economic opportunity in providing governments with such a service.

The method that an international group, national government or local authority chooses depends on what they wish to get out of the investigation. Benchmark figures? Real measurements? In-depth analysis? Participation in the development of new scientific technology? The future is bright for estimation methodology, so don’t let “how will we even know if it works?” put you off taking your bike to work instead of your car.

In the next article at Ground to Sky, I will be discussing the ups and downs of wading through huge piles of scientific literature and providing some tips from my experience.  Following this I will return to the technical science and will be discussing some of the methods that we can use to cool down our hot cities during the summer months, without compromising winter warmth.