Back in June, Stuff published a report on regional airfares, focusing on the way that prices are affected by major events such as concerts and sports competitions. Now, I’m no airline economist, but I’ve got a general interest in transport pricing so I figured that it would be worth taking a look at the topic.
The point of the article seems to be that airfares are higher during periods of high demand. That doesn’t seem too weird, but this guy in Nelson is absolutely ropeable at the thought:
Nelson man Steffan Eden is furious about Air NZ’s fares from Nelson to Auckland and return for the weekend of March 5 and 6 when Madonna will give her first New Zealand concert at the Vector Arena.
Fares that the previous weekend cost $79 are twice that at $159 on the weekend of the concert, an $89 fare rises to $169; and a $129 fare becomes $209…
“Look at the fares the weekends before and after the concert, they’re normal fares. Then on the concert weekend they’re virtually double. It’s quite blatant.”
Eden said the same thing happened when he wanted to go to the Cricket World Cup match between New Zealand and England on February 20. “I wanted to take my kids but didn’t in the end because of the cost,” he said.
The man quoted in the article seems to argue that these jumps in fares are due to uncompetitive or discriminatory practices by Air NZ. By contrast, the airline says that the price increases are simply due to cheaper tickets selling out faster:
An Air New Zealand statement said it has been experiencing high demand for flights into and out of Nelson that weekend due to both the New Zealand Masters Hockey Tournament which is being held in Nelson from February 28 to March 5 and the Madonna concert in Auckland.
“As you will appreciate, where there are major events on flights tend to sell out well in advance, with the cheaper fares selling out the fastest, so booking as early as possible is recommended.”
Now, as an economist I’m always wary of the potential for companies with few immediate competitors to exercise market power over their customers. But in this particular case, I don’t think that’s happening. What we are seeing is the normal, and in fact beneficial, working of supply and demand.
Let’s start with the supply side. Air NZ doesn’t have an infinite budget for airplanes and staff. It faces constraints. If it wanted to run more services between Nelson and Auckland on particular weekends of high demand, it would have to either:
- Pull airplanes off other regional routes, which would potentially satisfy Nelson’s demand but would in turn lead to similar stories about how unfair Air NZ was being to Napier or Timaru or what-have-you, or
- Buy extra airplanes and hire extra staff that would sit idle most of the time and fly only during a few periods of exceptionally high demand. This is superficially appealing, but it would mean an across-the-board increase in fares to pay for a bunch of empty planes.
So that’s the supply side. What about demand?
Air NZ has observed, correctly, that demand for flights is not constant over time. Simply put, more people want to fly during some periods than others. Airlines can respond to this in a few different ways. The first would be to keep prices constant, regardless of demand. This would turn air travel into a first-come-first-served game, which is great if you always buy tickets months in advance but terrible if you have to take a last-minute trip for work or a medical emergency.
The second approach, which Air NZ may be using, is to charge higher prices during periods of higher demand. This may seem less fair, but it’s actually better for (almost) everyone. It means that airlines aren’t constantly booking out flights well in advance or misallocating resources in a futile attempt to give everyone a cheap flight. Travellers also benefit – they get a choice between paying more to travel at their preferred time or finding a cheaper fare at an off-peak time.
I fly for work on a semi-regular basis so I’ve noticed some of the patterns over time. Between 4-6pm, departure gates fill up with suit-wearing men and women headed home from their meetings in time for dinner. Not surprisingly, prices are highest at this time. Later on, prices drop, planes get a bit emptier, and the suits get replaced with casual clothes. By the end of the night, most of the people who want to get home have gotten there, and for a price that they’re willing to pay.
Occasionally, this means that somebody decides not to go to a Madonna concert. But that’s not a flaw with supply and demand – that’s how it’s supposed to work! If the man quoted in the Stuff article didn’t go, it’s only because someone who valued the trip more bought the seat instead.
Finally, I have to ask: Why are people outraged when the principles of supply and demand are applied to airfares? Perhaps it’s because we routinely ignore those principles everywhere else in our transport system.
As numerous economists have observed, we manage our roads like a Soviet supermarket. The price to use roads is set at a single, low value – i.e. NZ’s comparatively low petrol taxes – and so people queue up for ages to drive on them every morning and evening. The same thing happens with parking: we have regulated to make it abundant and free, yet people can never find enough of it.
In economic terms, there is no difference between a peak-hour traffic queue and a queue outside a Soviet supermarket.
They are both situations in which scarce resources, including people’s time, are misallocated due to poorly-functioning price signals. So rather than asking “why don’t we price air travel as inefficiently as roads?”, we should ask “why don’t we price roads as efficiently as we price airfares?”
A failure to price roads efficiently badly distorts our supply decisions. We are forever pouring more asphalt and concrete that accommodates a few more slowly-moving cars at peak times and sits idle much of the rest of the time. By contrast, congestion pricing would allow us to avoid many of these expenditures by giving people an incentive to travel differently.
What do you think about airfares – and transport pricing in general?
For those that don’t read Transportblog on a daily basis, this is the third part of a series I’m writing on the economics of public transport fare policies. Part 1 discussed a key rationale for public transport subsidies – lower fares keep people from clogging up already-congested roads. Part 2 considered the case for distance- or zone-based fares to ensure that people taking longer (and hence more expensive) trips pay more.
In the comments on those posts, several sharp readers asked about the relationship between fare levels and ridership, and whether there are any opportunities to improve outcomes by targeting lower fares to highly price-sensitive groups. These are excellent questions to ask!
In this post, I’ll take a look at the first question: In the aggregate, how does ridership respond to changes in fares? Hopefully, this will give us the theoretical tools to take a look at the second question in the next installment of the series.
In economic terms, we are asking about the “price elasticity of demand” for public transport. Fare elasticities measure how responsive people are to higher (or lower) prices. They’re usually estimated empirically by analysing data on changes in fares, patronage, and other control variables (e.g. per capita income or GDP) over time.
There are many studies on fare elasticities from around the world, some of which are summarised in the Australia BITRE elasticities database and this useful summary paper by Todd Litman. NZTA has also commissioned research into the structure of demand for public transport – see e.g. Wang (2011) and Allison, Lupton and Wallis (2013).
These studies don’t always arrive at precisely the same result, but they agree on one key thing: Demand for public transport is relatively “inelastic”. All else being equal, a 10% reduction in fares will increase ridership by less than 10% in the short and long run.
The implication is that if a public transport agency reduces fares, it will tend to collect less money from users and hence require a larger subsidy. Conversely, raising fares can increase overall revenue, albeit at the cost of unintended consequences, such as increased traffic congestion.
Here are Litman’s best-guess estimates of elasticities for public transport. The key figures are in the first row – “transit ridership with respect to transit fares” for the overall market. Litman estimates a long-run fare elasticity of between -0.6 and -0.9. This means that a 10% increase in fares would be expected to reduce ridership by 6-9% in the long run.
Notice that short-run elasticities tend to be smaller, indicating that people take a while to fully respond to changes in prices. For example, if someone’s fares for their bus to work went up significantly, they may tolerate it for a little while but choose to buy a car (or rent a parking space) six months down the line.
Personally, I wonder if Litman’s estimates are a bit on the high side. Figures from Wang (2011) suggest that long-run fare elasticities (in the second row of the following table) are -0.46 in Wellington and -0.34 in Christchurch. This would indicate that a 10% increase in fares would reduce ridership by 3.4-4.6%.
Both of these tables also contain information on how people’s demand for public transport changes in response to other price changes and service changes, which is another interesting topic. Without going into a great deal of depth, I’d note two things:
- First, increasing petrol prices do tend to increase public transport demand, but this effect may be relatively modest. Car ownership, on the other hand, can have a big impact, as people who have already paid the fixed costs to own a car have strong incentives to get as much use out of it as possible.
- Second, improved service quality – meaning better frequency and reliability of buses and trains – has a stronger impact on ridership than lower fares. This has important implications for transport agencies, which are often better off putting their marginal dollar towards upping frequencies.
Lastly, it’s worth considering how this might play out in practice. Let’s assume, for a moment, that fare elasticities of demand are at the low end of Litman’s range, i.e.:
- Short-run fare elasticity = -0.2
- Long-run fare elasticity = -0.6.
Now, let’s consider a hypothetical scenario in which public transport fares are $2 and there are 1,000 daily riders on a given bus route. The public transport agency collects $2,000 in fares every day ($2*1,000 riders).
Now let’s consider what would happen if the agency chose to reduce fares by 10%, from $2 to $1.80. This is obviously great for people who are already on the bus, as they can pay less to get the same service. Daily revenue collected from them drops to $1,800 ($1.80*1,000 riders).
However, the lower fares also attract new riders. In the short run (0-2 years), we predict that a 10% reduction in fares will lead to a 2% increase in ridership (-10%*-0.2). This means that an additional 20 people (1,000 riders*2%) will take the bus and pay a total of $36 in fares every day ($1.80*20).
So far, this is not looking great from a financial perspective. The transport agency has lost $200 in fare revenue from existing riders and gained only $36 from new riders.
Things aren’t much better in the long run, where a 10% reduction in fares is expected to lead to a 6% increase in ridership (-10%*-0.6). This means an added 60 riders who pay $108 in fares every day. Again, this is not enough to cover the loss in revenue from existing riders.
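The arithmetic in this worked example can be checked with a few lines of Python. This is just a sketch using the made-up figures above and a simple point-elasticity approximation – real fare modelling is more involved:

```python
def revenue_after_cut(fare, riders, fare_change_pct, elasticity):
    """Approximate daily fare revenue after a fare change, using the
    point-elasticity rule of thumb: the % change in ridership equals
    the elasticity times the % change in fares."""
    new_fare = fare * (1 + fare_change_pct)
    new_riders = riders * (1 + elasticity * fare_change_pct)
    return new_fare * new_riders

baseline = 2.00 * 1000  # $2,000/day before the fare cut
short_run = revenue_after_cut(2.00, 1000, -0.10, elasticity=-0.2)
long_run = revenue_after_cut(2.00, 1000, -0.10, elasticity=-0.6)

print(f"short run: ${short_run:,.0f}/day")  # $1,836 - down $164
print(f"long run:  ${long_run:,.0f}/day")   # $1,908 - down $92
```

In both cases revenue ends up below the $2,000/day baseline, which is exactly what “inelastic demand” means: the new riders don’t make up for the discount given to existing ones.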
Does this mean that fare reductions are never worth it? Not necessarily – if the reductions in congestion from fewer people driving are sufficiently large, then we should be willing to pay a bit more in subsidies.
A second factor is that different people and different types of journeys respond to higher prices in different ways. In principle, we may be able to increase patronage at a relatively low cost by targeting fare discounts to price-sensitive people. But that is a topic for next time!
What do you make of the data on fare elasticities of demand?
A few months back, I took a critical look at some dodgy arguments about the need for expensive security measures for cities to be “resilient” against terrorism. The whole thing got me thinking: how should we value protection against low-probability events?
This is quite relevant to transport policy at the moment. I’ve seen a number of references to resilience in discussions of major transport projects:
Today’s official construction start on the Transmission Gully Motorway marks a major step towards a safer and more resilient transport link in and out of Wellington.
In 2013 the Government announced its support for a tunnel in preference to a bridge. “With increasing demands on Auckland’s transport network, the Government will continue to work closely with its local government partners to provide a resilient network and wider transport choices,” Mr Bridges says.
But while resilience seems like a good thing, it seems to be more of a slogan than a careful piece of analysis. The whole discussion sometimes reminds me of this commercial featuring my favourite B-movie star, Bruce Campbell:
“If you have it, you don’t need it. If you need it, you don’t have it. If you have it, you need more of it. If you have more of it, you don’t need less of it.”
But is more resilience always good? Do we need more of it? Or do we already have too much of it? And how would we know?
First, it’s worth defining “resilience”. According to the Merriam-Webster dictionary, resilience means:
the ability to become strong, healthy, or successful again after something bad happens
In transport, resilience seems to be defined as the ability to respond flexibly to low-probability, potentially disruptive events. For example, here’s a picture of Tamaki Drive during some floods back in April 2014.
Tamaki Drive isn’t built with a massive barrier against the sea, so when a tropical cyclone hit the North Island, things got wet. Drivers were able to get home using other roads further up the hill, but it took much longer than usual. But, as this gent on a paddleboard shows, individuals came up with a range of innovative solutions to the short-term outage:
So with that in mind, how should we value our ability to be resilient to low-probability, potentially high-impact events?
One approach would be to calculate the value of resilience using actuarial techniques. Now, I’m no actuary, but I’ve worked with people whose job it is to assess financial risks and picked up a few concepts in the process. An actuarial assessment of risk is conceptually pretty simple. It involves:
- Calculating the impact (i.e. net cost) of a given adverse event,
- Calculating the likelihood (i.e. probability over a given time period) of that event, and
- Multiplying the two together to obtain the expected value of protecting yourself against that risk.
So how does this work in practice? Suppose we’re dealing with a hypothetical case – the Auckland Harbour Bridge example mentioned above. Let’s say that we’re interested in making Auckland “resilient” against volcanic activity knocking out the existing bridge. So let’s make up some figures, and assume that:
- If the bridge was destroyed, it would take 1 year to get another one in place
- There are around 140,000 working people who live north of the bridge. Let’s assume that those people earn an average of $60,000 annually, and, furthermore, that their income would be reduced by 50% if the bridge was out. (Either due to reduced employment or increased cost and inconvenience of longer commutes around the Western Ring Route.)
- The Auckland volcanic field has erupted at least 53 times, and the first eruption occurred around 248,000 years ago. This implies that we can expect one eruption every ~4,700 years, on average. So there’s perhaps a 1% chance that an eruption happens within the next 40 years.
Let’s be generous and assume that the next volcano will definitely destroy the existing bridge while leaving an adjacent crossing untouched. Multiplying these figures together, we find that the actuarial value of a second, volcano-proof harbour crossing is: (1 year outage)*(140,000 workers)*($60,000/year)*(50% loss in income)*(1% probability of volcano) = $42 million.
Frankly, that’s not a lot of benefit compared with the cost of a new harbour crossing. This is obviously a rather crude hypothetical example, but so far it doesn’t look like we should place that much weight on resilience to low-probability events. Even if we made a higher estimate of the cost of a bridge-destroying natural disaster, it wouldn’t change the outcome very much as a disaster probably won’t happen within our evaluation period.
However, there are some other factors at work. The first is that people are risk-averse and as a result may be willing to pay “over the odds” to avoid low-probability events. (The existence of profits in the insurance industry is good evidence for this hypothesis – people usually pay more for insurance than insurance companies pay out in claims.)
Returning to the Auckland Harbour Bridge hypothetical, it might be the case that Aucklanders, especially those living on the North Shore, are happy to pay more than $42 million to avoid the unlikely outcome of a volcano cutting one link to the shore. But how much more?
It’s hard to say without data, but I suspect that while people might be willing to pay (say) 20% or 50% above the odds, they wouldn’t be willing to pay 1,000% more. This is probably a productive angle for research, possibly with surveys or psychological studies.
A second factor at work is that we might not be able to accurately predict the likelihood or impact of some events. Nassim Taleb popularised the concept of “black swan events”, which should really be called “white swan events” in Australia and New Zealand. He points out that we often lack a good understanding of the statistical distribution of risks, either because some events fall outside the range that we have previously observed, or because complex systems can produce a bunch of hard-to-predict indirect impacts.
I’m not going to deal with the probability distribution issue that Taleb raises – I’m not a risk expert! – but I’d like to come back to the point about indirect impacts in a later post. Essentially, as the paddle-boarder on Tamaki Drive shows, people have a lot of different ways to react to a transport outage, so the net cost may actually be lower than we might initially assume.
How do you think we should value resilience in our transport system?
The announcement of Auckland Transport’s new fare policy made me curious about the economics of fare policies, so I’m taking a quick look at them. In part 1 of this series, I argued that 100% cost-recovery isn’t a realistic goal for public transport. While charging public transport users for the full costs of their journey may seem appealing, it will result in the perverse outcome of increased congestion on the roads. In the absence of congestion pricing, subsidising public transport can be a useful “second best policy” to improve the efficiency of roads.
In other words, if you like driving, you should also like public transport subsidies, as they make your life a little bit easier.
However, this principle doesn’t tell us much about how we should price different types of public transport trips. For example, should people pay more to take longer journeys on public transport? Some public transport agencies, like the New York Subway or the Los Angeles PT system, don’t think so – they allow you to ride as far as you want for a flat fare. Others, like the San Francisco BART system and most transport agencies in New Zealand, charge higher prices for longer trips.
To give another example, should people pay more to travel at certain times of the day? Most transport agencies in New Zealand don’t think so – Auckland Transport charges users the same price during peak times and the middle of the day. But other agencies, such as Wellington’s rail system and the Brisbane public transport agency, do raise their prices during peak times.
So we have some choices available to us. What principles should we use to choose relative fares for different routes, once we’ve decided on an overall level of public transport subsidy?
In my view, it’s appropriate to charge a fare that accounts for the marginal cost of using the network at different times and in different locations. For example, if it costs twice as much to get people between points A and B as it does to get them between points A and C, then the trip between A and B should cost twice as much as the trip between A and C.
If we didn’t do that – i.e. if we set fares at the same level for those two trips – we’d expect people to demand more trips between A and B, which are expensive to provide, and fewer cheap trips between A and C. This can in turn make the whole system less efficient.
Similarly, there may be a case to vary prices by time of day. It tends to be more costly to provide public transport capacity to meet peak demands. This is because it’s necessary to buy buses (or trains) and hire drivers that run for two hours in the morning and evening and sit idle the rest of the time. But it might not be possible to go too far in this direction – after all, putting up peak fares too high means pushing more people back onto congested roads.
So if we set aside time-of-use pricing for the moment, we’re left looking at varying charges for different types of trips. In most cases, this means charging more for longer journeys than for shorter journeys. How can we do this?
One option is to use zone-based fares. This is what Auckland Transport has traditionally done, and what it’s proposing in its Simplified Fares policy. The advantage of zones is their simplicity and transparency. You can pinpoint your origin and destination on a map, and know exactly how much you will pay:
However, zone-based fares can result in some odd outcomes near boundaries. For example, under the zones above, if I travelled from Henderson to New Lynn – a four station journey – I’d pay for a single stage. But if I travelled from Fruitvale Road to Avondale – only two stations – I’d have to pay for two stages. Does it really make sense to pay more for a shorter journey just because it crosses a line on a map?
Perhaps it doesn’t. So one alternative would be to move to a fully distance-based fare structure. In effect, you’d pay based on the number of kilometres travelled, regardless of where you were going or how many transfers you made in the process. This has advantages – it eliminates boundary effects, for one – but it’s administratively complex and potentially confusing for users. For example: what happens to paper tickets, which are important for visitors and casual users?
How do you think that we should set prices for different types of public transport trips?
A few months back, Auckland Transport put out its new fare policy for consultation. The draft policy, which they call Simplified Fares, has two main elements:
- Standardised fare zones that ensure that journeys within or between zones cost the same regardless of whether you’re travelling by bus or rail [ferries are excluded]
- No transfer penalties between services, which is a key element in enabling a frequent connective network.
Those are indeed simple principles, but developing and implementing a fare policy is seldom simple. So the whole thing got me thinking: Why do public transport fares work the way they do? And could we do things differently?
As I’m curious, I figured that I should take a quick look at the economics of fare policies. Part one of the series looks at the biggest-picture question: Why do we subsidise public transport?
First, some background. In most developed-world cities, public transport systems are subsidised by taxpayers. Users pay some of the operating costs – ranging from as low as 10% to as high as 80% – but seldom all. In New Zealand, the national farebox recovery policy requires all regional transport agencies to cover 50% of their public transport costs from fares. However, data from the Ministry of Transport suggests that some agencies are closer than others to this target:
Is 50% the right number for all regions? I don’t know – and the answer depends in part on what other goals we’re trying to accomplish with public transport pricing. But it’s clear that some level of subsidy must be provided in order for the entire transport system to work efficiently.
To see why, we need to take a look at what economists call “second-best pricing”. According to Wikipedia, it can be desirable to impose a subsidy to “offset” an uncorrected market failure elsewhere:
In an economy with some uncorrectable market failure in one sector, actions to correct market failures in another related sector with the intent of increasing economic efficiency may actually decrease overall economic efficiency. In theory, at least, it may be better to let two market imperfections cancel each other out rather than making an effort to fix either one.
In transport, we have a situation where people have multiple options for getting around. They can drive, take the bus (or train), cycle, etc. In this situation, a price change in one market – say, a fare increase for public transport – can encourage people to switch to another mode instead of paying more.
As I argued in a recent post on congestion pricing, road space is usually not priced “efficiently”. All road users pay fuel taxes or road user charges based on the total number of kilometres driven or litres of petrol used. But they don’t pay more to drive on busy roads, where they impose delays on other drivers. As this diagram from a 2012 UK study on the external costs of driving shows, the last 10-20% of car trips impose significant costs on society.
Public transport can play a useful role in smoothing off the big spike at the right hand side of that chart, by providing a more space-efficient option for travelling on popular, congested routes. Another way of saying that is that in the absence of congestion pricing (and in the presence of other subsidies for driving, such as minimum parking requirements), higher public transport fares can result in a perverse outcome – additional congestion and delays for existing road drivers. This is shown in the following diagram:
Effectively, a failure to price roads efficiently means that we have to provide subsidies for public transport to prevent car commutes from being even more painful than they currently are. Public transport subsidies are, in that sense, subsidies for drivers. By making your neighbour’s bus fare cheaper, they in turn make your drive to work a bit easier.
Finally, it’s worth considering how we got into this situation. 80 or 100 years ago, public transport systems tended to cover their operating costs with fares. For example, Auckland’s tram system was profitable, if in need of maintenance and refurbishment, up until its removal in the mid-1950s. (Mees ref?) This changed, in large part, due to the introduction of subsidised motorways.
This article by Joseph Stromberg at Vox describes how the US interstate highway system was developed in the 1950s as an explicitly subsidised – i.e. not tolled – transport mode:
The first step was changing how roads were funded. In the 1930s, there were already privately owned toll roads in the East, and some public toll highways, like the Pennsylvania Turnpike, were under construction. But auto groups recognized that funding public roads through taxes on gasoline would allow highways to expand much more quickly.
They also decided to call these roads “free roads,” a term that was later replaced by “freeways.” Norton argues that this naming shift was essential in persuading the federal government — and the public — to shift away from tolls. “It started with calling the roads drivers pay for ‘toll roads,’ and calling the ones that taxpayers pay for ‘free roads,'” he says. “Of course, there’s no such thing as a free road.”
In other words, the “original sin” of transport subsidies was the construction of non-tolled highways paid for out of general tax revenues. This choice led in turn to a situation in which we must adopt “second best pricing” in public transport and offer an offsetting subsidy. I’m not necessarily opposed to this… but it does mean that I’m skeptical of complaints that buses and trains are subsidised.
What do you think we should do about public transport pricing?
Last week, I took a high-level look at the opportunity cost associated with Auckland’s car-centric transport system. Simply put, cars use up lots of land, and public transport, walking, and cycling don’t. At a time when we’re struggling to find space to accommodate the city’s residential and economic growth, this is likely to be increasingly inefficient.
For example, here’s a graph that shows, roughly speaking, the last 50 years of trends in traffic volumes (using the Auckland Harbour Bridge as a proxy) and land values (using national house prices as a proxy). In the last decade or two, demand for intensive land use has far outstripped demand for driving:
However, this isn’t always acknowledged in the activities of transport agencies. As Matt highlighted two weeks ago, Auckland Transport is currently proceeding with a plan to knock down a house (on the bottom left corner of the intersection, shaded in magenta) for an intersection widening project:
A bit earlier, Stu also pointed out an intersection design down in Hamilton that seems quite hazardous to people on foot. Now, there are certainly reasons to redesign – and even widen – intersections. But what worries me is that intersection layouts sometimes seem to be designed on auto-pilot, without any deep consideration of the conflicting values at play or the opportunity costs associated with particular designs.
Take, for example, this intersection at the junction of St Johns Road and College Road in Remuera. It’s large. Very large. Although there’s only a single lane in each direction on the roads in and out of the intersection, it widens to implausible dimensions in the intersection itself. I can only imagine what it’s like to try to cross the intersection on foot.
I asked my friend Lennart, who originally spotted this intersection, to show me how things could be done differently. He quickly sketched up a simplified design – shown in the green and magenta lines – that eliminated the big islands and the split lanes but still left enough room for buses to turn smoothly.
(Caveats: This is not necessarily a better design from a traffic engineering perspective – just a more space-efficient one. As we haven’t looked at traffic volumes, it’s difficult to say whether a signalised intersection or other safety treatments would be required if the slip-lanes were taken out.)
Overall, we found that there would be up to 2,000 square metres of space left over if the intersection was downsized. That’s enough space for three or four reasonably-sized houses on reasonably-sized lots.
Is this expenditure of space worth it? Would it be better to narrow the intersection and sell off the residual land for housing? Possibly. Possibly not. But no matter what the answer is, I hope that those questions are being asked of Auckland’s road designs.
What do you think of the space occupied by our intersections? Good, bad, indifferent?
Earlier this month, urban policy researcher Todd Litman published a useful summary of some of his new research into the cost of sprawl:
Our analysis indicates that by increasing the distances between homes, businesses, services and jobs, sprawl raises the cost of providing infrastructure and public services by 10-40 percent. Using real world data about these costs, we calculate that the most sprawled quintile cities spend on average $750 annually per capita on public infrastructure, 50 percent more than the $500 in the smartest growth quintile cities. Similarly, sprawl typically increases per capita automobile ownership and use by 20-50 percent, and reduces walking, cycling and public transit use by 40-80 percent, compared with smart growth communities. The increased automobile travel increases direct transportation costs to users, such as vehicle and fuel expenditures, and external costs, such as the costs of building and maintaining roads and parking facilities, congestion, accident risk and pollution emissions.
We estimate that in total, sprawl costs the American economy more than $1 trillion annually, or more than $3,000 per capita, and that Americans living in sprawled communities directly bear $625 billion in extra costs, and impose more than $400 billion in additional external costs. This is economically inefficient and unfair: it wastes valuable resources and imposes costs on people who do not benefit from sprawl.
These findings should not be particularly surprising to regular readers of Transportblog – or, indeed, to anyone with an elementary understanding of geometry. (Serving dispersed suburbs with network infrastructure is more expensive.) But the magnitude of the costs is impressive.
I was particularly struck by the following chart, which illustrates the amount of space required for various different transport modes. Litman estimates that each automobile requires a total of 80 to 240 square metres, mostly for parking. By comparison, walking, cycling, and public transport require less than 20 square metres per passenger:
As I’ve written before, space is expensive in cities, which means that we must use it efficiently. Moreover, the cost of space for cars is rising rapidly, while demand for driving is levelling off. In this situation, devoting more space to roads and parking – or preventing the re-use of road space and parking lots for other purposes – may represent a significant misallocation of resources:
So, we might ask: How much space have we misallocated as a result of our bias towards building roads rather than public transport and cycling options? And what else could we be doing with this space instead?
First, some data. As we tend to build road networks for peak demands, they tend to have spare capacity during the middle of the day and evenings. Consequently, I’m going to focus on trips taken in the morning peak. This is a conservative view on the space required for a car-based transport system, as cars used during off-peak times still require lots of parking.
According to modelling results reported by Wallis and Lupton (2013), in 2006 there were around 450,000 vehicle trips taken during the morning peak. (See Table 4.1 in their report.) Most of these are trips in single-occupant vehicles. How much space do we need to accommodate all these vehicles?
Based on Litman’s figures, travelling by car requires an average of 150 square metres of space – around 40 square metres of roads per car when moving and 110 square metres for parking. This implies that the 450,000 vehicles moving around during the morning peak occupy 67.5 square kilometres of space, including (at minimum) 18 square kilometres of roads.
That’s a lot of land. Urban Auckland covers a total area of around 544 square kilometres, which suggests that we’re using up around 12.4% of the city’s land area simply to move single-occupant vehicles during the morning peak and warehouse them during the day.
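The arithmetic above is easy to check. Here's a minimal sketch using the figures quoted in the post (the per-car space estimates are Litman's, the trip count is from Wallis and Lupton):

```python
# Back-of-envelope check of the figures above. All inputs are the
# post's stated assumptions, not independent estimates.
vehicles = 450_000         # morning-peak vehicle trips (Wallis & Lupton 2013)
space_per_car_m2 = 150     # total space per car: 40 m2 of road + 110 m2 of parking
road_per_car_m2 = 40
urban_area_km2 = 544       # approximate area of urban Auckland

total_km2 = vehicles * space_per_car_m2 / 1_000_000   # m2 -> km2
road_km2 = vehicles * road_per_car_m2 / 1_000_000

print(f"Total space: {total_km2} km2")                            # 67.5 km2
print(f"Roads alone: {road_km2} km2")                             # 18.0 km2
print(f"Share of urban area: {total_km2 / urban_area_km2:.1%}")   # 12.4%
```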
If anything, this is probably an under-estimate of the spatial cost of Auckland’s car-based transport system. A 2013 UN-Habitat report on streets as public spaces and drivers of urban prosperity found that Auckland devotes 14% of its land area to roads alone. Parking is likely to cost us even more space, as this map of Manukau central shows. Everything that’s not coloured in red or green is a carpark or a road:
But regardless of whether we’re dealing with 12% of the city’s land area or 30%, we’re talking about a lot of expensive space devoted to moving or storing cars. Even a modest reduction in the share of people travelling by car in the morning peak would save us a large amount of land.
Let’s say, for example, that we’d invested in better transport choices that enabled 10% of the people in cars to shift to public transport or cycling, which require around 10 square metres of space per person. If we had done so, we would have had an extra 6.3 square kilometres of land that didn’t need to be used for roads and carparks. [6.3km2=45,000 vehicles*(150m2-10m2)]
This is valuable land. Assuming an average land price of around $500 per square metre, it’s worth $3.2 billion. And it could be used for so many more things – housing, businesses, public parks, schools, etc – if it hadn’t been gobbled up by our space-hungry transport system. If it had been developed to the same density as Auckland’s average neighbourhood – around 43 residents per hectare – instead, it could have housed 27,000 people.
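The mode-shift arithmetic in the bracket above can be sketched in a few lines (all the inputs are the post's assumptions: a 10% shift, Litman's space figures, and round numbers for land price and density):

```python
# Sketch of the mode-shift arithmetic. Inputs are the post's assumptions.
shifted_vehicles = 45_000    # 10% of the 450,000 morning-peak cars
car_space_m2 = 150           # space per car (roads + parking)
pt_space_m2 = 10             # space per public transport / cycling passenger

freed_m2 = shifted_vehicles * (car_space_m2 - pt_space_m2)
freed_km2 = freed_m2 / 1_000_000            # 6.3 km2 of land freed up

land_price_per_m2 = 500
value = freed_m2 * land_price_per_m2        # ~$3.15 billion

density_per_ha = 43                          # Auckland's average residents/hectare
residents = freed_m2 / 10_000 * density_per_ha   # ~27,000 people housed
```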
In a city that’s struggling to find enough space to house its growing population, this amounts to a minor scandal. Since the 1950s, local and central governments have spent lavishly on roads and neglected public transport, walking, and cycling. Those decisions have inadvertently contributed to our housing woes today, as they’ve saddled us with a space-inefficient transport system and a shortage of developable land.
At the moment, local and central governments are looking at ways to get the best use out of their own properties. Auckland Council is setting up Development Auckland to manage and develop its substantial landholdings. At this year’s Budget, the Government announced a more hastily-developed plan to sell off a significant chunk of the land that it owns in Auckland for development.
Given their interest in the subject, they should also be thinking hard about how to minimise the “opportunity cost” of Auckland’s space-hungry car-based transport system. Investing in public transport and safe walking and cycling can allow us to move more people without gobbling up more valuable land.
What do you think about the spatial cost of our car-based transport system?
Last Thursday, the Government shut the door on the idea of road pricing for Auckland, saying that it would prefer to undertake “a year-long negotiation with the council on an agreed 30-year programme focusing on reducing congestion, and boosting public transport where that reduces congestion.”
The following day, the road/infrastructure lobby undertook a bit of a media blitz pushing for more construction. As part of that, we got sent this press release from the Auckland Chamber of Commerce:
12 June 2015
Auckland – defined by congestion
The Auckland Chamber of Commerce strongly supports the initiative of Government to seek a negotiation with Auckland Council on an agreed 30-year programme focusing on reducing congestion, and boosting public transport where that reduces congestion.
Michael Barnett, head of the Auckland Chamber was responding to news reports that Transport Minister Simon Bridges and Finance Minister Bill English have sent Auckland Mayor Len Brown a letter proposing a negotiation and ruling out allowing Auckland to bring in motorway charges to help fund transport projects.
“The Auckland business community overwhelmingly agrees that immediate action to address the City’s transport congestion is required,” said Mr Barnett.
In short, congestion is bad. Really bad. It’s a crisis deserving immediate action… in the form of a year-long talk-fest between local and central government.
Of course, it’s difficult to find reliable empirical evidence that Auckland’s congestion levels really are that bad. Average commute times are a cruisy 25 minutes – well below many other cities. NZTA research has found that the actual cost of congestion is neither (a) largely a monetary cost for businesses nor (b) anywhere near as large as people claim. While people like to claim that congestion costs “billions” annually, a more realistic figure is $250 million. The one source that does claim that Auckland has world-beating congestion, the TomTom index, has serious methodological flaws.
Nevertheless. Even though its empirical basis is shaky, the Auckland Chamber of Commerce’s recommendations for projects are not crazy. In fact, they seem to be on Auckland Transport’s investment radar already:
A good outcome from Government and Auckland Council working together would be a package of fast-tracked projects aimed at:
- Improving public transport services’ reliability and frequency
- Getting as much use as possible out of the transportation system we have
- Removing parking from major arterial routes to create more usable road space
- More high-occupancy lanes to encourage a reduction in sole-occupancy cars
- Strengthened integrated traffic management covering arterials and motorways
- Expanding park-and-ride facilities at main trunk rail and busway stations
But even if the ideas are sensible, “fast-tracking” them will be expensive. We simply can’t build everything at once. Even if the Government were willing to give Auckland Council more tools to raise revenue – which is unlikely, given its refusal to consider road tolls – capacity constraints in the civil engineering business would make it hard to do much more.
To its credit, the Chamber seems to recognise this and agree that we need to prioritise use of our scarce resources:
“Good leadership is about partnership,” said Mr Barnett. “It is about understanding that we have limited resources, so we must learn to prioritise correctly,” he concluded.
Which leads me to my point. If congestion is such a big problem, why don’t we use congestion pricing to make sure that we’re prioritising use of our road network efficiently?
I find it very strange that business groups aren’t more enthusiastic about this idea. If congestion is really as bad as they say it is, why aren’t they loudly advocating a policy solution that would actually address it? (Road-building doesn’t work.) Surely freight companies and construction firms would benefit from the resulting reductions in traffic, even if they had to pay a bit for them.
In my experience, congestion pricing is one of those ideas that virtually all economists agree on. It’s like free trade in that regard – there might be some disagreement about the fine details, but most agree that it’s a good idea. But it hasn’t gotten as much attention in other quarters.
So here, for example, is William Vickrey, who won the Nobel Memorial Prize in Economics for his pioneering work on the topic:
Known among economists as “the father of congestion pricing,” Professor Vickrey sees time-of-day pricing as a classic application of market forces to balance supply and demand. Those who are able can shift their schedules to cheaper hours, reducing congestion, air pollution and energy use — and increasing use of roads or other utilities. “You’re not reducing traffic flow, you’re increasing it, because traffic is spread more evenly over time,” he has said. “Even some proponents of congestion pricing don’t understand that.”
He has admitted that his ideas have sometimes not been well received by those who set public policy because, “People see it as a tax increase, which I think is a gut reaction. When motorists’ time is considered, it’s really a savings.”
And here’s urban economist Edward Glaeser commenting that more megaprojects aren’t the best fix for transport issues:
Infrastructure investment only makes sense when there is a clear problem that needs solving and when benefits exceed costs. U.S. transportation does have problems — traffic delays in airports and on city streets, decaying older structures, excessive dependence on imported oil — but none of these challenges requires the heroics of a 21st century Erie Canal. Instead, they need smart, incremental changes that will demonstrate more wisdom than brute strength…
IMPLEMENT CONGESTION PRICING: We should expect drivers to pay for more than just the physical costs of their travel. We should also expect them to pay for the congestion that they impose on other road users. If you have a scarce commodity, whether groceries or roads, and you insist on charging prices below market rates, the result will be long lines and stock outs, like those that bedeviled the Soviet Union decades ago. Yet U.S. roads are still running a Soviet-style transport policy, where we charge too little for valuable city streets. Traffic congestion is the urban equivalent of a stock out.
And here’s economist Matthew Turner, who co-authored one of the most comprehensive studies of “induced traffic”, which I discussed here:
So what can be done about all this? How could we actually reduce traffic congestion? Turner explained that the way we use roads right now is a bit like the Soviet Union’s method of distributing bread. Under the communist government, goods were given equally to all, with a central authority setting the price for each commodity. Because that price was often far less than what people were willing to pay for that good, comrades would rush to purchase it, forming lines around the block.
The U.S. government is also in the business of providing people with a good they really want: roads. And just like the old Soviets, Uncle Sam is giving this commodity away for next to nothing. Is the solution then to privatize all roads? Not unless you’re living in some libertarian fantasyland. What Turner and Duranton (and many others who’d like to see more rational transportation policy) actually advocate is known as congestion pricing.
And here’s the OECD in its latest country report on New Zealand:
A just-released OECD economic survey blames years of under-investment in infrastructure for the city’s roading problems. It calls for a mix of tolls and congestion charges to alleviate peak-hour traffic pressure and help fund new roads and more public transport.
“Placing a cost on travel during peak periods could incentivise drivers to travel at different times (off-peak), if they are not required to be on the roads, or could encourage more carpooling and use of public transportation,” the report says.
In short, if you’re worried about congestion, you need to take congestion pricing seriously. There are undoubtedly reasons why we may not want to implement congestion pricing, ranging from technical feasibility to equity concerns. But in my view it’s ridiculous for business groups and politicians to get all up in arms about the issue – and promptly rule out one of the few realistic solutions.
What do you think about congestion pricing?
AT has sent us more details about Light Rail. The project is still at the investigation stage, but more of their thinking is slowly emerging:
Clearly access to Wynyard is the most difficult part of this route. Queen St is so LRT-ready, and at last there’s a use for that hitherto hopeless little bypass: Ian McKinnon Drive. The intersection of New North and Dom Rd will need sorting for this too. Is there nothing that LRT doesn’t fix!
They are planning for big machines, 450 pax is at the top end of LRVs around the world.
At 66m, these are either the biggest ever made, or I guess more likely 2 x 33m units. 33m is a standard dimension, and enables flexibility of vehicle size.
The contested road space of Dominion Rd. Light Rail will create the economic conditions for up-zoning the buildings here: apartments and offices above retail along the strip. But the city will have to make sure that the planning regulations support this. Otherwise it will be difficult to justify the investment.

Something for those in the area who reflexively oppose any increase in height limits, reduction of mandated parking, or increases in density and site coverage rules to ponder: if they prefer to keep the current restrictions, they need to be aware that they are also choosing to reject this upgrade. More buses will be as good as it gets, and AT’s investment will have to go elsewhere.

I’m not referring to the large swathes of houses back from the arterials – there’s no need to change these. It’s the properties along the main routes themselves that need to intensify; in any case, these are the places that add new amenity for those in the houses. And not just shops and cafes, but also offices with services and employment for locals, and apartments in a variety of sizes and prices. Real mixed use, like the world that grew up all along the original tram system city-wide, before zoning laws enforced the separation of all these aspects of life.
As someone who uses statistics (and statistical methods) on a regular basis, I often find that the “headline figures” that get all the attention obscure as much as they reveal. For example, reporting a single benefit-cost ratio (BCR) for a project may conceal uncertainty about potential outcomes.
When talking about data, there’s a strong tendency to focus on the average value, without considering the variation in outcomes. So, for example, we get news articles like this:
Auckland house prices climbed to a fresh record last month, while the number of sales dropped from March’s peak, according to Barfoot & Thompson.
The average sale price rose to $804,282 in April, from March’s previous record $776,729, the city’s largest realtor said.
Averages are certainly useful, but it would also be helpful to know more about how the distribution of house values has changed. For example: perhaps the average is being dragged up by the sale of a small number of really expensive homes? It’s hard to know.
In fairness, the article does provide this data suggesting that there is a fair range of prices. But we don’t know whether the number of homes sold for under $500,000 is increasing, decreasing, or staying the same:
“157 homes sold during the month went for under $500,000, which represents one in seven of all homes sold. There is a good choice of homes in this price category but LVRs often mean potential buyers cannot meet the home deposit requirements.”
As an illustration of why we can’t rely solely upon measures of central tendency, such as the mean or median value, consider two hypothetical cities:
- City A has an average house price of $500,000, and a standard deviation in house prices of $50,000. (As a rule of thumb, if your data follows a normal distribution, 95% of values will be found within two standard deviations of the average. In other words, in city A, 95% of houses are sold for between $400,000 and $600,000.)
- City B, by contrast, has an average house price of $600,000 and a standard deviation in house prices of $150,000. (Implying that 95% of houses are sold for between $300,000 and $900,000.)
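For readers who want to verify these shares, the normal CDF can be computed with nothing but the standard library's error function. The two cities' parameters are as defined above; everything else is just the maths of the normal distribution:

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """P(X <= x) for a normal distribution with the given mean and sd."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

# The "95% within two standard deviations" rule of thumb, for city A:
within_2sd = (normal_cdf(600_000, 500_000, 50_000)
              - normal_cdf(400_000, 500_000, 50_000))
print(f"Within 2 sd: {within_2sd:.1%}")   # ~95.4%

# Share of homes selling below $350,000 in each city:
print(f"City A: {normal_cdf(350_000, 500_000, 50_000):.2%}")   # ~0.1%
print(f"City B: {normal_cdf(350_000, 600_000, 150_000):.2%}")  # ~4.8%
```

This is where city B's "fatter tail" shows up in the numbers: cheap homes are roughly forty times more common in B than in A, despite B's higher average price.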
I’ve graphed the distribution of house prices in these two cities below. City A is in blue, while city B is in red.
We can immediately see two things. First, the average house in A – found at the peak of the bell curve – is cheaper than the average house in B.
A second key fact, however, is that B actually offers more affordable houses overall, in spite of its higher average prices. This can be seen pretty easily on the chart – B has a much fatter “tail” of low-priced houses than A does.
Let’s think about what these two cities offer for households on lower incomes. Consider what house-hunting looks like for a household earning $50,000 a year.
If these people were basing their decisions on where to live on average house prices alone, they’d clearly prefer to live in city A, where average prices are $100,000 lower. But once they got there, they’d have a lot of trouble finding a home that they could afford.
Because city A has so little variation in house prices, it’s hard to find any houses that sell for less than $400,000. Assuming a 10% down-payment and a 6% mortgage rate, our household would have to pay $26,000 in mortgage repayments every year for the cheapest house on the market. Over 50% of their annual income!
By contrast, if they’d looked behind the headline figures on average house prices, they would find that city B offers many more affordable homes. Around 5% of homes in city B sell for less than $350,000, and it’s possible to find homes for $300,000 or less.
Under the same mortgage assumptions, our household would have to pay around $19-22,000 in mortgage repayments every year to live in a cheaper house in city B. This still isn’t great – it’s around 40% of household income – but it’s better.
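Here's a minimal sketch of the repayment arithmetic, assuming a standard amortising loan with a 30-year term (the deposit and interest rate are as stated above; the term is my assumption, as the post doesn't specify one):

```python
# Annual repayment on a standard amortising mortgage.
# Assumes a 30-year term; deposit and rate follow the figures in the post.
def annual_repayment(price, deposit_rate=0.10, rate=0.06, years=30):
    principal = price * (1 - deposit_rate)
    r = rate / 12                  # monthly interest rate
    n = years * 12                 # number of monthly payments
    monthly = principal * r / (1 - (1 + r) ** -n)
    return monthly * 12

print(round(annual_repayment(400_000)))  # cheapest house in city A: ~$26,000/yr
print(round(annual_repayment(300_000)))  # cheap house in city B: ~$19,000/yr
print(round(annual_repayment(350_000)))  # ...up to ~$22,700/yr
```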
In other words, although the first city seems more affordable based on its average house prices, it is actually likely to be considerably less affordable for many of the real human beings that are trying to live in it.
How do you think we should measure and report on house prices?