This is the fourth installment in an ongoing series on the politics and economics of zoning reform. In the previous post, I took a look at the demographic factors underpinning variations in submission rates on the Auckland Unitary Plan between different parts of the city. That analysis showed that age and income matter quite a lot – variations in median personal incomes and the share of residents over the age of 65 explain a large share of the variation in submission rates from local boards.
In a representative democracy, voting matters. On the whole, politicians tend to respond to the interests and desires of the people they represent. But there’s a caveat: if you don’t vote, they don’t have a good reason to listen to you.
This is important, because elected representatives get to decide how to address a lot of important issues. For instance, local governments’ choices affect:
- The availability, location, design, and price of housing – zoning rules can either facilitate or thwart peoples’ desires for a place to live
- The qualities and locations of the places where we work, shop, and play
- How we get around – local governments make decisions about investments in streets, public transport, walking, and cycling
- The quality of our local environment – air, water, and soil quality is regulated by local government.
Unfortunately, as I found when I looked at data on voter turnout in local government elections, people are increasingly disengaged from local elections. Across New Zealand, turnout appears to be structurally declining:
Why is this happening?
A useful starting place is to ask what we know about the reasons why voter turnout varies between different places. According to statistics published by the Department of Internal Affairs, in 2013 people were much more likely to vote in some local elections than others:
- Mackenzie District had the highest voter turnout – 63.7% of registered voters – followed by Buller District (62.4%) and Wairoa District (62.0%)
- Waikato District had the lowest voter turnout (31.6%) followed by Auckland Council (34.9%) and Waimakariri District (35.0%).
Why do outcomes vary so widely? As I did in my analysis of Unitary Plan submission rates, I’m going to use OLS regression to investigate a set of potential explanations. OLS regression is a statistical technique for investigating relationships between multiple explanatory variables and a single outcome variable – voter turnout in this case. Here are the hypotheses I want to test:
- H1: Size matters. Councils serving larger populations may feel more remote to residents, so we would expect them to be less engaged with the community and less likely to vote.
- H2: Council functions matter. Regional councils and district councils serve different functions, which might be more or less ‘salient’ to voters. Regional councils are responsible for regulating environmental outcomes while district councils regulate land uses and provide roads.
- H3: Voting systems matter. Six councils use single transferable vote (STV) rather than first-past-the-post (FPP) as a voting system. STV is a bit more complicated to understand, but it allows people to vote their conscience when choosing between multiple candidates and hence may result in more competitive, relevant races.
- H4: Competitiveness matters. Elections that are more closely contested are more likely to draw higher turnout. I’ve used candidates standing per open position as a proxy measure for competitiveness, although as the Auckland mayoral race shows, that’s not necessarily always true.
- H5: Age matters. As older people are more likely to turn out to vote, we would expect local governments with higher median ages to have higher voter turnout.
- H6: Home ownership matters. If home ownership is positively correlated with democratic engagement, we’d expect areas with higher home ownership rates to have higher voter participation.
The key findings are reported in the following table. For the non-statisticians in the audience, here’s what this quick analysis says about the hypotheses above:
- It provides support for H1 and H2 – larger councils tend to have lower turnout, while regional councils and unitary councils tend to have higher turnout than district councils.
- It also provides support for H5 – councils with higher median age tend to have higher turnout.
- It does not provide support for the other hypotheses. In 2013, at any rate, neither the use of STV, nor the number of candidates per open position, nor the share of renting households was associated with a statistically significant difference in voter turnout.
[Technical note: I found that there was low multicollinearity between these variables, meaning that you couldn’t predict the majority of the variation in, say, home ownership rates from the other variables in the model. This suggests that there is little risk that multicollinearity is masking the impact of any individual variable.]
[Table: OLS regression results. Dependent variable: log(2013 voter turnout). Explanatory variables: Regional Council (1), Unitary Council (1), STV voting system (2), log(Candidates per position), log(Share of households renting). Residual Std. Error: 0.112 (df = 67). F Statistic: 8.476*** (df = 7; 67). *p<0.1; **p<0.05; ***p<0.01. (1) Relative to District Council. (2) Relative to FPP.]
In short, when we’re looking at determinants of voter turnout in local government elections, size matters, council type matters, and age matters. But there are two important caveats to this analysis:
- First, this model wasn’t very good at explaining variations in voter turnout. The adjusted R² statistic of 0.414 indicates that this model only explains 41.4% of the total variation in voter turnout between different councils. In other words, the majority of variation is due to other, unobserved factors.
- Second, this is a “cross-sectional” model that tries to predict variations between places at a point in time. It can’t tell us much about how voter turnout might change if we adopted different policies. For instance, we can’t conclude, on the basis of this model, that we should reduce council size in order to raise turnout.
To illustrate the second point, let’s take a look at how voter turnout has changed in Auckland over the last three elections. In 2010, Auckland Council was amalgamated from eight predecessor councils. In effect, it got a lot larger. So did this reduce voter turnout?
The DIA voter turnout data doesn’t seem to support that story, at least not in such a simplistic form. Here’s a chart comparing local election turnout in Auckland with voter turnout in the rest of New Zealand from 2007 to 2013. As this shows, voter turnout in Auckland wasn’t that flash prior to amalgamation – in all predecessor councils except Rodney, it substantially lagged behind turnout in the rest of New Zealand.
Turnout rose significantly after amalgamation in 2010 before falling back again. This probably had more to do with the dynamics of those elections than the nature of the new council. In 2010, Aucklanders were more aware of the elections, which featured a competitive race for mayor. The mayoral candidates – Len Brown and John Banks – were both well-known local body politicians with genuinely different visions for the city.
In 2013, those factors probably weren’t as salient. The mayoral race was less competitive, and the new council had gotten on with doing all the million soporific tasks of local government.
So what does all this data and analysis mean, anyway? What should we do differently to get higher turnout?
I would draw two key conclusions:
- First, demographics matter to voter participation. Different groups vote at different rates, and councils with older populations tend to have higher turnout. This suggests that any attempts to address low voter turnout have to address barriers faced by different types of people.
- Second, there aren’t any obvious structural fixes related to council size, structure, or the like. Most variation in turnout between councils isn’t explained by the factors I’ve measured here, and the evidence for reducing council sizes as a way of raising turnout doesn’t seem too robust. (See the discussion of changes in voter turnout after the late-1980s amalgamations on page 22 of this document.)
In short, if we want a durable solution to low turnout rates, we need to look at some other, harder-to-measure factors, like the information available to people about local elections. But that’s a topic for the next installment.
This week, the Herald on Sunday published an article calling out a dangerous new practice: walking under the influence of a smartphone. According to them, careless walking causes literally dozens of injuries a year and should possibly be criminalised:
Now legislation has been introduced in New Jersey that would slap a US$50 ($72) fine and possible jail time on pedestrians caught using phones while they cross. And in the German city of Augsburg, traffic lights have been embedded in the pavement – so people looking down at their phones will see them.
The Herald on Sunday carried out an unscientific experiment at the busy intersection of Victoria and Queen Sts in central Auckland during the lunchtime rush to discover the scale of the problem here. Observing one of the corners, between 1pm and 1.30pm, we spotted 39 people using their cellphones while crossing.
Some people looked up briefly while crossing. Others kept their heads down, oblivious to what was going on around them.
In the past 10 years, the Accident Compensation Corporation has paid out more than $150,000 for texting-related injuries to a total of 272 Kiwis.
About 90 per cent of injuries were a result of people tripping, falling or walking into things while texting.
Incidentally, I have to admit some guilt here. While I don’t usually walk under the influence of a smartphone, I will often walk around reading a book – a habit I picked up during university. In over a decade of distracted walking, I’ve never fallen over, walked into anything, walked in front of a car, or walked into anybody else.
Let’s take the Herald’s suggestions seriously, and ask whether there is a case to ban other activities that risk injury to participants. Their implied threshold for “enough harm to consider regulation” appears to be around 27 injuries a year, costing ACC at least $15,000.
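That threshold comes straight from the ACC figures quoted above, averaged over the ten-year period:

```python
# Figures quoted in the Herald on Sunday article
claims = 272          # texting-related ACC claims over 10 years
payout_nzd = 150_000  # total ACC payout over the same period, NZD

years = 10
print(claims / years)      # ~27 injuries per year
print(payout_nzd / years)  # $15,000 per year
```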
What else fails that test?
I went to ACC’s injury statistics tool to get a sense. Helpfully, they break out injury claims (and the cost thereof) by cause, activity, and a range of other characteristics.
Here’s a table summarising some of the sports that should be considered for a ban. Rugby and league are obvious candidates, of course, as they result in tens of thousands of claims every year and a total cost in the tens of millions. But would you have suspected that humble, harmless lawn bowls was so hazardous? The sport of septuagenarians injures over 1,000 people a year and costs ACC $1m. Likewise with dancing, golf, and fishing. They’re all too dangerous to be allowed. It’s a miracle that we’ve survived this long with all of this harmful physical activity occurring.
[Table: ACC injury claims by sport – average new claims per annum and average annual cost, 2011-2015.]
But it doesn’t stop with sports. Your home is full of seemingly innocuous items that are eager to kill or maim you. Your stove, for example. Boiling liquids cause almost 5,000 injuries a year, costing ACC $1.9 million. We should definitely ban home cooking. Leave it to the professionals, for pity’s sake! Lifting and carrying objects at home is even more dangerous – over 100,000 claims a year. So don’t pick up that tea-tray or box of knick-knacks: call in someone who’s suitably qualified for handling such dangerous objects.
And let’s not even mention the toll taken by falls, except to strenuously argue for a ban on showers, bathroom tiles, and private ownership of ladders.
[Table: ACC injury claims by cause of accident – boiling liquids (at home), lifting/carrying objects (at home), falls (at home), driving-related accidents (on roads/streets) – average new claims per annum and average annual cost, 2011-2015.]
Finally, it’s important to remember an important bit of context that the Herald doesn’t mention: Distracted walking is a far, far lesser danger than driving cars (distracted or not). In the average year, ACC receives 13,300 claims for driving-related accidents and pays out a total of $173 million for people who have been injured or killed. That far, far exceeds the injury toll associated with texting while walking.
On the whole, you’re more likely to be killed or injured while in a car than you are while walking. This chart, taken from a Ministry of Transport report on “risk on the road”, shows deaths or injuries in motor vehicle crashes per million hours spent travelling. Drivers experience 8 deaths/injuries per million hours. The two safest modes are walking (4.6 deaths/injuries per million hours) and public transport (0.7).
Because different travel modes are substitutes, measures to discourage walking – e.g. by penalising people who combine walking with smartphone use – may have the unintended consequence of killing or injuring more people.
[As an aside, this chart presents a somewhat misleading picture of cycle safety. People on bicycles experience 31 deaths or injuries per million hours – considerably higher than driving. However, drivers, not cyclists, are at fault in the majority of cycle crashes. According to another recent MoT report, cyclists were primarily responsible for only 22% of crashes. Drivers were partially or fully at fault in the remaining 78% of crashes.
Consequently, if we provided safe cycle infrastructure that kept people on bikes away from people in cars, cycling would get a lot safer. If we could completely eliminate the risk of people on bikes being hit by cars, cycling would be about as safe as driving.]
To conclude, there are two things that the statistics teach us.
The first is that although injuries and ACC claims are bad, it’s essential to put risks in perspective. And the relevant perspective is this: Walking is a safe mode of travel. It’s remained safe in spite of the invention of the smartphone and the existence of hoons like me who walk around with their nose in a book.
It’s always worth looking for effective ways to improve safety. That’s why Transportblog’s advocated for safe, separated cycleways, and also why it’s taken a positive view on cost-effective investments to improve road safety, like the recent announcement of safety improvements to SH2. But it’s also important to remember that the best way to improve safety is to make it easier to travel in comparatively safe ways. Like walking and public transport.
The second lesson is that there are many activities that can injure us, from rugby to lawn bowls to cooking. Walking while texting is a recent invention, so it may seem newsworthy. But it’s only one of the many hazards that people choose to expose themselves to. If you’re not living in a padded room, you’re probably risking your life in some way or another.
As humans, we’re very prone to focus on risks from new activities while ignoring the effects of things that are already common. Status quo bias is a very real thing – and it doesn’t just apply to transport reporting. It’s the reason why people can, say, oppose new three-storey apartment buildings while being perfectly comfortable with the three-storey houses next door to them.
What risks do you think we should pay more (or less) attention to?
Every month we report on what’s happening with public transport patronage, but Auckland Transport also report on many other metrics, such as how roads are performing. In this post I’ll look at some of those other metrics.
Instead of just measuring traffic volumes, AT use a measurement called Arterial Road Productivity, which is based on how many vehicles use a road, how fast they’re travelling, and how many people they carry. To me this seems like a poor metric, primarily because it only counts people in vehicles. It could also be read as encouraging bigger, faster roads as a way of improving the score, which goes against many of the city’s wider goals. That said, another way to improve the result is to lift vehicle occupancy, so bus lanes that speed up buses carrying more people will also improve productivity. Regardless, it seems that AT is performing quite well and is above target.
The second metric is AM Peak Arterial Road Level of Service. As you can see, even at the busiest time of day around 80% of arterials aren’t considered congested.
The next set of charts looks at how a number of key freight routes are performing. For each of these routes AT have a target travel time. As you can see, for almost all of the routes the target has been at least met, and on some, such as Kaka St/James Fletcher Dr/Favona Rd/Walmsley Rd, it has been significantly exceeded. If the results here are indicative of other parts of the road network, then they certainly don’t support the freight industry’s calls for significant projects such as the East-West link.
Parking obviously plays a big role in transport, and AT measure occupancy rates for both on-street and off-street carparks. On-street parks are only measured quarterly, with the result based on the top four busiest hours of the day at each of three sites around the city centre. The last survey occurred in August and, as you can see, occupancy is at the upper end of AT’s target range. This suggests there’s possibly room to increase prices to better manage demand.
Occupancy at AT’s off-street carparks has dropped quite a bit recently, which is almost certainly attributable to the change in parking prices at the beginning of August.
One area that isn’t looking great is road safety with the number of deaths and injuries 6% above target.
While AT don’t publish monthly traffic volumes, the NZTA does for some selected state highways. The one we watch most closely is the Harbour Bridge, which is currently experiencing growth of around 1% p.a. Compared with the other state highways that the NZTA publishes, this appears incredibly low, as most are experiencing growth of around 5% annually – although off a lower base.
Overall – with the exception of road safety – it seems that our local roads are not performing too badly.
As someone who uses statistics (and statistical methods) on a regular basis, I often find that the “headline figures” that get all the attention obscure as much as they reveal. For example, reporting a single benefit-cost ratio (BCR) for a project may conceal uncertainty about potential outcomes.
When talking about data, there’s a strong tendency to focus on the average value, without considering the variation in outcomes. So, for example, we get news articles like this:
Auckland house prices climbed to a fresh record last month, while the number of sales dropped from March’s peak, according to Barfoot & Thompson.
The average sale price rose to $804,282 in April, from March’s previous record $776,729, the city’s largest realtor said.
Averages are certainly useful, but it would also be helpful to know more about how the distribution of house values has changed. For example: perhaps the average is being dragged up by the sale of a small number of really expensive homes? It’s hard to know.
In fairness, the article does provide this data suggesting that there is a fair range of prices. But we don’t know whether the number of homes sold for under $500,000 is increasing, decreasing, or staying the same:
“157 homes sold during the month went for under $500,000, which represents one in seven of all homes sold. There is a good choice of homes in this price category but LVRs often mean potential buyers cannot meet the home deposit requirements.”
As an illustration of why we can’t rely solely upon measures of central tendency, such as the mean or median value, consider two hypothetical cities:
- City A has an average house price of $500,000, and a standard deviation in house prices of $50,000. (As a rule of thumb, if your data follows a normal distribution, 95% of values will be found within two standard deviations of the average. In other words, in city A, 95% of houses are sold for between $400,000 and $600,000.)
- City B, by contrast, has an average house price of $600,000 and a standard deviation in house prices of $150,000. (Implying that 95% of houses are sold for between $300,000 and $900,000.)
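Under the normality assumption, the share of homes below any price threshold in these two hypothetical cities follows directly from the normal CDF. A quick sketch using Python’s standard library:

```python
from statistics import NormalDist

# The two hypothetical cities from the text: (mean, standard deviation)
city_a = NormalDist(mu=500_000, sigma=50_000)
city_b = NormalDist(mu=600_000, sigma=150_000)

# Share of homes selling below a given price threshold
threshold = 350_000
share_a = city_a.cdf(threshold)
share_b = city_b.cdf(threshold)

print(f"City A: {share_a:.1%} of homes under ${threshold:,}")
print(f"City B: {share_b:.1%} of homes under ${threshold:,}")
```

Despite its lower average price, city A has essentially no homes under $350,000 (about 0.1%), while city B has around 5% – the “fat tail” effect described below.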
I’ve graphed the distribution of house prices in these two cities below. City A is in blue, while city B is in red.
We can immediately see two things. First, the average house in A – found at the peak of the bell curve – is cheaper than the average house in B.
A second key fact, however, is that B actually offers more affordable houses overall, in spite of its higher average prices. This can be seen pretty easily on the chart – B has a much fatter “tail” of low-priced houses than A does.
Let’s think about what these two cities offer for households on lower incomes. Consider what house-hunting looks like for a household earning $50,000 a year.
If these people were basing their decisions on where to live on average house prices alone, they’d clearly prefer to live in city A, where average prices are $100,000 lower. But once they got there, they’d have a lot of trouble finding a home that they could afford.
Because city A has so little variation in house prices, it’s hard to find any houses that sell for less than $400,000. Assuming a 10% down-payment and a 6% mortgage rate, our household would have to pay $26,000 in mortgage repayments every year for the cheapest house on the market. Over 50% of their annual income!
By contrast, if they’d looked behind the headline figures on average house prices, they would find that city B offers many more affordable homes. Around 5% of homes in city B sell for less than $350,000, and it’s possible to find homes for $300,000 or less.
Under the same mortgage assumptions, our household would have to pay around $19-22,000 in mortgage repayments every year to live in a cheaper house in city B. This still isn’t great – it’s around 40% of household income – but it’s better.
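The repayment figures above can be reproduced with the standard amortisation formula. This sketch assumes a 30-year table mortgage (the term isn’t stated above), along with the 10% deposit and 6% interest rate:

```python
def annual_mortgage_payment(price, deposit_rate=0.10, interest=0.06, years=30):
    """Annual payment on a standard amortising (table) mortgage."""
    principal = price * (1 - deposit_rate)
    return principal * interest / (1 - (1 + interest) ** -years)

# Cheapest houses in city A: roughly $26,000 a year
print(f"${annual_mortgage_payment(400_000):,.0f}")

# Cheaper end of city B: roughly $19,600 a year
print(f"${annual_mortgage_payment(300_000):,.0f}")
```

Changing the assumed term or rate shifts the absolute numbers, but not the comparison: the cheaper tail of city B’s market stays meaningfully more affordable than anything available in city A.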
In other words, although the first city seems more affordable based on its average house prices, it is actually likely to be considerably less affordable for many of the real human beings that are trying to live in it.
How do you think we should measure and report on house prices?