
The very short-run relationship between exchange rate volatility and exports: Evidence from Iceland

March 10, 2017

Does exchange rate volatility negatively affect exports? This question is of great importance to policymakers, especially in small open economies, which often rely heavily on exports and often face a choice of exchange rate regimes. If volatility is found to constrain exports, that would provide an argument in favor of an exchange rate regime in which volatility may be subdued, i.e. a currency peg. If volatility does not negatively affect exports, such arguments are less valid. Another, equally important, question turns the causal relationship on its head: to what extent is exchange rate volatility caused by changes in exports?

In this article, I contribute to the discussion by studying the relationship between exchange rate volatility and goods exports in Iceland. The recent economic history of Iceland has been characterized by different exchange rate regimes and several episodes of turmoil in the currency market. Another interesting aspect of this case study is that the supply of Iceland's goods export industries is by nature relatively inelastic. I focus on short-run effects using high-frequency data.

The effect of exchange rate volatility on exports has been extensively studied. Various estimation methods have been employed in the literature, but error correction models seem to be the most popular. Researchers are now increasingly addressing the issue using sector-level and firm-level data (Héricourt and Poncet, 2013; Serenis and Tsounis, 2014). Estimates of the effect of exchange rate volatility on exports range from significantly negative (e.g. Asseery and Peel, 1991) to small (e.g. Bahmani-Oskooee, Harvey and Hegerty, 2013).

In this article, I propose a relatively straightforward method to test for the short-run effect of exchange rate volatility on exports. Daily nominal exchange rates and monthly exports are de-trended using a Hodrick-Prescott filter. Within each month, the standard deviation of the cyclical, or residual, component of the exchange rate is calculated. This is used as a measure of exchange rate volatility, and detrended monthly exports are regressed on it along with control variables which pick up the annual cyclical component of exports and the short-run effect of exchange rate appreciation or depreciation.
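The construction just described can be sketched in a few lines of Python. This is a minimal illustration on simulated data, not the article's actual series (which uses the CBI trade-weighted index); the HP filter from statsmodels is assumed as the de-trending tool:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Simulated daily log exchange rate: a slow trend plus noise
# (hypothetical data standing in for the CBI trade-weighted index).
rng = np.random.default_rng(0)
dates = pd.bdate_range("2005-01-03", periods=500)
log_fx = pd.Series(
    4.6 + 0.0002 * np.arange(len(dates)) + rng.normal(0, 0.005, len(dates)),
    index=dates,
)

# De-trend with an HP filter; the article uses a smoothing
# parameter of 10 million for daily data.
cycle, trend = hpfilter(log_fx, lamb=1e7)

# Volatility measure: standard deviation of the cyclical
# component within each calendar month.
monthly_vol = cycle.groupby(cycle.index.to_period("M")).std()
```

The resulting `monthly_vol` series is what would then enter the export regressions as the main explanatory variable.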

Crucially, I achieve identification by using variation in Iceland's exchange rate regime as a source of exogenous variation in exchange rate volatility. Finally, I ask the other question: whether exports affect exchange rate volatility.

Exchange rate volatility in Iceland is found to be positively and significantly associated with the cyclical value of goods exports within a month in the period 1999-2015. When instrumental variables are used in order to address endogeneity, I do not find a significant short-run effect of exchange rate volatility on goods exports. This finding is not surprising given the nature of the Icelandic economy. Furthermore, I find no evidence that exports negatively affect short-run exchange rate volatility.

Background, data and hypotheses

Iceland is a very small, open economy. It has an independent currency, the Icelandic króna (ISK), whose value in terms of a trade-weighted basket of foreign currencies is calculated daily by the Central Bank of Iceland (CBI). For a large part of the 20th century, the market for the ISK was distorted due to capital controls and government interventions. This changed in the 1990s and early 2000s, and from March 2001 to September 2008 the ISK was free-floating. In November 2008, in response to a severe banking and currency crisis, the CBI instituted capital controls which significantly affected the ISK market. These restrictions were in place until 2016-2017, when they were partly lifted.



The natural logarithm of the trade-weighted index of the ISK exchange rate, retrieved from the CBI and observed daily, is shown in Figure 1, along with the trend component of the exchange rate as captured by a Hodrick-Prescott filter with a smoothing parameter of 10 million. As is evident from the graph, the trend component picks up all trends that last more than a couple of months. In Figure 1, a rise in the exchange rate indicates depreciation of the ISK.

The standard deviation of the residual component is shown in Figure 2. Note the large heterogeneity in exchange rate stability over the sample period. In the very beginning and towards the end of the period, exchange rates were basically stable, while during some months in 2008 the standard deviation of the residual component of the exchange rate was above 0.05 on the log scale, or about 5 percent in terms of the exchange rate itself.

Note also that my definition of exchange rate volatility is somewhat unorthodox. It pertains to the heterogeneity in deviations from a medium-run trend within a month. This means that during a period in which the exchange rate is appreciating or depreciating fast in the medium to long run, a stable exchange rate in a given month is interpreted as more volatile than one that follows the trend. This may raise some eyebrows, but casual observation of the data does not indicate that this choice is critical to the measure of volatility throughout the sample period.


The natural logarithm of monthly goods exports from Iceland, retrieved from Statistics Iceland, is shown in Figure 3 along with the trend component as captured by an HP filter with a smoothing parameter of 14,400 (the standard value in the literature for monthly data). Iceland's goods exports are very homogeneous. In both 1999 and 2015, around 75% of the country's goods exports were marine products and metals, mostly aluminum. Both industries arguably have relatively inelastic short-run supply. The marine industry is mostly constrained by natural factors such as the size of fish stocks, and for technical reasons aluminum production has to be maintained at a very stable level.

In this article, I will not provide additional empirical support for my claim that the supply of Icelandic exports is inelastic, but simply use the above anecdotal evidence to motivate the following hypothesis:

Hypothesis 1: Exchange rate volatility does not have a significant short-run effect on goods exports from Iceland.

Analysis of the data indicates that exchange rate volatility is higher during periods of currency depreciation than appreciation. This makes intuitive sense if one believes that, due to risk aversion or an endowment effect, financial markets are more volatile during stress than during an upswing. If this story is true, then it would also be true that during periods when exports rise by more than can be expected based on secular trends and cyclical factors, the currency market is calmer. This story can be formalized in the following hypothesis:

Hypothesis 2: In the short run, goods exports have a negative effect on exchange rate volatility in Iceland.

At first, the two hypotheses can seem contradictory. However, one has to keep in mind that both exchange rate volatility and exports are endogenously determined along with a variety of other variables. To circumvent this issue, I use different instrumental variables to test each hypothesis.

For Hypothesis 1, I exploit the fact that Iceland has recently undergone periods of dramatically different exchange rate regimes, ranging from a free-floating ISK with huge capital movements to a capital controls regime with little activity in the currency market. These regimes provide a source of variation in exchange rate volatility which is completely exogenous to the cyclical component of exports.

For Hypothesis 2, I exploit the fact that exports are quite cyclical in nature and use monthly dummies as instruments. It is more debatable whether these dummies are valid instruments, but at the very least they do not appear to be correlated with exchange rate volatility.



Table 1 shows the results from OLS regressions in which the monthly cyclical component of the log of goods exports is the dependent variable. Exchange rate volatility as defined above is the main independent variable of interest. I control for the effect of the overall movement in the exchange rate within the given month, and in specification 2, two lagged values of these variables are included as well. Also included, but not reported, are dummies for every month of the year. The standard errors used to compute p-values correct for autocorrelation and heteroskedasticity using the Newey-West method with 6 lags.

The most significant result is that in specification 1, exchange rate volatility is positively related to exports within a given month (p=0.002). The point estimate suggests that an increase in exchange rate volatility within a month of 0.01 on the log scale (≈1% in terms of the exchange rate itself) is associated with 3.4% higher exports in that month. Beyond this, however, these regressions do not tell us much, since we expect widespread endogeneity and reverse-causality issues.


Table 2 shows the results from more interesting GMM regressions in which the log of cyclical goods exports is again the dependent variable. There is a single instrumental variable: a dummy indicating a period spanning roughly 2004-2008 during which the Icelandic economy experienced large capital flows and significant exchange rate volatility. The Kleibergen-Paap LM statistics also reported allow us to reject underidentification, but the F statistics raise some concern about weak identification in the regressions. Ignoring these concerns for the moment, we find that goods exports are not significantly affected by exchange rate volatility, either in the current month or in the month before.


Table 3 reports the results from GMM regressions in which exchange rate volatility is regressed on exports. The instrumental variables are five monthly dummies, chosen because they are the most strongly associated with cyclical trends in exports. The Kleibergen-Paap statistics raise no concern about identification. Exports do not have a significant effect on exchange rate volatility, either in the current month or with a one-month lag.

Some robustness checks were performed. The results in Table 2 are not sensitive to the choice of currency regime used as an instrument, although regressions using other regimes exhibit more identification problems. The same goes for the results in Table 3; including all monthly dummies as instruments weakens identification but does not otherwise affect the results. When I exclude March 2008 to February 2009 from the sample – a period of extreme volatility and uncertainty in the Icelandic economy – the effect of volatility in the regressions reported in Table 2 becomes more statistically significant, attributing a negative effect of volatility on exports with p-values of 0.098 and 0.111, respectively. However, these regressions are weakly identified and highly sensitive to the choice of regime as instrument. The results in Table 3 are not significantly affected by excluding this period. Controlling for quarterly movements in world prices of export products also does not qualitatively affect the results. I did not specifically check whether the choice of smoothing parameter in the HP filter would affect the results; checking this seems like a logical next step.


I have studied the very short-run relationship between exports and exchange rate volatility in Iceland. I hypothesized that due to the inelastic nature of Iceland's goods exports, exchange rate volatility would not significantly affect goods exports in the very short run, i.e. within 1-2 months. The above results are consistent with this hypothesis. I also hypothesized that exports would negatively affect exchange rate volatility in the short run. The analysis above does not support this claim.

The analysis in this article contributes to a large discussion about the relationship between exchange rate volatility and exports. I have added yet another case study to this discussion, and demonstrated how HP filters can be useful in identifying short-run fluctuations in exchange rates and exports. For data availability reasons, I only looked at Iceland's goods export industries. With a surge in tourism in recent years, goods industries account for a declining share of Iceland's total exports. One would expect tourism and other service industries to be more sensitive to short-run exchange rate volatility.

The study is an ongoing project and ideally, I would need to perform more robustness checks. In particular, I stress that one has to take the GMM regressions with a grain of salt, as some of them are not strongly identified and results seem to depend somewhat on the sample period and choice of instruments.

Ólafur Heiðar Helgason

Barcelona GSE, Master’s Program in Economics, 2016-2017


Asseery, A. & Peel, D.A. (1991). The effects of exchange rate volatility on exports: Some new estimates. Economics Letters, 37(2), 173-177.

Bahmani-Oskooee, M., Harvey, H. & Hegerty, S.W. (2013). The effects of exchange-rate volatility on commodity trade between the U.S. and Brazil. The North American Journal of Economics and Finance, 25, 70-93.

Héricourt, J. & Poncet, S. (2013). Exchange Rate Volatility, Financial Constraints, and Trade: Empirical Evidence from Chinese Firms. The World Bank Economic Review, 29(3), 550-578.

Serenis, D., & Tsounis, N. (2014). The effects of exchange rate volatility on sectoral exports: evidence from Sweden, UK, and Germany. International Journal of Computational Economics and Econometrics, 5(1), 71-107.

Economic impacts of the Northern Sea Route

March 1, 2017

This research paper explores the Northern Sea Route (henceforth NSR), a shipping lane along the Russian Arctic coast, and its predicted impacts on the relevant economies, especially Russia, as of 2013, when the NSR was expected to play an instrumental role in future shipping. The paper also examines the broader economic implications.

Russia is one of the largest economies in the world by nominal value and even ranks among the top ten by purchasing power parity (CIA World Factbook). Nevertheless, it is still classified as a developing country.


Figure 1: Russia’s GDP growth from 2010 to 2013 (Kolyandr and Ostroukh, 2013)

In 2012, Russia’s economic growth was solid, at a comfortable 3.4%, while the global economy was stuck in recession. The Russian rate of growth was faster than that of many other developing countries. Moreover, unemployment fell to 5.4%, a record low for the past 20 years. Wages, too, grew considerably fast (see Figure 2). In 2013, Russia’s economy started to display signs of weakness. Its economic growth dropped to half the level of the decade leading up to the 2008 crisis, falling short of economists’ consensus expectation of a 1.9 percent rise. Some analysts and officials even started comparing the situation to a recession. In early 2013, industrial output dropped for the first time since 2009. Furthermore, inflation increased in the second half of 2012 and was set to remain high in early 2013 (Kolyandr and Ostroukh, 2013).


Figure 2: Real wage growth in Russia in 2012 (Trading Economics)

One needs to adopt an international perspective when examining this case. The increase in inflation in Russia was related to three factors. First, overall inflation increased due, in part, to higher food inflation triggered by the drought in Russia, exacerbated by a rise in prices among international grain producers, as well as higher consumption taxes on alcohol. Second, the rise in administered prices in July and September 2012 and January 2013 led to inflated prices for services. Finally, there was an upward trend in core inflation, which excludes food and energy: after stabilizing at approximately 5.7 percent for a few months, it increased from 5.1 percent in May 2012 to 5.8 percent in October 2012 (World Bank, 2013). Countries with high inflation face high government borrowing costs as a result, since lenders need to be compensated for the loss of their investments’ purchasing power. Low demand for industrial goods remained the main challenge for Russian businesses and, as a consequence, enterprises were not stimulated to invest and expand their production.

Russian official rates are known to have been high ever since perestroika. With an official interest rate of 8.25%, money and investment became very expensive relative to Russia’s main competitors. Banks would often demand double-digit interest rates when lending. This resulted in higher costs for loans and borrowed capital, giving Russian stakeholders a competitive disadvantage when making investments. However, interest rates decreased from 15.2% in 2009 to 9.1% in 2012 (see Figure 3).


Figure 3: Interest rates for corporate loans on average (World Bank, 2013)

With regard to Russian shippers and ports, capital costs for investment decisions decreased by 40.13% on average. This certainly played in favor of the Russian shipping and port industry. On the other hand, other countries experienced record-low interest rates. The EU, which harbors the competing ports as well as the most competitive shippers interested in the Northern Sea Route, could borrow at 0.5% through the European Central Bank. As a reference, average lending rates to prime customers would therefore be around 3% (European Central Bank, 2013). Bearing these facts in mind, it is clear that pressure from finance-related costs had eased slightly. At the same time, however, competing companies from the EU would be able to acquire considerably cheaper capital than companies located in or around St. Petersburg.

As shipping rates are almost exclusively quoted in euros and US dollars, foreign exchange rates against those currencies played a major role in every internal and external calculation for shippers, ports and other stakeholders. In September 2013, 43-45 Russian rubles could be exchanged for one euro, and the Russian currency stood at around 32 RUB against one dollar. However, the most important factor in foreign exchange, both for long-term investment decisions (e.g. financing port infrastructure or new ships) and for daily money flows (i.e. port handling fees, shipping revenue), was volatility. The lower it is, the more predictable and, hence, the easier it becomes to calculate an investment decision for the parties involved, i.e. the investor and the lender. Consequently, analyzing the ruble’s performance and its volatility with Bollinger bands appears to be a suitable approach (Murphy, 1999).
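Bollinger bands themselves are straightforward to compute: a rolling mean plus and minus a multiple (conventionally two) of the rolling standard deviation. A minimal pandas sketch on a simulated series follows; only the ~32 RUB/USD level comes from the text, the price path itself is made up:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
dates = pd.bdate_range("2013-01-01", periods=250)
# Hypothetical RUB/USD series around the ~32 RUB level mentioned above
rub_usd = pd.Series(32 + np.cumsum(rng.normal(0, 0.1, len(dates))), index=dates)

# Classic Bollinger bands: 20-day moving average +/- 2 rolling std. devs.
window = 20
mid = rub_usd.rolling(window).mean()
band_width = 2 * rub_usd.rolling(window).std()
upper, lower = mid + band_width, mid - band_width
```

The distance between `upper` and `lower` then serves as a direct visual gauge of volatility: narrow bands indicate a calm market, wide bands a turbulent one.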

As we see from Figure 4, volatility between the US dollar and the Russian ruble clearly decreased compared to its trend at the beginning of the economic crisis in 2008 and 2009. However, volatility in this currency pair was still high enough to keep most American companies away from the Northeast Passage and its route. The volatility of the ruble against the euro was smaller. This was mainly due to a steady, high volume of trade based on oil, gas and other raw materials, which offset the effects of an unstable Russian economy. Therefore, banks, and especially forex financiers, were more optimistic about investments; for instance, they committed themselves to in- and outflows of the Russian ruble.


Figure 4: Rates of US Dollar and Russian Ruble with Bollinger Bands (Yahoo Finance 2013)

To sum up, the NSR was expected to have a huge impact on the economies involved, especially Russia’s, and neighboring countries, or countries otherwise dependent on shipping, were likely to be affected as well. Nevertheless, more time will be needed to judge whether the predictions were correct.

The True Cost of Polarization

February 27, 2017

“There’s No Such Thing as a Free Lunch” – Milton Friedman


Source: Gary Markstein/Creators Syndicate

In their first lesson of economics, students are introduced to the concept of scarcity – an inherent condition in a world of limited resources – and, as a result, the existence of opportunity costs; Milton Friedman’s famous quote “There’s No Such Thing as a Free Lunch” echoes this idea that everything has a cost, even when it is not obvious. When it comes to government decisions, costs are often scrutinized: the cost of an investment, of giving (or not giving) a public service in concession or implementing a policy; however, the costs of political polarization are rarely analyzed.

What is the cost of political polarization?

Or, rather, what is the most valued asset lost to political polarization? Certainty. In this essay, the author will argue that the opportunity cost of the widening gap between politicians’ attitudes towards major policy dimensions (trade, migration, gender, racial integration, public expenditure) is uncertainty, and will discuss its negative effects on economic performance.

A first approach to studying the economic effects of uncertainty resulting from political activities is observing market performance during electoral cycles. Julio and Yook (2012) estimated the effect of elections on corporate investment. Their results indicate that, after controlling for investment opportunities and the economic environment, corporate investment rates drop, on average, by 4.8 percentage points in the year prior to elections. In polarized countries, the effect is expected to be larger due to the risk of abrupt policy changes. These changes may be moderate, for example to contract regulations, taxation or trade policy, or more drastic, such as expropriation of possessions and hostility towards non-supporters. Empirical evidence reveals that political polarization not only affects investment during electoral cycles, but also discourages long-term investment, with investors instead opting to minimize risk through short-term opportunistic strategies such as asset stripping and intensive lobbying of state officials (Frye, 2002).

Other negative effects of polarization

Another negative effect of polarization, especially in countries with parties of diverging ideologies such as ex-communists and anticommunists, is the barrier it imposes on consensus-building. Constant conflict over which economic reforms to implement, given conflicting principles, prevents politicians from reaching agreements to effectively address economic crises with coherent policies (Frye, 2002).

The struggle between opposing factions also has a detrimental effect on the quality of institutions by increasing state officials’ incentives to make opportunistic decisions: populism, clientelistic relationships, bribery and the interference of power groups in government policy, to name a few.

According to a growing body of literature on the subject, when a country lacks strong institutions and has a polarized government, it is more likely to default on sovereign debt. It is important to bear in mind that sovereign debt crises do not occur only when governments choose to default; as recent events have shown, crises can also arise from investors’ uncertainty about a country’s ability or intention to honor its obligations. Qian (2012) uses an economic model to show the dynamics between the quality of institutions, the level of government polarization and sovereign default risk for a sample of 90 countries. Her findings support the premise that the lack of strong institutions and a clear set of rules allows powerful groups to capture government and influence policies to their benefit, without considering the impact on other groups.

Additional evidence of the negative effects of polarization and weak institutions is found when combined with a globalized financial market. In particular, countries with low income and weak institutions are perceived as unreliable by investors and experience a threshold effect that will hinder their access to all the benefits of globalization, as presented by Alfaro, Kalemli-Ozcan and Volosovych (2008), as well as by Kose, Prasad and Taylor (2011).

Moreover, Broner and Ventura (2006) discuss the conditions under which globalization leads to higher financial market volatility. According to their model, the instability of domestic financial markets can be explained by: 1) uncertainty about governments’ behavior (incentives to default on foreign liabilities increase with globalization) and 2) the probability of a financial crisis (which depends largely on the nature of regulations and the strength of judicial systems in enforcing contracts). As a result of financial liberalization and these sources of uncertainty, the economy alternates between two possible outcomes: an optimistic equilibrium (in which institutions are strong in enforcing contracts) or a pessimistic equilibrium (one with weak, opportunistic institutions). In a polarized government, the effect of these sources of uncertainty is amplified, potentially destroying the possibility of an optimistic equilibrium.

After analyzing polarized countries using these arguments, it is not a surprise to find that some countries have low levels of investment, slow economic growth, high volatility and recurring economic and institutional crises.

“There’s No Such Thing as a Free Lunch”… especially when it comes from a politician.


Layman, G. C., Carsey, T. M., & Horowitz, J. M. (2006). Party polarization in American politics: Characteristics, causes, and consequences. Annual Review of Political Science, 9, 83-110.

Baldassarri, D., & Bearman, P. (2007). Dynamics of political polarization. American Sociological Review, 72(5), 784-811.

Qian, R. (2012). Why Do Some Countries Default More Often Than Others? The Role of Institutions. Policy Research Working Paper No. WPS 5993. World Bank.

Frye, T. (2002). The Perils of Polarization: Economic Performance in the Postcommunist World. World Politics, 54(3), 308-337.

Julio, B., & Yook, Y. (2012). Political Uncertainty and Corporate Investment Cycles. Journal of Finance, 67(1), 45-83.

Broner, F., & Ventura, J. (2006). Rethinking the effects of financial globalization. The Quarterly Journal of Economics.

Corporación Latinbarómetro (2015). Socio-demographic variables.


2017 Competition Curtain Raiser, Part 1: Excessive Pricing

February 21, 2017

Photo credit: Pulse.

This is the first in a series of posts highlighting competition issues and cases that are set to drive the debate in Europe this year.

Pfizer and Flynn Pharma: a major decision from the CMA

On 7 December 2016, the United Kingdom’s (UK’s) Competition and Markets Authority (CMA) issued a potentially precedent-setting decision against pharmaceutical producer Pfizer and distributor Flynn Pharma, imposing a fine of nearly £90 million for excessive pricing.

In September 2012, Pfizer sold the distribution rights to its anti-epilepsy drug Epanutin (phenytoin sodium) to Flynn Pharma, which debranded (or “genericised”) the drug, with the effect that it was no longer subject to price regulation. Following the sale, Pfizer increased its price for phenytoin sodium to Flynn Pharma by between 780% and 1,600% relative to the price at which it had previously sold the drug in the UK, and in turn Flynn Pharma increased the wholesale price of the drug to between 2,300% and 2,600% of the former price.

A key feature of phenytoin sodium appears to be that patients taking the drug cannot readily switch to the same drug manufactured by another producer, since even minor differences in production processes could affect the efficacy of the drug in treating epilepsy in individual patients. Therefore, despite the fact that the drug was genericised, the CMA appears to have found that Pfizer and Flynn Pharma retained a de facto monopoly over the sale of the drug to existing patients taking Epanutin. Such a finding would also imply that alternative epilepsy treatments were not viable substitutes for phenytoin sodium in respect of the relevant patients, and were therefore not included in the definition of the relevant market.

The excessive pricing debate

The prohibition of excessive or unfair pricing by dominant firms is a controversial part of UK and European competition law (it has no meaningful counterpart in US legislation or case law). On the one hand, there are strong economic arguments, at least from a static point of view, for preventing a dominant firm from exploiting its monopoly position by charging prices higher than the theoretical competitive price. One of the key results of microeconomic theory (and indeed the foundation of competition policy) is that monopoly pricing lowers overall welfare compared to a competitive market outcome, since the monopoly maximises profits by producing a less-than-efficient quantity of the relevant good and selling it at a higher-than-efficient price.
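This textbook result can be illustrated with a small worked example under linear inverse demand P = a − bQ and constant marginal cost c (stylized numbers for illustration only, not drawn from the case):

```python
# Linear inverse demand P = a - b*Q with constant marginal cost c.
a, b, c = 100.0, 1.0, 20.0

q_comp = (a - c) / b        # competitive output: price = marginal cost
q_mono = (a - c) / (2 * b)  # monopoly output: marginal revenue = marginal cost
p_comp = a - b * q_comp     # competitive price (equals c)
p_mono = a - b * q_mono     # monopoly price

# Deadweight loss: the surplus triangle on the units the monopolist withholds
dwl = 0.5 * (p_mono - p_comp) * (q_comp - q_mono)
```

With these numbers the monopolist halves output relative to the competitive benchmark and charges three times the competitive price; the deadweight loss is the welfare that simply disappears, accruing to no one.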

However, enforcing a prohibition against excessive pricing presents various difficulties. One of these is to establish a benchmark price against which the actual price charged by the dominant firm is to be evaluated, and deciding whether the margin above this benchmark is excessive. According to the CMA press release, it appears to have had regard both to the initial regulated price of phenytoin sodium, and the price charged by Pfizer in other European countries, in reaching a finding of excessive pricing.

It is important to note, however, that there is no inherent reason why such prices should represent useful comparators. In other words, although a price increase of 2,600% naturally appears alarming at first glance, a range of factors could have resulted in the initial price being very low, especially if it was regulated. In this case, Pfizer and Flynn Pharma argue that the regulated price of Epanutin in the UK prior to September 2012 had been loss-making. It remains to be seen how the CMA established a relevant benchmark when its non-confidential decision is made public.

A further risk in enforcing a prohibition of excessive pricing, partly related to the issue discussed above, is that it could have a negative impact on firms’ dynamic incentives to invest across the economy. For example, over-enforcement could prevent a firm from earning economic profits where it has innovated in order to gain a temporary competitive advantage. More generally, over-enforcement runs the risk of creating uncertainty, and thereby lowering incentives to invest, if businesses fear that their future profits will be capped by a competition authority at a level which they cannot predict in advance.

For such reasons, economists such as Massimo Motta and Jorge Padilla (both teaching in the Competition master’s at BGSE) have proposed that excessive pricing provisions should be enforced only in cases where there is little or no prospect of the relevant market eventually correcting itself, and where a sector regulator would not be better placed than the competition authority to intervene (among further restrictive conditions). In this case, the CMA may have concluded that the inability of other phenytoin sodium producers to compete for existing Epanutin patients created such a situation, where entry is infeasible. Even so, the question remains whether this issue could not better be addressed by amending existing drug price regulation. The release of the CMA’s final decision is likely to shed more light on this issue.

What to look out for in 2017

In the meantime, Flynn Pharma has appealed the CMA’s decision to the Competition Appeal Tribunal (CAT). 2017 could therefore reveal how the CAT views the different considerations surrounding excessive pricing, and to what extent the CMA decision will be applicable to other drugs and industries. The finding of excessive pricing also raises the prospect that Flynn Pharma’s customers, and specifically the UK Department of Health, could sue it for damages resulting from the high price, which would raise further interesting issues in so-called “private” excessive pricing enforcement.


Evans, D.S. & Padilla, A.J. 2005. “Excessive Prices: Using Economics to Define Administrable Legal Rules”. Journal of Competition Law and Economics 1(1), pp. 97–122.

Motta, M. & de Streel, A. 2007. “Excessive Pricing in Competition Law: Never Say Never?” The Pros and Cons of High Prices, pp. 14–46. Swedish Competition Authority.

What economists do: Talking to Philip Wales, UK Office for National Statistics

February 15, 2017

photo credit: Paul Jacobs, Portsmouth News

Wondering what economists get up to in the real world? The BGSE Voice team spoke to Philip Wales, Head of Productivity at the UK Office for National Statistics about what economists like him get up to, and what makes a good professional economist.

First of all, could you please tell us about yourself: what is your background, and what do you do at the ONS?

My name is Philip Wales; I’m Head of Productivity at the Office for National Statistics. I’ve been working at ONS for just over five years, and have held a range of different posts over that period. Before I started work here, I was studying for my PhD at the London School of Economics, and prior to that I was a graduate economist at a private sector consultancy.

In my current position, I manage a team of 14 economists with responsibility for the production and analysis of the UK’s core productivity statistics. We produce the UK’s main measures of labour productivity as well as estimates of multi-factor productivity. It’s an interesting area which has got a lot of attention since the onset of the economic downturn and the ‘Productivity Puzzle’. We regularly answer queries from the press, academics and a range of other experts; we often contribute to external seminars and talks at big international conferences, and our statistics help to inform the national policy debate.

The part of my job that I enjoy most is working with the detailed survey data that ONS receives to understand developments in the UK economy. Over the last few years, I’ve analysed the UK’s Labour Force Survey, the Annual Business Survey and the Annual Survey of Hours and Earnings. I’ve worked with micro-level information gathered for the Consumer Prices Index and with survey data on household income and expenditure, and we’ve produced a range of outputs using these resources.

What other things do economists do at the ONS? 

There are more than 100 economists at ONS, engaged in a wide variety of activities. In our central team, our economists scrutinise and sense-check our economic data. They are responsible for building a coherent narrative around ONS’ economic statistics, and help the media, outside experts and other interested users to understand recent developments. ONS economists also conduct in-depth research and analysis on our micro-level datasets to support user understanding. At times when an economic aggregate is behaving unusually, or when there is a demand for more detailed information, a clear understanding of recent developments informed by high-level data handling and analytical skills is critical.

Alongside work on these data-driven questions, our economists are also deployed to work on detailed methodological and measurement questions. How do you go about modelling the depreciation of a firm’s capital stock? What is the difference between a democratically- and a plutocratically-weighted price index? How have changes in workforce composition affected average wage growth? In all of these cases, economists work alongside methodologists and statisticians to draw grounded, intellectually robust conclusions about our approach.

Finally, ONS economists are also involved in supporting the corporate functions at ONS. We work on business cases – seeking to value the costs and benefits of big projects – as well as modelling the implications of changes in pay rates for the department’s budget. In all of these roles, the skill set that economists bring is highly valued, and our numbers have grown as a result.

What are the main challenges you face as an economist at the ONS?

One of the biggest challenges that economists face – both at ONS and elsewhere – is communicating our findings in a clear, concise and accessible form. Even when speaking to other, technically minded colleagues, communicating the findings of a piece of analysis clearly can be a real challenge – especially if it is at the more complicated end of the spectrum. Communicating our findings to a non-technical user-base or – occasionally – members of the media, can be even more difficult.

In my experience, good preparation is central to achieving a good result. I try to pitch my work in a clear, intuitive manner, using thought-through and interesting graphics to help users to understand the questions that I’m posing and the answers that my analysis provides. Above all, I try to weave a narrative through my work – helping the audience to understand what the broader picture is, as well as our detailed findings. Clear communication is especially important when you work for an institution like ONS – where users are looking for clear messages on recent developments. Expert knowledge is certainly very important and the world is a complicated place – but communicating difficult concepts in a clear and intuitive manner is a key skill for economists in all walks of life.

In your opinion, what makes a good professional/public sector economist? What does the ONS look for in economists?

Besides good communication skills, a good economist needs solid data skills, an enquiring and inquisitive mind, and the ability to bring together and synthesise information from a range of different sources within a coherent framework.

The best economists that I’ve worked with at ONS have all of these attributes. Confronted with a dataset, they’re interested in exploring it and visualising it in different ways. They ask questions about what the data can tell us about the way that different agents are working in the economy, and whether that accords with our intuition or conceptual approach. They explore how the dataset was constructed, what limitations this imposes and think about how applicable the survey results are for the population as a whole. They deploy their skills of data-analysis and their economic theory to explain what is going on, and they build intuitive examples and graphics into their work to communicate their meaning to others. They have a real desire to learn about the economy through the data that we collect and they have a clear interest in helping others to understand what is going on.

These economists tend to be enthusiastic, inquiring and curious in their job, with a voracious appetite for detail and knowledge. They have one eye on the datasets, another eye on their methodology and theory, and a clear line of sight to an issue central to economic policy. It’s this blend of skills that makes a good economist, and that’s what I’m looking for when I interview prospective ONS economists.

Do you have any advice for the current generation of economics students?

I’m wary of giving advice – as one size rarely fits all – but I think there are a lot of interesting developments taking place in economics that students should look to exploit. Firstly, the economic downturn and the financial crisis created an appetite to revisit what had become established theoretical tenets of the field and the nature of our modelling. A sceptical, but fair-minded assessment of the models and approaches that you learn is really important.

Secondly, as the field has progressed, the gulf between theoreticians and empiricists has grown very large, with relatively few economists now able to bridge the gap between the frontier of theory and the frontier of measurement. Understanding the strengths and weaknesses of both is central – and will be more so in the future. As more data becomes available, having both a decent theoretical foundation and a strong set of applied empirical skills will be critical.

Finally, if I had my time again, I’d want to make the very most of the opportunity that studying offers. Go to the talk that you’re interested in; speak to the lecturer afterwards; think critically about how you would extend someone else’s analysis and try and come up with ideas of your own. It’s a great time, enjoy it!

#ICYMI on the BGSE Data Science blog: Interview with Ioannis Kosmidis

February 10, 2017

In this series we interview professors and contributors to our fields of passion: computational statistics, data science and machine learning. The first interviewee is Dr. Ioannis Kosmidis. He is a Senior Lecturer at University College London, and during our first term he taught a workshop on GLMs and statistical learning in the n>>p setting. His research interests also include data-analytic applications in sports science and health… Read the full post by Robert Lange ’17 and Nandan Rao ’17 on Barcelona GSE Data Scientists.

Convenience Effect on Birth Timing Manipulation: Evidence from Brazil

February 6, 2017

According to the United Nations Children’s Fund, Brazil had the highest cesarean section rate among 139 countries in the world for the period 2007-2012.[1] In 2009, the number of surgical births surpassed vaginal deliveries. During 2012-2014, cesarean delivery (CD) accounted for 57% of all registered births in the country. Another, less invasive, medical intervention is labor induction: a technique used to bring on or speed up contractions and thus anticipate vaginal births. For the period 2012-2014, 33% of all registered normal deliveries in the country occurred after induced labor. Therefore, only 29 out of 100 births in Brazil were natural births, i.e. spontaneous (non-induced) vaginal deliveries.[2]

Such medical interventions (CD and labor induction) allow for manipulation of the timing of birth. Although birth timing can be altered for medical reasons (e.g., when labor could be dangerously stressful or in case of post-term pregnancies), the existing evidence suggests that it is also manipulated for reasons other than the health of the fetus or of the mother. Mothers’ incentives to intervene in the timing of their deliveries are usually financial when compensations are involved, such as baby bonuses (Gans and Leigh, 2009) or tax savings (Dickert-Conlin and Chandra, 1999), or even related to cultural issues (Lo, 2003). Doctors’ incentives tend to be determined by risk aversion (Fabbri et al., 2015) or convenience (Gans et al., 2007).

As CD can be scheduled for medical reasons, a concentration of scheduled CD’s in convenient moments does not constitute enough evidence to suggest that deliveries are being scheduled due to convenience motivations. However, since complications during delivery that require an emergency CD should be randomly distributed across time, a concentration of unplanned CD’s during convenient times indicates that reasons other than the protocol are playing a role. Brown (1996) and Lefèvre (2014) show evidence on this matter. Both papers suggest that physicians induce CD in the labor room during convenient moments. Thus, physicians’ convenience motivations as well as other incentives correlated to convenient moments could be at play.

Convenient times usually coincide with times when it might be safer to deliver: it is during non-leisure days and usual business hours that hospitals have their largest staff on shift and medical staff are freshest. If this is the case, then doctors who are risk-averse or altruistic might prefer to allocate complex deliveries to those moments, when risk can be minimized. Fabbri et al. (2015) provide evidence of risk-aversion attitudes for a sample of women admitted at the onset of labor in a public hospital in Italy.

In my thesis from UFRJ, I tested whether convenience effects play any relevant role in birth-timing manipulation in Brazil. More specifically, I investigated whether births that would have occurred after spontaneous labor during inconvenient times are anticipated to convenient times. I adopted several strategies in order to isolate the convenience effect from potential risk-aversion attitudes.

First, I used a new type of inconvenient day that may attenuate risk-aversion motives for manipulating the timing of births: business days in-between holidays. As these are business days, hospitals should be fully staffed. However, risk-averse physicians may still manipulate the timing of births in order to eliminate the possibility of women going spontaneously into labor on the surrounding leisure days. Second, I analyzed the results by hospital funding. Publicly funded hospitals provide a context where women do not actively participate in the decision-making process. This scenario enabled me to attribute the results to physicians. Third, I further investigated the results by level of risk. While birth-timing manipulation motivated by convenience should happen mostly among low-risk births, timing manipulation guided by risk aversion should be concentrated in high-risk births, as in the latter case the goal is to minimize the risk of low-quality hospital services.

Using daily data on birth records, I constructed a daily panel of number of deliveries by hospitals for the period 2012-2014, with information on hospitals, deliveries (e.g. type of birth procedure and nature of labor), pregnancy, mothers and newborns. Having classified births as low-risk and high-risk according to observable variables (e.g. mother’s age below 18 or above 35 years old, multiple pregnancy, newborn with congenital anomaly), I ended up with daily panels of number of high and low-risk deliveries by hospital.
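The panel construction described above can be sketched in a few lines of pandas. The records and column names below are hypothetical placeholders, not the actual SINASC variables; the snippet only illustrates the mechanics of flagging high-risk births from observables and collapsing individual birth records into a daily hospital-level panel of low- and high-risk delivery counts.

```python
import pandas as pd

# Hypothetical birth records; column names are illustrative, not from SINASC.
records = pd.DataFrame({
    "hospital_id": [1, 1, 2, 2, 2],
    "date": pd.to_datetime(["2013-04-29", "2013-04-29", "2013-04-29",
                            "2013-04-30", "2013-04-30"]),
    "mother_age": [24, 17, 36, 28, 30],
    "multiple_pregnancy": [0, 0, 0, 1, 0],
    "congenital_anomaly": [0, 0, 0, 0, 0],
})

# Flag a birth as high-risk if any observable risk criterion applies
# (mother under 18 or over 35, multiple pregnancy, congenital anomaly).
records["high_risk"] = (
    (records["mother_age"] < 18)
    | (records["mother_age"] > 35)
    | (records["multiple_pregnancy"] == 1)
    | (records["congenital_anomaly"] == 1)
).astype(int)

# Collapse to a daily hospital panel: counts of low- and high-risk deliveries.
panel = (
    records.groupby(["hospital_id", "date", "high_risk"])
    .size()
    .unstack("high_risk", fill_value=0)
    .rename(columns={0: "low_risk_births", 1: "high_risk_births"})
    .reset_index()
)
print(panel)
```

Each row of `panel` is one hospital-day, which is the unit of observation in the regressions that follow.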

As my goal was to understand whether births that would have occurred after spontaneous labor were anticipated, I ran regressions of the number of births after spontaneous labor on days in-between holidays. I found a significant negative effect, which suggests that either convenience or risk-aversion motivations were at play. I then verified that the results were robust in the restricted sample of publicly funded hospitals, and hence attributed them to physicians’ motivations. Finally, I further restricted the sample to low-risk births and re-estimated the results. The fact that the findings were driven by low-risk deliveries provided further evidence that births were being anticipated due to physicians’ convenience. Moreover, I ran the same regressions for the days preceding the leisure period and found an increase in cesarean sections, which reinforces the conclusion that births that would otherwise have happened after spontaneous labor occurred instead through scheduled cesarean sections.
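The in-between-holidays regression can be illustrated with a minimal fixed-effects sketch on simulated data. Everything below is hypothetical (the dates, the dummy construction, and the effect size of -1.5 are made up); it only shows the mechanics of regressing daily spontaneous-labor births on an in-between-holidays dummy while absorbing hospital fixed effects via the within transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily hospital panel: 20 hospitals x 200 days.
n_hosp, n_days = 20, 200
hospital = np.repeat(np.arange(n_hosp), n_days)
day = np.tile(np.arange(n_days), n_hosp)

# Dummy for business days sandwiched between holidays (illustrative dates).
between = (day % 50 == 25).astype(float)

# Simulate anticipation: about 1.5 fewer spontaneous-labor births on
# in-between days, plus a hospital-specific level and noise.
hosp_effect = rng.normal(0, 1, n_hosp)[hospital]
births = 5.0 - 1.5 * between + hosp_effect + rng.normal(0, 1, between.size)

# Fixed-effects (within) estimator: demean x and y within each hospital,
# then run pooled OLS on the demeaned data.
def demean_by(group, x):
    means = np.bincount(group, weights=x) / np.bincount(group)
    return x - means[group]

xd = demean_by(hospital, between)
yd = demean_by(hospital, births)
beta = (xd @ yd) / (xd @ xd)
print(round(beta, 2))  # estimated effect, close to the simulated -1.5
```

A negative and significant coefficient on the dummy, as in the thesis, is what indicates that spontaneous-labor births are being shifted away from those days.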




[2] CD rates extracted from the Brazilian National System of Information on Birth Records (Datasus/SINASC).

Borra, C.; González, L.; Sevilla, A. Birth timing and neonatal health. The American Economic Review, v. 106, n. 5, p. 329-332, 2016.

Borra, C.; González, L.; Sevilla, A. The impact of scheduling birth early on infant health. Working paper presented at the Tinbergen Institute, 2016.

Gans, J.S.; Leigh, A. Born on the first of July: An (un)natural experiment in birth timing. Journal of Public Economics, v. 93, n. 1-2, p. 246-263, 2009.

Dickert-Conlin, S.; Chandra, A. Taxes and the timing of births. Journal of Political Economy, v. 107, n. 1, p. 161-177, 1999.

Fabbri, D.; Castaldini, I.; Monfardini, C.; Protonotari, A. Caesarean section and the manipulation of exact delivery time. HEDG working paper n. 15, University of York, 2015.

Gans, J.S.; Leigh, A.; Varganova, E. Minding the shop: The case of obstetrics conferences. Social Science and Medicine, v. 6, n. 7, p. 1458-1465, 2007.

Brown, H.S. Physician demand for leisure: Implications for cesarean section rates. Journal of Health Economics, v. 15, p. 233-242, 1996.

Lefèvre, M. Physician-induced demand for C-sections: Does the convenience incentive matter? HEDG working paper n. 14, University of York, 2014.