Editor’s note: In this post, Greg Ganics (Economics ’12 and PhD candidate at UPF-GPEFM) provides a non-technical summary of his job market paper, “Optimal density forecast combinations,” which has won the 2016 UniCredit & Universities Economics Job Market Best Paper Award.
After the recent Great Recession, major economies found themselves in a situation of low interest rates and fragile economic growth. This combination, along with major political changes in key countries (the US and the UK), makes forecasting more difficult and uncertain. As a consequence, policy makers and researchers have become more interested in density forecasts, which provide a measure of uncertainty around point forecasts (for a non-technical overview of density forecasts, see Rossi (2014)). This facilitates communication between researchers, policy makers, and the wider public. Well-known examples include the fan charts of the Bank of England, and the Surveys of Professional Forecasters of the Philadelphia Fed and the European Central Bank.
Forecasters often use a variety of models to generate density forecasts. Naturally, these forecasts are different, and therefore researchers face the question: how shall we combine these predictions? While there is an extensive literature on both the theoretical and practical aspects of combinations of point forecasts, our knowledge is rather limited about how density forecasts should be combined.
In my job market paper “Optimal density forecast combinations,” I propose a method that answers this question. My main contribution is a consistent estimator of combination weights, which can be used to produce a combined predictive density superior to the individual models’ forecasts. The framework is general enough to include a wide range of forecasting methods, from judgmental forecasts to structural and non-structural models. Furthermore, the estimated weights provide information on the individual models’ performance over time. This time variation can further enhance researchers’ and policy makers’ understanding of the relevant drivers of key economic variables, such as GDP growth or unemployment.
Macroeconomists in academia and at central banks often pay special attention to industrial production, as this variable is available at the monthly frequency, therefore it can signal booms and busts in a timely manner. In an empirical example of forecasting monthly US industrial production, I demonstrate that my novel methodology delivers density forecasts which outperform well-known benchmarks, such as the equal weights scheme. Moreover, I show that housing permits had valuable predictive power before and after the Great Recession. Furthermore, stock returns and corporate bond spreads proved to be useful predictors during the recent crisis, suggesting that financial variables help with density forecasting in a highly leveraged economy.
The methodology I propose in my job market paper can be useful in a wide range of applications, for example in macroeconomics and finance, and offers several avenues for further research, both theoretical and applied.
Ganics, G. (2016): Optimal density forecast combinations. Job market paper
Rossi, B. (2014): Density forecasts in economics and policymaking. Els Opuscles del CREI, No. 37
Delimiting the relevant market is a key step in the analysis of mergers and acquisitions. The theoretical framework introduced by the SSNIP test helps clarify the conditions needed to do so. Nevertheless, many methods exist, and the scientific community does not agree on which of them is best. In this article, based on previous work, some methods grounded in time series analysis are presented.
In general, the concept of relevant market is associated with arbitrage: two regions belong to the same market when arbitrage between them is possible. Therefore, one can check whether the prices of these areas follow a pattern of convergence. Following mainly Haldrup (2003), we can distinguish two types of convergence:
Absolute convergence: it appears when there is perfect arbitrage with no transportation costs, so the difference between (log) prices is a zero-mean stationary process. It can be expressed as: ln P1,t − ln P2,t = εt, where εt is stationary with mean zero.
Relative convergence: analogous to the previous concept but, in this case, transportation costs do not completely disappear. It can be expressed as: ln P1,t − ln P2,t = α + εt, where the constant α reflects transportation costs.
Therefore, absolute convergence is the specific case of relative convergence with α = 0, that is, with transportation costs equal to zero.
Several methods based on time series of prices can be used to define the relevant market, along two main dimensions: defining the market of substitute goods and delimiting the area in which a company competes.
Correlation is one of the most common methods used to analyse prices. Stigler & Sherwin (1985) proposed computing it on series transformed into logarithms, to avoid problems arising from differences in variance.
Ideally, the prices of two goods or regions inside the same market should be highly correlated both in logarithms and in first differences (which approximate the growth rate).
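As a hypothetical illustration (the simulated series and parameters below are my own, not data from the original study), this correlation check can be sketched in Python with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulate two prices plausibly in the same market: a common random-walk
# component (arbitrage ties them together) plus small idiosyncratic noise.
common = np.cumsum(rng.normal(0.0, 0.01, n))
log_p1 = 4.0 + common + rng.normal(0.0, 0.002, n)
log_p2 = 3.8 + common + rng.normal(0.0, 0.002, n)

# Correlation in logarithms...
corr_levels = np.corrcoef(log_p1, log_p2)[0, 1]
# ...and in first differences of logs, which approximate growth rates
corr_growth = np.corrcoef(np.diff(log_p1), np.diff(log_p2))[0, 1]

print(f"correlation of log levels:   {corr_levels:.2f}")
print(f"correlation of growth rates: {corr_growth:.2f}")
```

Both correlations come out high here; for goods in unrelated markets, the growth-rate correlation in particular would be close to zero.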
This method presents several problems. Firstly, high correlations can be produced by a spurious relationship (Granger & Newbold, 1974). Moreover, Bishop & Walker (1996) argue that highly volatile exchange rates can distort the results. Nevertheless, Haldrup (2003) argues that since the 1990s exchange rates have had a stable structure, so the analysis is not undermined.
Cointegration can be tested with the procedure defined by Engle & Granger (1987). More generally, if the time series are integrated of order 1, it is possible to use Johansen’s (1988) test. Alexander & Wyeth (1994) argue that a common market can be defined with only one cointegration relationship; in contrast, Haldrup (2003) argues that a single market requires k−1 cointegration relationships, the maximum possible among k series. Cointegration cannot be applied when one of the series is not integrated, that is, when it is stationary.
Given that cointegration relationships can be understood as a long-run equilibrium, it is possible to define best-response functions and relate the results to price-based models such as Bertrand oligopoly.
Since the cointegration procedure is based on unit root tests, Forni (2004) defined a more flexible way of determining the long-run equilibrium. His test analyses the stationarity of the logarithm of the ratio of the two price series, and different unit root tests can be run:
|Test|Conclusion under the null hypothesis|
|ADF|They do not belong to the same market (null: non-stationarity)|
|ADF-GLS|They do not belong to the same market (null: non-stationarity)|
|KPSS|Both goods or regions belong to the same market (null: stationarity)|
Figure 1 shows the time series of the logarithm of the ratio of the prices of two different goods. It is an example of relative convergence: even with some outliers, the ratio fluctuates around an equilibrium. In this case, the test allows us to conclude that the series is stationary, so, on the evidence from this procedure, both goods are part of the same relevant market.
Figure 1: Ratio of two prices seeming to be in the same market.
Source: Own elaboration in previous work
Figure 2 shows the same kind of series for two other goods. It shows no clear pattern of co-movement between prices: not only do the prices seem unrelated, they also appear to drift apart. In this case, the series is not stationary and thus, according to this test, we could conclude that the two goods do not belong to the same market.
Figure 2: Ratio of two prices seeming not to be in the same market.
Source: Own elaboration in previous work
In my view, for this purpose unit root tests can be applied either with or without a trend and intercept in the auxiliary regression. Initially, the trend is not relevant when testing whether two goods or regions belong to the same market, since they should share a constant long-run equilibrium. If the series turned out not to be stationary, repeating the test with a trend would be interesting, as it could reveal a pattern of divergence between goods or regions. The intercept can be understood as the α coefficient introduced above: if it were zero and the test concluded stationarity, this could be a case of absolute convergence.
Granger causality is based on the analysis of VAR models. In a simple setup, a VAR estimates the price of one good or area as a function of its own lags and the lags of the other price. The Granger causality test examines the null that all coefficients on the other price’s lags are zero. If the null is rejected, one price Granger-causes the other, suggesting they belong to the same market.
The regressions can be carried out in both directions: one estimating the first price, the other estimating the second. Causality may run both ways, but two-way causality is not a necessary requirement to conclude that a causal relationship exists between them.
The prices underlying the ratio in Figure 1 showed a two-way causality relationship. However, the prices in Figure 2 showed no causality relationship at all.
There are many methods to analyse whether some regions or goods belong to the same relevant market. Apart from the ones presented above, other price-based approaches such as VEC models or PCA can be used, as well as non-price-based methods such as shock analysis or the Elzinga & Hogarty (1973) test.
In general, the different procedures rarely give contradictory answers, but none is conclusive on its own; they should be used to complement each other in order to reach the most accurate conclusion.
See García García, Alberto (2016). El mercado relevante: técnicas económicas y econométricas para la delimitación. Trabajo Fin de Grado, Universidad de Oviedo.
Haldrup, N. (2003). “Empirical Analysis of Price Data in the Delineation of the Relevant Geographical Market in Competition Analysis”. University of Aarhus, Economics Working Paper.
Stigler, G. J., & Sherwin, R. A. (1985). “The Extent of the Market”. Journal of Law and Economics, 28(3), 555-585.
Granger, C. W., & Newbold, P. (1974). “Spurious Regressions in Econometrics”. Journal of Econometrics, 2, 111-120.
Bishop, S., & Walker, M. (1996). “Price Correlation Analysis: Still a Useful Tool for Relevant Market Definition”. Lexecon.
Engle, R. F., & Granger, C. W. (1987). “Co-Integration and Error Correction: Representation, Estimation and Testing”. Econometrica, 55(2), 251-276.
Johansen, S. (1988). “Statistical Analysis of Cointegration Vectors”. Journal of Economic Dynamics and Control, 12, 231-254.
Alexander, C., & Wyeth, J. (1994). “Cointegration and Market Integration: An Application to the Indonesian Rice Market”. The Journal of Development Studies, 30(2), 303-334.
Forni, M. (2004). “Using Stationarity Tests in Antitrust Market Definition”. American Law and Economics Review, 441-464.
Elzinga, K. G., & Hogarty, T. F. (1973). “The Problem of Geographic Market Definition in Antimerger Suits”. Antitrust Bulletin, 18(1), 45-81.
With Donald Trump voted in as the 45th US President, the world economy has witnessed another sobering reminder of the rise of populism, inward-looking politics and a sweeping anti-establishment wave, having barely recovered from the last one, Britain’s vote to leave the EU. The initial market turmoil from Trump’s surprise victory last week has reversed as fiscal stimulus takes centre stage on Trump’s economic agenda, but does the Trump plan have what it takes to kick-start the US economy?
“We have a great economic plan, we will double our growth and have the strongest economy anywhere in the world” was the promise made by US President-elect Donald Trump to “make America great again”. Having won the confidence of millions of Americans against a backdrop of stagnant productivity, weak wage growth and rising inequality, Trump’s “great economic plan” has a lot to deliver.
With the US economy having grown a little over 2 per cent on average over the last six years, and forecast to grow slightly less over the next six[i], doubling current growth would require something quite extraordinary. Even compared to pre-crisis growth of 3 per cent per year in the decade preceding the financial crisis, this target looks ambitious. If achieved, however, the US economy could close over half of the shortfall relative to its pre-crisis trend by 2021, as shown in figure 1.
The challenge then becomes finding new drivers of productivity growth to boost economic activity, and lifting lacklustre demand. Since the financial crisis, this has proven difficult, despite ultra-loose monetary policy. Moreover, growth in potential output has failed to materialise during the recovery, as evident in the US Congressional Budget Office’s consecutive downwards revisions to US potential output since 2008, as in figure 2. Figure 2 has come to be closely associated with an idea that the weak recovery may have at its heart a more structural rather than cyclical cause, also known as Secular Stagnation.
The Secular Stagnation Hypothesis
The term secular stagnation was initially coined by Alvin Hansen in 1938, in the aftermath of the Great Depression, questioning whether demand would be sufficient to support future economic growth. Over 70 years later, former US Treasury Secretary Larry Summers revived this debate after observing the weak economic recovery in advanced economies despite historically low interest rates. Summers suggests that the real interest rate required to maintain full employment and balance investment and savings may actually be negative.
This may be due to a chronic shortfall of demand from a lack of productive investment opportunities and a build-up of savings, driven by demographic factors such as ageing populations or a fall in the relative price of capital. Moreover, with advanced economies experiencing low inflation, boosting demand by reducing real interest rates becomes more difficult as monetary policy loses traction at the zero lower bound, shifting the onus to fiscal and structural policies to support economic growth.
The secular stagnation hypothesis finds some support in the evidence, with the real interest rate observed to be declining since the early 2000s[ii]. However, former US Federal Reserve Chairman Ben Bernanke interprets this as evidence of a temporary or cyclical “global savings glut”[iii]. Considering the open economy, he suggests excess savings have built up due to large current account surpluses held by oil-producing and emerging economies. As Bernanke emphasises, the root of the problem matters for the policy prescription: a structural demand deficit (under secular stagnation) may require a fiscal expansion, while a temporary excess or imbalance of savings would be best addressed by increasing the mobility of international capital flows.
Weak growth – a global problem
The challenges of weak growth and productivity are not faced by the US alone. Similar trends have been observed across advanced economies such as the Euro Area and Japan, with potential output consistently disappointing to the downside[iv]. The IMF’s World Economic Outlook published last month again revised down growth forecasts for advanced economies, to 1.6 per cent in 2016 and 1.8 per cent in 2017, down 0.5 and 0.3 percentage points since the start of this year alone. Furthermore, the IMF warned that “persistent stagnation in advanced economies could further fuel anti-trade sentiment, stifling growth”.
The “great economic plan”…
As it slowly emerges and gains coherence, Trump’s economic plan is a cocktail of fiscal expansion and trade protectionism. In theory, a package of tax cuts and deregulation should incentivise more investment by lowering the marginal tax rate on investment returns. This could provide a pivotal boost to the US economy – provided it doesn’t break the bank first. The Tax Policy Centre estimates a sizeable increase in national debt, by almost 80 per cent [recently revised down to 50 per cent] of GDP over the next 20 years[v], with benefits most likely only for the highest income earners.
Moreover, notable economists including Larry Summers and Adam Posen have criticised the package for being “ill-designed”[vi], providing tax cuts which are likely to be “low-multiplier rather than high-multiplier and budget-busting rather than responsible”[vii]. Worse still, if these tax cuts are funded through cuts in more productive spending such as research & development or education, they could actually undermine growth.
Trump’s protectionist stance on trade would also drag on growth, with severe implications for the global economy. Campaign promises of 45 and 35 per cent tariffs on imports from China and Mexico could result in a trade war and end up costing the US economy 4.8 million[viii] jobs, while tougher foreign investment rules could worsen the “global savings glut”. Just as the election campaign did not fail to shock and surprise, developments across these areas continue to present both upside and downside risks, with some forecasters even predicting a recession by the start of 2018[ix].
Don’t forget the Fed
US equity markets, which once dived at the prospect of Hillary Clinton losing, have since rallied over the prospect of Trump’s fiscal expansion plans, while government yields continue rising with expectations of inflationary pressures to follow. With the US economy already close to the 2% inflation target and near full employment, there is a strong case for interest rate hikes starting in December. But as the Federal Reserve Chair Janet Yellen warned, a series of aggressive rate hikes could stall growth, pushing Trump’s doubling of growth target even further out of reach. Much of the success of any fiscal expansion will depend on multipliers and the associated monetary policy response (to be explored in an upcoming post), and as Yellen emphasised a great deal of uncertainty still surrounds the proposed economic policies. For now at least, ‘Trumponomics’ seems unlikely to be the solution to our secular stagnation problems.
[i] IMF World Economic Outlook, “Subdued Demand: Symptoms and Remedies”, October 2016
[ii] King, M., & Low, D., “Measuring the World Real Interest Rate”, NBER Working Paper w19887
[iii] Bernanke, B., “Why are interest rates so low, part 3: The Global Savings Glut”, Brookings, April 2015
[iv] Summers, L., “Reflections on the new ‘Secular Stagnation hypothesis”, VOXEU, 30 October 2014
[v] Nunns., J et al., “Analysis of Donald Trump’s Tax Plan”, Tax Policy Centre Research Report, December 2015
[vi] Gurdus, E., “Larry Summers: Trump’s economic plan is ‘ill-designed’ and harmful”, CNBC, 16th November 2016
[vii] Acton, G., “‘Trumponomics’ is unfunded, open-ended and kind of ridiculous, economist Adam Posen says”, CNBC, 16th November 2016
[viii] Nolan, M., et al. “Assessing Trade Agendas in the US Presidential Campaign”, Peterson Institute for International Economics, September 2016
[ix] Zandi et al, “The Macroeconomic Consequences of Mr. Trump’s Economic Policies”, Moody’s Analytics, June 2016
The Political Future of the European Union
By Giacomo Ponzetto
The next session moved on to the political economy of the EU. Professor Ponzetto started his presentation by sketching the classical theory of fiscal federalism that gives insights on both the current state of the EU and its future.
This theory boils down to the trade-off between the benefits of policy coordination and the costs of policy uniformity: is there too much coordination in the current European framework (for example, many economists agree that the Common Agricultural Policy went too far, and the same is said of monetary union, with less consensus) or too little (since there is a monetary union, fiscal union arguably needs to be achieved)? The cost-benefit analysis becomes even trickier once the size of the union kicks in: with whom should we coordinate? Is the European Union overstretched, or should it continue to expand to new countries? Over time a neoliberal consensus has emerged, embodied by the research of Alberto Alesina (1999, 2005), pointing at a union that is too small and homogeneous. This union has also seized control of too many policies: the research tells us that decision makers will take control of a policy whenever they have the possibility, whereas some policies would be best kept off limits, as voters do not necessarily agree to delegate them. Furthermore, whether it does too much or too little, the European Union is doing it wrong according to this neoliberal consensus: the single market is too weakly enforced, and so is the provision of public goods (failure to coordinate foreign policy and defense); on the other side, it does far too much redistribution and local public service provision.
Professor Ponzetto highlighted two different scenarios for the future of Europe in the context of global economic integration. In the wake of Brexit, the first possible outcome is that globalization renders the EU irrelevant, up to its dissolution, since its main achievement, the single market, loses its competitive advantage as global trade rises, and members no longer need to bear the cost of uniformity. On the contrary, globalization could also strengthen the European Union through that very same single market: it could be the appropriate tool to take advantage of rising trade opportunities, with common economic regulation fostering economic integration, as noted by Gancia, Ponzetto and Ventura (2016). In any case, Brexit may offer a natural experiment to check which of these scenarios is correct. Qualitative data published by the Pew Research Center before the vote offer a mixed picture: short-term agreement on an “ever closer” union looks like a stretch, though this could change in the long term, as younger adults are more likely to favour the EU.
Source: Euroskepticism Beyond Brexit, Pew Research Center, June 2016
The presentation then turned to the topic that really complicates the European Union’s relation with voters: shared responsibility and political accountability. In practice, the EU has exclusive control over a very limited number of policies. Though a flexible combination of European and national responsibility seemed beneficial (Alesina, Angeloni and Etro, 2005), it has also generated opacity and a loss of accountability (Joanis, 2014): who is responsible for policy outcomes? This ambiguity has enabled politicians to blame the European Union, holding it responsible for any policy failure. As such, better enforcement of accountability through clearly delegated monitoring would be welcome. Boffa, Piolatto and Ponzetto (2016) found that differences in government accountability (for example, Germany versus Italy) make the union more desirable by helping countries converge to best practices. Political economy also tells us that a fiscal union does not seem very likely to happen, for two reasons highlighted by Persson and Tabellini (1996): moral hazard in local policy (higher risk taking), and the fact that risk sharing entails redistribution (if not in theory, then always in practice). It also provides some insights on hostility to immigration, through three factors: labour-market competition (see Professor di Giovanni’s presentation), pressure on the welfare state, and xenophobia.
Gazing into his crystal ball, Professor Ponzetto concluded his presentation by reminding us that the EU remains very cautious and does not aim at grand reform. On the positive side, the single market will probably go forward and continue to deepen, in particular for European financial markets. On the negative side, he remarked that the current (and long-standing) inefficiencies will probably persist for quite some time. Also, barring a significant shift in German politics, the fiscal doctrine is not likely to move from austerity to pro-competitive reforms.
Alberto Alesina, Ignazio Angeloni, Federico Etro, 2005. “International Unions”, American Economic Review
Federico Boffa, Amedeo Piolatto & Giacomo A.M. Ponzetto, 2016. “Political Centralization and Government Accountability” Quarterly Journal of Economics
Marcelin Joanis, 2014. Shared Accountability and Partial Decentralization in Local Public Good Provision. Journal of Development Economics
Gino Gancia, Giacomo Ponzetto, Jaume Ventura, 2016. “Globalization and Political Structure”, NBER Working Paper No. 22046
Torsten Persson, Guido Tabellini, 1996. “Federal Fiscal Constitutions: Risk Sharing and Redistribution”, Journal of Political Economy
Bruce Stokes, Euroskepticism Beyond Brexit, Pew Research Center, June 7, 2016
Migration in the EU and its economic impacts
By Julian di Giovanni
Professor di Giovanni continued the roundtable with a presentation on the theme of migration in the EU. He first articulated it around a thorough analysis of the current situation and the data. Even though inflows increased massively in 2015 due to the refugee crisis, migration in the EU is not new and involves various inflows coming from both outside and inside the EU. Looking at historical data, migration to Europe has been a steady process since WWII, and it notably accelerated in 2006 after the significant extension of the European Union to ten new countries, mostly in Eastern Europe. Contrary to what is usually expected, the proportion of foreigners in 2014 was similar in the US and in the largest European countries.
Research has been very active in breaking down and understanding the net economic impact of immigration, with an extensive literature on the underlying economic variables. The first results, from Borjas (1995), highlighted limited gains (to the tune of 0.1% of GDP) and even negative aggregate gains: Borjas found that overall gains were lower than net fiscal costs, implying a transfer of wealth from nationals to immigrants. However, Borjas already allowed for positive effects when the skill level of immigrants was taken into account. Recent research has shown larger gains from immigration: Klein and Ventura (2007) through labour reallocation in a growth model; di Giovanni, Levchenko and Ortega (2015) in a multi-country model that focuses on increased varieties and remittances. Another line of research, summarized by Clemens (2011), noted that the largest economic gains from globalization come from mobility of labour rather than of capital and goods.
Source: “Economics and Emigration: Trillion-Dollar Bills on the Sidewalk?”, Clemens, M., Journal of Economic Perspectives, 2011
Ortega and Peri (2014) also found positive evidence in a cross-country analysis of income per person and predicted openness to migrants, with results driven by total factor productivity through diversity effects such as differentiated skills in the labour force and increased innovation. Labour market outcomes are also addressed in de la Rica, Glitz and Ortega (2015), which confirms that migrants face higher unemployment and lower pay, and that countries differ in the education levels of their migrant populations. The direct question is then the distributional effect of these labour market outcomes on native workers. The results are mixed: Borjas (2003) finds that immigration is responsible for a large drop in unskilled wages in the US, while Ottaviano and Peri (2012) establish the opposite in a model that allows imperfect substitution between immigrants and natives with equal education and experience. Looking at firm-level data in Germany, Dustmann and Glitz (2015) show that changes in the skill mix of local labour supply are mostly absorbed by adjustments within firms, through changes in relative factor intensities as well as firms entering and exiting the market. Finally, net fiscal effects remain hard to estimate: contrary to Borjas (1995), Dustmann and Frattini (2014) find a positive and substantial effect exploiting micro data for the UK over 1995-2001.
Going forward, Professor di Giovanni insisted that migration is indeed a blessing for the European Union’s ageing population and can also facilitate the increase of female participation in the labour market. Innovative policy is of course needed, but some solutions already exist, such as the EU Blue Card scheme for high-skilled workers, and should be developed further to get results in the short run. But this also requires both institutional and social rigidities to be tackled, as well as more resources: an appropriate policy reaction should go far beyond migration alone, which cannot be treated as an isolated issue.
George J. Borjas, 1995. “The Economic Benefits from Immigration,” Journal of Economic Perspectives
George J. Borjas, 2003. “The Labor Demand Curve is Downward Sloping: Reexamining the Impact of Immigration on the Labor Market”, Quarterly Journal of Economics
Paul Klein, Gustavo J. Ventura, 2007. “TFP Differences and the Aggregate Effects of Labor Mobility in the Long Run” The B.E. Journal of Macroeconomics
Julian di Giovanni & Andrei A. Levchenko & Francesc Ortega, 2015. “A Global View of Cross-Border Migration,” Journal of the European Economic Association
Michael A. Clemens, 2011. “Economics and Emigration: Trillion-Dollar Bills on the Sidewalk?”, Journal of Economic Perspectives
Francesc Ortega, Giovanni Peri, 2014. “Openness and income: The roles of trade and migration”, Journal of International Economics
Gianmarco I. P. Ottaviano & Giovanni Peri, 2012. “Rethinking The Effect of Immigration On Wages,” Journal of the European Economic Association
Christian Dustmann & Albrecht Glitz, 2015. “How Do Industries and Firms Respond to Changes in Local Labor Supply?”, Journal of Labor Economics
Christian Dustmann & Tommaso Frattini, 2014. “The Fiscal Effects of Immigration to the UK”, Economic Journal
A European (dis)Union
“What is going wrong in the European Union these days?” is probably the question that many European voters have been trying to answer in the past few years, and all recent events keep pointing to it. First, the recent showdown between Bundesbank President Weidmann and ECB President Draghi over fiscal policy reminded us that even the monetary union, though initiated partly to trigger political union, remains far from smooth cooperation. Second, the theme of (im)migration was at the very core of the Brexit campaign and added to the overall dissent among European political leaders. This brings us to a final topic: politics at the European level, which will be key to delivering solutions.
In such a dense context, the 14th BGSE Economics Trobada 2016 ended with a most welcome roundtable on “The Future of Europe”, chaired by Professor Jaume Ventura. To help us better understand these various challenges for the EU and its future, Professor Ventura gathered the affiliated Barcelona GSE professors Fernando Broner, Julian di Giovanni and Giacomo Ponzetto.
Breaking the Bank and Bailing out States: Finance in the EU
By Fernando Broner
European Financial Markets
Professor Broner first presented the financial structure of European capital markets, in contrast to that of the United States. Notably, he highlighted that while equity markets play a greater role than banking-sector assets in the US, the relation is more than inverted in the EU, where banking-sector assets are almost six times the size of equity assets.
Equity markets are regulated by the European Securities and Markets Authority (ESMA), founded only recently, in 2011. As pointed out by Professor Broner, this young regulator still suffers from weak coordination, which restrains its action and thus the extension and deepening of European equity markets. Similarly, the International Financial Reporting Standards (IFRS), adopted in 2002 to bring a unified framework and hence foster equity markets, have not been fully enforced. On top of these regulatory inefficiencies, two more factors weigh on European equity markets: a strong home bias (64% of EU and 61% of Eurozone equity is held domestically) rather than interconnections between European countries; and the fact that banks mostly lend to each other through debt instruments.
Although European banks have larger activities abroad (18% vs. 9% in the US), they are smaller and less diversified than their American competitors. However, Professor Broner noted that the crisis prevention framework is now better articulated, under the ECB’s Single Supervisory Mechanism and an improved interaction between the European Commission and the European Banking Authority (EBA) at the rule-making level. The picture is less positive, though, for crisis management, where responsibilities are shared between the national level (lender of last resort, deposit insurance) and the European level (the Single Resolution Mechanism (SRM) and the European Stability Mechanism (ESM)).
Sovereign Debt and Bail-Out Control
Professor Broner then turned to sovereign debt, another highly sensitive topic for the EU. Prior to the global financial crisis, sovereign risk had almost disappeared in the Euro area, as spreads for all countries traded in the same range. Part of this decline and convergence in spreads could be explained by expectations of “automatic” bail-outs and a higher cost of default, which proved both inefficient and inaccurate. During the Eurozone debt crises, various packages were set up to fund bail-outs: at the beginning, the International Monetary Fund was very active alongside European countries, then progressively stepped back to let the EU’s new financial institutions (first the European Financial Stability Facility, then the ESM) take over and manage the bail-out packages. Even the ECB became directly involved through its Securities Markets Programme, enacted in 2010 to purchase mostly sovereign bonds. Professor Broner highlighted the unclear role of the ESM: it resembles a bank capitalized by Eurozone members whose priority is to provide liquidity, yet it remains criticized by some countries because it offers a kind of transfer scheme, notably through its long-maturity loans at very low interest rates. Another challenge is the “sovereign-bank embrace” generated by the current institutional setup, with Eurozone banks holding more and more government bonds. This has serious implications: it crowds out lending to the private sector and reinforces banks’ exposure to sovereign risk, while, reciprocally, banks’ exposure also affects governments on the fiscal side, as the cost of banking crises contributed significantly to the sharp increase in these countries’ public debt ratios.
Source: Professor Broner’s presentation
Professor Broner concluded his presentation with some recommendations for the future of European finance. First, remaining barriers to international diversification in equity markets should be removed. Second, banks’ risk sharing should be improved by encouraging more equity exposure as well as consolidation across countries, with more regulation of banks’ subsidiaries. Finally, the current sovereign-bank embrace should be discouraged through a direct lender-of-last-resort scheme, direct ESM funding for recapitalizing banks, and limits on banks’ sovereign exposure.
On 26 October 2016, Chilean economist Dr. José Gabriel Palma gave a lecture organised by the Institut Barcelona d’Estudis Internacionals (IBEI): “Why is Inequality so Unequal in the World? Do Nations Just Get the Inequality They Deserve?” During the lecture, he also presented his recently published working paper, which re-examines the eponymous Palma ratio.
Forces at work
In an earlier paper, published in 2011, Dr. Palma had already highlighted two forces at work in rising inequality across the world: ‘One is ‘centrifugal’, and leads to an increased diversity in the shares appropriated by the top 10 and bottom 40 per cent. The other is ‘centripetal’, and leads to a growing uniformity in the income-share appropriated by deciles 5 to 9’ (Palma, 2011). Decile 10 refers to the top 10 per cent, and deciles 5 to 9 can be interpreted as the middle-upper class. What this means is that the middle-upper class has been quite successful at protecting its share, and anybody who genuinely wants to understand inequality within a country should really focus on the income share of the top 10%. To adapt the famously crude words of Clinton’s campaign strategist: “It’s the share of the rich, stupid.”
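To make the decile arithmetic concrete, here is a minimal sketch that splits a sample of incomes into ten equal-sized groups and sums the shares of the bottom 40%, deciles 5 to 9, and the top 10%. The log-normal income sample is a hypothetical illustration of my own, not any of the survey data behind Palma’s results.

```python
import numpy as np

def decile_shares(incomes):
    """Sort incomes, split them into ten equal-sized groups (deciles),
    and return each decile's share of total income."""
    x = np.sort(np.asarray(incomes, dtype=float))
    groups = np.array_split(x, 10)  # decile 1 = poorest, decile 10 = richest
    return np.array([g.sum() for g in groups]) / x.sum()

# Hypothetical log-normal income sample -- illustrative only, not survey data
rng = np.random.default_rng(42)
shares = decile_shares(rng.lognormal(mean=10.0, sigma=0.8, size=10_000))

bottom40 = shares[:4].sum()   # deciles 1-4
middle   = shares[4:9].sum()  # deciles 5-9 (the "middle-upper class")
top10    = shares[9]          # decile 10
print(f"bottom 40%: {bottom40:.1%}  deciles 5-9: {middle:.1%}  top 10%: {top10:.1%}")
```

Comparing decile shares like these across countries is precisely where Palma observed the centripetal uniformity of deciles 5 to 9.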
Gini or Palma?
The Gini coefficient, developed by Corrado Gini, is an index of income inequality, with 0 implying perfect equality and 1 perfect inequality. The Gini coefficient has been in use for a long time, but is not without its limitations. Atkinson (1970) pointed out that the ‘Gini coefficient attaches more weight to transfers affecting middle income classes and the standard deviation weights transfers at the lower end more heavily.’ In other words, the Gini index is more sensitive to changes in the middle of the distribution, and less sensitive to changes at the top and bottom. Cobham and Sumner (2013) have echoed similar sentiments and criticised the use of the Gini index. They in fact coined the term ‘Palma ratio’, which is ‘the ratio of the top 10% of population’s share of gross national income (GNI), divided by the poorest 40% of the population’s share of GNI’ (Cobham and Sumner, 2013).
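Both measures are straightforward to compute from income microdata. The sketch below implements the standard closed-form Gini on sorted data and the Palma ratio as defined by Cobham and Sumner; the ten-value income distribution is a made-up toy example, not data from any of the papers cited.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the closed form on sorted data:
    G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with i = 1..n."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * x) / (n * x.sum()) - (n + 1) / n

def palma(incomes):
    """Palma ratio: income share of the top 10% over that of the bottom 40%."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    return x[int(0.9 * n):].sum() / x[:int(0.4 * n)].sum()

incomes = [10, 15, 20, 25, 30, 35, 40, 50, 70, 200]  # made-up, one person per decile
print(f"Gini: {gini(incomes):.3f}  Palma: {palma(incomes):.2f}")  # Gini: 0.464  Palma: 2.86
```

Note how the single rich observation drives the Palma ratio directly (200 / 70 ≈ 2.86), while the Gini folds it in with every pairwise comparison across the whole distribution.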
Why might the Palma ratio be a more relevant indicator of the extent of income disparity in an economy? Why is there a growing consensus that the Palma ratio is more appropriate for formulating policies aimed at reducing poverty? The answer is straightforward: if an economy has a high Palma ratio, policy-makers can immediately focus on measures to narrow the gap between the top 10% and the bottom 40%. In Palma’s own words: ‘… it measures inequality where inequality exists; it is also simple, intuitive, transparent and particularly useful for policy purposes’ (Palma, 2016). In contrast, the Gini index is likely to engender distortions because it reflects changes in the part of the distribution (the middle-upper class) where the probability of such changes is lowest. This is such an important consideration that, during the lecture at IBEI, Dr. Palma cautioned his audience to ponder norms versus distortions. Indeed, if we fixate on the middle-upper class, whose homogeneity is evident across the world, we may be turning a blind eye to more serious concerns, that is, the norms: the (widening) gap between the extreme ends of the spectrum. In light of this, including the Palma ratio in policy design should lead to more desirable outcomes.
It is interesting to note that the Palma index has also revealed certain stylised facts.
Source: Palma, 2016
Using the Palma ratio, the figure shows how inequality increases almost linearly until about the 100th ranking, the point at which most Latin American countries come into the picture (as indicated by the red circles, with the exception of Uruguay) and where inequality starts increasing exponentially.
Moreover, we also need to reconsider the correlation between education and income distribution. Dr. Palma noted that most of the diversity in educational attainment across the world (especially in tertiary education) is found in the deciles 5 to 9 group. However, ‘why does one find extraordinary similarity across countries in the shares of national income appropriated by this educationally highly diverse group?’ (Palma, 2016). Chile is a case in point: with gross tertiary enrolment of 71%, its income share generated by deciles 7 to 9 is about the same as that of the Central African Republic, whose gross tertiary enrolment is only 3.1% (Palma, 2016). Hence, is it really all about education? Or will the positive externalities and effects of education truly manifest themselves only in certain institutional settings? Ostensibly, more research has to be done to unravel the intricacies of the relationship between income disparity and educational attainment.
“We know very little.”
That was what Dr. Palma emphasised, during the lecture, to a sea of concerned faces in the audience. Yet that is undoubtedly the right attitude to adopt as we challenge certain mainstream ideas in economics. Apart from being assiduous in constantly refining our tools of analysis, as demonstrated by the shift from the Gini index to the Palma ratio, we should also make a conscientious effort to reflect on norms versus distortions. Indeed, from time to time, one must critically reassess the aspects on which one focuses.
Palma, J. G. (2016) ‘Do Nations Just Get the Inequality They Deserve? The “Palma Ratio” Re-examined’, Cambridge Working Papers in Economics, no. 1627, University of Cambridge.
Palma, J. G. (2011) ‘Homogeneous Middles vs. Heterogeneous Tails, and the End of the “Inverted-U”: The Share of the Rich Is What It’s All About’, Cambridge Working Papers in Economics, no. 1111, University of Cambridge.
Atkinson, A. B. (1970) ‘On the Measurement of Inequality’, Journal of Economic Theory, 2(3), pp. 244–263.
Cobham, A. and Sumner, A. (2013) ‘Is It All About the Tails? The Palma Measure of Income Inequality’, CGD Working Paper, no. 343, Center for Global Development.