Stop dropping outliers, or you might miss the next Messi!

Jakob Poerschmann ’21 explains how to teach your regression the distinction between relevant outliers and irrelevant noise

Soccer star Leo Messi in action on the field
Photo by Marc Puig Perez on Flickr

Jakob Poerschmann ’21 (Data Science) has written an article called “Stop Dropping Outliers! 3 Upgrades That Prepare Your Linear Regression For The Real World” that was recently posted on Towards Data Science.

The real-world example he uses to set up the piece will resonate with every fan of FC Barcelona (and will probably scare them, too):

You are working as a Data Scientist for FC Barcelona and have taken on the task of building a model that predicts the value increase of young talent over the next 2, 5, and 10 years. You might want to regress the value on some meaningful metrics such as assists or goals scored. Some might now apply the standard procedure and drop the most severe outliers from the dataset. While your model might predict decently on average, it will unfortunately never understand what makes a Messi (because you dropped Messi with all the other “outliers”).

The idea of dropping or replacing outliers in regression problems comes from the fact that simple linear regression is comparatively sensitive to extreme values in the data. However, this approach would not have helped you much in your role as Barcelona’s Data Scientist. The simple message: outliers are not always bad!
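The article’s three upgrades are not reproduced in this post, but the following minimal sketch (simulated data, scikit-learn) illustrates one standard alternative to dropping observations: a robust loss keeps every data point while bounding the influence of pure noise, instead of deleting anything that looks extreme.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor

rng = np.random.default_rng(0)

# Simulated data with a true slope of 0.5, plus one corrupted
# observation (measurement noise, not a genuine "Messi").
X = rng.uniform(0, 30, size=(100, 1))
y = 0.5 * X.ravel() + rng.normal(0, 1.0, size=100)
y[0] = 100.0  # a single noisy outlier

ols = LinearRegression().fit(X, y)
huber = HuberRegressor().fit(X, y)  # bounds each point's influence instead of deleting it

print("True slope:  0.50")
print(f"OLS slope:   {ols.coef_[0]:.2f}")   # dragged away by the outlier
print(f"Huber slope: {huber.coef_[0]:.2f}") # stays close to 0.50
```

Running this, the OLS slope is pulled away from the true 0.5 by the single corrupted point, while the Huber estimate stays close to it, without deleting a single observation.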

Dig into the full article to find out how to prepare your linear regression for the real world and avoid a tragedy like this one!

Connect with the author


Jakob Poerschmann ’21 is a student in the Barcelona GSE Master’s in Data Science.

Data Science team “Non-Juvenile Asymptotics” wins 3rd prize in annual Novartis Datathon

Patrick Altmeyer, Eduard Gimenez, Simon Neumeyer and Jakob Poerschmann ’21 competed against 57 teams from 14 countries.

Screenshot of team members on videoconference
Members of the “Non-Juvenile Asymptotics” Eduard Gimenez, Patrick Altmeyer, Simon Neumeyer and Jakob Poerschmann, all Barcelona GSE Data Science Class of 2021

The Novartis Datathon is a Data Science competition that takes place annually, usually in Barcelona. In 2020, the Barcelona GSE team “Non-Juvenile Asymptotics”, consisting of Eduard Gimenez, Patrick Altmeyer, Simon Neumeyer and Jakob Poerschmann, won third place after a fierce competition against 57 teams from 14 countries around the globe. While the competition is usually hosted in Barcelona, the Covid-friendly version was fully remote. Nevertheless, the increased diversity of teams clearly made up for the lost in-person atmosphere.

This year’s challenge: predict the impact of generic drug market entry

The challenge concerned predicting the impact of generic drug market entry. For pharmaceutical companies, the risk of losing ground to cheaper drug replicates once patent protection runs out is evident. The solutions developed help solve exactly this problem, making drug development much easier to plan and calculate.

While the problem could have been tackled in various ways, the Barcelona GSE team focused initially on developing a solid modeling framework. This represented a risky extra effort at the beginning; in fact, more than half of the competition period passed without any forecast submission from the Barcelona GSE team. The initial effort clearly paid off, however: once the framework was in place, the “Non-Juvenile Asymptotics” were able to benchmark multiple models at rocket speed.

Fierce competition until the very last minute

The competition was a head-to-head race until the last minute. The Barcelona GSE team held first place until minutes before the final deadline, when two teams from Hungary and Spain edged ahead by razor-thin margins.

Congratulations to the winners!!!

Group photo of the team outside the entrance of Universitat Pompeu Fabra
The team at Ciutadella Campus (UPF)

Connect with the team

Automation and Sectoral Reallocation

Article by Dennis Hutschenreiter, Tommaso Santini, and Eugenia Vella

Illustration of a robot sitting on a scale while workers move to the other side
Original artwork by Angelica Lena

In this paper, we study the sectoral reallocation of employment due to automation in Germany. Empirical evidence by Dauth et al. (2021) shows that robot adoption has induced a shift of employment from the manufacturing to the service sector, leaving total employment unaffected. We rationalize this evidence through the lens of a two-sector, general equilibrium model with matching frictions and endogenous participation.

Few papers have studied the effect of automation on employment in a multi-sectoral model. Berg et al. (2018) argue that the inclusion of a non-automatable sector amplifies the difference between the effect of automation on low- and high-skill workers. Sachs et al. (2019) also include a non-automatable sector in an overlapping generations model. They study the possibility of one generation improving their welfare at future generations’ expense through robot adoption. To the best of our knowledge, we are the first to build a two-sector general equilibrium model with search and matching frictions to analyze the long-run impact of automation on both sectoral and aggregate employment.

We consider a representative household that decides how to allocate its members between non-participants in the labor markets, job-searchers in manufacturing, and job-searchers in services. The household also accumulates capital. On the production side, there is a representative firm in each sector. The manufacturing firm decides how many vacancies to post and how much capital to borrow from the household. Automation increases the capital intensity of the technology in the manufacturing sector. This can be motivated by the idea that some work operations, formerly performed by humans, are now executed by robots (Acemoglu and Restrepo (2018)). In the service sector, we assume for simplicity that no capital is needed, and thus the representative firm decides only the number of vacancies to post.

Figure 1. Employment shares in Germany (left panel) and model-implied sectoral and total employment (right panel)

Key takeaways

After calibrating our model to the German economy in 1994, we perform steady-state comparative statics to study the long-run impact of automation on the sectoral reallocation of employment. As the stock of robots in Germany increased by 87% between 1994 and 2014, we can qualitatively compare the model economy’s reaction to an increase in the degree of automation with the sectoral employment shares observed in Germany over time.

The right panel of Figure 1 shows the model-implied values of sectoral employment together with total employment. A higher degree of automation, which we take as a proxy for robot adoption, increases employment in services and decreases it in manufacturing. Despite these adjustments in sectoral employment, total employment remains constant, consistent with the empirical evidence of Dauth et al. (2021). In the left panel of Figure 1, we plot the employment shares in the German economy, computed using data from the German Federal Statistical Office (DESTATIS). The model qualitatively replicates the observed pattern in sectoral employment.

To assess how well our model can explain the sectoral reallocation of employment in Germany, we focus on comparing two steady states. These two steady states correspond to the start and end years in the empirical analysis of Dauth et al. (2021). The model predicts a decline of 27% in the ratio of manufacturing employment to service employment, which is reasonably close to the one found in the aggregate data for the German economy, i.e., 32%.

Having shown that the model replicates the sectoral reallocation we observe in the data, we then ask: what determines the extent of sectoral reallocation?

Two main parameters govern the strength of the sectoral reallocation of employment in the model: (1) the elasticity of substitution between capital and labor in the manufacturing sector, α, and (2) the elasticity of substitution between the outputs of the two sectors, χ. Intuitively, in the first case, as α decreases, capital and labor become stronger complements in the production of the manufacturing good. As automation raises the return on capital for a given capital stock, this leads in the long run to a higher capital stock in the steady state. The stronger the complementarity between the two inputs in manufacturing (i.e., the lower α), the higher the relative demand for manufacturing workers. Therefore, the sectoral reallocation of employment due to automation is mitigated for lower values of α, as Figure 2 demonstrates.

Figure 2. Sectoral employment and the degree of automation
Note: The plotted variables are normalized to zero in the initial steady state. α denotes the elasticity of substitution between capital and labor in manufacturing production. χ denotes the elasticity of substitution between the two sectoral goods.

Concerning the second parameter, we need to distinguish two different effects on production and employment in the service sector. First, automation leads to higher capital accumulation in the long run and thus to higher household wealth, which raises the demand for services, a normal good. Second, the stronger the complementarity between the two goods in the economy, the larger the increase in the demand for services. Consequently, higher substitutability between service and manufacturing goods (i.e., a higher χ) mitigates the increase in the demand for services and, thus, the sectoral reallocation of employment, as Figure 2 shows.
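For readers who want to pin down what these two elasticities parametrize, a standard two-sector CES formulation is sketched below in LaTeX. The notation (distribution parameters θ and ω) is ours for illustration; the paper’s exact functional forms may differ.

```latex
% Standard CES technology for manufacturing, with elasticity of
% substitution alpha between capital K and manufacturing labor L_M:
Y_M = \left[ \theta\, K^{\frac{\alpha - 1}{\alpha}}
      + (1 - \theta)\, L_M^{\frac{\alpha - 1}{\alpha}} \right]^{\frac{\alpha}{\alpha - 1}}

% Standard CES aggregator over the two sectoral goods, with elasticity
% of substitution chi between manufacturing and service consumption:
C = \left[ \omega\, C_M^{\frac{\chi - 1}{\chi}}
    + (1 - \omega)\, C_S^{\frac{\chi - 1}{\chi}} \right]^{\frac{\chi}{\chi - 1}}
```

With these forms, α → 1 recovers Cobb-Douglas in manufacturing, while α below 1 makes capital and labor gross complements, the region in which automation-driven capital deepening raises the relative demand for manufacturing workers and dampens reallocation, as described above.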

Conclusion

To sum up, we build a general equilibrium model with an automatable and a non-automatable sector and labor market frictions that is able to rationalize the empirical evidence presented by Dauth et al. (2021) on (i) the substantial sectoral reallocation of employment and (ii) the null effect on total employment. We show that our calibrated model can reasonably explain the empirical strength of the sectoral reallocation of labor. Furthermore, we analyze which key parameters govern the magnitude of this effect in the model.

An interesting extension of our model would be to include heterogeneous agents and capital-skill complementarity (see e.g. Dolado et al. (2021) and Santini (2021)). With that extended framework, one could study the interplay between automation, sectoral reallocation, and inequality. We leave this topic for future research.


Connect with the authors


Dennis Hutschenreiter is a PhD candidate in the IDEA Program (UAB and Barcelona GSE).


Tommaso Santini is a PhD candidate in the IDEA Program (UAB and Barcelona GSE).


Eugenia Vella is a Research Fellow at AUEB, ELIAMEP, and MOVE.

Opening the Black Box of Austerity: Evidence from Fiscal Consolidation Plans

Alessandro Franconi ’17 (Macroeconomic Policy and Financial Markets)

In a new LUISS Working Paper, “Opening the Black Box of Austerity: Evidence from Fiscal Consolidation Plans,” Alessandro Franconi ’17 (Macro) explores the effects of austerity measures on labour markets and on income inequality and finds evidence of a mechanism that can mitigate the size of the economic contraction.

Paper abstract

This paper explores the effects of austerity measures on labour markets and on income inequality and finds evidence of a mechanism that can mitigate the size of the economic contraction. The results indicate that: (i) Fiscal consolidation causes greater distortions for the youth, hence they deserve special attention to avoid severe long-term economic costs. (ii) While at first glance transfer cuts seem ideal, a careful examination suggests that these policies can jeopardise the success of fiscal consolidation. (iii) Tax hikes, negatively affecting the productive sector, trigger frictions in the labour market that give rise to recessionary effects. (iv) Spending cuts, targeting public sector wages and employment, can endanger the capabilities of the current and future labour force. (v) Lastly, income inequality increases with tax hikes and spending cuts, whereas the muted response to transfer cuts is explained by the reaction of labour demand.

Connect with the author

Is the COVID-19 pandemic a consumption game changer?

Research co-authored by Alex Hodbod ’12 (ITFD) and Steffi Huber ’10 (Economics)

The CEPR journal on Covid Economics recently included the paper, “Is COVID-19 a consumption game changer? Evidence from a large-scale multi-country survey” by Alexander Hodbod ’12 (ITFD), Cars Hommes, Stefanie J. Huber ’10 (Economics), and Isabelle Salle.

Steffi gave an interview to CEPR’s Tim Phillips about the team’s research:

Policies to avoid zombification of the economy

In an accompanying VoxEU column, the authors discuss the risks that government responses to COVID-19 could “zombify” the economy.

“A representative consumer survey in five EU countries indicates that many consumers do not miss certain goods and services they have cut down on since the COVID-19 outbreak,” the authors explain in their column. “Fiscal policy must recognise that some firms will become obsolete in the altered post-COVID-19 environment. To achieve a swift recovery, these obsolete firms must be allowed to fail fast so that resources can be reallocated to more efficient uses. Instead, fiscal support should be laser-like in targeting those households who are particularly hard hit by the crisis. Such support should be oriented towards helping displaced workers retrain and find new jobs.”

Paper abstract and download

Prospective economic developments depend on the behavior of consumer spending. A key question is whether private expenditures recover once social distancing restrictions are lifted or whether the COVID-19 crisis has a sustained impact on consumer confidence, preferences, and, hence, spending. Changes in consumer behavior may not be temporary, as they may reflect long-term changes in attitudes arising from the COVID-19 experience. This paper uses data from a representative consumer survey in five European countries conducted in summer 2020, after the release of the first wave’s lockdown restrictions. We document the underlying reasons for households’ reduction in consumption in five key sectors: tourism, hospitality, services, retail, and public transport. We identify a large confidence shock in the Southern European countries and a permanent shift in consumer preferences in the Northern European countries. Our results suggest that horizontal fiscal support to all firms risks creating zombie firms and would hinder necessary structural changes to the economy.

Connect with the authors

  • Alexander Hodbod ’12 (International Trade, Finance, and Development). Counsellor to ECB Representative to the Supervisory Board, European Central Bank (DGSGO-SO), Frankfurt, Germany.
  • Cars Hommes. Professor of Economic Dynamics at CeNDEF, Amsterdam School of Economics, University of Amsterdam, and research fellow of the Tinbergen Institute, Amsterdam, The Netherlands, Senior Research Director (Financial Markets Department), Bank of Canada.
  • Stefanie J. Huber ’10 (Economics). Assistant Professor at CeNDEF, Amsterdam School of Economics, University of Amsterdam, and research candidate fellow of the Tinbergen Institute, Amsterdam, The Netherlands. 
  • Isabelle Salle. Principal Researcher at the Bank of Canada (Financial Markets Department), research fellow at the Amsterdam School of Economics, University of Amsterdam, and research fellow of the Tinbergen Institute, Amsterdam, The Netherlands. 

The Exploring Happiness Index

A new tool created by Elliot Jones ’18 (Macro Program)

Elliot Jones ’18 (Macroeconomic Policy and Financial Markets) is a Sovereign Credit Risk Analyst at the Bank of England. Together with co-creator Jessica Golding, he has established Exploring Happiness, a research organization that aims to produce evidence-based research and generate policy recommendations for sustainably increasing wellbeing across all areas of society.

The latest project to come out of the initiative is the Exploring Happiness Index, an online tool that anyone can use to help achieve their goals and support their mental health.

Learn more about the index in the video and read on to find out how to start using it:

Explaining the concept of the index

Everyone is different, and what makes each of us happy is different too. Some people are career-driven, others live for the social scene, and some are highly family-orientated. Despite this, we all have a lot in common: we all value our health, both mental and physical; the quality of our personal relationships matters a lot; and we all like to have something to do that makes us feel worthwhile. These basic fundamentals are what we use to build the index. We take evidence-based research on the main determinants of life satisfaction (which we treat as synonymous with happiness) to bring together a group of components that we all have in common, and these become the building blocks of the index.

Methodology

The differences between users are captured in two main ways: by identifying each user’s circumstances and by identifying their preferences. When creating an account, each user chooses from 13 individual types, such as an employed worker, a student, or a retired person. The components that make up the index then change to reflect that individual type’s circumstances (e.g. an employed worker has a ‘Work’ component, a student has an ‘Education’ component, and a retired person has neither, but greater weight is applied to their ‘Leisure Time’ component).

Next, users choose how important each of the components is to them, and the weights in the index shift to reflect these choices. These two steps make the index unique to each user, as the sketch below illustrates.
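The exact component lists and weighting rules belong to the Exploring Happiness methodology and are not spelled out in this post. As a purely hypothetical sketch of the two-step personalization described above (the component names, scores, and importance values below are illustrative assumptions, not the actual methodology):

```python
# Hypothetical sketch of a personalized wellbeing index. The component
# names, the 0-10 scoring scale, and the importance weights are all
# illustrative assumptions, not the Exploring Happiness methodology.

def happiness_index(scores: dict[str, float], importance: dict[str, float]) -> float:
    """Weighted average of component scores (0-10), with weights
    normalized from the user's stated importance of each component."""
    total = sum(importance[c] for c in scores)
    weights = {c: importance[c] / total for c in scores}
    return sum(weights[c] * scores[c] for c in scores)

# An "employed worker" profile includes a Work component; a student
# profile would swap it for Education, as described in the post above.
scores = {"Health": 7.0, "Relationships": 8.0, "Work": 5.0, "Leisure Time": 6.0}
importance = {"Health": 5, "Relationships": 4, "Work": 3, "Leisure Time": 2}

print(f"Index: {happiness_index(scores, importance):.1f} / 10")
```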

Three benefits of using the index

  1. Being informed: Our decisions, big or small, play an important role in determining our happiness, and we are more likely to make the right decision when we have better information to draw upon. The index provides users with this information; just knowing how happy you are could, perhaps, make you happier.
  2. Mental health tool: Using the index gives users time to reflect, to think about what has been going well and what has been more challenging. There is good evidence pointing to the benefits of self-reflection for mental health. This has been shown for self-reflection in a number of forms (e.g. from expressive writing to gratitude journaling), across various life stages, and as an effective treatment for those with diagnosed mental illnesses. Our view, which we intend to test robustly in the future, is that the method of self-reflection this index requires will boost users’ resilience and mental wellbeing.
  3. The ultimate tracker: Nowadays, it is not uncommon to track several parts of our lives, from steps to sleep to calories. But what’s the point in tracking these things? For most people, it’s because they believe that if they do more steps or sleep better, they will end up feeling better. This index allows you to check whether that’s true in practice.

Get started with the Exploring Happiness Index

You can create an account here or find out a little more first by heading to the index homepage.

If you have any feedback, please email info@exploringhappiness.co.uk or use the feedback form on our website. 

Elliot Jones ’18 is a Sovereign Credit Risk Analyst at the Bank of England. He is an alum of the Barcelona GSE’s Master’s in Macroeconomic Policy and Financial Markets.

LinkedIn | Exploring Happiness

Does Air Pollution Exacerbate Covid-19 Symptoms? Evidence from France

Economics master project by Mattia Laudi, Hubert Massoni, and James Newland ’20

The Eiffel Tower under a dark red sky
Image by Free-Photos from Pixabay

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Abstract

For patients infected by Covid-19, underlying health conditions are often cited as a source of increased vulnerability, of which exposure to high levels of air pollution has proven to be an exacerbating cause. We investigate the effect of long-term pollution exposure on Covid-19 mortality, admissions to hospitals and admissions to intensive care units in France. Using cross-sectional count data at the local level, we fit mixed effect negative binomial models with the three Covid-19 measures as dependent variables and atmospheric PM2.5 concentration (µg/m3) as an explanatory variable, while adjusting for a large set of potential confounders. We find that a one-unit increase in PM2.5 concentration raised on average the mortality rate by 22%, the admission to ICU rate by 11% and the admission to hospital rate by 14% (rates with respect to population). These results are robust to a large set of sensitivity analyses. As a novel contribution, we estimate tangible marginal costs of pollution, and suggest that a marginal increase in pollution resulted on average in 61 deaths and created a 1 million euro surcharge in intensive care treatments over the investigated period (March 19th – May 25th).

A map of air pollution and a map of Covid deaths in France

Conclusions

The study is a strong indication that air pollution is a crucial environmental factor in mortality risks and vulnerability to Covid-19. The health risks associated with air pollution are well documented, but with Covid-19 in the spotlight we hope to increase awareness of the threat caused by pollution, not only through direct increased health risks, but also through external factors, such as pandemics.

We show the aggravating effect of long-term pollution exposure on three levels of severity of Covid-19 symptoms in France: admission to hospitals for acute Covid-19 cases, admission to intensive care units for the most severe vital organ failures, and fatalities (all expressed per 100,000 inhabitants). Using cross-sectional data at the départemental (sub-regional) level, we fit mixed effect negative binomial models with the three Covid-19 measures as dependent variables and the average level of atmospheric concentration of PM2.5 (µg/m3) as an explanatory variable. We adjust for a set of 18 potential confounders to isolate the role of pollution in the spread of the Covid-19 disease across départements. We find that a one-unit increase in average PM2.5 levels increases the mortality rate by 22% on average, the admission-to-ICU rate by 11%, and the admission-to-hospital rate by 14%. These results are robust to a set of 24 secondary and sensitivity analyses per dependent variable, confirming the consistency of the findings across a wide range of specifications.
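For readers who want to see the mechanics, here is a deliberately simplified sketch in Python. It drops the mixed (random) effects and most confounders from the authors’ specification, and the file and column names are hypothetical, but it shows how a negative binomial regression with a population offset yields the rate interpretation quoted above: a PM2.5 coefficient of about 0.20 on the log scale corresponds to exp(0.20) ≈ 1.22, i.e. a 22% higher mortality rate per unit of PM2.5.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per departement, with Covid-19 death
# counts, average PM2.5 (ug/m3), population, and two of the paper's
# many confounders. File and column names are illustrative.
df = pd.read_csv("departements.csv")

# Negative binomial count model with log population as an exposure
# offset, so coefficients act on the per-capita death *rate*.
model = smf.negativebinomial(
    "deaths ~ pm25 + median_age + density",
    data=df,
    offset=np.log(df["population"]),
).fit()

# With a log link, a one-unit PM2.5 increase multiplies the expected
# death rate by exp(beta): exp(0.20) ~ 1.22 would match the paper's +22%.
print(np.exp(model.params["pm25"]))
```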

We further provide numerical – and hence more tangible – estimates of the marginal costs of pollution since March 19th. Adjusting for under-reporting of Covid-19 deaths, we estimate that long-term exposure to pollution marginally resulted in an average 61 deaths across French départements. Moreover, based on average daily costs of intensive care treatments, we estimate that pollution induced an average 1 million euros in costs borne by hospitals treating severe symptoms of Covid-19. These figures strongly suggest that areas with greater air pollution faced substantially higher casualties and costs in hospital services, and raise concerns about misallocation of resources to the healthcare system in more polluted areas.

Our paper provides precise estimates and a reproducible model for future work, but is limited by the novelty of the phenomenon at the centre of the study. Our empirical investigation is restricted to France alone due to cross-border inconsistencies in Covid-19 data collection and reporting. Once Covid-19 data reporting is complete and consistent, we hope future studies will examine the effects of air pollution at a greater scale, or in greater detail. On the other hand, more disaggregated data – at the individual or hospital level – would allow more precise estimates and a better understanding of key factors of Covid-19 health risks, and would also allow the use of surface-measured air pollution. Measured pollution data is available for France, but is inherently biased when aggregated at the départemental level due to lack of territorial coverage. If precise data tracking periodic Covid-19 deaths becomes available for a wider geographic region, we specifically recommend a mixed effect negative binomial (MENB) panel regression incorporating a PCFE for spatially correlated errors, as this would produce the most accurate estimates.

Going forward, more accurate and granular data should motivate future research to uncover the exact financial costs attributable to air pollution during the pandemic. Precise estimation of the costs of Covid-19 treatments and equipment (e.g. basic protective equipment for personnel or resuscitation equipment) should feature in a more accurate cost analysis. Hospital responses should be thoroughly analysed to understand the true cost of treatments across all units.

It is crucial that the healthcare costs of pollution are globally recognised so that future policy decisions take them into account. Ultimately, this paper stresses that failure to manage and improve ambient air quality in the long run only magnifies future burdens on healthcare resources and causes more damage to human life. During a global pandemic, the costs of permitting further air pollution appear ever more salient.

Connect with the authors

About the Barcelona GSE Master’s Program in Economics

Demand Estimation in a Two-Sided Market: Viewers and Advertisers in the Spanish Free-to-Air TV Market

Competition and Market Regulation master project by Sully Calderón and Aida Moreu ’20

Photo by Glenn Carstens-Peters on Unsplash

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Abstract

Our research arises in a context where “free” services in one market cannot be understood without taking into consideration the other side of the market. The Spanish free-to-air TV industry is a two-sided market in which viewers demand TV programs (for free) and advertisers demand advertising spots, for which they pay a price that depends mainly on audience. Our main contribution to the two-sided market literature is estimating both viewers’ and advertisers’ demand in order to understand the interactions between the two sides of the free-to-air TV market.

We carry out this analysis by developing an econometric model of the Spanish free-to-air TV market that captures the reaction of viewers to a change in advertising quantity and the effect this would have on the price of ads. We specify viewers’ demand through a logit model to analyse the impact of advertising minutes on audience share, and we specify advertisers’ demand through an adaptation of the model of Wilbur (2008) to understand the effect of audience share and advertising quantity on ad prices.
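As an illustration of how the viewers’ side of such a model can be estimated, the sketch below uses the standard Berry (1994) inversion of an aggregate logit. The project’s actual specification (and the Wilbur-style advertiser side) may differ, and the file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per channel and time slot, with audience
# share, the outside-option share (not watching TV), and ad minutes.
df = pd.read_csv("tv_panel.csv")

# Berry (1994) inversion of the aggregate logit: the log ratio of a
# channel's share to the outside share is linear in characteristics,
# here advertising minutes, with channel and hour fixed effects.
df["delta"] = np.log(df["share"]) - np.log(df["outside_share"])

viewers = smf.ols("delta ~ ad_minutes + C(channel) + C(hour)", data=df).fit()

# A negative ad_minutes coefficient means viewers are ad-averse;
# interacting ad_minutes with prime-time dummies would let tolerance
# vary by hour, as in the project's findings.
print(viewers.params["ad_minutes"])
```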

Conclusions

The results of the viewers’ demand model show an elastic demand: viewers are averse to advertising regardless of the day, but during prime time they are somewhat more ad-tolerant, especially from 10pm to 11pm.

Logit estimation of viewers’ demand. Download the paper to read the text version.

On the other side of the market, the advertisers’ demand model shows that advertisers are relatively inelastic to both an increase in ads and an increase in audience share. This may be because the data available for this project come precisely from the most-viewed channels, for which advertisers would have a more inelastic demand.

Logit estimation of advertising demand. Download the paper to read the text version.

As expected, the results show that advertisers are more elastic with regard to audience share than to the quantity of advertising.

Connect with the authors

  • Sully Calderón, Economic Advisor at Comisión Federal de Competencia Económica (México)
  • Aida Moreu, Research Analyst at Compass Lexecon (Madrid)

About the Barcelona GSE Master’s Program in Competition and Market Regulation

Tracking the Economy Using FOMC Speech Transcripts

Data Science master project by Laura Battaglia and Maria Salunina ’20

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Abstract

In this study, we propose an approach for the extraction of a low-dimensional signal from a collection of text documents ordered over time. The proposed framework foresees the application of Latent Dirichlet Allocation (LDA) for obtaining a meaningful representation of documents as a mixture over a set of topics. Such representations can then be modeled via a Dynamic Linear Model (DLM) as noisy realisations of a limited number of latent factors that evolve with time. We apply this approach to Federal Open Market Committee (FOMC) speech transcripts for the period of the Greenspan presidency. This study serves as exploratory research for the investigation into how unstructured text data can be incorporated into economic modeling. In particular, our findings indicate that a meaningful state-of-the-world signal can be extracted from experts’ language, and pave the way for further exploration into the building of macroeconomic forecasting models and, in general, into the use of variation in language for learning about latent economic conditions.
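A minimal sketch of this two-step pipeline is shown below, assuming a hypothetical loader for the transcripts and substituting statsmodels’ dynamic factor model as a simple stand-in for the paper’s DLM; the number of topics and factors is illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import statsmodels.api as sm

# `load_fomc_transcripts` is a hypothetical helper returning a list of
# speech texts in chronological order, one string per document.
transcripts = load_fomc_transcripts()

# Step 1: represent each document as a mixture over K topics via LDA.
counts = CountVectorizer(stop_words="english", max_features=5000).fit_transform(transcripts)
topic_shares = LatentDirichletAllocation(n_components=10, random_state=0).fit_transform(counts)

# Step 2: treat the K topic-share series as noisy observations of a
# single latent factor evolving over time, estimated by a state-space
# dynamic factor model (a stand-in for the paper's Bayesian DLM).
dfm = sm.tsa.DynamicFactor(topic_shares, k_factors=1, factor_order=1).fit(disp=False)
latent_signal = dfm.factors.filtered[0]  # the extracted low-dimensional signal
```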

Key findings

In our paper, we develop a sequential approach for the extraction of a low-dimensional signal from a collection of documents ordered over time. We apply this framework to the US Fed’s FOMC speech transcripts for the period August 1986 to January 2006. We retrieve estimates for a single latent factor that seems to track fairly well a specific set of topics connected with risk, uncertainty, and expectations. Finally, we find a remarkable correspondence between this factor and the Economic Policy Uncertainty Indices for the United States.


Connect with the authors

About the Barcelona GSE Master’s Program in Data Science

Institutional real estate investors, leverage, and macroprudential regulation

VoxEU article by Manuel A. Muñoz ’13 (Macroeconomic Policy and Financial Markets)

I am honoured to share my new VoxEU article with you, which I believe is relevant for the ongoing debate on how to strengthen the macroprudential regulatory framework for nonbanks:

Ensuring that institutional real estate investors are subject to countercyclical leverage limits would be particularly effective in smoothing the housing price and the credit cycle.


In addition, the associated ECB working paper suggests that this type of regulation would allow for rental housing prices to increase less abruptly during the boom, an issue that policymakers in several countries of the euro area have attempted to handle via price regulation (an alternative that could generate price distortions).

Also on VoxEU by Manuel A. Muñoz

Macroprudential policy and COVID-19: Restrict dividend distributions to significantly improve the effectiveness of the countercyclical capital buffer release (July 2020)

Connect with the author


Manuel A. Muñoz ’13 is Senior Lead Expert at the European Central Bank. He is an alum of the Barcelona GSE Master’s in Macroeconomic Policy and Financial Markets.

Are you a Barcelona GSE alum with a new paper or project to share? Learn how to submit your work to the Voice!