Multi-Armed Bandit Approach to Portfolio Choice Problem

Finance master project by Güneykan Özkaya and Yaping Wang ’20


Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Introduction

Historically, many strategies have been implemented to solve the portfolio choice problem. Accurately estimating the optimal portfolio allocation is challenging due to the non-deterministic complexities of financial markets, so investors tend to resort to the mean-variance framework. This approach has several drawbacks, the most pivotal being the normality assumption on returns, under which their behavior is fully described by mean and variance. However, it is well known that returns have heavy-tailed and skewed distributions, which results in underestimated risk or overestimated returns.

In this paper, we rely on the distribution of metrics other than returns to optimize our portfolio. In simple terms, we combine several parametric and non-parametric bandit algorithms with prior knowledge obtained from historical data. This framework gives us a decision function with which we choose portfolios to include in our final portfolio. Once we have our candidate portfolio weights, we apply the first-order condition to the portfolio variance to distribute our wealth between the two candidate sets of portfolio weights so that the variance of the final portfolio is minimized.
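
The blending step described above has a closed form. Here is a minimal sketch (our own illustration, not the authors' code), assuming a known return covariance matrix `Sigma` and two candidate weight vectors `w1` and `w2`:

```python
# Minimal sketch: mix two candidate weight vectors so that the variance of the
# blended portfolio is minimized. Sigma, w1 and w2 are hypothetical inputs.
import numpy as np

def blend_min_variance(w1, w2, Sigma):
    """Return a*w1 + (1-a)*w2 with a chosen to minimize portfolio variance."""
    v1 = w1 @ Sigma @ w1          # variance of candidate portfolio 1
    v2 = w2 @ Sigma @ w2          # variance of candidate portfolio 2
    c12 = w1 @ Sigma @ w2         # covariance between the two candidate portfolios
    a = (v2 - c12) / (v1 + v2 - 2.0 * c12)   # first-order condition of a quadratic in a
    a = np.clip(a, 0.0, 1.0)      # keep the mix between the two candidates
    return a * w1 + (1.0 - a) * w2

# toy usage with made-up numbers
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
w1 = np.array([0.7, 0.3])
w2 = np.array([0.2, 0.8])
w_final = blend_min_variance(w1, w2, Sigma)
```

Differentiating the blended variance with respect to the mixing weight and setting the derivative to zero gives the expression for `a` used above.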

Our results show that contextual bandit algorithms applied to the portfolio choice problem, given enough context information about the financial environment, can consistently obtain higher Sharpe ratios than classical methodologies, which translates into a fully automated portfolio allocation framework.

Key results

We conduct the experiments on 48 US value-weighted industry portfolios over the period 1974-02 to 2019-12. The table below reports extensive evaluation criteria for the following strategies, in order: Minimum Variance Portfolio (MVP), Constant Weight Rebalance portfolio (CWR), Equal Weight portfolio (EW), Upper Confidence Bound 1 (UCB1), Thompson Sampling (TS), Maximum Probabilistic Sharpe Ratio (MaxPSR), and Probability Weighted UCB1 (PW-UCB1). Below the table, one can observe the evolution of cumulative wealth over the whole investment period.

Table 1. Evaluation Metrics

One thing to observe here: even though UCB1 and PW-UCB1 yield the highest Sharpe ratios, they also have the highest standard deviations, which implies that bandit portfolios tend to take more risk than methodologies that aim to minimize variance; this was expected due to the exploration component. Our purpose was to see whether the bandit strategy could increase returns enough to offset the increase in standard deviation. Thompson Sampling yields a lower standard deviation because its action set also includes portfolio strategies that aim to minimize variance.
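
For reference, the exploration bonus that drives this extra risk appetite in UCB1-type rules looks like the following generic sketch (assuming each arm is a candidate portfolio strategy and the reward is its realized return over a rebalancing period; this is not the authors' exact implementation):

```python
# Generic UCB1 arm selection: each "arm" is a candidate portfolio strategy and
# the reward is the return observed after holding it for one rebalancing period.
import math

def ucb1_choose(counts, mean_rewards, t):
    """Pick the arm with the highest upper confidence bound at round t (t >= 1)."""
    for arm, n in enumerate(counts):
        if n == 0:                      # play every arm once before using the bound
            return arm
    bounds = [m + math.sqrt(2.0 * math.log(t) / n)
              for m, n in zip(mean_rewards, counts)]
    return max(range(len(bounds)), key=bounds.__getitem__)
```

The square-root term shrinks as an arm is played more often, so rarely played strategies keep getting sampled, which is exactly the exploration behavior that raises the realized standard deviation.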

Figure 1. Algorithm Comparison

We also include the evolution of cumulative wealth over the whole period and over 10-year intervals. One interesting thing to notice is that the bandit algorithms' performance diminishes during periods of high momentum followed by turmoil. The drop in the bandit algorithms' cumulative wealth is more severe than for classic allocation strategies such as EW or MVP, especially for PW-UCB1, which can also be seen in the standard deviation of its returns. This is due to using a rolling window to estimate the moments of the return distribution. Since we weight UCB1 with the Sharpe ratio probability, and since this probability reflects a 120-day window, the algorithm puts more weight on industries that gain more during high-momentum periods, such as the technology portfolio. During the dot-com bubble (1995-2002), UCB1 and PW-UCB1 gain a lot by putting more weight on the technology portfolio, but they suffer the most during the turmoil that follows the high-momentum period. This issue could be addressed by using a more sophisticated prediction model to estimate returns and the covariance matrix.
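
The "Sharpe ratio probability" referred to here is, we assume, computed along the lines of the standard probabilistic Sharpe ratio: for an estimated Sharpe ratio $\widehat{SR}$ over a window of $n$ returns with sample skewness $\hat{\gamma}_3$ and kurtosis $\hat{\gamma}_4$, it gives the probability that the true Sharpe ratio exceeds a benchmark $SR^{*}$ (a textbook formula, not necessarily the authors' exact definition):

$$\widehat{PSR}(SR^{*}) \;=\; \Phi\!\left(\frac{(\widehat{SR}-SR^{*})\,\sqrt{n-1}}{\sqrt{1-\hat{\gamma}_3\,\widehat{SR}+\frac{\hat{\gamma}_4-1}{4}\,\widehat{SR}^{2}}}\right)$$

where $\Phi$ is the standard normal CDF. Because the inputs are estimated over a rolling 120-day window, the probability reacts strongly to recent momentum, which is consistent with the behavior described above.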

Figure 2. 1974-1994 Algorithm Comparison
Figure 3. 1994-2020 Algorithm Comparison

To conclude, our algorithm allows dynamic asset allocation while relaxing the strict normality assumption on returns and incorporates the Sharpe ratio probability to better evaluate performance. It balances benefits and risks appropriately and achieves higher returns by controlling risk when the market is stable.

Connect with the authors

About the Barcelona GSE Master’s Program in Finance

Scalable Inference for Crossed Random Effects Models

Data Science master project by Maximilian Müller ’20

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Abstract

Crossed random effects models are additive models that relate a response variable (e.g. a rating) to categorical predictors (e.g. customers and products). They can, for example, be used in the famous Netflix problem, where movie ratings of users are predicted based on previous ratings. In order to apply statistical learning in this setup, it is necessary to efficiently compute the Cholesky factor L of the model's precision matrix. In this paper we show that for the case of 2 factors the crucial point is not only the overall sparsity of L, but also the arrangement of the non-zero entries with respect to each other. In particular, we express the number of flops required for the calculation of L in terms of the number of 3-cycles in the corresponding graph. We then introduce specific designs of 2-factor crossed random effects models for which we can prove sparsity and density of the Cholesky factor, respectively. We confirm our results with numerical studies using the R packages Spam and Matrix and find hints that approximations of the Cholesky factor could be an interesting approach for further decreasing the cost of computing L.

Key findings

  • The number of 3-cycles in the fill graph of the model is an appropriate measure of the computational complexity of the Cholesky decomposition.
  • For the introduced Markovian and Problematic Designs we can prove sparsity and density of the Cholesky factor, respectively.
  • For precision matrices created according to a random Erdös-Renyi scheme, the Spam algorithms could not find an ordering that would be significantly fill-reducing. This indicates that it might be hard or even impossible to find a general ordering rule that leads to sparse Cholesky factors.
  • For all observed cases, many of the non-zero entries in the Cholesky factor are either very small or exactly zero. Neglecting these small or zero values could save computational cost without changing the Cholesky factor 'too much'. Approximate Cholesky methods should therefore be included in further research.
Figure: Fill-in ratio (a measure of the relative density of the Cholesky factor) vs. matrix size for the random Erdös-Renyi scheme. For all permutation algorithms the fill-in ratio grows linearly in I, indicating that in general it might be hard to find a good, fill-reducing permutation.
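
As a toy illustration of the fill-in ratio reported in the figure, the sketch below builds a simplified 2-factor block precision matrix for randomly observed (Erdös-Renyi-like) cells, factors it densely, and compares non-zeros in L with non-zeros in the lower triangle of the matrix. This is our own stand-in, not the thesis code, which works with the sparse routines of Spam and Matrix.

```python
# Toy fill-in computation for a 2-factor crossed design observed at random cells.
import numpy as np

rng = np.random.default_rng(0)
I, J, p = 40, 40, 0.05                        # hypothetical factor sizes and cell probability
N = (rng.random((I, J)) < p).astype(float)    # which (factor-1 level, factor-2 level) cells are observed

# Block matrix with the structure X'X + I: diagonal blocks hold level counts plus
# a ridge, the off-diagonal block is the incidence matrix between the two factors.
A = np.block([
    [np.diag(N.sum(axis=1) + 1.0), N],
    [N.T, np.diag(N.sum(axis=0) + 1.0)],
])

L = np.linalg.cholesky(A)                     # dense Cholesky factor of A
nnz = lambda M: int(np.count_nonzero(np.abs(M) > 1e-12))
fill_in_ratio = nnz(L) / nnz(np.tril(A))      # >1 means the factorization created fill-in
print(fill_in_ratio)
```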

Connect with the author

About the Barcelona GSE Master’s Program in Data Science

On the Importance of Soft Skills in the U.S. Labor Market

EPP master project by Antonio Biondi, Zacharias Kountoupis, Joan Rabascall, and Marco Solera ’20

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Abstract

This paper explores the role of soft skills in the U.S. labour market. According to the previous literature, these skills – also called non-cognitive skills – are crucial as they allow firms to lower coordination costs by trading job tasks more efficiently. We look at both sides of the labour market.

On the demand side, we collect 4,980 job ads from U.S. job portals through web scraping, finding that larger firms require more job tasks and soft skills in their ads than small and medium-sized ones.

On the supply side, we match the skills from the O*NET dictionary with the Survey of Income and Program Participation (SIPP) of the United States from 2013 to 2016, estimating a return to soft skills of around 15% of hourly wage. Moreover, we find a statistically significant soft-skills wage premium of around 2.5% in big firms, rising to 3.5% for highly educated workers.

To the best of our knowledge, this is the first paper that finds a firm-size wage premium for soft skills. This evidence suggests that larger enterprises are willing to pay more for soft skills as they face higher coordination costs.

Conclusions

An expanding literature is focusing on the role of "soft skills" as a critical driver of labour market outcomes. A growing body of empirical evidence documents a reversal in the demand for cognitive and soft skills: demand for the former is stagnating or even decreasing, while demand for the latter is sharply increasing.

A possible explanation is that soft skills are associated with job tasks that are harder to automate, as they mainly involve tacit knowledge that is hard to encode (Autor, 2015). In this paper we analyse the role of soft skills in the U.S. labour market and their impact on wages.

On the supply side, we use factor analysis to aggregate skills and abilities, finding a return to soft skills of around 15% of U.S. hourly wage, almost four times higher than the return to cognitive abilities (4%). Moreover, we find a statistically significant soft-skills wage premium in larger firms of around 2.5% of hourly wage, up to 3.5% for highly educated workers. We also document a strong complementarity between the firm-size wage premium and the level of education, especially for women: for them, the premium starts at 3% and increases to 4.5% when considering only those with more than 12 years of schooling. The results are consistent with our hypothesis that soft skills are more valuable as firm size increases, since larger firms are supposed to face higher coordination costs compared to small enterprises.
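
A schematic version of the supply-side wage regression might look as follows (a hedged sketch with hypothetical column names and a placeholder file; the actual specification uses factor-analysis skill indices and a richer set of controls):

```python
# Hedged sketch of a Mincer-style wage regression with soft- and cognitive-skill
# indices and a firm-size interaction; all column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sipp_onet_matched.csv")   # hypothetical matched SIPP-O*NET file

model = smf.ols(
    "log_hourly_wage ~ soft_skill_index * large_firm + cognitive_index"
    " + years_schooling + experience + I(experience**2) + C(year) + C(occupation)",
    data=df,
).fit(cov_type="HC1")

# the coefficient on soft_skill_index approximates the return to soft skills;
# the interaction with large_firm captures the firm-size soft-skill premium
print(model.params[["soft_skill_index", "soft_skill_index:large_firm"]])
```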

The demand-side analysis supports our results. After collecting job ads from U.S. job portals, we find that larger firms require more soft skills than small ones. Finally, we report an excess of demand for soft skills relative to occupational needs.

Connect with the authors

About the Barcelona GSE Master’s Program in Economics of Public Policy

Criminals or Victims? Evidence on Forced Migration and Crime from the Colombia-Venezuela Border

ITFD master project by Maria Dale, Giacomo Gattorno, Andre Osorio, Rebeca Peers, and Kerenny Torres ’20

Photo: George Castellano / AFP

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Abstract

How does a sudden migrant crisis affect criminal activity, and through which mechanisms does this effect take place? We approach this topic by studying the effect of Venezuelan migration on crime rates in Colombia, in the context of the recent migrant crisis that made more than 1.2 million Venezuelans cross the border. 

Our study focuses on border provinces, where the presence of non-economic migrants is higher and potential assimilation problems could be exacerbated. Building on the fact that Venezuelan migration to Colombia happened due to apparently exogenous reasons and is unrelated to economic outcomes in the latter, we are able to study the causal effect of this large migration wave on crime rates. 

Our results show that Venezuelan forced migration had no significant effect on overall crime, but a positive and significant effect on personal theft in Colombian border provinces. Furthermore, migration had a positive and significant effect on personal theft victimization rates of both Venezuelans and Colombians, while only having significant effects on the criminalization rates of Venezuelans. These results are robust to different specifications and controls, and two placebo tests provide strong evidence in favor of our empirical strategy and results. Finally, we link our findings with the overarching criminal context in Colombian border provinces, and develop relevant policy recommendations based on our findings.

Conclusions

In this paper we analyzed the impact of the Venezuelan migrant crisis, which saw more than 1.2 million Venezuelans cross the border, on crime rates in Colombian border provinces between 2014 and 2018. We focused on the border provinces to study the particular effects of non-economic migration, as we find that migrants in this area did not target areas with specific characteristics but simply settled in the closest center where they could get food, medicine and essential services. This settlement decision, based strictly on proximity to the border, is also illustrated by the figure below: not only is the highest share of foreign-born population in Colombia concentrated in our area of study (the 7 border provinces), but within these border provinces there is also a clear concentration in municipalities on the physical border with Venezuela.

Figure 1. Venezuelan-born population as % of total population by province and municipality, 2018. Source: Own calculations, based on data from DANE.

The main contribution of our study is presenting compelling and robust evidence of no significant effect of Venezuelan forced migration on overall crime, but a positive and significant effect on personal theft in Colombian border provinces. By also analyzing both the direct and indirect mechanisms through which this migration inflow can affect different crime rates, we are able to establish a causal effect of migration on both the personal theft victimization and criminalization rates of Venezuelans, while finding significant effects only on the victimization rates of Colombians.

Besides our key findings, other results hint at a more holistic narrative: migration also had a positive and significant effect on both homicide and personal injury victimization rates of Venezuelans. What could be driving these effects? As seen previously, Venezuelan migrants settled in municipalities right across the border from Venezuela – which, coincidentally, are municipalities where criminal organizations have a large presence: criminal organizations are present in 23 of the 42 municipalities on the border, and in 39 of the 81 municipalities where the change in proportion of Venezuelan migrants between 2014-18 was above the average. 

In these high migration municipalities, as a consequence of criminal activity, homicide rates tended to be much higher at baseline than in other municipalities. Moreover, we see some evidence of discrimination in homicides – after controlling for all relevant socioeconomic covariates, being Venezuelan seems to have a significant and positive effect on being a victim of homicide. 

This additional information could help construct the following holistic narrative and recommendations: 

  • A large wave of migrants arrived in low-income border provinces with little (formal) employment opportunities, and some had to resort to small-scale theft to make a living, which explains the positive effect of migration on Venezuelan personal theft criminalization rates
  • Driven by both opportunity and attractiveness of targets, this caused an increase in personal theft victimization rates for both Colombians and Venezuelans, although it was much higher for the former than for the latter.
  • However, these municipalities were controlled largely by criminal organizations, which started threatening Venezuelans and forcefully recruiting them in large numbers. This could help explain the positive and significant effect of migration on both homicide and personal injury victimization rates of Venezuelans.
  • These findings imply that government policies should focus on reducing the vulnerability of Venezuelans by providing swift access to the formal labor market, either in border provinces or nationwide, so that Venezuelans can avoid resorting to small-scale theft and escape forced recruitment and exploitation from criminal organizations.

Connect with the authors

About the Barcelona GSE Master’s Program in International Trade, Finance, and Development

Stealth trading in modern high-frequency markets

Finance master project by Alejandro García, Thomas Kelly, and Joan Segui ’20

Photo by Aditya Vyas on Unsplash

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Introduction

This paper builds on the stealth trading literature to investigate the relationship between several trade characteristics and price discovery in US equity markets. Our work extends the Weighted Price Contribution (WPC) methodology, which in its simplest form posits that if all trades conveyed the same amount of information, their contribution to market price dynamics over a certain time interval should equal their share of total transactions or total volume traded in the period considered. Traditionally, the approach has been used to provide evidence, through the estimation of a parsimonious linear specification, that trades of smaller sizes convey a disproportionate amount of information in mature equity markets.
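
In its textbook form (we state the standard definition; the paper's exact specification may differ), the weighted price contribution of a trade category $k$ aggregates, over intervals $t$, the share of each interval's price change attributable to that category, weighted by the interval's share of total absolute price change:

$$\mathrm{WPC}_k \;=\; \sum_{t}\left(\frac{|\Delta P_t|}{\sum_{\tau}|\Delta P_\tau|}\right)\left(\frac{\Delta P_{k,t}}{\Delta P_t}\right)$$

where $\Delta P_t$ is the price change over interval $t$ and $\Delta P_{k,t}$ is the cumulative price change on trades of category $k$ within that interval. Under the null that all trades are equally informative, $\mathrm{WPC}_k$ should equal category $k$'s share of trades or volume.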


The methodology is flexible enough to accommodate a first set of key extensions in our work, which focus on the relative price contribution of trades initiated by high-frequency traders (HFTs) and on stocks of different market-capitalization categories over the daily session. However, previous research has found that short-lived frictions make the WPC methodology ill-suited for analyzing price discovery at under-a-minute frequencies, a key timespan when HFTs are in focus. Therefore, to analyze the information content of trades with different attributes at higher frequencies, we use a Fixed Effects specification that classifies trades which correctly anticipate price trends over under-a-minute windows of varying length as price informative.
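
A hedged sketch of that higher-frequency exercise (our own illustration, not the authors' code; the file, column names, and window length are placeholders):

```python
# Flag a trade as "price informative" if its direction matches the subsequent
# price move, then regress that flag on trade attributes with stock and day
# fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

trades = pd.read_parquet("trades.parquet")   # hypothetical trade-level file
w = 30                                       # look-ahead in ticks, a crude stand-in for an under-a-minute window

# signed future return w ticks ahead, computed per stock
trades["fwd_ret"] = trades.groupby("symbol")["mid_price"].transform(
    lambda p: p.shift(-w) / p - 1.0
)
sample = trades.dropna(subset=["fwd_ret"]).copy()
sample["informative"] = ((sample["trade_sign"] * sample["fwd_ret"]) > 0).astype(int)

fe = smf.ols(
    "informative ~ hft_initiated * small_trade + C(symbol) + C(date)",
    data=sample,
).fit(cov_type="HC1")
print(fe.params[["hft_initiated", "small_trade", "hft_initiated:small_trade"]])
```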

Key results

At the daily level, our results underpin prior research that finds statistical evidence of smaller trades contributing a disproportionate amount of information to market prices. This result holds regardless of the type of initiating trader or the market-capitalization category of the stock being transacted, suggesting that the type of trader on either side of the transaction does not significantly alter the average information content over the session.

At higher frequencies, trades initiated by HFTs are found to contribute more to price discovery than trades initiated by non-HFTs only when large and mid cap stocks are being traded, consistent with prior empirical findings pointing to HFTs having a strong preference for trading on highly liquid stocks.

Connect with the authors

About the Barcelona GSE Master’s Program in Finance

Industrial Robots and Where to Find Them: Evidence and Theory on Derobotization

Economics master project by Amil Camilo, Doruk Gökalp, Julian Klix, Daniil Iurchenko, and Jeremy Rubinoff ’20

An abandoned factory robot
Image by Peter H from Pixabay

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Around the world, and especially in high-tech economies, the demand and adoption of industrial robots have increased dramatically. The abandonment of robots (referred to as derobotization or, more broadly, deautomation) has, on the other hand, been less discussed. It would seem that the discussion on industrial robots has rarely been about their abandonment because, presumably, the abandonment of industrial robots would be rare. Our investigation, however, shows that the opposite is true: not only do a substantial number of manufacturing firms deautomate, a fact which has been overlooked by the literature, but the reasons for which they deautomate are highly multi-dimensional, suggesting that they depend critically on the productivity of firms and those firms’ beliefs about robotization.

Extending the analysis of Koch et al. (2019), we use data from the SEPI Foundation's Encuesta sobre Estrategias Empresariales (ESEE), which annually surveys over 2,000 Spanish manufacturing firms on business strategies, including whether they adopt robots in their production lines. We document three major facts about derobotization. First, firms that derobotize tend to do so quickly, with over half derobotizing within the first four years after adopting robots. Second, derobotizing firms tend to be smaller than firms that stay automated for longer periods of time. Third, firms that abandon robots demand less labor and increase their capital-to-labor ratios. The prompt abandonment of robots, we believe, is indicative of a learning process in which firms robotize production with expectations of higher earnings, but later learn information that causes them to derobotize and adjust their production accordingly.

With this in mind, we propose a dynamic model of automation that allows firms to both adopt robots and later derobotize their production. In our setup, firms face a sequence of optimal stopping problems where they consider whether to robotize, then whether to derobotize, then whether to robotize again, and so on. The production technology in our model is micro-founded by the task-based approach from Acemoglu and Autor (2011). In this approach, firms assign tasks to workers of different occupations as well as to robots in order to produce output. For simplicity, we assume two occupations, that of low-skilled and high-skilled workers, where the latter workers are naturally more productive than the former. When firms adopt robots, the firm’s overall productivity (and the relative productivity of high-skilled workers) increases, but the relative productivity of low-skilled workers decreases. At the same time, once firms robotize they learn the total cost of maintaining robots in production, which may exceed their initial expectations. At any point in time, firms can derobotize production with the newfound knowledge of the cost. Likewise, firms can reautomate at a lower cost with the added assumption that firms retain the infrastructure of operating robots in production.
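
The interplay of cost revelation and firm productivity can be illustrated with a toy simulation (entirely our own stand-in for the structural model; all functional forms and parameters are made up):

```python
# Illustrative simulation of the "revelation effect": firms adopt robots based on
# an expected maintenance cost, learn the true cost after adoption, and abandon
# robots if the realized gain no longer covers it.
import numpy as np

rng = np.random.default_rng(1)
n_firms = 10_000
productivity = rng.lognormal(mean=0.0, sigma=0.5, size=n_firms)

gain = 0.4 * productivity                 # productivity gain from robotizing (toy functional form)
expected_cost = 0.3                       # common prior belief about the maintenance cost
true_cost = rng.normal(loc=0.35, scale=0.15, size=n_firms)  # revealed only after adoption

adopts = gain > expected_cost             # robotize if the expected net gain is positive
derobotizes = adopts & (gain < true_cost) # abandon once the true cost turns out to exceed the gain

print("adoption rate:        ", adopts.mean())
print("abandonment | adopted:", derobotizes[adopts].mean())
# productivity effect in the toy model: abandoners are less productive on average
print("mean productivity, stayers:   ", productivity[adopts & ~derobotizes].mean())
print("mean productivity, abandoners:", productivity[derobotizes].mean())
```

In this toy version, the firms that abandon are exactly the marginal adopters whose gain barely exceeded the expected cost, which mirrors the productivity and revelation effects described below.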

The simulations of our model can accurately reproduce the distribution of automation behavior across firms in the data (see Figure 1). Indeed, we are able to show that larger and more productive firms are more likely to robotize and, in turn, that the firms which derobotize tend to be less productive (referred to as the productivity effect). The learning process which reveals the true cost of robotized production (referred to as the revelation effect) also highlights the role of incomplete information as a plausible explanation for prompt abandonment. Most importantly, our simulations suggest that analyses which ignore abandonment can overestimate the effects of automation and are therefore incomplete.

Our project is the first, to our knowledge, to document the pertinent facts on deautomation as well as the productivity effect and the revelation effect. It is apparent to us, based on our investigation, that any research seeking to model automation would benefit from modeling deautomation. From that starting point, there remains plenty of fertile ground for new questions and, consequently, new insights.

Connect with the authors

About the Barcelona GSE Master’s Program in Economics

Effects of Syndication on Investment Performance

Finance master project by Ozan Diken and Dominic Henderson ’20

Two people review reports together
Photo by bongkarn thanyakij from Pexels

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Paper abstract

In venture capital, two or more venture capitalists (VCs) often form syndicates to participate in the same financing rounds. Historically, syndicated investments have been found to have a positive effect on investment performance. The paper provides insight into the effects of syndication on the likelihood of a successful exit for the venture-backed firm. It addresses possible driving components such as the composition of the syndicates – in particular, internal investment funds being classed as external firms in two of the four models proposed – as well as a relaxation of the definition of an investment round. One of the main conclusions is that, using the chance of exiting and money out minus money in as success factors, syndicated investments are shown across all models to have a higher chance of exiting. This supports the Value-add hypothesis and opposes the alternative, the Selection hypothesis, as it suggests that syndicated VC firms bring varying expertise to the project to increase success post-investment. The paper advises proceeding with caution, as the story is not consistent across the analysis.

Main conclusions

The paper aimed to add to the literature debating reasons for syndication, such as the Value-add vs. Selection hypothesis, as set out from various points of view. Under the Selection hypothesis, uncertainty about profitability is the reason for syndication, whereas the Value-add hypothesis suggests that VCs syndicate to add value to the venture post-investment. This is why we introduced varying definitions of syndication, in order to draw inferences from the data. If the Soft definition of syndication (where syndication can occur across multiple investment rounds) were more successful, it might favour the Value-add hypothesis. However, in the initial test using "exited" as the measure of success, the Soft syndication models did not show a significant difference compared to the Hard syndication models.


Using the chance of exiting as the success factor, the syndication coefficients across all models indicated a higher chance of exiting. On this measure, one could argue for the Value-add hypothesis and against the Selection hypothesis, as syndicated investments across all models resulted in a higher chance of exiting the investment. Including the key controls led to similar conclusions, with syndication increasing the log odds of exiting. This supports the conclusions of Brander, Amit and Antweiler (2002), who highlight that the Value-add hypothesis dominates.
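
A schematic version of such an exit regression might look as follows (our illustration; the dataset name, variables and controls are placeholders):

```python
# Hedged sketch of a logit of exit success on a syndication dummy plus controls.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

deals = pd.read_csv("vc_investments.csv")   # hypothetical deal-level dataset

logit = smf.logit(
    "exited ~ syndicated + log_round_amount + n_investors"
    " + C(industry) + C(investment_year)",
    data=deals,
).fit()

# a positive coefficient on `syndicated` raises the log odds of a successful exit;
# exponentiating turns it into an odds ratio
print(logit.params["syndicated"], np.exp(logit.params["syndicated"]))
```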

Using money out minus money in as the success factor, syndicated investments were shown to increase it, which would be in line with the Value-add hypothesis according to Brander, Amit and Antweiler (2002); however, this could be down to already-successful companies receiving larger investments.

Using exit duration as the success factor, no conclusions could be drawn about syndication, as the syndication coefficients were not significant. A potential reason for this, as Guo, Lou and Pérez-Castrillo (2015) highlight, is that the type of fund the investment is purchased for has an impact on the duration and amount of funding, and therefore on the returns of the VCs. They find that CVC (corporate venture capital) backed startups receive significantly higher investment amounts and stay in the market for longer before they exit (Guo, Lou and Pérez-Castrillo, 2015). The data did not allow us to analyse the type of fund, meaning the investment strategy could differ from the outset. As no control variable exists for the type of fund, it is assumed that this does not significantly impact the outcome. Controlling for the type of fund might have shed light on this aspect of the results.

Connect with the authors

About the Barcelona GSE Master’s Program in Finance

Household level effects of flooding: Evidence from Thailand

International Trade, Finance, and Development master project by Zhuldyz Ashikbayeva, Marei Fürstenberg, Timo Kapelari, Albert Pierres, Stephan Thies ’19

Source: NY Times

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Abstract

This thesis studies the impacts of flooding on the income and expenditures of rural households in northeast Thailand. It explores and compares shock coping strategies and identifies household-level differences in flood resilience. Drawing on unique household panel data collected between 2007 and 2016, we exploit random spatio-temporal variation in flood intensities at the village level to identify the causal impacts of flooding on households. Two objective measures of flood intensity are derived from satellite data and employed in the analysis. Both proposed measures rely on the percentage of area inundated in the surroundings of a village, but the second measure is standardized and expressed relative to the median village-level flood exposure. We find that household incomes are negatively affected by floods. However, our results suggest that deviations from median flood exposure, rather than absolute levels of flooding, drive the negative effects on households. This indicates a certain degree of adaptation to floods. Household expenditures for health and especially food rise in the aftermath of flooding. Lastly, we find that above-primary-school education helps to completely offset the potential negative effects of flooding.

Conclusion

This paper adds to the existing body of literature by employing a satellite-based measure to investigate the long-run effects of recurrent floods on household-level outcomes. We first set out to identify the causal impacts of flooding on the income and expenditures of rural households in Thailand. Next, we explored and compared shock coping strategies and identified potential differences in flood resilience based on household characteristics. For this purpose, we leveraged a detailed household panel data set provided by the Thailand Vietnam Socio Economic Panel. To quantify the severity of flood events, we calculated flood indices based on flood maps collected by the Geo-Informatics and Space Technology Development Agency (GISTDA), measuring the deviation from median levels of flooding in a 5 km radius around each village. The figure below illustrates the construction of the index for a set of exemplary villages in the Nang Rong district of Buri Ram in northeast Thailand.
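
A toy version of the index construction might look as follows (our own sketch; it assumes a boolean inundation raster and village pixel coordinates are already available, whereas the thesis works directly with GISTDA flood maps):

```python
# Sketch of the two flood measures: share of area inundated within 5 km of a
# village, and its deviation from the village's median exposure across years.
import numpy as np

def flood_share_5km(flood_mask, village_rc, pixel_size_km=0.5, radius_km=5.0):
    """Share of pixels inundated within `radius_km` of the village pixel."""
    rows, cols = np.indices(flood_mask.shape)
    r0, c0 = village_rc
    dist_km = np.hypot(rows - r0, cols - c0) * pixel_size_km
    window = dist_km <= radius_km
    return flood_mask[window].mean()

def standardized_index(yearly_shares):
    """Deviation of each year's flood share from the village's median exposure."""
    shares = np.asarray(yearly_shares, dtype=float)
    return shares - np.median(shares)
```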

(a) 2010 flooding in Buri Ram and surrounding provinces. Red lines mark the location of the Nang Rong district.

(b) Detailed overview of flood index construction. Red dot shows the exact location of each village with the 5 km area around each village marked by the red circle.

Our results suggest a negative relationship between floods and per-household-member income, for both total income and income from farming. Per-household-member expenditure, however, does not seem to be affected by flood events at all. The only exceptions are food and health expenditures, which increase after flood events that are among the top 10 percent of the most severe floods. The former is likely driven by the fact that many households in northeastern Thailand live at subsistence level and therefore consume their own farming produce; a production shortfall in a given year may lead these households to substitute the loss by buying produce from markets. Rising health expenditures may be explained by injuries sustained or diseases contracted during a heavy flood.

Investigating potential risk mitigation strategies revealed that households with better-educated household heads suffer less during flood events. However, this result does not necessarily point to a causal relationship, as better-educated households might settle in locations of the village that are less likely to be flooded. While our data does not allow us to control for such settlement choices at the micro-spatial level, our findings still provide valuable insights for future policy-relevant research on the effects of education on disaster resilience in rural Thailand. Moreover, our data suggests that only very few households are insured against potential disasters. Future research will help to investigate flood impacts and risk mitigation channels in more detail.

Authors: Zhuldyz Ashikbayeva, Marei Fürstenberg, Timo Kapelari, Albert Pierres, Stephan Thies

About the Barcelona GSE Master’s Program in International Trade, Finance, and Development

Media and behavioral response: the case of #BlackLivesMatter

Economics master project by Julie Balitrand, Joseph Buss, Ana Monteiro, Jens Oehlen, and Paul Richter ’19

source: Texas Public Radio

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Abstract

We study the effects of the #BlackLivesMatter movement on the law-abiding behavior of African-Americans. First, we derive a conceptual framework to illustrate changes in risk perceptions across different races. Second, we use data from the Illinois Traffic Study Dataset to investigate race ratios in police stops. For identification, we apply a linear probability OLS regression on media coverage as well as an event study framework with specific cases. We find that the number of black people committing traffic law violations is significantly reduced after spikes in media coverage and notable police shootings. In the latter case, we further find that the effect holds for approximately ten days. We argue that these observed changes in driving behavior are a result of updated risk beliefs.

Figure: Game tree (Balitrand et al.)

Conclusions

Beginning with our model, we show that media-related changes in risk perceptions cause a change in the proportion of people committing crimes. Using this model, we further predict that this change differs across racial groups. More specifically, the model predicts that Blacks become more cautious in order to decrease the chance of a negative interaction with the police, whereas whites are predicted not to change their behavior, since the violence in the media coverage is not relevant to their driving decisions.

In order to test our model, we develop a hypothesis testing strategy that allows us to disentangle police actions from civilian decisions. By considering the proportion of stopped people who are black at nighttime, we completely remove any effect caused by changes in policing intensity and bias. Instead, we create a testable hypothesis that focuses only on differences in behavior between racial groups.

To test this hypothesis, we use a linear probability model along with traffic data from Illinois. We test the hypothesis using both an event study approach and media intensity data from the GDELT Project. Both approaches verify our model's predictions with high significance levels. Therefore, we have shown that Blacks became more cautious in response to these events compared to other racial groups. In addition, our robustness check on the total number of stops supports the claim that non-blacks do not have a significant response to media coverage of police brutality toward Blacks. This leads to the conclusion that the expected proportion of Blacks breaking traffic laws goes down in response to coverage of these events.
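
A schematic version of that linear probability model might look as follows (our illustration; the file and variable names are placeholders, and the actual specification differs):

```python
# Hedged sketch: the unit is a nighttime traffic stop and the outcome is whether
# the stopped driver is Black, regressed on a media-intensity measure plus
# location and time fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

stops = pd.read_csv("illinois_nighttime_stops.csv")   # hypothetical extract

lpm = smf.ols(
    "driver_black ~ media_intensity + C(county) + C(month)",
    data=stops,
).fit(cov_type="HC1")

# a negative coefficient on media_intensity means the share of stopped drivers
# who are Black falls when coverage of police violence spikes
print(lpm.params["media_intensity"])
```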

An implicit assumption in our model was that as media coverage goes to zero, Blacks revert back to their original level of caution. To test this, we looked at three-day intervals following each media event. We showed that after approximately 10 days the coefficients were no longer significant, indicating that the media only caused a short-term change in behavior. Since this was a robustness check, and not a main focus of our model, we did not investigate it further. It is nonetheless an interesting finding and warrants future analysis.

On a final note, we want to address the type of media we use for our analysis. Our model section considers media in a general sense. This can include, but is not limited to, social media platforms such as Twitter and Facebook, as well as more traditional media platforms such as television and print newspapers. All of these sources cover police brutality cases at similar intensities. We use TV data for media intensity, since it affects the broadest demographic and therefore best represents the average driver's exposure to the topic. Media with different audience age profiles might affect different demographics more or less; for example, social media may have a greater effect on younger drivers than on older drivers. We believe this topic warrants further analysis, in addition to the topic of the previous paragraph.

Authors: Julie Balitrand, Joseph Buss, Ana Monteiro, Jens Oehlen, and Paul Richter

Evaluating the performance of merger simulation using different demand systems

Competition and Market Regulation master project by Leandro Benítez and Ádám Torda ’19

Photo credit: Diego3336 on Flickr

Evaluating the performance of merger simulation using different demand systems: Evidence from the Argentinian beer market

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Abstract

This research arises in a context of strong debate on the effectiveness of merger control and how competition authorities assess the potential anticompetitive effects of mergers. In order to contribute to the discussion, we apply merger simulation – the most sophisticated and most often used tool to assess unilateral effects – to predict the post-merger prices of the AB InBev / SAB-Miller merger in Argentina.

The basic idea of merger simulation is to simulate the post-merger equilibrium from estimated structural parameters of the demand and supply equations. Assuming that firms compete à la Bertrand, we use different discrete choice demand systems – Logit, Nested Logit and Random Coefficients Logit models – in order to test how sensitive the predictions are to changes in the demand specification. Then, to get a measure of the precision of the method, we compare these predictions with actual post-merger prices.
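
For reference, the mechanics behind such a simulation under the simplest (plain Logit) demand are as follows; this is textbook notation, not necessarily the exact specification estimated in the paper. With mean utilities $\delta_j$ and price coefficient $\alpha$, market shares are

$$s_j(p)=\frac{\exp(\delta_j-\alpha p_j)}{1+\sum_{k}\exp(\delta_k-\alpha p_k)},$$

and post-merger prices solve the multi-product Bertrand first-order conditions under the new ownership structure,

$$p = c + \Omega(p)^{-1}s(p),\qquad \Omega_{jk}(p)=\begin{cases}-\dfrac{\partial s_k}{\partial p_j} & \text{if } j \text{ and } k \text{ share an owner post-merger},\\ 0 & \text{otherwise},\end{cases}$$

which is typically solved as a fixed point starting from pre-merger prices.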

Finally, we point out the importance of post-merger evaluation of merger simulation methods applied in complex cases, as well as the advantages and limitations of using these types of demand models.

Conclusion

The merger simulations yield mixed conclusions on the use of different demand models. The Logit model is ex-ante considered inappropriate because of its restrictive substitution pattern; however, it performed better than expected. Its predictions were on average close to those of the Random Coefficients Logit model, which should yield the most realistic and precise estimates. Conversely, the Nested Logit model largely overestimated the post-merger prices. This poor performance is mainly driven by the nest configuration: the swap of brands generates two near-monopoly positions in the standard and low-end segments for AB InBev and CCU, respectively. This issue, added to the high correlation of preferences for products in the same nest, generates amplified price effects.

Table 1. Estimation results

Regarding the substitution patterns, the Logit, Nested Logit and Random Coefficients Logit models yielded different results. The own-price elasticities are similar for the Logit and Nested Logit models; for the Random Coefficients Logit model, however, they are almost tripled. This is likely driven by the larger estimated price coefficient as well as by the standard deviations of the product characteristics. As expected, by construction the Random Coefficients Logit model yielded the most realistic cross-price elasticities.

Table 2. Elasticities

Our question of how different discrete choice demand models affect merger simulation – and, by extension, its policy implications – is hard to answer. For the AB InBev / SAB-Miller merger, the Logit and Random Coefficients Logit models predict almost no change in prices. Conversely, according to the Nested Logit model, both scenarios were equally harmful to consumers in terms of their unilateral effects. However, as mentioned above, given the particular post-merger nest configuration, evaluating this model solely by the precision of its predictions might be misleading. We cannot rule out that it would deliver better predictions under different conditions.

Table 3. Evaluation

As a concluding remark, we must acknowledge the virtues and limitations of merger simulation. Merger simulation is a useful tool for competition policy as it gives us the possibility to analyze different types of hypothetical scenarios – approving the merger, imposing conditions, or directly blocking the operation. However, we must take into account that it is still a static analysis framework. By focusing only on current pre-merger market information, merger simulation does not consider dynamic factors such as product repositioning, entry and exit, or other external shocks.

Authors: Leandro Benítez and Ádám Torda

About the Barcelona GSE Master’s Program in Competition and Market Regulation