Stop dropping outliers, or you might miss the next Messi!

Jakob Poerschmann ’21 explains how to teach your regression to distinguish relevant outliers from irrelevant noise

Soccer star Leo Messi in action on the field
Photo by Marc Puig Perez on Flickr

Jakob Poerschmann ’21 (Data Science) has written an article called “Stop Dropping Outliers! 3 Upgrades That Prepare Your Linear Regression For The Real World” that was recently posted on Towards Data Science.

The real world example he uses to set up the piece will resonate with every fan of FC Barcelona (and probably scare them, too):

You are working as a Data Scientist for FC Barcelona and took on the task of building a model that predicts the value increase of young talent over the next 2, 5, and 10 years. You might want to regress the value on some meaningful metrics such as assists or goals scored. Some might now apply the standard procedure and drop the most severe outliers from the dataset. While your model might predict decently on average, it will unfortunately never understand what makes a Messi (because you dropped Messi with all the other “outliers”).

The idea of dropping or replacing outliers in regression problems comes from the fact that simple linear regression is comparatively prone to extremes in the data. However, this approach would not have helped you much in your role as Barcelona’s Data Scientist. The simple message: outliers are not always bad!
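To make the point concrete, here is a minimal sketch (not code from the article, and with invented data) comparing ordinary least squares to a robust Huber regression: the robust loss lets extreme observations stay in the dataset without letting them dominate the fit.

```python
# Minimal sketch, not from the article: keep outliers in the data and use a
# robust loss instead of dropping them. All data here are invented.
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor

rng = np.random.default_rng(0)
goals = rng.poisson(5, size=200).astype(float).reshape(-1, 1)   # hypothetical feature
value = 2.0 * goals.ravel() + rng.normal(0, 1, size=200)        # hypothetical target
value[:3] += 40                                                 # a few "Messi-like" extremes

ols = LinearRegression().fit(goals, value)
huber = HuberRegressor().fit(goals, value)   # robust loss: outliers inform, but do not dominate

print("OLS slope:  ", ols.coef_[0])
print("Huber slope:", huber.coef_[0])
```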

Dig into the full article to find out how to prepare your linear regression for the real world and avoid a tragedy like this one!

Connect with the author


Jakob Poerschmann ’21 is a student in the Barcelona GSE Master’s in Data Science.

Data Science team “Non-Juvenile Asymptotics” wins 3rd prize in annual Novartis Datathon

Patrick Altmeyer, Eduard Gimenez, Simon Neumeyer and Jakob Poerschmann ’21 competed against 57 teams from 14 countries.

Screenshot of team members on videoconference
Members of the “Non-Juvenile Asymptotics” Eduard Gimenez, Patrick Altmeyer, Simon Neumeyer and Jakob Poerschmann, all Barcelona GSE Data Science Class of 2021

The Novartis Datathon is a Data Science competition that takes place annually, usually in Barcelona. In 2020, the Barcelona GSE team “Non-Juvenile Asymptotics”, consisting of Eduard Gimenez, Patrick Altmeyer, Simon Neumeyer and Jakob Poerschmann, won third place after a fierce competition against 57 teams from 14 countries around the globe. This year’s Covid-friendly edition was fully remote, but the increased diversity of teams clearly made up for the lost on-site atmosphere.

This year’s challenge: predict the impact of generic drug market entry

The challenge concerned predicting the impact of generic drug market entry. For pharmaceutical companies, the risk of losing ground to cheaper generic replicates once patent protection runs out is evident. The solutions developed helped solve exactly this problem, making drug development much easier to plan and calculate.

While the problem could have been tackled in many different ways, the Barcelona GSE team focused initially on developing a solid modeling framework. This represented a risky extra effort at the start: in fact, more than half of the competition period passed without any forecast submission from the Barcelona GSE team. However, the initial effort clearly paid off. As soon as the obstacle was overcome, the “Non-Juvenile Asymptotics” were able to benchmark multiple models at rocket speed.

Fierce competition until the very last minute

The competition was a head-to-head race until the last minute. The Barcelona GSE team held first place until minutes before the final deadline, when the predictions of two teams from Hungary and Spain took the lead by razor-thin margins.

Congratulations to the winners!!!

Group photo of the team outside the entrance of Universitat Pompeu Fabra
The team at Ciutadella Campus (UPF)

Connect with the team

Tracking the Economy Using FOMC Speech Transcripts

Data Science master project by Laura Battaglia and Maria Salunina ’20

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Abstract

In this study, we propose an approach for the extraction of a low-dimensional signal from a collection of text documents ordered over time. The proposed framework applies Latent Dirichlet Allocation (LDA) to obtain a meaningful representation of documents as a mixture over a set of topics. These representations can then be modeled via a Dynamic Linear Model (DLM) as noisy realisations of a limited number of latent factors that evolve with time. We apply this approach to Federal Open Market Committee (FOMC) speech transcripts for the period of the Greenspan presidency. This study serves as exploratory research for the investigation into how unstructured text data can be incorporated into economic modeling. In particular, our findings indicate that a meaningful state-of-the-world signal can be extracted from experts’ language, and pave the way for further work on building macroeconomic forecasting models and, more generally, on using variation in language to learn about latent economic conditions.
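As a rough, hypothetical sketch of the two-step pipeline the abstract describes (the placeholder documents, hyperparameters, and the simple local-level filter below are illustrative assumptions, not the authors’ exact specification):

```python
# Illustrative sketch of the two-step pipeline: (1) LDA topic shares per
# document, (2) a simple dynamic linear model over time. The documents,
# hyperparameters and local-level filter are placeholders, not the authors'
# exact specification.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # placeholder snippets standing in for dated FOMC transcripts
    "inflation risk and uncertainty are rising",
    "the growth outlook remains strong",
    "uncertainty about future policy persists",
    "labor market expectations improve",
]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)                 # document-topic proportions, ordered in time

def local_level_filter(y, q=0.1, r=1.0):
    """Kalman filter for a random-walk ('local level') latent state."""
    m, p, path = 0.0, 1.0, []
    for obs in y:
        p += q                                    # predict step
        k = p / (p + r)                           # Kalman gain
        m, p = m + k * (obs - m), (1 - k) * p     # update step
        path.append(m)
    return np.array(path)

signal = local_level_filter(theta[:, 0])          # filtered path of one topic share
print(signal)
```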

Key findings

In our paper, we develop a sequential approach for the extraction of a low-dimensional signal from a collection of documents ordered over time. We apply this framework to the US Fed’s FOMC speech transcripts for the period August 1986 to January 2006. We retrieve estimates for a single latent factor that seems to track fairly well a specific set of topics connected with risk, uncertainty, and expectations. Finally, we find a remarkable correspondence between this factor and the Economic Policy Uncertainty Indices for the United States.


Connect with the authors

About the Barcelona GSE Master’s Program in Data Science

Structure and power dynamics in labour flow and company control networks in the UK

Data Science master project by Áron Pap ’20

Droplets of dew collect on a spider web
Photo by Nathan Dumlao on Unsplash

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Abstract

In this thesis project I analyse labour flow networks, considering both undirected and directed configurations, and company control networks in the UK. I observe that these networks exhibit characteristics typical of empirical networks, such as a heavy-tailed degree distribution; strong, naturally emerging communities with geo-industrial clustering; and high assortativity. I also document that distinguishing between the types of investors in a firm can help to better understand its degree centrality in the company control network, and that large institutional entities with significant and exclusive control in a firm seem to be responsible for the emergence of hubs in this network. I also devise a simple network formation model to study the underlying causal processes in this company control network.
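For readers who want a feel for the descriptive measures mentioned above, here is a minimal sketch on a synthetic graph (not the UK data) using networkx:

```python
# Minimal sketch on a synthetic graph (not the UK data) of the descriptive
# measures mentioned in the abstract: degree distribution, assortativity,
# and community structure.
import networkx as nx
from collections import Counter

G = nx.barabasi_albert_graph(n=500, m=2, seed=0)       # stand-in for a labour flow network

degree_counts = Counter(d for _, d in G.degree())      # heavy-tailed degree distribution
assortativity = nx.degree_assortativity_coefficient(G)
communities = nx.algorithms.community.greedy_modularity_communities(G)

print("largest degrees:", sorted(degree_counts)[-5:])
print("degree assortativity:", round(assortativity, 3))
print("number of communities:", len(communities))
```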


Conclusion and future research

The study of the company control network documents intriguing empirical patterns and a new stylized fact: there is suggestive evidence that the types and number of investors are strongly associated with how “interconnected” a firm is in the company control network. Based on the empirical data, it also seems that the largest institutional investors mainly seek opportunities where they can have significant control without sharing it with other dominant players. Thus the most “interconnected”, or central, firms in the company control network are the ones that can maintain this power balance in their ownership structure.

The devised network formation model helps to better understand the potential underlying mechanisms behind the empirically observed stylized facts about the company control network. I carry out numerical simulations and sensitivity analysis, and calibrate the model’s parameters using Bayesian optimization techniques to match the empirical results. However, these results could be further “fine-tuned” at several stages to achieve a better empirical fit. First, the network formation model could be enhanced to represent more complex agent interactions and decisions. In addition, the model calibration method could be extended to include more parameters and a larger valid search space for each of them.

This project could also benefit from improvements to the data used. For example, more granular data on geographical regions could help to better understand the different parts of London and give a more detailed view of economic hubs in the UK. Moreover, the current data source provides only a static snapshot of the ownership and control structure of firms. Panel data on this front could enhance the analysis of the company control network and enable numerous experiments on temporal dynamics, for example link prediction or testing whether investors follow some kind of “preferential attachment” rule when acquiring significant control in firms.

Connect with the author

Áron Pap, Visiting Student at The Alan Turing Institute

About the Barcelona GSE Master’s Program in Data Science

Scalable Inference for Crossed Random Effects Models

Data Science master project by Maximilian Müller ’20

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.

Abstract

Crossed random effects models are additive models that relate a response variable (e.g. a rating) to categorical predictors (e.g. customers and products). They can, for example, be used in the famous Netflix problem, where users’ movie ratings should be predicted based on previous ratings. In order to apply statistical learning in this setup, it is necessary to efficiently compute the Cholesky factor L of the model’s precision matrix. In this paper we show that, for the case of two factors, the crucial point is not only the overall sparsity of L, but also the arrangement of the non-zero entries with respect to each other. In particular, we express the number of flops required for the calculation of L in terms of the number of 3-cycles in the corresponding graph. We then introduce specific designs of 2-factor crossed random effects models for which we can prove sparsity and density of the Cholesky factor, respectively. We confirm our results with numerical studies using the R packages Spam and Matrix, and find hints that approximations of the Cholesky factor could be an interesting approach for further decreasing the cost of computing L.
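As a toy illustration of what fill-in means for such a precision matrix (arbitrary sizes and synthetic data, not the paper’s code), one can build a small 2-factor crossed structure and compare the non-zeros of the precision matrix’s lower triangle with those of its Cholesky factor:

```python
# Toy illustration, not the paper's code: a small precision matrix with the
# 2-factor crossed structure, and the fill-in incurred by its Cholesky factor.
import numpy as np

rng = np.random.default_rng(0)
I, J, n_obs = 15, 15, 60                        # levels of each factor, number of observations
a = rng.integers(0, I, n_obs)                   # factor-A level per observation (e.g. customer)
b = rng.integers(0, J, n_obs)                   # factor-B level per observation (e.g. product)

N = np.zeros((I, J))
np.add.at(N, (a, b), 1.0)                       # co-occurrence counts between levels

# Block structure: diagonal blocks for each factor, cross block N linking them.
Q = np.block([[np.diag(N.sum(axis=1)) + np.eye(I), N],
              [N.T, np.diag(N.sum(axis=0)) + np.eye(J)]])
L = np.linalg.cholesky(Q)

nnz_Q = np.count_nonzero(np.abs(np.tril(Q)) > 1e-12)
nnz_L = np.count_nonzero(np.abs(L) > 1e-12)
print("fill-in ratio:", nnz_L / nnz_Q)          # values above 1 signal fill-in
```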

Key findings

  • The number of 3-cycles in the fill graph of the model is an appropriate measure of the computational complexity of the Cholesky decomposition.
  • For the introduced Markovian and Problematic Designs, we can prove sparsity and density of the Cholesky factor, respectively.
  • For precision matrices created according to a random Erdös-Renyi scheme, the Spam algorithms could not find an ordering that was significantly fill-reducing. This indicates that it might be hard or even impossible to find a general ordering rule that leads to sparse Cholesky factors.
  • For all observed cases, many of the non-zero entries in the Cholesky factor are either very small or exactly zero. Neglecting these small or zero values could save computational cost without changing the Cholesky factor ‘too much’. Approximate Cholesky methods should therefore be included in further research.
Fill-in ratio (a measure of the relative density of the Cholesky factor) vs. matrix size for the random Erdös-Renyi scheme. For all permutation algorithms the fill-in ratio grows linearly in I, indicating that in general it might be hard to find a good, fill-reducing permutation.

Connect with the author

About the Barcelona GSE Master’s Program in Data Science

Machine Learning for the Sustainable Management of Main Water Supply Assets

Maryam Rahbaralam ’19 (Data Science)


Maryam Rahbaralam ’19 (Data Science) presented “Machine Learning for the Sustainable Management of Main Water Supply Assets” with Jaume Cardús (Aigües de Barcelona) during the Pioneering Fields and Applications (Strong AI) session at the 2019 Big Data and AI Congress in Barcelona.

Abstract

The machine learning model developed predicts the probability of failure for each pipe section of the water supply network, allowing an early renewal of those in the most detrimental condition in terms of social, environmental and economic consequences.

Video

Maryam Rahbaralam ’19 is a Data Scientist at the Barcelona Supercomputing Center (BSC). She is an alum of the Barcelona GSE Master’s in Data Science.

LinkedIn | Twitter

Solving data science problems with Record Matching

Presentation by Data Science alum Jordan McIver ’15


Every organisation needs to be able to properly connect disparate datasets to take full advantage of its data assets. Alchemmy held an event to discuss approaches and technologies for connecting datasets, and pitfalls to watch out for once they are connected.

Check out my talk here, where we look at an approach that best enables data scientists by partnering them with the other staff who actually hold the context of the data:

Video summary

Most businesses have some or all of the following problems: not enough data science resources for the work required; a large community of data-adjacent staff who have most of the context but are not contributing what they know in the right way; data science problems lacking that same context; and algorithms that cannot overcome poor data quality or a lack of training data. Jordan walks through the use of interactive dashboards in which users quality-assess the data, with the results feeding back into the data science process to address these problems.


Jordan McIver ’15 is Head of Data Consulting at Alchemmy in London. He is an alum of the Barcelona GSE Master’s in Data Science.

LinkedIn

Investigation of Sentiment Importance on Intraday Stock Returns

Data Science master project by Michele Costa, Alessandro De Sanctis, Laurits Marschall and S. Hamed Mirsadeghi ’18


Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2018. The project is a required component of every master’s program.


Authors:

Michele Costa, Alessandro De Sanctis, Laurits Marschall and S. Hamed Mirsadeghi

Master’s Program:

Data Science

Paper Abstract:

The main goal of our Master Project is to predict intraday stock market movements using two different kinds of input features: financial indicators and sentiment from news and tweets. While the former are part of the common technical analysis of financial econometric models, the sentiment extracted from news articles and tweets on Twitter has also been shown to correlate with stock market movements. Our paper aims to contribute to the existing academic and professional knowledge in two main directions. First, we evaluate three different approaches to extracting sentiment from both social and mass media based on their forecasting power. Second, we deploy a battery of engineered features based on the sentiment, together with the financial indicators, in a machine learning model for a fine-grained minute-level forecasting exercise. In the end, two different classes of models are fitted to test the forecasting power of the combined input features: a classical ARIMA model and an XGBoost model as the machine learning algorithm. We collected data on the companies Apple, JPMorgan Chase, Exxon Mobil, and Boeing.
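A hypothetical sketch of how sentiment features and lagged returns might be combined in an XGBoost classifier (column names and data are invented, not the project’s dataset):

```python
# Hypothetical sketch only: sentiment plus lagged returns in an XGBoost
# classifier for the direction of the next minute's return. All column names
# and data are invented.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ret": rng.normal(0, 1e-3, 1000),        # synthetic minute-level returns
    "sentiment": rng.normal(0, 1, 1000),     # synthetic news/Twitter sentiment score
})
df["ret_lag1"] = df["ret"].shift(1)
df["sent_lag1"] = df["sentiment"].shift(1)
df["up_next"] = (df["ret"].shift(-1) > 0).astype(int)   # target: next-minute direction
df = df.dropna()

X = df[["ret_lag1", "sent_lag1", "sentiment"]]
y = df["up_next"]
split = int(len(df) * 0.8)                   # time-ordered split, no shuffling

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X.iloc[:split], y.iloc[:split])
print("test accuracy:", model.score(X.iloc[split:], y.iloc[split:]))
```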

Figure: Exxon Mobil
The picture shows how sentiment towards Exxon Mobil moved over time. The two lines refer to two different methodologies: Loughran-McDonald is based on a financial dictionary, while SentiStrength was trained on social media such as MySpace.


More about the Data Science Program at the Barcelona Graduate School of Economics

BGSE Data Talks: Professor Piotr Zwiernik

The Barcelona GSE Data Science student blog has a new post featuring an interview with Piotr Zwiernik (UPF and BGSE), Data Science researcher and professor in the BGSE Data Science Master’s Program:

Hello and welcome to the second edition of the “Data Talks” segment of the Data Science student blog. Today we have the honor to interview Piotr Zwiernik, who is an assistant professor at Universitat Pompeu Fabra. Professor Zwiernik was recently awarded the Beatriu de Pinós grant from the Catalan Agency for Management of University and Research Grants. In the Data Science Master’s Program he teaches the maths brush-up and the convex optimization part of the first-term class “Deterministic Models and Optimization”. Furthermore, he is one of the leading researchers in the field of Gaussian graphical models and algebraic statistics. We discuss his personal path, his fascination with algebraic statistics, as well as the epistemological question of low-dimensional structures in nature…

Read the full interview on the Barcelona GSE Data Scientists blog

BGSE represented by “Just Peanuts” at Data Science Game finals in Paris

Class of 2017 Data Science graduates Roger Garriga, Javier Mas, Saurav Poudel, and Jonas Paul Westermann qualified for the final round of the Data Science Game in Paris this fall. Here is their account of the event.


The Data Science Game is an annual competition organized by an association of volunteers from France. After competing in a tough online qualification phase during the master’s, we qualified for the finals in Paris, where we were presented with a new problem to solve in a two-day hackathon.

The hackathon was held at Les Fontaines, a palatial estate owned by Capgemini. It was an amazing building that made the experience even better.

The problem presented was to estimate the demand for 1,500 different products in 4 different countries, using historic orders from 100,000 customers over the past 5 years, by forecasting the three subsequent months. This was a well-defined challenge that could be tackled with a large variety of solutions. For us, the time constraint was one of the main challenges, since in the end we were only three instead of four.

We started by exploring the data and realised that there were a lot of missing values due to a cross of databases done by the company that provided the data, so we spent some time cleaning the data and filling in some of the missing values before applying our models. After all the cleaning, the key to solving the challenge was to engineer good features that represented the data well and then apply a simple model to predict the three months ahead.
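A rough sketch of that “simple features, simple model” idea (the file name, column names and lag choices are assumptions for illustration, not the team’s actual pipeline):

```python
# Rough sketch of the "simple features, simple model" approach. The file name,
# column names and lag choices are assumptions for illustration only.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Assumed layout: one row per (product, country, month) with a 'demand' column.
orders = pd.read_csv("orders.csv", parse_dates=["month"])
orders = orders.sort_values(["product", "country", "month"])

group = orders.groupby(["product", "country"])["demand"]
for lag in (1, 2, 3, 12):
    orders[f"lag_{lag}"] = group.shift(lag)          # demand 1-3 months and one year back
orders = orders.dropna()

features = [c for c in orders.columns if c.startswith("lag_")]
model = LinearRegression().fit(orders[features], orders["demand"])
print(dict(zip(features, model.coef_)))
```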

The hackathon can be summed up as a day and a half of coding, modeling and discussing without sleep, surrounded by 76 other participants from all across the world who were basically doing exactly the same, with short pauses to eat pizza, hamburgers and Indian food. So, a pretty good way to spend a weekend.
