Fintech has increasingly become part of the global economy with the evolution of technology, increasing investments in fintech firms, and greater integration between traditional incumbent financial firms and fintech. Since the 2007–2009 financial crisis, research has also paid more attention to systemic risk and the impact of financial institutions on systemic risk. As fintech grows, so too should the concern about its possible impact on systemic risk. This paper analyzes two indices of public fintech firms (one for the United States and another for Europe) by computing the ∆CoVaR of the fintech firms against the financial system to measure their impact on systemic risk. Our results show that at this time fintech firms do not contribute greatly to systemic risk.
Conclusions and key results
Our results show that, for the US, the payments and remittances and the market and trading support categories contribute the most to the VaR of the fintech industry. In Europe, by contrast, fintech firms that provide software solutions and information technologies seem to contribute the most to the risk of the sector. The estimation that includes fintech firms and the representative sample of the financial sectors shows that fintech firms are not systemically important. Within the US financial system, the fintech companies that do contribute to systemic risk increase it by around 0.03%, while in Europe fintech firms contribute almost nothing to the systemic impact (close to 0%). The Spearman's rank correlations between a fintech firm's ∆CoVaR and its size, and between its ∆CoVaR and its beta, underline the value of our estimations for assessing systemic risk, rather than relying on size and beta alone to gauge a firm's likely contribution.
Some limitations of our study include the scope of our analysis method (∆CoVaR), the representation of the fintech sector, and the coverage of only two markets. Micro-level analysis of each individual fintech category, or shifting the focus to emerging markets, could reveal category- and region-specific risks and highlight key lines for future research.
The full paper will be made available when it is added to the ADBI working papers series.
Jebb Peria ’10 (Finance), Associate at EV Private Equity
Jebb Peria ’10 recently answered some questions about careers in private equity in a post for his employer, EV Private Equity. Here are a few excerpts from the interview.
I’ve heard of private equity but how does it differ from, say, venture capital or fund management?
Fund management is basically a firm of money managers investing pooled funds from investors. The capital may be invested in traditional asset classes such as equities, fixed income, and cash, as well as alternative asset classes such as hedge funds, private equity, real estate, commodities and infrastructure.
Private Equity (PE) is an active form of investment in privately held companies with the objective of growing them over a medium to long-term period. As active investors, PE firms work closely with management to increase and maximise the company’s value through financial engineering, improved governance and operational performance.
At EV Private Equity, we primarily invest in early-growth companies that have: a distinct product or service; the potential to grow rapidly; low levels of debt; and experienced management teams. We seek innovative and disruptive technology companies that can scale and drive superior returns.
Venture Capital (VC) is a subset of PE which provides capital to early-stage businesses, usually in technology-based sectors. Venture capitalists normally invest in high-growth, high-risk, start-up or early-stage ventures, typically with a bias towards technology or innovation. PE tends to focus on later-stage investment in businesses that are more established and are generating cash. VC uses primarily equity while PE may use equity and debt (leverage).
Both PE and VC use a measure known as MOIC (Multiple on Invested Capital) to calculate the returns they make from their investments. PE target returns range from 2x to 5x, while VC returns are expected to be higher.
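The MOIC calculation itself is simple: total value returned to investors divided by capital invested. A quick sketch with purely hypothetical numbers:

```python
# MOIC (Multiple on Invested Capital): total value (realized proceeds
# plus any remaining unrealized stake) divided by capital invested.
# All figures below are purely hypothetical.
invested = 10_000_000    # capital invested in the company
realized = 22_000_000    # proceeds from the exit
unrealized = 3_000_000   # remaining value of the held stake

moic = (realized + unrealized) / invested
print(f"MOIC: {moic:.1f}x")   # 2.5x, inside the 2x-5x PE target range
```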
Do I need an MBA from Harvard, a mathematics degree or an accountancy qualification in order to be considered?
No, not necessarily. As a matter of fact, I don’t have any of those credentials. I graduated with a BA in Economics (with highest distinction) from York University in Canada, an MA in Economics from the University of Toronto, and an MSc in Finance from Barcelona GSE. I am also a CFA® charterholder. I guess this depends on which type of PE firm you want to work with as there are generalists and specialists.
As energy specialists, our team at EV Private Equity comprises people with substantial experience in the energy industry [oil and gas (O&G), oil field services (OFS)] as well as people from technical disciplines (reservoir, drilling, mechanical, chemical, and software engineering, as well as geophysics and naval architecture). We also recruit candidates with graduate business degrees, such as an MBA or degrees in finance, economics, and strategy.
Is it true that private equity is very secretive and is not accountable to any regulators or governments?
EV Private Equity is regulated by the Financial Conduct Authority in the UK and the SEC in the US under the Investment Advisers Act of 1940.
Like any other firm, EV Private Equity and its portfolio companies are obliged to abide by the laws and regulations of all countries we operate in. This is also part of the fiduciary duty towards the firm’s institutional investors, comprised mainly of large public and private pension funds, insurance companies, university endowment funds and sovereign wealth funds.
What is a typical day like in private equity?
I typically start the morning reading through the latest news and market trends. I skim through Dagens Næringsliv, Bloomberg, the Financial Times and even LinkedIn to check on the latest oil price, mergers and acquisitions (M&As) and geopolitical news. Then, I read through my emails to check for any updates on the portfolio companies I’m involved with and any immediate requests from the partners.
My day is normally split between fixed deliverables and ad hoc tasks. My deliverables range from weekly meetings and operational updates with portfolio companies, to monthly, quarterly and yearly financial reporting, to updating fair market values of portfolio companies, to weekly meetings with the digital marketing team. I also participate in quarterly investor meetings, board meetings and annual strategy meetings with my portfolio companies.
If there’s a deal I am involved in, I would build the financial model, perform valuation and sensitivity analysis and support the drafting of the investment paper. I would also be participating in weekly call updates with the due diligence providers regarding any red flags and show stoppers (in other words, developments that may affect our decision to invest).
If one of my portfolio companies is preparing for an exit, I might be having calls with the management and the financial advisors discussing the potential buyers, the market sentiment and the status of the Information Memorandum (IM), the document we share with prospective buyers.
There is not much slack time. If I do have some spare time, I can always find something to work on: a process to simplify and make more efficient; a model to automate; improvements to our social media presence; or offering support to other office locations.
What are the rewards?
Helping to create value for the company and produce superior returns for investors is rewarding and gratifying.
I also get to work with different partners, management teams, board members and technologies. These teach me different insights, strategies, and management styles.
It is very rewarding to work with the smart, entrepreneurial and down-to-earth group of individuals at EV Private Equity. They make the workplace fun and invigorating.
Of course, the job is also financially rewarding. I would like to believe that I am fairly and reasonably remunerated given my performance and contributions, the skillset I bring to the table, and my dedication to my craft.
Finance master project by Daniel A. Landau and Gabriel L. Ramos ’19
Editor’s note: This post is part of a series showcasing Barcelona GSE master projects. The project is a required component of all Master’s programs at the Barcelona GSE.
In this paper, we characterize a variety of international financial markets as partially correlated networks of stock returns via the implementation of the joint sparse regression estimation techniques of Peng et al. (2009). We explore a number of mean-variance portfolios, with the aim of enhancing out-of-sample portfolio performance by uncovering the hidden network dynamics of optimal portfolio allocation. We find that Markowitz portfolios generally dissuade the inclusion of central stocks in the network, yet the interaction of a stock’s individual and systemic performance is more complex. This motivates us to explore the time-varying correlation of these topological features, which we find are highly market dependent. Building on the work of Peralta & Zareei (2016), we implement a number of investment strategies aimed at simplifying the portfolio selection process by allocating wealth to a targeted subset of stocks, contingent on the time-varying network dynamics. We find that applying mean-variance allocation to a restricted sample of stocks with daily portfolio re-balancing can statistically signiﬁcantly enhance out-of-sample portfolio performance in comparison to a market benchmark. We also find evidence that such portfolios are more resilient during periods of major macroeconomic instability, with the results applicable to both developed and emerging markets.
Conclusion and Future Research
In our work, we represent four international exchanges as individual networks of partially correlated stock returns. To do so, we build a graph, comprising a set of vertices and edges, via the joint sparse regression estimation techniques of Peng et al. (2009). This approach allows us to uncover some of the hidden topological features of a series of Markowitz tangency portfolios. We generally find that investing according to MPT dissuades the inclusion of highly central stocks in an optimally designed portfolio, hence keeping portfolio variances under control. This result is market-dependent and more prevalent in some countries than in others. From this cross-sectional network analysis, we learn that the interaction between a stock’s individual performance (Sharpe ratio) and systemic performance (eigenvector centrality) can be complex. This motivates us to explore the time-varying correlation ρ between Sharpe ratio and eigencentrality.
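The network construction can be sketched in a simplified form. The snippet below is not the joint sparse regression of Peng et al. (2009); it builds partial correlations from the inverse covariance (precision) matrix of simulated returns, thresholds them into an adjacency matrix, and computes eigenvector centrality by power iteration:

```python
import numpy as np

# Simplified stand-in for the paper's method: a partial-correlation
# network of stock returns from the precision matrix, then eigenvector
# centrality of each stock via power iteration. Data are simulated.
rng = np.random.default_rng(1)
n_days, n_stocks = 500, 8
common = rng.normal(size=(n_days, 1))                  # market factor
returns = 0.5 * common + rng.normal(size=(n_days, n_stocks))

prec = np.linalg.inv(np.cov(returns, rowvar=False))
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)   # standardized partial correlations
np.fill_diagonal(partial_corr, 0.0)

# Threshold small entries to obtain a sparse, weighted adjacency matrix
adj = np.abs(partial_corr) * (np.abs(partial_corr) > 0.05)

# Eigenvector centrality: leading eigenvector of the adjacency matrix
x = np.ones(n_stocks)
for _ in range(200):
    x = adj @ x
    x /= np.linalg.norm(x)
centrality = x
print(centrality.round(3))
```

In the paper's analysis it is this centrality score, stock by stock, that is correlated with each stock's Sharpe ratio to obtain the time-varying ρ.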
Overall, we show that by considering the time-varying nature of partially correlated networks, we can enhance out-of-sample performance by simplifying the portfolio selection process and investing in a targeted subset of stocks. Our work also raises a number of questions for future research. Although we implement short-sale constraints, it would also be wise to introduce limits on the amount of wealth that can go into purchasing any one stock, as this would help avoid large portfolio variances. Furthermore, our work paves the way for future research into the ability of ρ-dependent investment strategies to enhance portfolio performance in times of macroeconomic distress and major financial crises.
There exists a substantial body of literature concerned with the calibration of the Heston model for pricing financial derivatives under stochastic volatility, much of which relies on computationally expensive algorithms. Our paper evaluates a calibration method for the Heston model proposed by Alòs, De Santiago, and Vives (2015), which can be used to price derivatives with little computational effort. The method is innovative in that it considers only the three most critical regions of the implied volatility surface: where the underlying option is at the money, where it is close to maturity, and where it is far from maturity. Although the procedure is parsimonious and very easy to implement, it calibrates a model whose empirical applicability is contested.
The main contribution of our paper is the evaluation of their method in an extensive numerical exercise as well as an application to real data. Collecting empirical option data was one of the main challenges of this work, since historical data on financial derivatives is not accessible to the public. Faced with this issue, we wrote a script that allowed us to automatically scrape option data at high frequency over just a couple of weeks, building our own extensive database. We have made the data and code available at https://griipen.shinyapps.io/bgse/ and https://github.com/HitKnit/BGSE2018/tree/HitKnit-optionscraping, respectively.
In terms of our results, we find that while the calibration method has solid theoretical foundations and produces satisfactory estimation results within the theoretical Heston universe, it fails in practice. Specifically, in the numerical exercise we find that, across all simulations, the maximum average error over the entire volatility surface is 0.999 percent, while the mean error across simulations is only 0.481 percent. In sharp contrast, absolute percentage errors for our empirical data are on the order of 30–40 percent in many cases. In the following figure, we present our findings for intra-daily data from May 16, 2018. The left column shows empirical implied volatilities for a European call option on Facebook Inc. (FB) stock. From top to bottom, volatilities are shown for the opening, lunch and closing sessions. The central column shows the fitted volatility surfaces, while the right column shows absolute percentage differences between empirical and estimated values. The finding that errors are particularly high for at-the-money options with short times to maturity is robust across the entire data sample.
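The error metric quoted above can be illustrated with a tiny sketch: absolute percentage differences between empirical and fitted implied volatilities on a (maturity × strike) grid. The numbers here are made up for illustration, not taken from the paper's data:

```python
import numpy as np

# Hypothetical illustration of the reported error metric: absolute
# percentage difference between observed and calibrated implied vols
# over a small (maturity x strike) grid.
empirical = np.array([[0.32, 0.30, 0.29],
                      [0.35, 0.33, 0.31]])   # observed implied vols
fitted = np.array([[0.41, 0.31, 0.30],
                   [0.47, 0.34, 0.32]])      # calibrated-model vols

ape = 100 * np.abs(fitted - empirical) / empirical
print(f"max APE: {ape.max():.1f}%, mean APE: {ape.mean():.1f}%")
```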
Conclusions and key results:
In light of these results, we conclude that inherent limitations of the Heston model disqualify the calibration for practical use. Nonetheless, we believe that calibration methods as simple as the one examined here should be used in combination with more sophisticated option pricing models.
We’ve just come across some articles written by several Barcelona GSE Alumni who are now Research Assistants and Economists at Caixabank Research in Barcelona. New articles are published each month on a range of topics.
Below is a list of all the alumni we found listed as article contributors, as well as their most recent publications in English (click each author to view his or her full list of articles in English, Catalan, and Spanish).
If you’re an alum and you’re also writing about Economics, let us know where we can find your stuff!
Gerard Arqué (Master’s in Macroeconomic Policy and Financial Markets ’09)
Patrick Altmeyer (Finance ’18), who has an interest in monetary policy, shares his work on whether misguided monetary policy can explain the European housing bubble.
Property prices surged throughout Europe in the early 2000s before collapsing during the crisis and causing tremendous welfare losses. This dissertation uses Structural Vector Autoregression (SVAR) to analyse the role of house prices within the monetary transmission mechanism in Europe over the past decades, in order to understand whether lax interest rate policy caused the bubble. Quarterly observations of inflation, output, consumption, real estate prices and mortgage variables for eight European countries were used. Sample periods vary by model specification but generally span four decades.
Impulse response functions for the baseline SVAR suggest that real estate prices did indeed respond positively to dovish monetary policy and thereby amplified conventional effects on consumer spending. However, the interpretation of these preliminary results is complicated by explosive house price dynamics during the early 2000s. The linear vector autoregressions fail to fully capture these non-linear elements of the time series. A statistical test developed by Homm and Breitung (2012) is therefore used to identify bubble periods in the various countries analysed. Explosive house price dynamics are found in all countries but Germany as shown in Figure 1.
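The bubble-detection step can be sketched in miniature. Homm and Breitung (2012) compare several explosiveness tests; the snippet below implements one member of that family, a forward-recursive ("sup") Dickey-Fuller statistic, on simulated data rather than the dissertation's housing series. An explosive root pushes the statistic well above the range produced by a random walk:

```python
import numpy as np

# Forward-recursive ("sup") Dickey-Fuller sketch, in the spirit of the
# bubble tests compared by Homm and Breitung (2012). Simulated data.
def df_tstat(y):
    """OLS t-statistic on rho in: dy_t = const + rho * y_{t-1} + e_t."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta, res_ss = np.linalg.lstsq(X, dy, rcond=None)[:2]
    sigma2 = res_ss[0] / (len(dy) - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

def sup_df(y, min_window=40):
    """Maximum DF t-statistic over expanding samples y[:k]."""
    return max(df_tstat(y[:k]) for k in range(min_window, len(y) + 1))

rng = np.random.default_rng(2)
random_walk = np.cumsum(rng.normal(size=200))   # no bubble
bubble = np.empty(200)
bubble[0] = 1.0
for t in range(1, 200):                          # mildly explosive process
    bubble[t] = 1.03 * bubble[t - 1] + rng.normal()

print(f"sup DF, random walk: {sup_df(random_walk):.2f}")
print(f"sup DF, bubble:      {sup_df(bubble):.2f}")
```

Applied to each country's house price series, a statistic above the right-tail critical value flags a bubble period, as in Figure 1.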
Figure 1: House price trends in European countries. Shaded areas indicate bubble periods.
Information about house price bubbles is subsequently used to augment the baseline SVAR in various ways. With this augmentation, the measured effect of a decrease in interest rates on house prices remains positive, but smaller. Overall, the evidence found here suggests that interest rate policy alone was not responsible for the European housing bubble. Rather, it appears that the boom is better explained by the joint effects of loose monetary policy, financial liberalisation and associated mortgage market innovations. Note, for example, that total securitisation activity, measured by the number of euro-denominated asset-backed securities outstanding, increased sixfold from 2000 until the credit bubble burst in mid-2007. Unsurprisingly, many have drawn a connection between monetary policy and securitisation, commonly arguing that the latter amplified the conventional credit effects of the former. Information about mortgage rates and lending activity is used as a proxy for mortgage securitisation and added to the SVAR in the final section of the empirical part. Indeed, these variables are found to have high explanatory power with respect to house price trends in most countries, as is evident in Figure 2, which plots forecast error variance decompositions for each country under the preferred model specification.
Figure 2: Forecast error variance decompositions.
The paper therefore concludes that interest rate policy more closely aligned with policy rules could not have entirely avoided the bubbles, so this approach is not recommended for the future. Putting more focus on asset price stability, thereby departing from the policy rate's traditional role of smoothing consumption and consumer prices, would be too complicated and is therefore not advisable either. In light of the finding that financial innovations contributed greatly to the bubbles, policy makers should continue current efforts to impose stricter regulation through macroprudential measures.
Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2017. The project is a required component of every master program.
We present a tradeoff model of capital structure to investigate the sources of adjustment costs and study how firms' financing decisions determine partial adjustment toward target leverage ratios. The presence of market imperfections, like taxes and collateral constraints, is shown to play a decisive role in the behavior of the policy function of capital and leverage. By means of a contraction argument, we are able to show the existence of a target leverage towards which optimal leverage converges, with a speed of adjustment that depends on a firm's marginal productivity of capital. Our predictions are consistent with the empirical literature regarding both the magnitude of the speed of adjustment and the relationship between leverage ratios and the business cycle.
In this work we showed how financial and economic frictions are able to generate partial adjustment dynamics in leverage policy functions. In the model we studied, the key factors behind this phenomenon are collateral constraints (which strike a balance between the tax benefits of debt and distress costs) and firm productivity of capital. The latter, in particular, determines the speed of adjustment towards the (state-dependent) target leverage ratio.
Our model fits well several stylized facts of leverage dynamics established by the empirical literature. One example is the magnitude of the speed of adjustment, which falls inside the confidence intervals estimated by several authors. Another is the countercyclical behavior of leverage with respect to the business cycle, which arises because in recessions the collateral constraint binds more easily.
Future work should first address translating the hypotheses of Theorem 5.4 on the Lagrange multiplier into assumptions on the components of the model (the production function and the various market frictions). The next step would then be to extend the model to a full general equilibrium setting to study thoroughly the effects of preference and monetary shocks on leverage dynamics. Pairing consumers' utility maximization with firms' financing problem would also allow us to study the interaction between expected returns and partial adjustment: in such a framework, the collateral constraint should probably be replaced by several credit rating inequalities determining both firm-specific discount rates and target leverage ratios.
Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2017. The project is a required component of every master program.
Ignatius Barnardt, Golschan Khun Jush, Thies Wollesen, Samuel Hayden and Eva Sotosek
Economics and Finance
We investigate a possible gender gap in returns to education using data from the World Bank's STEP program for seven developing and emerging countries. We control for cognitive skills, non-cognitive skills and parental education (previously unobserved due to data unavailability) to investigate how this heterogeneity plays a role in estimating the gender differential in educational returns. We also model selection using the Heckman two-step estimation procedure to examine whether selection may be driving this phenomenon. Our findings suggest that gender gaps in returns to education are not as prominent in the countries in our sample as previously suggested. We also find that controlling for unobserved heterogeneity on the one hand, and selection on the other, has different effects in different countries, highlighting the importance of understanding individual countries' labour markets in detail before drawing conclusions about the existence of a gender gap in returns to education.
This paper explores gender gaps in returns to education for seven developing and emerging countries. First, we investigate the existence of such a gap in a standard Mincerian framework. We ﬁnd a signiﬁcant returns gap in only two countries, namely Ukraine and Ghana, while the estimates for the other countries are centred relatively tightly around statistically insigniﬁcant point estimates close to zero. Using quantile regressions to dig deeper does not materially aﬀect our ﬁndings, although it does allow us to specify that the returns gaps estimated for Ghana and Ukraine are signiﬁcant at two out of three quartiles of the wage distribution, and that in Vietnam there is a small but signiﬁcant returns gap at the upper two quartiles of the distribution. These ﬁndings are important in providing context for the existing literature, showing that returns premiums in favour of females are not universally prevalent in developing countries for urban wage workers. This suggests that where large, signiﬁcant returns gaps have been found in the literature, this seems to be driven to a large extent by other segments of the labour market.
Second, we use our novel dataset to analyse the extent to which controlling for previously unobserved heterogeneity, namely cognitive skills, personality traits and family background, affects OLS estimates of the returns gap. We find that controlling for these STEP variables does not materially affect our baseline estimates for Bolivia, Colombia, Georgia, Kenya and Vietnam (where the estimated gap remains insignificant and close to zero), or for Ukraine, where the estimated gap is of similar magnitude and remains significant. Only in Ghana do we find that adding the STEP controls has a material effect, reducing the point estimate of the gap substantially and rendering it insignificant. The results of the quantile regressions qualify this finding somewhat, showing that controlling for the STEP variables does make a difference for estimates of the gap at certain quantiles of the distribution in Ukraine and Vietnam. Overall, our finding regarding the importance of these sources of previously unobserved heterogeneity is cautiously negative: although they do appear to make a small difference to the level estimates and have an important effect in Ghana, they do not appear to be universal sources of endogeneity in estimating the returns gap for urban wage workers.
Third, we examine the importance of controlling for selection in estimating the returns diﬀerential using the Heckman two-step procedure, dropping Kenya from our sample due to missing data. Here we ﬁnd that after controlling for selection, our point estimates of the returns gap remain insigniﬁcant in Ghana, Georgia and Vietnam, albeit with a relatively high point estimate in Georgia. Similarly, our estimate of the returns gap in Ukraine does not change considerably and remains signiﬁcant. In contrast, we obtain higher and signiﬁcant point estimates of the returns gap in Bolivia and Colombia. As explained above, this somewhat counterintuitive result is due to positive selection of females into employment in Bolivia and Colombia, and the positive relationship between education levels and probability of employment. Interestingly, in the two countries where selection appears to be important, we found earlier that controlling for the STEP variables did not have an observable eﬀect. Our ﬁndings therefore suggest that it is likely to be important to control for selection when estimating returns gaps in developing countries, even if only to exclude the possibility of selection bias. In addition, our approach suggests that selection is likely to operate through channels other than cognitive or non-cognitive abilities, or parental background.
Taken together, our ﬁndings show that, at least for urban wage workers in the countries in our sample, a returns premium for females may not be as prevalent as previously suggested. We also ﬁnd that controlling for potential sources of endogeneity, such as unobserved heterogeneity and selection, substantially changes the estimates of the gender returns gap in three out of seven of the countries in our sample. This highlights the importance of considering these channels to avoid the risk of biased estimation. This paper therefore represents a starting point for more detailed research, which could zoom in on the existence and drivers of returns diﬀerentials in individual countries, and overcome some of the limitations of this paper by extending it to rural areas and using samples with a larger number of clusters. These ﬁndings are also relevant to policy makers, since they demonstrate the importance of understanding the characteristics and dynamics of each country’s individual labour market prior to making policy proposals.
Delimiting the relevant market is a key step in the analysis of mergers and acquisitions. The theoretical framework introduced by the SSNIP test helps us understand the conditions needed to do so. Nevertheless, many methods exist, and the scientific community does not agree on which of them is best. In this article, based on previous work, some methods grounded in time series analysis are presented.
In general, the concept of relevant market is associated with arbitrage: two regions belong to the same market when arbitrage between them is possible. Therefore, it is possible to check whether the prices of these areas follow a pattern of convergence. Following mainly Haldrup (2003), we can distinguish two types of convergence:
Absolute convergence: arises when there is perfect arbitrage with no transportation costs, so the stationary price difference between regions is zero. Writing p_t^A and p_t^B for the prices in the two regions, it can be expressed as p_t^A − p_t^B = ε_t, where ε_t is a stationary, zero-mean process.
Relative convergence: analogous to the previous concept, but transportation costs do not completely disappear. It can be expressed as p_t^A − p_t^B = α + ε_t, where the constant α reflects transportation costs.
Therefore, absolute convergence is the special case of relative convergence with α = 0, that is, with transportation costs equal to zero.
There are several methods based on time series of prices that are useful for defining the relevant market. Two main dimensions exist: defining the market of substitute goods, and delimiting the geographical area in which a company competes.
Correlation is one of the most common methods used to analyse prices. Stigler & Sherwin (1985) proposed working with the series in logarithms to avoid problems arising from differences in variance.
Ideally, the prices of two goods (or of one good in two regions) inside the same market should be highly correlated both in log levels and in first differences of the logs (which approximate growth rates).
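This check is straightforward to compute. A minimal sketch on simulated prices that share a common market factor (the data and loadings are hypothetical):

```python
import numpy as np

# Price-correlation sketch (Stigler & Sherwin, 1985): correlate two
# simulated price series in log levels and in first differences of
# logs (approximate growth rates).
rng = np.random.default_rng(4)
n = 300
common = np.cumsum(rng.normal(0, 0.01, n))           # shared market factor
p_a = 100 * np.exp(common + rng.normal(0, 0.002, n))
p_b = 80 * np.exp(common + rng.normal(0, 0.002, n))

log_a, log_b = np.log(p_a), np.log(p_b)
corr_levels = np.corrcoef(log_a, log_b)[0, 1]
corr_growth = np.corrcoef(np.diff(log_a), np.diff(log_b))[0, 1]
print(f"levels: {corr_levels:.2f}, growth rates: {corr_growth:.2f}")
```

Both correlations come out high here because the two series move with the same factor; for unrelated goods, the growth-rate correlation in particular would be near zero.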
This method presents several problems. Firstly, high correlations can be produced by a spurious relationship (Granger & Newbold, 1974). Moreover, Bishop & Walker (1996) argue that highly volatile exchange rates can distort the results. Nevertheless, Haldrup (2003) argues that since the 1990s exchange rates have had a stable structure, so the analysis is not impaired.
Cointegration can be tested with the procedure defined by Engle & Granger (1987). More generally, if the time series are integrated of order 1, it is possible to use Johansen's test (1991). Alexander & Wyeth (1994) argue that a common market can be defined with only one cointegrating relationship. In contrast, Haldrup (2003) argues that a single market requires k−1 (the maximum number of) cointegrating relationships. Cointegration cannot be applied when one of the series is not integrated, that is, when it is stationary.
Given that cointegrating relationships can be understood as a long-run equilibrium, it is possible to define best response functions and relate the results to price-based models, such as Bertrand oligopoly.
Since the cointegration procedure is based on unit root tests, Forni (2004) defined a more flexible way of determining the long-run equilibrium. This test analyses the stationarity of the logarithm of the ratio of the two price series, and different unit root tests can be used. The possible outcomes are:
If the ratio is non-stationary, the goods or regions do not belong to the same market.
If the ratio is stationary, the goods or regions belong to the same market.
Figure 1 shows the time series of the logarithm of the ratio of the prices of two different goods. It is an example of relative convergence: even with some outliers, the ratio fluctuates around an equilibrium. In this case, the test allows us to conclude that the series is stationary, and with the evidence from this procedure we could conclude that both goods are part of the same relevant market.
Figure 1: Ratio of two prices that appear to be in the same market. Source: own elaboration in previous work.
Figure 2 shows the same kind of time series for two different goods. It shows an unclear pattern of co-movement between prices: not only do the prices seem unrelated, they appear to drift apart. In this case, the series is not stationary and thus, according to this test, we could conclude that the two goods do not belong to the same market.
Figure 2: Ratio of two prices that appear not to be in the same market. Source: own elaboration in previous work.
From my point of view, for this purpose unit root tests can be applied either with or without a trend and intercept in the auxiliary regression. Initially, when testing whether two goods or regions belong to the same market, the trend is not relevant, since they should have a constant long-run equilibrium. If the series turned out not to be stationary, repeating the test with a trend would be interesting, as it could reveal whether there is a pattern of divergence between the goods or regions. The intercept can be understood as the α coefficient introduced above: if it were zero and the test concluded stationarity, it could be a case of absolute convergence.
Granger causality is based on the analysis of VAR models. In a simple approach, a VAR model estimates the price of one good or area as a function of the lags of the other price and of its own lags. The Granger causality test evaluates the null hypothesis that all the coefficients on the other price are zero. If the null is rejected, one price causes the other and the two seem to belong to the same market.
The regressions can be run in both directions, the first estimating one price and the second estimating the other. There may be causality in both directions, but two-way causality is not a necessary requirement to conclude that a causal relationship exists between them.
The prices behind the ratio in Figure 1 showed a two-way causality relationship, whereas the prices in Figure 2 showed no causality relationship at all.
There are many methods to analyse whether regions or goods belong to the same relevant market. Apart from the ones discussed above, other price-based approaches can be used, such as VEC models or PCA, as well as non-price-based methods such as shock analysis or the Elzinga and Hogarty (1973) test.
In general, the different procedures do not tend to give contradictory answers, but none of them is conclusive on its own. They need to complement each other in order to reach the most accurate conclusion.
García García, A. (2016). El mercado relevante: técnicas económicas y econométricas para la delimitación [The relevant market: economic and econometric techniques for its delimitation]. Bachelor's thesis (Trabajo Fin de Grado), Universidad de Oviedo.
Haldrup, N. (2003). Empirical Analysis of Price Data in the Delineation of the Relevant Geographical Market in Competition Analysis. University of Aarhus, Economics Working Paper.
Stigler, G. J., & Sherwin, R. A. (1985). The Extent of the Market. Journal of Law and Economics, 28(3), 555–585.
Granger, C. W. J., & Newbold, P. (1974). Spurious Regressions in Econometrics. Journal of Econometrics, 2, 111–120.
Bishop, S., & Walker, M. (1996). Price Correlation Analysis: Still a Useful Tool for Relevant Market Definition. Lexecon.
Engle, R. F., & Granger, C. W. J. (1987). Co-Integration and Error Correction: Representation, Estimation, and Testing. Econometrica, 55(2), 251–276.
Johansen, S. (1988). Statistical Analysis of Cointegration Vectors. Journal of Economic Dynamics and Control, 12(2–3), 231–254.
Alexander, C., & Wyeth, J. (1994). Cointegration and Market Integration: An Application to the Indonesian Rice Market. The Journal of Development Studies, 30(2), 303–334.
Forni, M. (2004). Using Stationarity Tests in Antitrust Market Definition. American Law and Economics Review, 6(2), 441–464.
Elzinga, K. G., & Hogarty, T. F. (1973). The Problem of Geographic Market Definition in Antimerger Suits. Antitrust Bulletin, 18(1), 45–81.
We want to know what the BGSE community is thinking and reading about Brexit.
We invite all Barcelona GSE students and alumni to share their early reflections on the potential economic consequences of the UK’s recent vote to leave the EU. Did you focus on a related topic in your master project? Are you working at a think tank, central bank, or consulting firm where your projects will be impacted by this decision? Have you seen any articles or links that you found useful for understanding what lies ahead?
Here are a couple of pieces we’ve found to get the discussion going:
The BGSE participates in A Dynamic Economic and Monetary Union (ADEMU), a project of the EU Horizon 2020 Program. Last week, ADEMU researchers held a webinar to discuss Brexit.
Europe has grown out of its crises when reason and solidarity have prevailed, but it has also been devastated by its crises when fear and nationalism have taken the lead. Brexit, in the aftermath of the euro crisis, brings this dichotomy back to the foreground. Since 2010 there have been important advances in the development of the Economic and Monetary Union (EMU) and flexible forms of participation have allowed other EU countries, reluctant to join the euro, to share the basic principles that define the EU and have a common presence in the interdependent global world.
According to the panelists, Brexit raises three crucial questions:
Should the EMU be accelerated to become a centre of gravity within the EU, or slowed down to avoid a centrifugal diaspora? If accelerated, how?
Should an ‘exit’ country be allowed free entry to the single market and other EU public goods without accepting freedom of movement?
Should the EU remain as it is, or increase its capacity to offer common public services (Banking Union, border security, research funding, environment, etc.), or limit its scope of activity to the EU single and integrated market?
– Joaquín Almunia (Former Vice-President of the European Commission, honorary president of the Barcelona GSE)
– Ramon Marimon (European University Institute and UPF – Barcelona GSE; ADEMU)
– Giorgio Monti (European University Institute; ADEMU)
– Morten Ravn (University College London; ADEMU)
– Annika Zorn (European University Institute; Florence School of Banking & Finance)
Nobel Laureate and Barcelona GSE Scientific Council member Joseph Stiglitz shares some reflections in the wake of the Brexit decision
What are your thoughts on Brexit?
Please share your ideas, favorite sources for analysis, or observations from economists you respect in the comments below.