Jana Bobosikova ’10 is a graduate of the Barcelona GSE master program in International Trade, Finance and Development.
I was sitting in Conference Room 1 at the United Nations in New York in the summer of 2013 amidst the Nexus Global Youth Summit attendees, listening to the opening address by Roland Rich, Executive Head of the United Nations Democracy Fund. The message I was hearing was clear and bold: we are here to take action. Not “we” as in policy makers and government-funded multilateral agencies, but rather “we” as in Nexus, the global movement of Doers drawn from the circles of the largest philanthropists and the hardest-working, most daring social entrepreneurs, investors, international advocates and NGOs.
Part of me was elated: so many influencers, all on the same page, with substantial financial and human commitment to contributing to making the world a better place!
Another part of me, the one I cultivated through the study of economic analysis in the ITFD program at Barcelona GSE, was a bit nervous about so much “good doing”.
I felt too aligned with the work of my former professor Xavier Sala-i-Martin and of William Easterly, which suggests that much external aid to date has had no effect, or a negative one, on socio-economic growth, to simply embrace the possibility that the Nexus Youth Summit was different and effective.
I sent a mental greeting to Prof. Antonio Ciccone and his often-restated quest for the “one-handed economist”: one best solution built on all the relevant variables, with no caveats “on the other hand.” Had I found them? What were the development solutions that Nexus had gathered at the UN?
We have been conditioned to include savings as a basic variable in economic growth models. At Nexus, Aron Ping D’Souza had been so compelled by meeting Sir Ronald Cohen at an earlier Nexus Europe Youth Summit that he started an impact-investing superannuation fund, applying best practice from classic and impact investing to target Australia’s more than AUS$1.7 trillion in pension funds and create returns for both investors and the wider economy.
We learned about girls’ education challenges in developing countries. One of Nexus’ members, Nikki Agrawal, invested in researching and launching menstruation-absorbing underwear to address one of the most significant school attendance problems for girls.
We studied how to create policies that incentivize investing in R&D for a healthier global population. At Nexus, it was a great honor to be joined by Jake Glaser, the son of Elizabeth Glaser, who pioneered and promoted research and development in pediatric AIDS in the early 1990s. Elizabeth Glaser contracted HIV through a blood transfusion while giving birth, and her subsequent fight to save her children kickstarted the largest research effort and movement to eliminate pediatric AIDS, a vision that is now becoming a reality.
I could keep going and catalogue the amazing encounters and inspiring efforts of the hundreds (!) of international innovators, family offices that fund some of the largest projects, and startup social entrepreneurs who make up Nexus.
And maybe I should, so that the passion and commitment of the Nexus movement and the research and rigorous analysis of development economists can start catalyzing into aligned efforts to improve international trade, finance and socio-economic development.
“…One of my research projects is on the Panic of 1907. In many ways, it resembles our recent economic crisis. For me, the most startling resemblance is the absolute fear that the monetary authority had about any contraction in the credit market.”
Excerpt from the blog post “Nothing New Under the Sun” by Brian C. Albrecht ’14 (Master in Economics), PhD student at the University of Minnesota
Pedro Hinojo is a student in the Barcelona GSE Master in Competition and Market Regulation. Follow him on Twitter @pedrohinojo.
Crowdfunding can be defined as the peer-to-peer provision of financial resources from the crowd to a particular project or venture. This is usually done via online platforms that forgo the need for face-to-face interactions, slashing transaction costs and allowing the fundraiser to reach a wider audience.
This phenomenon started with a non-profit orientation, as donations channelled to political or development campaigns. Reward-based crowdfunding became more relevant later on: a product is delivered to consumers who finance the project pre-development, usually at a discount or with other ‘perks’ (such as limited editions, first releases, recognition or references in the credits). Even if reward-based crowdfunding entails (economic) advantages for the fund providers, it is tagged as non-profit because these consumers value non-economic benefits (Belleflamme et al, 2013), such as the sense of belonging to a community (a reason for the funding scheme’s popularity in creative industries like film, music and videogames). Reward-based crowdfunding is rooted in the marketing concept of crowdsourcing, whereby firms take advantage of the crowd to obtain ideas, feedback, and solutions to corporate challenges (Schwienbacher and Larralde, 2010).
But crowdfunding has become most relevant as it has moved towards a profit and investment orientation, be it credit-based or (notably less frequently) equity-based (Wilson and Testoni, 2014). In this fashion it is bound to become an alternative source of finance for the real economy while the traditional banking channel is temporarily subdued after the crisis (if not permanently, due to more stringent capital requirements). Furthermore, it should benefit primarily small, nascent and innovative firms (which are among the most credit-rationed), especially when they produce unique goods whose features can be communicated easily through the internet.
Crowdfunding platforms therefore connect a crowd of investors with entrepreneurs whose projects need financing. In principle, crowdfunding allows lenders to receive a higher (although riskier) remuneration on their investment and entrepreneurs to get cheaper credit for their projects. Apart from these pecuniary benefits, entrepreneurs can also promote their brand and products and engage with potential customers through the platforms. The platforms thus become essential not only to minimize intermediation costs but also to generate the network externalities that attract good projects and a large crowd of investors. Furthermore, projects appealing to crowdfunding face fewer geographical constraints in finding sources of finance than with traditional vehicles (Agrawal et al, 2013).
Nonetheless, crowdfunding comes at a cost for entrepreneurs (Agrawal et al, 2013). First, they lose contact with professional investors, who can provide more valuable advice than a crowd of individual consumers. Other sources of finance for nascent or innovative firms, such as venture capital or (to a lesser extent) angel investors, do provide technical advice (beyond the funding) to assess the project’s feasibility (and help to improve it if needed).
Second, they may have to disclose some information on the online platforms, which can be critical in nascent and creative or innovative activities. If commercially sensitive information becomes publicly known to some extent, incumbents (who are less credit-constrained) can adapt these new ideas to their own business, undermining potential competition from new entrants.
Crowdfunding also poses market challenges, given that the imperfections which affect the financial sector are amplified. In financial markets information is far from perfect: it is both incomplete and asymmetric. Information is incomplete because agents cannot fully determine outcomes through their actions in an environment of risk and uncertainty. This problem is amplified in the context of crowdfunding, where small, nascent and innovative firms with riskier projects are involved (Llobet, 2014).
Moreover, information is asymmetric between borrowers and lenders, leading to moral hazard and adverse selection. Moral hazard arises because once borrowers have received the funds, they have an incentive to misbehave and refuse repayment, while lenders find it difficult to monitor whether eventual repayment problems are caused by misbehaviour or pure bad luck. Adverse selection happens because lenders cannot discriminate between borrowers of different quality, so they charge a high cost to offset potential losses (or even ration credit), paradoxically penalizing the best borrowers.
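A stylized numeric sketch of this adverse-selection logic (all numbers are hypothetical, chosen only for illustration):

```python
# Hypothetical: two borrower types that lenders cannot tell apart.
p_safe, p_risky = 0.98, 0.70   # repayment probabilities
share_safe = 0.5               # fraction of safe borrowers in the pool

required = 1.05  # gross return the lender needs in expectation

# The pooled rate is priced on the *average* repayment probability.
p_avg = share_safe * p_safe + (1 - share_safe) * p_risky
pooling_rate = required / p_avg  # rate r solving p_avg * r = required

# A safe borrower with an outside option at a gross rate of 1.10
# leaves the market if the pooled rate is worse than that option.
safe_borrower_stays = pooling_rate <= 1.10

print(round(pooling_rate, 3), safe_borrower_stays)  # 1.25 False
```

Because the pooled rate (1.25) exceeds the safe borrowers’ outside option, only the risky types remain, which is exactly the selection problem described above.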
Again, this asymmetry of information may be amplified in the crowdfunding context. Given that the lenders are now a crowd, each of them is likely to hold only a small part of the total investment, reducing the incentive to monitor the borrowers’ ex ante quality or ex post conduct with due diligence. Here, the lack of geographic ties enabled by crowdfunding makes it harder for lenders to track borrowers in the post-investment phase (Wilson and Testoni, 2014).
In addition, this crowd of lenders will be composed mostly of non-professional investors, so even if they devoted time to assessing the borrowers’ projects, they could lack the necessary skills. Finally, a crowd of investors faces problems of collective action. Against this backdrop, borrowers may also find less incentive to repay if they use crowdfunding platforms as a one-off bet to raise funds, without any disciplining effect from repeated interactions or reputational concerns.
Bearing in mind these market failures, there is some room for regulation. Looking at international cases (such as the US, the UK or Spain), the regulatory response usually adopts a paternalistic tone: restrictions on the amounts invested by agents (especially individuals who are non-professional investors) and on the amount a project can raise. Crowdfunding platforms are subject to registry requirements similar to those for other financial intermediaries, although there may be exceptions for small projects. These exceptions can become sources of distortion if firms scale down their projects to fall below certain thresholds (Hornuf and Schwienbacher, 2014).
In order for public intervention to beat the market, regulation ought to be well targeted. Limits on the exposure of non-professional (low-income) investors are rational given their lack of skills, the risks of herd behaviour and path dependence (Agrawal et al, 2013), and the high risk profile of these investments (Dorff, 2013). However, setting stringent caps on the maximum amount a project can raise may stifle the sector’s (and the whole economy’s) development.
Furthermore, the sector itself can provide some solutions to these market failures. For instance, crowdfunding platforms normally charge a fee for every successful project (one that raised at least as much as its funding target). This strategy gives the platforms the right incentives (skin in the game) to monitor and screen projects, so that small (non-professional) investors are relieved of that assessment.
Besides, most crowdfunding platforms opt for an All-Or-Nothing (AON) or ‘provision point mechanism’ model, whereby projects which do not raise their target amount of funds receive nothing. Platforms opting for the Keep-It-All scheme (KIA, whereby projects receive all the funds they have raised even without having achieved their goal) tend to charge those projects higher funding costs (Cumming et al, 2014), given that underfunded projects (which would still receive the funds under the KIA scheme) are less likely to succeed.
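The difference between the two payout rules can be sketched in a few lines (the figures are purely illustrative):

```python
def payout(raised, goal, scheme):
    """Funds the project actually receives under each scheme."""
    if scheme == "AON":   # All-Or-Nothing: nothing unless the goal is met
        return raised if raised >= goal else 0
    if scheme == "KIA":   # Keep-It-All: keep whatever was raised
        return raised
    raise ValueError(scheme)

# A project with a 10,000 goal that only raises 7,000:
print(payout(7_000, 10_000, "AON"))  # 0 -- backers are refunded
print(payout(7_000, 10_000, "KIA"))  # 7000 -- project keeps an underfunded budget
```

Under AON, backers bear no risk of financing a project that starts life underfunded, which is one reason platforms can afford lower fees for it.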
To conclude, crowdfunding offers a promising avenue to spur innovation, creativity and firm growth. The regulatory response must allow that development while ensuring that no substantial amounts are invested by individuals lacking the skills and resources needed to cope with these complex and risky investments (see Kay, 2014, whose contribution also served as inspiration for the title of this post).
Barcelona GSE grad Alvaro Leandro looks at the EU’s Stability and Growth Pact through the lens of the draft budget plans of France and Italy.
The following post by Alvaro Leandro (ITFD’13 and Economics ’14) has been previously published by Bruegel.
Mr. Leandro is Research Assistant at Bruegel in Brussels, Belgium.
The EU’s fiscal framework, the Stability and Growth Pact (SGP), is a complicated system of fiscal rules. Rather than trying to assess the virtues and failures of the SGP, this blogpost aims to understand its complex rules through the lens of the draft budget plans of France and Italy. France is in the corrective arm of the SGP, while Italy is now in the preventive arm, which allows us to examine various SGP requirements, such as the
structural balance pillar,
expenditure benchmark pillar,
and the debt criterion
which apply to countries in the preventive arm (like Italy), and the
headline budget deficit criterion,
the structural balance criterion,
and the cumulative structural balance criterion
which apply to countries in the corrective arm (like France). We also discuss the rules regarding financial sanctions.
On 28 November 2014, the European Commission released its opinions on the euro area Member States’ Draft Budgetary Plans for 2015. The purpose of these opinions is to assess each country’s compliance with the SGP, and to recommend appropriate action if there are risks of non-compliance.
One of the surprises was that, in the case of Italy and France (as well as Belgium), the Commission decided to postpone its recommendations until March 2015, “in the light of the finalisation of the budget laws and the expected specification of the structural reform programmes announced by the authorities“. Both Italy and France are “at risk of non-compliance with the provisions of the Stability and Growth Pact”, according to the Commission.
The Stability and Growth Pact is composed of a preventive and a corrective arm. The corrective arm is called the Excessive Deficit Procedure (EDP), which is triggered for countries with a general government deficit larger than 3 percent of GDP or with debt larger than 60 percent of GDP not being reduced at a satisfactory pace. France is currently under the corrective arm and Italy was as well until 2013. Italy is therefore now subject to the rules of the preventive arm.
Source: Country Stability and Convergence Programmes for MTOs, AMECO for forecast of 2014 and 2015 Structural Balances
Notes: Data labels are for the MTOs. According to the Treaty on Stability, Coordination and Governance (TSCG), signed by all euro area members in March 2012, all signatory Member States must have an MTO higher than -0.5% of GDP (or -1% for countries with a debt/GDP ratio lower than 60%). The “fiscal” part of the TSCG is often called the ‘Fiscal Compact’.
The fundamental variables used to assess compliance with the preventive arm of the SGP are the country-specific medium-term budgetary objectives (MTOs), which are defined as structural balances (a measure of the government budget balance adjusted for the economic cycle and one-off revenue and expenditure items; this blogpost by Zsolt Darvas explains the estimation methodology and why it has some drawbacks). MTOs are chosen by each Member State following strict guidelines set out by the Commission, in order to ensure the sustainability of its public finances (a higher MTO is required from countries with a high debt ratio or with a rapidly-ageing population facing increasing age-related expenditure, for example, while the ‘Fiscal Compact’ limits the MTO for euro area member states; see the notes to Figure 1). A few examples of MTOs can be found in Figure 1: France, Italy and Spain have an MTO of 0 percent of GDP, while Germany’s MTO is -0.5 percent. This means that in the case of Germany, for example, a structural deficit of 0.5 percent of GDP is deemed enough to ensure the sustainability of its public finances.
The Fiscal Compact is not binding for non-euro area Member States, which therefore have more freedom in setting their MTOs. For example, Hungary has an MTO of -1.7 percent, the Polish and Swedish MTO is -1 percent, while it is zero for the United Kingdom.
To comply with the preventive arm of the SGP, all Member States must be at their MTOs or be on a path to reach them, with an annual improvement of their structural balance of 0.5 percent of GDP towards the MTO as a benchmark.
A higher effort might be required for countries with high debt/GDP ratios and pronounced risks to overall debt sustainability. A higher effort is also required in good economic times, and a lower effort in economic downturns. A Member State could also be allowed to deviate from the adjustments if it experiences “an unusual event outside its control with a major impact on the financial position of the general government”.
Therefore compliance with the preventive arm is not defined by the Member State’s structural balance, but by its path towards the MTO.
Structural balance pillar: Table 1 shows the recommended path for Italy. On 28 November 2014 the Commission decided that “severe economic conditions” (namely a real GDP contraction and a large negative output gap: see Table 3) justified not requiring Italy to adjust its structural balance towards the MTO by the 0.5 percent of GDP benchmark in 2014. This is why the required change in the structural balance for 2014 is 0. Italy had originally planned a large correction of its structural budget for 2014 in its 2013 Stability Programme, of 0.7 percentage points. In its Draft Budget Plan for 2014 Italy revised this adjustment to 0.3. Finally, it invoked Article 5 of Regulation 1175/2011 in its 2014 Stability Programme, which allows a deviation from the required adjustment “in the case of an unusual event outside the control of the Member State concerned which has a major impact on the financial position of the general government”. The required adjustment is also 0 in 2013 for the same reason: negative real output growth made Italy eligible for the escape clause. In 2015 real GDP is forecast by the Commission to increase by 0.6 percent (see Table 3), which means that Italy can no longer invoke the escape clause for economic downturns.
Source: Commission Staff Working Document: Analysis of the draft budgetary plan of Italy (28 November 2014), European Commission Autumn Forecast (November 2014), Italy’s Stability Programme April 2014, Italy’s Stability Programme April 2013, Vade Mecum on the Stability and Growth Pact (May 2013)
Note: ΔSB denotes the percentage point change in the structural balance. MLSA: minimum linear structural adjustment. DBP: draft budget plan
(1): Deviation of the growth rate of public expenditure net of discretionary revenue measures and revenue increases mandated by law from the applicable reference rate in terms of the effect on the structural balance. A negative sign implies that expenditure growth exceeds the applicable reference rate.
Table 1: Italy’s compliance with the preventive arm and the debt criterion
Concretely, Italy’s medium-term budgetary objective is a structural balance of 0 percent of GDP, whereas the European Commission forecasts a structural balance of -0.8 percent of GDP in 2015. Thus Italy is required to adjust its structural balance towards its MTO by 0.5 percentage points of GDP (a higher adjustment is required for countries with debt exceeding 60 percent of GDP, while a lower effort is allowed in economic “bad times”). The forecast adjustment from 2014 to 2015 is 0.1 pp. according to the Commission (taking account of additional measures announced on 27 October), which is considered to pose a risk of “significant deviation from the required adjustment”.
Expenditure benchmark pillar: Member States in the preventive arm of the SGP also have to comply with the expenditure benchmark pillar, which complements the structural balance pillar. It requires countries that are not at their MTO to contain the growth rate of expenditure, net of discretionary revenue measures, to a country-specific rate below that of medium-term potential GDP growth. This medium-term potential GDP growth is calculated as a 10-year average (of the 5 preceding years, the current year and forecasts for the next 4 years), and in the case of Italy it is 0 percent in 2014 and 2015. Had Italy been at its MTO, it would have had to contain net expenditure growth to 0 percent. Not being at its MTO, however, it is required to contain net expenditure growth to a reference rate below medium-term potential GDP growth: -1.1 percent in 2015 (calculated so as to be consistent with a tightening of the budget balance of 0.5 percent of GDP when GDP grows at its potential rate). The applicable reference rate in 2014 is 0 because of the “severe economic conditions”. In 2013 the applicable reference rate was 0.3, which differs from that of 2014 and 2015 because it is revised every three years. The Commission allows one-year and two-year average deviations of at most 0.5 pp. of GDP in terms of their impact on the structural balance. In 2015 the deviation, in terms of its effect on the structural balance, is forecast to be 0.7 pp. of GDP, larger than the allowed 0.5 pp.
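The logic behind the reference rate can be sketched as follows. The expenditure-to-GDP share used here is an assumption chosen for illustration, not the Commission’s exact input, so the result only approximates Italy’s -1.1 percent:

```python
# Reference rate for a country not at its MTO: net expenditure growth must
# run below medium-term potential GDP growth by enough to improve the
# structural balance by 0.5% of GDP when GDP grows at potential.
potential_growth = 0.0    # Italy's 10-year average potential growth, percent
expenditure_share = 0.50  # assumed primary expenditure / GDP ratio

# Cutting expenditure growth by x pp improves the balance by x * share of GDP,
# so the required shortfall is 0.5 / expenditure_share percentage points.
reference_rate = potential_growth - 0.5 / expenditure_share
print(reference_rate)  # -1.0, close to the -1.1 percent applicable to Italy in 2015
```

The gap to the official -1.1 comes from the Commission using its own measured expenditure ratio rather than the round 50 percent assumed here.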
Debt criterion: Countries which have recently left the EDP are subject to a 3-year transition period aimed at ensuring that the debt level is reduced at an acceptable pace. Italy is in such a transition period, since it left the EDP in 2013. It is thus subject to required minimum linear structural adjustments (MLSAs) aimed at ensuring that it will comply with the debt criterion. These MLSAs are formulated in terms of adjustments to the structural balance. Since Italy is in the preventive arm and therefore also subject to required adjustments towards the MTO, the larger of the two requirements applies. The 2.5 pp. MLSA in 2015 (larger than the 0.5 pp. required change under the preventive arm) is at serious risk of not being met according to Commission forecasts. This violation of the debt criterion could lead to a reopening of the Excessive Deficit Procedure.
Commission’s view: In its opinion on Italy’s Draft Budget Plan released at the end of November 2014, the Commission points to risks of non-compliance with the requirements of the SGP, and “invites the authorities to take the necessary measures […] to ensure that the 2015 budget will be compliant with the Stability and Growth Pact”. It then says that “The Commission is also of the opinion that Italy has made some progress with regard to the structural part of the fiscal recommendations issued by the Council in the context of the 2014 European Semester and invites the authorities to make further progress. In this context, policies fostering growth prospects, keeping current primary expenditure under strict control while increasing the overall efficiency of public spending, as well as the planned privatisations, would contribute to bring the debt-to-GDP ratio on a declining path consistent with the debt rule over the coming years.”
Headline budget deficit criterion: Once a country has been identified as having an excessive deficit, which was the case for France in 2009, it is turned over to the corrective arm, the EDP, the purpose of which is to correct such a deficit. France has now been under the EDP for 5 consecutive years, and is subject to requirements set out in the latest Council recommendation to end the excessive deficit situation (June 2013). The recommendation released in 2009 originally planned a correction of the deficit (below 3 percent) by 2012, which was then postponed to 2013 in view of the actions taken and the “unexpected adverse economic events with major unfavourable consequences for government finances”. In June 2013, the Council again postponed the correction of the deficit to 2015 for the same reasons: France fell slightly short of the required 1 percent average annual fiscal effort for the period 2010-2013 (the actual average annual fiscal effort was 0.9 percent), but this was again against a backdrop of “unexpected adverse economic events”.
Source: Commission Staff Working Document: Analysis of the draft budgetary plan of France (November 28, 2014), Council recommendation to end the excessive deficit situation (June 2013), European Commission Autumn Forecast (November 2014)
Note: ΔSB denotes the percentage point change in the structural balance
Table 2: France’s compliance with the corrective arm
The latest Council recommendation (June 2013) sets out a path for France’s headline government balance, which you can see in Table 2. By 2015, the headline balance should be reduced to -2.8 percent of GDP. The forecast headline balance of -4.5 percent falls significantly short of this requirement.
Structural balance criteria: Additionally, the adjusted change in the structural balance from 2014 to 2015 is forecast to be 0.0 pp., and its cumulative change from 2012 to 2015 is forecast to be 1.6 pp., falling short of the requirements of 0.8 pp. and 2.9 pp. respectively (1). The structural budget also deviates from the requirements for 2014.
Commission’s view: Thus France is “at a risk of non-compliance” with the SGP and, contrary to Italy, the Commission “is also of the opinion that France has made limited progress with regard to the structural part of the fiscal recommendations issued by the Council […] and thus invites the authorities to accelerate implementation”. In its letter to the President of the European Commission, France reiterated its determination to go ahead with reforms, most notably in the labour market. It remains to be seen whether the Commission will assess progress by March 2015 to be sufficient.
Table 3: France and Italy: main macroeconomic indicators in 2014 and 2015
Non-compliance with the SGP can lead to sanctions. In the preventive arm, a Council recommendation which is not respected can lead to an interest-bearing deposit of 0.2 percent of GDP. A euro-area country in the corrective arm of the SGP may be required to make a non-interest-bearing deposit until the deficit has been corrected, after which it can also be sanctioned with a fine worth up to 0.5 percent of GDP (with a fixed component of 0.2 percent of GDP and a variable component (2)). France and Italy are both at risk of non-compliance with the requirements of the SGP. Failure to make the required efforts in terms of fiscal consolidation and structural reforms by March 2015 could bring them closer to possible sanctions, unless the flexibility of the SGP is stretched further. Recent growth and inflation figures suggest continued weak economic activity, and if the economic data of 2014 qualified as “severe economic conditions”, 2015 may qualify too, especially if growth and inflation disappoint relative to the November 2014 ECFIN forecasts. And in the preventive arm, structural reforms which have a verifiable positive impact on the long-term sustainability of public finances (for example by raising potential growth) can be taken into account when assessing the adjustment path to the medium-term objective.
(1) The adjusted changes in the structural balance correct for the negative impact of the changeover to ESA 2010 as well as for changes in potential growth and revenue windfalls/shortfalls.
(2) This variable component is equal to “a tenth of the absolute value of the difference between the balance as a percentage of GDP in the preceding year and either the reference value for government balance, or, if non-compliance with budgetary discipline includes the debt criterion, the government balance as a percentage of GDP that should have been achieved in the same year according to the notice issued”
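Following the formula quoted in note (2), the fine’s structure might be sketched like this; the fixed component and the 0.5 percent cap are taken from the text above, while the deficit figure is illustrative:

```python
def edp_fine(balance_prev_year, reference_balance=-3.0):
    """Annual fine as percent of GDP under the corrective arm: a fixed 0.2
    plus a variable component equal to one tenth of the gap between last
    year's balance and the reference (or required) balance, with the total
    capped at 0.5 percent of GDP."""
    variable = abs(balance_prev_year - reference_balance) / 10
    return min(0.2 + variable, 0.5)

# A country that ran a -4.5% balance against the -3% reference value:
print(round(edp_fine(-4.5), 2))  # 0.35 percent of GDP
```

The cap binds only for very large gaps: a balance 3 pp. or more beyond the reference would already push the total to the 0.5 percent maximum.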
Post by Nadim Elayan, current student in the Barcelona GSE’s Master Program in International Trade, Finance and Development. Follow him on Twitter @Nadim1306.
It is unethical that 20-year-old guys with no education whatsoever earn 20 million euros a year just by kicking a ball for an hour and a half once a week, whereas doctors who have studied for almost a decade and save human lives every day earn 500 times less.
A fair society should pay people who save human lives more than people who entertain us on weekends.
How can we stand by watching soccer players earn millions a year while people are starving in the same country?
Most of us have probably heard these sentences, or similar ones, about the large wages soccer players earn, and even that this situation is immoral or a bad incentive for kids to get a good education. But is this true? Do soccer players actually earn more than doctors?
Before analyzing the Spanish soccer labor market or discussing the ethical implications of this situation, we first need to say that this is not true. The fact that we can name some players with shockingly high salaries does not mean that on average soccer players earn more than doctors, or even more than the average salary of a specific country. In order to compare professions we need an unbiased sample. We cannot look at the best soccer player in history, Lionel Messi, compare his salary with that of a regular doctor in Barcelona, and then conclude that soccer players earn 500 times more than doctors. This would be like looking at Yao Ming, a Chinese basketball player with a height of 7 feet 6 inches (2.29 meters) and a weight of 310 pounds (141 kg), and wrongly concluding that Chinese people are 2 feet (0.6 meters) taller and 130 pounds (59 kg) heavier than the average European citizen. Lionel Messi is no more representative of soccer players’ earnings than Yao Ming is of the average Chinese citizen’s physique.
In Spain there are more than 700,000 professional and amateur soccer players according to the Real Federación Española de Fútbol, and most of them work without a salary or earn below the minimum wage, which is why most of them need another job. About 500 players, those in the First Division, earn an average wage of 1,336,250.32€ a year, and about another 500, those in the Second Division, earn an average wage below 200,000€. So in total there are only around 1,000 players in Spain earning a salary far above that of an average doctor, which in Spain is 64,424.66€.
We would also have to take into account that a soccer player’s professional career barely exceeds 10 years, while a doctor’s lasts around 35 to 40 years. All this without considering the high risk of injury a soccer player faces every day, which could leave him without any salary at all for the rest of his life. On the other hand, soccer players work no more than 15 hours a week on average, so the wage of this 0.14% gets even larger when calculated per hour. To compensate for the difference in career length, then, soccer players would need to earn at least 3.5 to 4 times more than doctors.
We can see that these 1,000 players, 0.14% of the total, earn far more than doctors on average. The next-best-paid group of professional players are those in the 2nd B Division. There are 2,000 players in this division earning 35,000€ on average, which is actually less than doctors earn, although their hourly wage, 55.55€ per hour, would still be above the doctors’. Going further down, to the 3rd Division, the hourly wage of 14.44€ is already below the doctors’.
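The hourly figures can be reproduced roughly as follows; the 42-week playing season and the doctors’ 40-hour week over 48 weeks are assumptions needed to match the post’s numbers, not official statistics:

```python
# Assumed working time: players train/play ~15 h/week over a ~42-week season,
# doctors work ~40 h/week over ~48 weeks a year.
player_hours = 15 * 42   # 630 hours a year
doctor_hours = 40 * 48   # 1,920 hours a year

doctor_hourly = 64_424.66 / doctor_hours   # average Spanish doctor
second_b_hourly = 35_000 / player_hours    # 2nd B Division player

print(round(doctor_hourly, 2))    # 33.55 euros/hour
print(round(second_b_hourly, 2))  # 55.56 euros/hour -- above doctors per hour,
                                  # despite a lower annual salary
```

On these assumptions a 2nd B player out-earns a doctor per hour but not per year, which is exactly the distinction the comparison above turns on.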
Summing up, 0.14% of all soccer players earn a salary far enough above the doctors’ average to more than compensate for their shorter careers. The next 0.28% still earn a higher hourly wage than the doctors’ average, but not by enough to compensate for the shorter career. The remaining 99.58% of soccer players do not earn more than the average salary of a doctor. Actually, most of them do not earn anything at all, and some earn a little if they are in the starting line-up or for each victory.
Having shown that the average soccer player earns neither more than the average doctor nor even the minimum wage, we will now analyze the soccer labor market and try to explain why this 0.14% earn so much money.
Analyzing the soccer labor market
First we focus on soccer labor demand. It is extremely high at very low levels of labor hired, that is, for the players in the First Division and even the Second Division. This is easy to show: European societies are willing to fill stadiums of 50,000 to 90,000 people more than 30 times a year, at prices between 20€ and 200€ per person per game. Demand is therefore huge. But at higher levels of labor hired (2nd Division B, 3rd Division and below) labor demand is very low and close to zero: these games usually have free attendance or charge only through a kind of mandatory lottery participation.
Now consider labor supply. For simplicity we divide the market into two subgroups: the 1,000 players in the First and Second Divisions, and all the rest. Even assuming perfectly inelastic labor supply, wages for the first subgroup are very high because demand is enormous and this type of labor is extremely scarce, whereas wages for the second subgroup are almost zero because demand is low and supply is extremely large.
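A minimal toy version of this two-subgroup story, with entirely made-up demand curves: under a perfectly inelastic (vertical) supply, the wage is simply the demand price evaluated at the fixed number of players.

```python
# Toy wage determination with perfectly inelastic labor supply.
# Demand curves are hypothetical: willingness to pay falls as more
# players are hired, and cannot go below zero.
def demand_price(labor_hired, intercept, slope):
    """Inverse labor demand: willingness to pay per worker."""
    return max(intercept - slope * labor_hired, 0.0)

# Subgroup 1: top-division players (huge demand, tiny supply).
top_supply = 1_000
top_wage = demand_price(top_supply, intercept=2_000_000, slope=700)

# Subgroup 2: everyone else (low demand, massive supply).
rest_supply = 699_000
rest_wage = demand_price(rest_supply, intercept=30_000, slope=0.5)

print(top_wage)   # high wage for the scarce subgroup
print(rest_wage)  # zero: demand is exhausted for the abundant one
```

The parameters are invented, but the mechanism matches the text: scarce supply meeting huge demand yields very high wages; abundant supply meeting thin demand yields wages near zero.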
People think constantly about soccer: they fill large stadiums at very high prices and spend almost 100€ a year on their team's newest shirt. Furthermore, between 20 and 60 percent of all TV viewers watch Champions League games in prime time, and many also follow TV and radio programs devoted exclusively to soccer. In conclusion, people spend a large fraction of their income and time on soccer. Is it unethical that a large fraction of this cake goes to the workers via wages?
If we think this 0.14% of soccer players earn too much, we should say where the money they generate ought to go instead. Would it be more ethical to let billionaire team owners keep a larger fraction? In a non-profit Solow economy, total output goes to capital and labor, so high output translates into high wages (for one club L = 25, so very few workers). For example, in the 2014-2015 season F.C. Barcelona spent 509 million € in total, 288.9 million € of it on players' wages alone; 56.76% of total expenditure goes to its soccer players.
Secondly, if we speak about fairness, we should want a society where inequality arises only among people who had the same life opportunities but differed in how far they excelled in their fields, and dislike inequality coming from differences in opportunities. Playing soccer is not expensive, kids can play it everywhere, and the best clubs, knowing this, have scouts all over the world. This makes the labor market quite competitive: almost every 12-year-old has tried at least once to get into F.C. Barcelona or Real Madrid through the many trials these clubs organize everywhere. The kids who finally succeed must therefore have been better than almost every other kid of their age on the planet. All of this means the wage differences are explained by differences in talent and effort, regardless of race, family income or access to better education. In fact, most of the best soccer players, and thus the highest earners, such as Pelé, Maradona, Ronaldinho, Cristiano Ronaldo and Samuel Eto'o, come from very poor families.
Lastly, the fact that people are starving in a country has nothing to do with the fact that some soccer players earn large wages: those wages are determined by a very large labor demand and a very scarce labor supply for First and Second Division players. The labor demand is determined by society's tastes, so if we think the amount of money generated by soccer is disproportionately huge, we should blame those tastes, and allocate that same time and income to fighting hunger and the other problems we find more pressing.
Every team publishes its wage budget annually. These values are obtained by dividing each division's total wage budget by the number of players in that division.
Liyun Chen ’11 (Economics) is Senior Analyst for Data Science at eBay. She recently moved from the company’s offices in Shanghai, China to its headquarters in San Jose, California. The following post originally appeared on her economics blog in English and in Chinese. Follow her on Twitter @cloudlychen
Variance is an interesting word. In statistics it is defined as the deviation from the center, corresponding to the formula Var(x) = (1/N) Σ (x_i − x̄)², or in matrix form Var(x) = (1/N)(x − x̄·1)ᵀ(x − x̄·1), where 1 is an N×1 column vector of ones. By definition it is the second (order) central moment, i.e. the average squared distance to the center. It measures how much the distribution deviates from its center: the larger, the sparser; the smaller, the denser. This is how it works in the one-dimensional world. Many of you will be familiar with all this.
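As a quick numerical check, both forms of the formula can be verified to agree (pure Python, no libraries; the sample data is arbitrary):

```python
# Variance computed from its definition and from the matrix form;
# both are the (divide-by-N) second central moment.
def variance(xs):
    """Var(x) = (1/N) * sum of squared deviations from the mean."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

def variance_matrix_form(xs):
    """(x - mean*1)'(x - mean*1) / N, written out element-wise."""
    n = len(xs)
    mean = sum(xs) / n
    centered = [x - mean for x in xs]     # x - x_bar * 1
    return sum(c * c for c in centered) / n

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(variance(data))                     # 4.0
assert variance(data) == variance_matrix_form(data)
```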
Variance has a close relative called standard deviation, which is simply the square root of variance: σ = √Var(x). There is also the famous six-sigma methodology, whose name comes from the 6σ coverage of the normal distribution.
Okay, enough on the single dimension case. Let’s look at two dimensions then. Usually we can visualize the two dimension world with a scatter plot. Here is a famous one — old faithful.
Old Faithful is a "cone geyser located in Wyoming, in Yellowstone National Park in the United States (wiki)… It is one of the most predictable geographical features on Earth, erupting almost every 91 minutes." There are about two hundred points in this plot. It is a very interesting graph that can tell you a lot about variance.
Here is the intuition. Try to describe this chart in natural language (rather than statistical or mathematical terms), for example to your 6-year-old kid while you wait for the next eruption at Yellowstone. What would you tell him, given this data set? Perhaps "I bet the longer you wait, the longer the next eruption lasts. Let's count the time!" Then the kid glances at your chart and says, "No. It tells us that if we wait for more than one hour (70 minutes), then the next eruption will be a long one (4-5 minutes)." Which description is more accurate?
Okay… stop playing with kids. We now consider the scientific way. Frankly, which model will give us a smaller variance after processing?
Well, regression first, as always. A strong positive relationship, right? (No causality… just correlation.)
Now we obtain a significantly positive slope, though the R-squared from the linear model is only 81% (could the fit be better?). Let's look at the residuals.
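Since the actual Old Faithful data is not reproduced here, a sketch with a few made-up points mimicking its two clusters shows how the fit, the residuals and the R-squared are computed:

```python
# Simple least-squares fit y = a + b*x, then residuals and R-squared.
# The points below are invented to mimic Old Faithful's two clusters.
waiting  = [50, 54, 55, 57, 80, 82, 85, 88]           # minutes waited
duration = [1.8, 2.0, 1.9, 2.2, 4.3, 4.5, 4.4, 4.7]   # eruption length

n = len(waiting)
mx = sum(waiting) / n
my = sum(duration) / n
# slope and intercept from the usual closed-form OLS solution
b = (sum((x - mx) * (y - my) for x, y in zip(waiting, duration))
     / sum((x - mx) ** 2 for x in waiting))
a = my - b * mx

residuals = [y - (a + b * x) for x, y in zip(waiting, duration)]
ss_res = sum(e * e for e in residuals)
ss_tot = sum((y - my) ** 2 for y in duration)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))  # close to 1 for this tidy fake data
```

On the real data the R-squared is the 81% mentioned above; the fake points here are cleaner, so the fit is tighter.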
The residuals look sparsely distributed… (the ideal residual is white noise, which carries no information). In this residual chart we can roughly identify two clusters, so why don't we try clustering?
Before running any program, let's quickly review the foundations of the K-means algorithm. In a 2-D world we define the center as (x̄, ȳ); the 2-D variance is then the sum of squared distances from each point to that center.
The blue point is the center. No need to worry too much about outliers' impact on the mean… it looks fine for now. Wait… doesn't it look like the starry sky at night? Just a quick digression, and I promise I will get back to the key point.
For a linear regression model we look at the sum of squared residuals: the smaller, the better the fit. For clustering methods we can use the same kind of measurement: the sum of squared distances to the center within each cluster. K-means is computed by numerical iteration, and its goal is to minimize exactly this second central moment (see its loss function). Let's try to cluster these stars into two galaxies.
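A minimal k-means for k = 2, written out by hand to make the loss explicit (a real analysis would use a library implementation; the points below are again invented):

```python
# A minimal k-means (k = 2) that iteratively reduces the
# within-cluster sum of squared distances described above.
def dist2(p, q):
    """Squared Euclidean distance between two 2-D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans2(points, iters=20):
    centers = [points[0], points[-1]]          # naive initialisation
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        labels = [min((0, 1), key=lambda k: dist2(p, centers[k]))
                  for p in points]
        # update step: each center becomes the mean of its cluster
        for k in (0, 1):
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                centers[k] = tuple(sum(c) / len(members)
                                   for c in zip(*members))
    return labels, centers

pts = [(50, 1.8), (54, 2.0), (55, 1.9), (80, 4.3), (82, 4.5), (85, 4.4)]
labels, centers = kmeans2(pts)
print(labels)  # the two eruption regimes separate cleanly
```

Each iteration can only lower (or keep) the within-cluster sum of squares, which is exactly the "second central moment" loss the text refers to.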
After clustering, we can calculate residuals in a similar way: the distance to the center that represents each cluster's position. Then we plot the residuals.
The red points come from K-means, while the blue ones come from the previous regression. They look similar, right?… So, back to the conversation with the kid: both of you are right, with about 80% accuracy.
Shall we do the regression again for each cluster?
Not much improvement. After clustering + regression the R-squared increases to 84% (+3 points). This is because within each cluster it is hard to find any linear pattern in the residuals; the regression slope drops from 10 to 6 and 4 respectively, and each sub-regression delivers an R-squared below 10%, so there is not much information left after clustering. Still, it is certainly better than a simple regression. (The reason we use k-means rather than a simple rule like x > 3.5 is that k-means gives the clustering that is optimal with respect to its loss function.)
Here is another question: why don't we use 3 or 5 clusters? That is mostly about overfitting: there are only 200 points here. With a bigger sample we could try more clusters.
Fair enough. Of course, statisticians won't be satisfied with these findings. The residual chart reveals an important piece of information: the distribution of the residuals is not standard normal (not white noise). They call this heteroscedasticity. It takes many forms; the simplest is residual variance that increases with x. Other cases are shown in the following figure.
The existence of heteroscedasticity makes our model (which is based on the training data set) less efficient. I would say statistical modelling is the process of fighting with the residuals' distribution: if we can diagnose any pattern, there is a way to improve the model. Econometricians like to call the residuals the "rubbish bin", yet in some sense it is also a gold mine. Data is a limited resource… wasting it is a luxury.
Some additional notes…
Residuals and the model: as long as the model is predictive, residuals exist, regardless of the model's type, whether a tree, a linear model or anything else. The residual is just the true Y minus the predicted Y (based on the training data set).
Residuals and the loss function: for ordinary least squares, if you solve it numerically you iterate on the SSR (sum of squared residuals) loss function (which equals the variance of the residuals). In fact many machine learning algorithms rely on similar loss functions, built on first- or higher-order moments of the residuals. From this perspective, statistical modelling is always a fight with the residuals. This differs from what econometricians do, which is why there was a huge debate on the trade-off between consistency and efficiency: fundamentally different beliefs about modelling.
Residuals, frequentists and Bayesians: in the paragraphs above I mainly used the frequentist's language; there was nothing about posteriors. From my understanding, many of the items here would be mathematically equivalent in the Bayesian framework, so it should not matter. I will mention some Bayesian ideas in the following bullets, so read on as you wish.
Residuals, heteroscedasticity and robust standard errors: we love and hate heteroscedasticity at the same time. It tells us that our model is not perfect, but also that there is a chance to improve it. Last century, people tried to offset heteroscedasticity's impact by introducing robust standard errors: heteroscedasticity-consistent standard errors such as Eicker–Huber–White, which change the "sandwich" matrix (bread and meat) used for significance tests (you can play with the sandwich() package in R). Although Eicker–Huber–White improves the variance estimate by re-weighting with the estimated residuals, it does not try to identify any pattern in them. Hence there are methods such as generalized least squares (GLS) and feasible generalized least squares (FGLS) that exploit a linear pattern to reduce the variance. Another interesting idea is the clustered robust standard error, which allows heterogeneity across clusters but constant variance within each cluster. This approach is only valid asymptotically, as the number of groups goes to infinity (otherwise you will get silly numbers, as I did!).
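For the simplest possible case, one regressor and no intercept, the Eicker–Huber–White (HC0) idea can be sketched by hand. The data are invented, and a real analysis would use R's sandwich() or a comparable library routine:

```python
import math

# Sketch of the Eicker-Huber-White (HC0) idea for the simplest case:
# one regressor, no intercept. Here b = Sxy / Sxx, and the "sandwich"
# collapses to sum(x_i^2 * e_i^2) / Sxx^2. The data are made up.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 2.3, 2.8, 4.5, 4.6]

sxx = sum(xi * xi for xi in x)
b = sum(xi * yi for xi, yi in zip(x, y)) / sxx
resid = [yi - b * xi for xi, yi in zip(x, y)]

# classical variance: assumes one common error variance (homoscedastic)
sigma2 = sum(e * e for e in resid) / (len(x) - 1)
var_classical = sigma2 / sxx

# HC0: re-weight by each observation's own squared residual instead
var_hc0 = sum((xi * ei) ** 2 for xi, ei in zip(x, resid)) / sxx ** 2

print(round(b, 3),
      round(math.sqrt(var_classical), 4),
      round(math.sqrt(var_hc0), 4))
```

The two standard errors coincide only when the residual variance is constant; when it is not, HC0 stays consistent while the classical formula does not.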
Residuals and dimension reduction: generally speaking, the more relevant covariates we introduce into the model, the less noise remains, but there is a trade-off with overfitting. That is why we need to reduce dimensions (e.g. via regularization). Moreover, we do not always want to make a prediction; sometimes we want to filter out the significant features, a sort of maximization of the information we can extract from a model (e.g. via AIC or BIC, or by watching how quickly coefficients attenuate as we increase the regularization penalty). Also, regularization is not necessarily tied to train-validation splits; the goals are not the same.
Residuals and the analysis of experimental data: heteroscedasticity does not affect the consistency of the average treatment effect (ATE) estimate in an experiment; that consistency comes from randomization. Still, people are eager to learn more than a simple test-control comparison, especially when the treated individuals are very heterogeneous: they look for heterogeneous treatment effects. Quantile regression may help when a strong covariate is observed… but what can we do when there are thousands of dimensions? Reduce the dimensions first?
Well, the first reaction to "heterogeneous" should be variance, right? Otherwise how could we quantify heterogeneity? There is also a bundle of papers trying to find out whether we can extract more information about treatment effects than the simple ATE. This one, for instance:
Ding, P., Feller, A., and Miratrix, L. W. (2015+). Randomization Inference for Treatment Effect Variation. http://t.cn/RzTsAnl
Víctor Burguete ’11 (International Trade, Finance and Development) is an Economic Researcher and Public Policy Analyst at IESE’s Public-Private Sector Research Center (IESE-PPSRC) in Barcelona. In this post, he shares the process of preparing a policy brief on Spanish policy reforms and provides an overview of the brief’s findings.
Preparing a Policy Brief like this took me over a month. Bear in mind that working in a research institution means getting involved in many projects, so there is usually less time than I would like to devote to any one of them. In my opinion, it is very important to work with an open mind and to continuously look for possible connections among different projects. In the case of this Policy Brief, most of the data (international economic policy recommendations) were collected over the preceding months. In late September I proposed the topic, and the IESE-PPSRC research center decided to inaugurate this series of papers with it. After reviewing the literature (Table 1), I analyzed the data, created some graphs and built the story I wanted to tell. Of course, the final text was revised several times before it was finally published.
“Spain’s response to EC and OECD economic policy recommendations” analyses the overall reformist progress of the Spanish Government from an international perspective. According to the international assessment, Spain ranks as one of the top reformers in the Euro Area and the EU as a whole. A second insight from our Policy Brief is that Spain’s delivery, relative to other countries, accelerated between 2011 and 2013.
Of course, this is the general trend and the Policy Brief offers details on the progress in the 18 policy sub-areas we cover at the SpanishReforms project, including how the reform priorities prescribed to Spain by these institutions have changed over time. Substantial progress is recognized in addressing the financial system reform, mainly in the area of recapitalization and restructuring but also by adopting other financial measures. However, both the OECD and the EC point to active labour market policies and professional services as the main structural reforms lagging behind.
More information is available at www.spanishreforms.com, a new academic, non-governmental website that aims to be a useful reference for those interested in independent, rigorous and up-to-date information about the Spanish economy and its economic policy reforms.
The importance of history in economic development is well-established (Nunn 2009; Spolaore and Wacziarg 2013), but less is known about the specific channels of transmission which drive this persistence in outcomes. Dell (2010) stresses the negative effect of the mita in Latin America, and Nunn and Wantchekon (2011) document the adverse impact of African slavery through decreased trust. But did other colonial arrangements lead to positive outcomes in the long run?
I address this question in my Job Market Paper by analyzing the long-term economic consequences of European missionary activity in South America. I focus on missions founded by the Jesuit Order in the Guarani lands during the seventeenth and eighteenth centuries, in modern-day Argentina, Brazil and Paraguay. This case is unique in that Jesuits were expelled from the Americas in 1767 –following European “Great Power” politics— precluding any continuation effect. While religious conversion was the official aim of the missions, they also increased human capital formation by schooling children and training adults in various crafts. My research question is whether such a one-off historical human capital intervention can have long-lasting effects.
To disentangle national institutional effects from the human capital shock the missions supplied, I use within-country variation in missionary activity in three different countries.
The area under consideration was populated by a single semi-nomadic indigenous tribe, so I can abstract from the direct effects of different pre-colonial tribes (Maloney and Valencia 2012; Michalopoulos and Papaioannou, 2013). The Guarani area also has similar geographic and weather characteristics, though I control for these variables in the estimation.
Using municipal level data for five states (Corrientes and Misiones in Argentina, Rio Grande do Sul in Brazil, and Itapúa and Misiones in Paraguay), I find substantial positive effects of Jesuit missions on human capital and income, 250 years after the missionaries were expelled. In municipalities where Jesuits carried out their apostolic efforts, median years of schooling and literacy levels remain higher by 10-15%. These differences in educational attainment have also translated into higher modern per capita incomes of nearly 10%. I then analyze potential cultural mechanisms that can drive the results. To do so I conduct a household survey and lab-in-the-field experiments in Southern Paraguay. I find that respondents in missionary areas have higher non-cognitive abilities and exhibit more pro-social behavior.
Even though I use country and state-fixed effects as well as weather and geographic controls, Jesuit missionaries might have chosen favorable locations beyond such observable factors. Hence the positive effects might be due to this initial choice and not to the missionary treatment per se.
To address the potential endogeneity of missionary placement, I conduct two empirical tests. The first one is a placebo that looks at missions that were initially founded by the Jesuits but were abandoned early on (before 1659). I can thereby compare places that were initially picked by missionaries with those that actually received the missionary treatment. I find no effect for such “placebo” missions, which suggests that what mattered in the long run is what the missionaries did and not where they first settled.
Second, I conduct a comparison with the neighboring Guarani Franciscan Missions. The comparison is relevant as both orders wanted to convert souls to Christianity, but Jesuits emphasized education and technical training in their conversion. Contrary to the Jesuit case, I find no positive long-term impact on either education or income for Franciscan Guarani Missions. This suggests that the income differences I estimate are likely to be driven by the human capital gains the Jesuits provided.
In addition, I employ an IV strategy, where I use as instruments the distance from early exploration routes and distance to Asuncion. Distance from the exploration routes of Mendoza (1535-1537) and Cabeza de Vaca (1541-1542) serves as a proxy for the isolation of the Jesuit missions (in the spirit of Duranton et al. 2014). Asuncion, in turn, served as a base for missionary exploration during the foundational period, but became less relevant for Rio Grande do Sul after the Treaty of Madrid (1750) transferred this territory to Portuguese hands. For this reason and to avoid the direct capital –and Spanish Empire—effects, I use this variable only for the Brazilian subsample of my data (as in Becker and Woessmann 2009; Dittmar 2011). The first-stage results are strongly significant throughout (with F-statistics well above 10), and the second-stage coefficients for literacy and income retain their sign and significance –appearing slightly larger—in the IV specifications.
Extensions and Mechanisms
To complete the empirical analysis, I examine cultural outcomes and specific mechanisms that can sustain the transmission of human capital from the missionary period to the present. I find that respondents in missionary areas possess superior non-cognitive abilities, as proxied by higher “Locus of Control” scores (Heckman et al., 2006). Using standard experiments from the behavioral literature, I find that respondents in missionary areas exhibit greater altruism, more positive reciprocity, less risk seeking and more honest behavior. I use priming techniques to further investigate whether these effects are the result of greater religiosity –which appears not to be the case.
In terms of mechanisms, my results indicate that municipalities closer to historic missions have changed the sectoral composition of employment, moving away from agriculture and towards manufacturing and services (consistent with Botticini and Eckstein, 2012). In particular, I document that these places still produce more handicrafts such as embroidery, a skill introduced by the Jesuits. People closer to former Jesuit missions also seem to participate more in the labor force and work more hours, consistent with Weber (1978). I also find that indigenous knowledge —of traditional medicine and myths—was transmitted more from generation to generation in the Jesuit areas. Unsurprisingly, given their acquired skills, I find that indigenous inhabitants from missionary areas were differentially assimilated into colonial and modern societies. Additional robustness tests suggest that the results are not driven by migration, urbanization or tourism.
At the Rényi Hour on November 20th, Samantha Cook presented her recent research on the description and categorisation of the global SWIFT (Society for Worldwide Interbank Financial Telecommunication) interbank network. Samantha is currently the Chief Scientist at Financial Network Analytics in Barcelona. Previously, she was a Quantitative Analyst at Google’s Research Group in New York and a professor at Columbia University in New York and Pompeu Fabra University in Barcelona.
The study focused on understanding the underlying structure of a network of messages between financial institutions in different countries. It looked at how the network was affected by various recent economic events and evaluated the robustness of the system over time.
The data set underpinning the study contains standard MT103 SWIFT messages sent between 1 January 2003 and 31 July 2013, a period characterised by extreme economic turmoil. Each message represents a single customer credit transfer from one bank to another. The data is aggregated at the country level.
Samantha showed us different statistical analyses of the data set. The analysis of the data as a complex weighted network was particularly interesting. In the network, each node represented a country, and the edge connecting two nodes was weighted by the number of messages those countries exchanged in a given time period. The resulting network approximately follows a core-periphery structure: some nodes are fully connected with each other (the so-called core), while the others are mostly connected only to a core node; these are the peripheral nodes. Interestingly, events such as the introduction of new regulations or the onset of the financial crisis were clearly reflected in the links, and, even more strikingly, the network structure remained resilient over the period studied. This work showcases a novel approach to understanding the structure of the complex financial system, and the findings may help improve the global service.
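As an illustration only (the country labels and weights below are invented, not taken from the SWIFT data), a core-periphery structure in a small weighted network can be spotted by looking at node degrees:

```python
# Toy core-periphery network: core nodes link to each other with heavy
# weights, peripheral nodes hang off a single core node.
edges = {
    ("US", "GB"): 900, ("US", "DE"): 800, ("GB", "DE"): 850,  # core
    ("US", "UY"): 15, ("GB", "MT"): 10, ("DE", "EE"): 12,     # periphery
}

# degree = number of links per node
degree = {}
for (a, b), weight in edges.items():
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

# in this toy graph, the densely inter-connected core stands out
core = {node for node, d in degree.items() if d >= 3}
print(sorted(core))
```

Real core-periphery detection on the SWIFT network would of course use formal fitting procedures rather than a degree cutoff, but the structural idea is the same: a dense, fully connected core with sparsely attached peripheral countries.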
The discussion also identified opportunities for further research. For example, we discussed why the degree distribution does not behave like those of other related financial networks, and why the number of links decreases while the number of messages shows a clearly increasing trend. These and other emerging questions may provide ideas for further research and modelling work in this area.