How to read the tables

This year sees the fifth edition of the Financial Times ranking of Executive MBA programmes – MBA degrees for working managers.

The ranking is based on the data collated from two sets of questionnaires – one for the business schools and the second for alumni who graduated three years ago.

Of the 7,000 alumni who were contacted about the ranking this year, 3,000 completed the online questionnaire, a response rate of 43 per cent.

The growing popularity of EMBA programmes meant that 95 business schools took part in the survey this year, compared with 88 last year.

Of these 95 schools, 85 had a sufficient response rate (20 per cent of alumni and a minimum of 20 responses in total) to be considered for the ranking of the top 75 programmes.
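
To illustrate the eligibility rule described above, the short Python sketch below checks whether a school clears both thresholds. The function name and figures are hypothetical and are not taken from the FT's own systems.

```python
def eligible_for_ranking(alumni_contacted, responses_received,
                         min_rate=0.20, min_responses=20):
    """Return True if a school meets both response thresholds.

    A school qualifies only if at least 20 per cent of its contacted
    alumni replied AND it received at least 20 responses in total.
    """
    if alumni_contacted == 0:
        return False
    rate = responses_received / alumni_contacted
    return rate >= min_rate and responses_received >= min_responses

# Hypothetical example: 120 alumni contacted
print(eligible_for_ranking(120, 26))   # True: 21.7% rate and 26 responses
print(eligible_for_ranking(120, 18))   # False: fewer than 20 responses
```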

The final position of a school in the table is determined by its performance within each of the criteria that form the table.

These criteria are grouped into three main areas: the career progress of the alumni; the school’s diversity and the international experience it offers; and the school’s intellectual output and research.

The collated data, from the school and alumni questionnaires, is used to calculate a school’s performance within each of the table’s 16 criteria.

These results are standardised so that the 16 criteria can be compared with one another.

To standardise the criteria, we convert the results within each of them into z-scores, which enables us to position each school, within any criterion, relative to all the other schools.

Each criterion has an associated weight (given in the key to the table), which is a reflection of its relative importance.

The z-scores are then multiplied by these weights.

The sum of the weighted z-scores across all 16 criteria determines the school’s position in the final table.
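
The z-score weighting can be sketched as follows. The criterion names, weights and figures below are invented purely for illustration; the real FT calculation covers 16 criteria and will differ in detail.

```python
import statistics

def weighted_z_scores(schools, weights):
    """Convert each criterion's raw values to z-scores, weight them,
    and sum across criteria to produce a final score per school.

    `schools` maps a school name to {criterion: raw value};
    `weights` maps each criterion to its weight (fractions summing to 1).
    """
    criteria = list(weights)
    # Mean and standard deviation of each criterion across all schools.
    stats = {}
    for c in criteria:
        values = [data[c] for data in schools.values()]
        stats[c] = (statistics.mean(values), statistics.stdev(values))

    scores = {}
    for name, data in schools.items():
        total = 0.0
        for c in criteria:
            mean, sd = stats[c]
            z = (data[c] - mean) / sd   # position relative to all other schools
            total += weights[c] * z     # weight reflects relative importance
        scores[name] = total
    return scores

# Hypothetical two-criterion example
weights = {"salary_today": 0.6, "aims_achieved": 0.4}
schools = {
    "School A": {"salary_today": 150_000, "aims_achieved": 80},
    "School B": {"salary_today": 120_000, "aims_achieved": 90},
    "School C": {"salary_today": 135_000, "aims_achieved": 85},
}
for school, score in sorted(weighted_z_scores(schools, weights).items(),
                            key=lambda kv: kv[1], reverse=True):
    print(school, round(score, 3))
```

The school with the highest sum of weighted z-scores sits at the top of the table.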

All criteria are presented either as ranks (with the top-performing school ranked number one), as percentages or, in a few instances, as raw data (“salary today” and “languages required”, for example).

The main driver of the table is the career success of alumni, which takes into account their career progress from the period before they started their EMBA to the present day, usually a span of about five years.

The two main components of this are salary levels today and the percentage salary increase the alumni have experienced between the two dates.

The older the student, the higher the expected final salary would be, though the percentage increase is usually higher among younger alumni.

As salaries are reported in different currencies, they are standardised by conversion to US dollars using purchasing power parity (PPP) exchange rates from the World Bank. Applying PPP rates to alumni salaries lets us compare respondents' purchasing power and standard of living on a common US dollar basis.

This allows for more accurate international comparisons. Extraordinarily high salaries are excluded before the averages are calculated, as are salaries reported by alumni working in the non-profit sector and by those who are still students.
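
A rough sketch of this salary treatment is shown below. The PPP conversion factors, field names and the exclusion cut-off are placeholders, since the article does not specify the exact rates or threshold used.

```python
def average_ppp_salary(responses, ppp_factors, outlier_cap):
    """Average alumni salaries in PPP-adjusted US dollars.

    `responses` holds local-currency salary, currency code, sector and
    student status; `ppp_factors` maps a currency code to local currency
    units per PPP dollar (World Bank style).  Non-profit workers,
    current students and salaries above `outlier_cap` (PPP dollars)
    are excluded before averaging.
    """
    usable = []
    for r in responses:
        if r["sector"] == "non-profit" or r["is_student"]:
            continue
        salary_ppp = r["salary_local"] / ppp_factors[r["currency"]]
        if salary_ppp > outlier_cap:   # drop extraordinarily high salaries
            continue
        usable.append(salary_ppp)
    return sum(usable) / len(usable) if usable else None

# Hypothetical PPP factors (local currency units per international dollar)
ppp = {"USD": 1.0, "GBP": 0.70, "INR": 20.0}
alumni = [
    {"salary_local": 140_000,   "currency": "USD", "sector": "finance",    "is_student": False},
    {"salary_local": 90_000,    "currency": "GBP", "sector": "consulting", "is_student": False},
    {"salary_local": 3_000_000, "currency": "INR", "sector": "industry",   "is_student": False},
    {"salary_local": 60_000,    "currency": "USD", "sector": "non-profit", "is_student": False},
]
print(round(average_ppp_salary(alumni, ppp, outlier_cap=1_000_000), 2))
```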

The career progression factors also include measurements of the change in the level of seniority and the size of the company in which the alumnus now works.

We also rank alumni on their work experience prior to the EMBA. All of these career progression components, represented in the first four fields of the alumni survey, account for 50 per cent of the final marks.

The “Aims Achieved” criterion determines the extent to which the school has enabled a participant to fulfil his or her goals or reasons for doing an EMBA and carries 5 per cent of the total weighting.

All of the above criteria are compiled from data collected by the FT over three years.

The data collected this year (from the EMBA class that graduated in 2002) carries 50 per cent of the total weight.

Data from the 2004 and 2003 surveys are each given 25 per cent of the total weight.

If only two years’ worth of data is available, then the ratio is 60 per cent from this year’s survey and 40 per cent from the 2004 survey.
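
This year-weighting scheme can be written as a small helper. The survey values below are invented; the function simply implements the 50/25/25 and 60/40 splits described above.

```python
def blended_criterion(value_2005, value_2004=None, value_2003=None):
    """Blend a criterion value across survey years.

    With three years of data: 50% this year (2005), 25% each for the
    2004 and 2003 surveys.  With only two years: 60% this year and
    40% for 2004.  With one year, the value is used as-is.
    """
    if value_2004 is not None and value_2003 is not None:
        return 0.50 * value_2005 + 0.25 * value_2004 + 0.25 * value_2003
    if value_2004 is not None:
        return 0.60 * value_2005 + 0.40 * value_2004
    return value_2005

print(blended_criterion(100, 90, 80))  # 0.5*100 + 0.25*90 + 0.25*80 = 92.5
print(blended_criterion(100, 90))      # 0.6*100 + 0.4*90  = 96.0
```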

The eight criteria, from and including “Women Faculty (%)” to “Languages”, measure the diversity among a school’s students, faculty and board members. Combined, these criteria account for 25 per cent of a school’s final score.

The final three criteria measure the school’s performance in research and account for a fifth of the total weighting.

The “FT research rank” is a rating based on the number of articles published in 40 international refereed academic and practitioner journals that were chosen by the business schools.

The period over which we assess publications is January 2002 to June 2005. For each publication, a point (or a fraction, if there is more than one author) is awarded to the school where the faculty member is currently employed.

The final index is a weighted sum of the absolute number of publications and the publications adjusted for faculty size (the number of articles per faculty member).
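
The publication scoring might be sketched as below. The faculty count and the 50/50 split between absolute and per-faculty points are placeholders, as the article does not state the exact blend used.

```python
def research_index(articles, faculty_size, abs_weight=0.5, per_faculty_weight=0.5):
    """Blend a school's absolute publication points with points per
    faculty member.

    `articles` holds (authors_at_school, total_authors) pairs: each
    article is worth one point split equally among its authors, and the
    school is credited with its own authors' share.
    """
    points = sum(at_school / total for at_school, total in articles)
    per_faculty = points / faculty_size
    return abs_weight * points + per_faculty_weight * per_faculty

# Hypothetical school: two sole-authored articles plus one article where
# one of three co-authors is on its faculty, with 80 faculty members.
print(round(research_index([(1, 1), (1, 1), (1, 3)], faculty_size=80), 3))
```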

Additional research by Wai Kwen Chan and Ursula Milton. Database consultant: Judith Pizer of Jeff Head Associates, Amersham, UK.
