This is the 10th year that the Financial Times has published its ranking of executive MBA programmes – part-time MBA degrees for senior working managers. The ranking aims to give a thorough assessment of the programmes submitted by business schools worldwide; it also considers the schools themselves, and their alumni.
Two sets of online surveys are used to compile the results, once business schools have shown they meet the criteria for inclusion. The first is completed by the schools themselves, and the second by alumni who graduated three years ago. This year, the graduating class in question was that of 2007.
For schools to remain eligible, a 20 per cent response rate is required from alumni, with a minimum of 20 responses for schools with fewer than 100 alumni in the graduating class. During the process, schools must not make contact with their alumni to encourage them to participate.
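The eligibility rule above can be sketched in a few lines of Python. This is a hypothetical helper for illustration, not FT code; the function name and signature are assumptions.

```python
def school_is_eligible(responses: int, class_size: int) -> bool:
    """Apply the FT's stated threshold: a 20 per cent alumni response
    rate, with a floor of 20 responses for graduating classes of fewer
    than 100 alumni. (Illustrative sketch, not the FT's own code.)"""
    if class_size < 100:
        return responses >= 20
    return responses >= 0.2 * class_size
```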
The ranking has continued to increase in popularity. In total, 121 business schools took part – eight more than last year – and a total of 4,621 responses were submitted by alumni, a response rate of 55 per cent.
Once the surveys are closed, data from alumni questionnaires are used to determine positions in five of the 16 criteria in the ranking, from “Salary today (US$)” to “Aims achieved”. The figures for these criteria include information collated by the FT over three years.
The data collected in 2010 carry 50 per cent of the total weight. Statistics from the 2009 and 2008 rankings then account for 25 per cent of the total weight each. If only two years of data are available for a school, the ratio is 60 per cent from this year’s survey and 40 per cent from the 2009 survey.
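The three-year blending described above amounts to a weighted average over however many years of data a school has. A minimal sketch (hypothetical helper, not FT code):

```python
def blended_average(values_by_year: dict) -> float:
    """Blend up to three years of a criterion's figures using the
    stated weights: 50/25/25 with three years of data, 60/40 with two,
    and 100 per cent with one. Keys are years, values are that year's
    average for the criterion. (Illustrative sketch.)"""
    years = sorted(values_by_year, reverse=True)  # newest year first
    weights = {3: [0.5, 0.25, 0.25], 2: [0.6, 0.4], 1: [1.0]}[len(years)]
    return sum(w * values_by_year[y] for w, y in zip(weights, years))
```

For example, a school with averages of 100, 80 and 60 across 2010, 2009 and 2008 would blend to 85.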
The first four criteria in the table examine the salaries and career progression of alumni between starting their EMBA and now – typically five years. These criteria contribute 50 per cent of the final score.
To calculate the figure presented in “Salary today (US$)”, the salaries of alumni working in the non-profit and public service sectors, or who are full-time students, are removed.
Purchasing power parity rates supplied by the International Monetary Fund are used to convert the remaining salary data to US$ PPP equivalent figures. (These are rates of currency conversion that iron out differences in purchasing power between currencies, so that alumni salaries can be compared meaningfully.)
After this conversion, the highest and lowest pay packets are excluded before the average wage is calculated for each school. The table shows the US$ PPP figures. The salary percentage increase, “Salary increase %”, is calculated according to the differences in average wage for each school from before alumni started their EMBAs until now.
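The salary steps above — PPP conversion, trimming the extremes, averaging, then comparing before and after — can be sketched as follows. These are illustrative helpers under assumed names, not the FT's implementation; `ppp_rate` stands in for the IMF conversion factor.

```python
def trimmed_average_salary(salaries_local: list, ppp_rate: float) -> float:
    """Convert local-currency salaries to US$ PPP, drop the single
    highest and lowest, and average the remainder. Assumes at least
    three responses. (Illustrative sketch of the steps described.)"""
    usd_ppp = sorted(s / ppp_rate for s in salaries_local)
    trimmed = usd_ppp[1:-1]  # exclude highest and lowest pay packets
    return sum(trimmed) / len(trimmed)

def salary_increase_pct(avg_before: float, avg_now: float) -> float:
    """'Salary increase %': change in a school's average wage from
    before alumni started the EMBA until now."""
    return 100.0 * (avg_now - avg_before) / avg_before
```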
The next two criteria measure the career success of alumni before and after the EMBA. “Career progress” quantifies changes in the level of seniority and the size of the company in which alumni now work, versus before graduating. “Work experience” takes into account the seniority of alumni, the size of their employer, the length of time they remained with the company and any international work experience – all before they began the EMBA.
The fifth criterion, “Aims achieved”, assesses the extent to which the school has enabled respondents to fulfil their goals for doing an EMBA. It carries 5 per cent of the total weight.
The next eight criteria, from “Women faculty (%)” to “Languages”, are calculated using data from the business school survey. They measure the diversity of staff, board members and EMBA students at each school, and the international reach of the EMBA programme. They contribute 25 per cent of the final rank.
Of the final three criteria, two are based on data from the business school survey. The last criterion in the table, “FT research rank”, relates to the number of articles published by faculty members in 45 international academic and practitioner journals (a list updated earlier this year after conferring with all participating business schools). The period for which publications are assessed is January 2007 to August 2010. For each publication, a point (or a fraction, if there is more than one author) is awarded to the school where the author is now employed.
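The point-splitting rule for co-authored articles can be made concrete with a short sketch. The helper below is hypothetical and assumes each author is identified by the school that currently employs them:

```python
def add_publication_points(points: dict, authors_current_schools: list) -> dict:
    """Credit one point per article, split equally among its authors,
    to the school where each author is now employed. (Illustrative
    sketch of the scoring rule described above.)"""
    share = 1.0 / len(authors_current_schools)
    for school in authors_current_schools:
        points[school] = points.get(school, 0.0) + share
    return points
```

A two-author paper thus contributes half a point to each author's current school.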
The final measure combines the absolute number of publications with the number of publications adjusted for the size of the faculty at a school.

After all calculations have been applied to the data for each of the different ranking criteria, Z-scores are applied column by column. That is, for each criterion, separate Z-scores (which take into account the differences in score between each school in that column and the spread of scores between the top and bottom school) are calculated. The Z-scores in each column are multiplied by the column weights (see the table key) and then added together to give a final score for each school – its overall rank for 2010.
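The standardise-then-weight aggregation described above can be sketched briefly. This is an illustration of the general Z-score technique, not the FT's own code; function names and the population standard deviation are assumptions.

```python
from statistics import mean, pstdev

def zscores(column: list) -> list:
    """Standardise one criterion's scores across all schools:
    subtract the column mean and divide by the column's spread."""
    m, sd = mean(column), pstdev(column)
    return [(x - m) / sd for x in column]

def final_scores(columns: list, weights: list) -> list:
    """Z-score each criterion column, multiply by its weight (as in
    the table key), and sum across criteria to give each school's
    overall score. (Illustrative sketch of the aggregation.)"""
    z_cols = [zscores(col) for col in columns]
    return [sum(w * z[i] for w, z in zip(weights, z_cols))
            for i in range(len(columns[0]))]
```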
All the criteria that contribute to the final ranking have underlying Z-scores, but in the table, the data are presented as US$ equivalents, ranks, percentages, or – in the case of languages – the number of languages alumni are required to speak fluently on graduation.