© The Financial Times Ltd 2016 FT and 'Financial Times' are trademarks of The Financial Times Ltd.
January 25, 2010 11:39 am
This is the twelfth year the Financial Times has produced a ranking of full-time MBA programmes. To be eligible to participate, a European or US school must be accredited by an international accreditation body such as AACSB, Equis or Amba; it must have a full-time MBA programme that has been running for at least four years; and it must have graduated its first class at least three years ago. Furthermore, classes must have at least 30 students.
The rankings are based on data collected from two main sources, alumni and business schools. This year a total of 156 business schools met the criteria for participation and completed the school survey provided. Some 21,328 alumni from the graduating class of 2006 were then asked to complete an alumni survey, and just over 8,000 responses were submitted.
The FT always surveys graduates three years after they have completed the degree to assess the effect of the MBA on their subsequent career progression and salary growth.
Of the 156 schools, 48 were excluded because of insufficient alumni data. The response threshold that the FT sets is 20 per cent of the entire class with an absolute minimum of 20 responses. The remaining schools were then ranked and the final table shows the top 100.
Three main areas are analysed to create the top 100: alumni salaries and career development; the diversity and international reach of the business school and its MBA programme; and the research capabilities of each school.
Within these areas, 20 criteria are used to determine the rankings. Eight are based on data from the alumni questionnaires: the criteria from “weighted salary (US$)” to “placement success rank”, plus “alumni recommend rank” and “international mobility rank”.
The figures for seven of these eight criteria are based on data collected by the FT over three years. The data gathered for the MBA 2010 survey carry 50 per cent of the total weight. Data from the 2009 and 2008 rankings each carry 25 per cent.
With the exception of salary data, if only two years’ worth of data are available, the weighting is split 60:40 or 70:30 depending on whether the information is from 2009-2008 or 2009-2007. For salary data the weighting is 50:50, based on an assumption that the latest data are likely to be higher than in previous years and may distort the average. “Value for money rank” is based on the MBA 2010 figures only.
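As an illustrative sketch, the multi-year blending described above could be implemented as follows. The function name and the exact pairing of weights to years are assumptions; the FT does not publish its calculation code.

```python
def blend_years(data):
    """Blend up to three years of survey data for one ranking criterion.

    `data` maps survey year -> the school's score for that year,
    e.g. {2010: ..., 2009: ..., 2008: ...}.

    Weights follow the scheme described above: 50/25/25 when all three
    years are available; 60:40 for two consecutive years; 70:30 when a
    year is skipped (e.g. 2009 and 2007). The assignment of the larger
    weight to the more recent year is our assumption.
    """
    years = sorted(data, reverse=True)  # newest first
    if len(years) == 3:
        weights = [0.50, 0.25, 0.25]
    elif len(years) == 2:
        weights = [0.6, 0.4] if years[0] - years[1] == 1 else [0.7, 0.3]
    else:
        weights = [1.0]  # e.g. "value for money" uses 2010 figures only
    return sum(w * data[y] for w, y in zip(weights, years))
```

Note that for salary data the text specifies a flat 50:50 split in the two-year case, which would replace the 60:40 and 70:30 branches.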
The first three criteria in the table examine alumni salaries and include the two most heavily weighted components of the ranking: “weighted salary (US$)” and “salary percentage increase”. Together these contribute 40 per cent of the rank for each school.
The following process is applied to all salary data before they are used in the ranking:
To begin with, salary data of alumni in the non-profit and public service sectors, or who are full-time students, are removed.
Purchasing Power Parity (PPP) rates supplied by the International Monetary Fund are used to convert the remaining salary data to US$ PPP equivalent figures.
After the PPP conversion, the very highest and lowest salaries are excluded before the average salary is calculated for each school.
For larger schools, the average salary is weighted to reflect variations in salaries between different sectors. The weights are derived by calculating the percentage of all respondents working in each sector; this percentage breakdown is then applied to the average salary for each sector at a school to produce an overall average school salary.
The salary data shown on the table are all US$ PPP equivalent figures.
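Put together, the salary processing described above might look like this in outline. The 5 per cent trim fraction, the PPP-rate convention (local currency units per US$) and the function names are assumptions for illustration; the FT does not publish these parameters.

```python
from collections import Counter, defaultdict
from statistics import mean

# Sectors removed in step one; full-time students are also dropped.
EXCLUDED_SECTORS = {"non-profit", "public service"}

def sector_weights(all_responses):
    """Share of all respondents (across every school) working in each sector."""
    counts = Counter(sector for sector, _ in all_responses
                     if sector not in EXCLUDED_SECTORS)
    total = sum(counts.values())
    return {sector: n / total for sector, n in counts.items()}

def school_salary(responses, weights, ppp_rate, trim=0.05):
    """Average US$ PPP salary for one school.

    responses: (sector, salary in local currency) pairs for the school;
    weights:   output of sector_weights, i.e. the overall sector breakdown;
    ppp_rate:  IMF PPP conversion factor (local currency units per US$);
    trim:      fraction of the very highest and lowest salaries to drop
               (the 5 per cent figure is an assumption, not FT's value).
    """
    # 1. remove non-profit / public-service alumni
    kept = [(s, sal) for s, sal in responses if s not in EXCLUDED_SECTORS]
    # 2. convert to US$ PPP equivalents
    kept = [(s, sal / ppp_rate) for s, sal in kept]
    # 3. exclude the very highest and lowest salaries
    kept.sort(key=lambda r: r[1])
    k = int(len(kept) * trim)
    if k:
        kept = kept[k:-k]
    # 4. weight each sector's mean salary by the overall share of
    #    respondents in that sector, renormalised over the sectors
    #    actually present at this school
    by_sector = defaultdict(list)
    for s, sal in kept:
        by_sector[s].append(sal)
    sector_means = {s: mean(v) for s, v in by_sector.items()}
    wsum = sum(weights[s] for s in sector_means)
    return sum(weights[s] / wsum * m for s, m in sector_means.items())
```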
The salary percentage increase is calculated according to the increase in average US$ PPP salary for each school from before alumni started the MBA until 2010. This is a period of four or five years.
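The percentage increase itself is then a straightforward calculation (the function name is ours):

```python
def salary_increase_pct(pre_mba, current):
    """Percentage increase in US$ PPP salary from before the MBA to 2010."""
    return 100.0 * (current - pre_mba) / pre_mba
```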
Eleven of the ranking criteria are based on data from a questionnaire completed by each business school. These include the figures for “employed at three months (%)”, all criteria from “women faculty (%)” to “international board (%)” and from “international experience rank” to “FT doctoral rank”.
The final column in the table, “FT research rank”, is derived from data compiled by the FT. To compile the scores, papers written by the faculty of each school in 40 academic and practitioner journals over the past three years are counted. Each school is awarded points per number of papers. The mark is also weighted for faculty size so that schools with small faculties are not unduly penalised. The research rank contributes 10 per cent of the final score.
After the data have been compiled, the results for each field are converted to Z-scores on a column-by-column basis; that is, a separate set of Z-scores is calculated for each column. Z-scores take into account both the difference between each school’s score and the column average, and the spread of scores between the top and bottom school.
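In the usual statistical sense, a Z-score expresses each school’s result as its distance from the column mean, measured in standard deviations. A minimal sketch of the column-by-column conversion:

```python
from statistics import mean, pstdev

def z_scores(column):
    """Standardise one criterion across all schools: (x - mean) / std dev.

    Population standard deviation is used here; the FT does not specify
    whether it uses the population or sample form.
    """
    mu, sigma = mean(column), pstdev(column)
    return [(x - mu) / sigma for x in column]
```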
Additional research by database consultant Judith Pizer of Jeff Head Associates, Amersham, UK. The FT research rank was calculated using the Scopus database of research literature.