Overview of QS Rankings
The QS World University Rankings (QS WUR) is published annually by Quacquarelli Symonds, a British higher education analytics company. First published in 2004 (initially in partnership with Times Higher Education), QS is today the most widely read global university ranking, with over 100 million website visits annually and significant influence on student application decisions worldwide.
In 2023, QS introduced a substantially revised methodology, adding two new indicators: Sustainability and Employment Outcomes. The sections below focus on the six longstanding core metrics, each weighted differently. Understanding these weights is the single most important step in interpreting QS rankings intelligently.
QS ranks approximately 1,500 universities globally (from over 5,000 evaluated), publishing results typically in June for the following academic year's cycle.
Academic Reputation (40%)
The Academic Reputation Score is the single largest component of the QS ranking, carrying 40% of the total weight. It is derived from QS's annual Global Academic Survey, which polls academics worldwide and asks them to identify the universities they consider most excellent in their field of expertise. Respondents are asked not to nominate their own institution.
In recent cycles, QS has surveyed over 130,000 academics. The responses are then weighted by region and discipline to prevent any single geographic area or subject area from dominating. Universities receive a score from 0 to 100 based on how frequently and how consistently they are nominated.
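QS does not publish its exact weighting scheme, but the principle of regional balancing can be illustrated with a simple sketch: responses from over-represented regions are down-weighted so that each region contributes in proportion to a target share. All numbers and names here are invented for illustration, not QS's actual scheme.

```python
from collections import Counter

def region_weights(responses, target_share):
    """Per-response weights so each region's total weight matches its
    target share of the survey, regardless of raw response counts."""
    counts = Counter(r["region"] for r in responses)
    n = len(responses)
    # weight = (target share of total) / (actual share of total)
    return {region: (target_share[region] * n) / counts[region]
            for region in counts}

# Invented example: Europe is over-represented in raw responses.
responses = ([{"region": "Europe"}] * 60
             + [{"region": "Asia"}] * 30
             + [{"region": "Americas"}] * 10)
target = {"Europe": 0.4, "Asia": 0.4, "Americas": 0.2}
w = region_weights(responses, target)
# Each European response now counts for less than each Asian one.
```

The effect is that a flood of nominations from one region cannot drown out sparser responses from another.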
The academic reputation score is both the most influential and the most criticised component of QS. Critics argue it measures historical prestige more than current quality — famous universities from the English-speaking world receive nominations from academics who studied there decades ago, creating a self-reinforcing prestige loop. Newer institutions, or institutions based in non-English-speaking countries, are structurally disadvantaged regardless of their actual educational quality.
Despite its limitations, reputation surveys do capture something real: the opinions of the research community itself about which institutions produce significant scholarly work and train the next generation of researchers.
Employer Reputation (10%)
The Employer Reputation Score contributes 10% of the total QS score. It derives from QS's Global Employer Survey, which asks employers — from multinational corporations to SMEs — to identify the universities from which they most prefer to recruit graduates. Over 70,000 employer responses feed into recent cycles.
Like the academic reputation survey, the employer survey is subject to prestige bias and geographic skewing. Employers in one country are better positioned to assess graduates from local institutions, but their responses still influence global rankings. QS attempts to correct for this through regional weighting.
The employer reputation score is particularly influential for students motivated by immediate career outcomes. Universities strong in professional fields — law, business, engineering — sometimes score higher on employer reputation than on academic reputation, producing interesting divergences in overall scores.
Faculty-Student Ratio (20%)
The Faculty-Student Ratio indicator accounts for 20% of the QS score and is used as a proxy for teaching quality and access to faculty. It is calculated by dividing the total number of enrolled students by the number of academic staff. A lower ratio (fewer students per staff member, i.e. more staff per student) produces a higher score.
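The calculation itself is a single division; a minimal sketch with invented numbers (QS then scales institutions' ratios onto a 0-100 indicator score, which is omitted here):

```python
def faculty_student_ratio(students, academic_staff):
    """Students per member of academic staff; lower is better."""
    return students / academic_staff

# Invented example: 20,000 students taught by 2,000 academic staff
# gives 10 students per staff member.
ratio = faculty_student_ratio(20_000, 2_000)
# → 10.0
```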
This is the indicator most easily improved through strategic hiring or enrolment management, making it a frequent target for gaming. A university could improve its ratio by hiring more adjunct or research-only staff who count as faculty but do not teach, or by reducing undergraduate intake while maintaining faculty numbers.
The metric also systematically disadvantages institutions in regions where faculty-to-student ratios are culturally and structurally different — Indian universities, for example, often have ratios that reflect very large student populations rather than low teaching quality.
Citations per Faculty (20%)
Citations per Faculty (CpF) accounts for 20% of the QS score and is the primary research quality indicator. It is calculated by dividing the total number of citations received by a university's publications over a five-year window by the number of faculty members. Data is sourced from Elsevier's Scopus database.
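The per-capita calculation described above can be sketched as follows, with invented numbers (QS's actual pipeline also normalises by field and excludes self-citations, which this sketch does not attempt):

```python
def citations_per_faculty(total_citations, faculty_count):
    """Citations to the institution's publications over a five-year
    window, divided by the number of faculty members."""
    return total_citations / faculty_count

# Invented example: 150,000 citations across 3,000 faculty members.
cpf = citations_per_faculty(150_000, 3_000)
# → 50.0
```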
Citation-impact metrics reward universities where a relatively small faculty produces highly cited research. This can favour specialist institutes or universities that concentrate their faculty in high-citation fields (such as life sciences or materials science) over those with large humanities faculties, where citation practices differ fundamentally.
Crucially, CpF is a per-capita metric, which somewhat levels the playing field between large and small universities. A 500-faculty institute where every paper attracts significant citations can outscore a 10,000-faculty mega-university where most output goes uncited.
International Faculty and Student Ratios (10%)
Two indicators — International Faculty Ratio (5%) and International Student Ratio (5%) — together contribute 10% of the total QS score. They measure the proportion of non-domestic faculty and students at an institution, treating internationalisation as a marker of global relevance and diversity.
These metrics systematically advantage geographically small or linguistically accessible countries (Singapore, Hong Kong, Switzerland, the UK) over large countries like China, India, or the United States, where even excellent universities draw predominantly domestic students. NUS Singapore and ETH Zurich both benefit significantly from international ratios that Harvard or MIT structurally cannot match.
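Putting the six indicators together: once each indicator has been normalised to a 0-100 score, the overall QS score is a weighted sum using the weights described in the sections above. A minimal sketch (the normalisation step itself is omitted, and the example scores are invented):

```python
# Weights of the six classic QS indicators, as described above.
WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty": 0.05,
    "international_students": 0.05,
}

def overall_score(indicator_scores):
    """Weighted sum of normalised (0-100) indicator scores."""
    return sum(WEIGHTS[k] * indicator_scores[k] for k in WEIGHTS)

# Invented example scores for a hypothetical university.
scores = {
    "academic_reputation": 90.0,
    "employer_reputation": 80.0,
    "faculty_student_ratio": 70.0,
    "citations_per_faculty": 60.0,
    "international_faculty": 95.0,
    "international_students": 85.0,
}
# 0.4*90 + 0.1*80 + 0.2*70 + 0.2*60 + 0.05*95 + 0.05*85 = 79.0
```

The arithmetic makes the structural point concrete: the two reputation surveys alone move the overall score five times as much, point for point, as either international ratio.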
QS Subject Rankings
Subject Rankings published by QS cover over 55 individual academic disciplines, from Accounting & Finance to Theology, Divinity & Religious Studies. They are compiled using a subset of the global indicators, weighted differently for research versus professional fields.
Subject rankings typically weight academic reputation more heavily (up to 50% in some subjects) and omit indicators that don't apply at the discipline level, such as international ratios. They also incorporate an H-Index metric for research-intensive subjects, rewarding sustained citation impact across a faculty's career output.
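The H-Index itself is straightforward to compute: a set of papers has index h if h of them have received at least h citations each. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i  # the i most-cited papers all have >= i citations
        else:
            break
    return h

# Example: five papers cited [10, 8, 5, 4, 3] times → h = 4
# (four papers have at least 4 citations each, but not five with
# at least 5).
```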
For students who know their intended field of study, QS subject rankings are considerably more decision-relevant than overall rankings. A university ranked #150 overall might rank #12 in Architecture or #25 in Petroleum Engineering — information that could entirely change an application strategy.
Criticisms of the QS Methodology
QS is the most commercially oriented of the major rankings, and it faces substantial methodological criticism from academic researchers:
- Survey dominance: With 50% of the score from reputation surveys, QS rankings are heavily influenced by historical brand recognition rather than measurable institutional quality. This creates strong path-dependency: prestigious universities stay prestigious regardless of current performance.
- Non-English bias: Scopus indexes English-language journals more comprehensively than journals in other languages, systematically disadvantaging universities in Germany, France, China, and the Arab world even when their research is excellent.
- Faculty-student ratio gaming: The 20% weight given to faculty-student ratios creates incentives for universities to count all research staff as "faculty" regardless of teaching involvement.
- Paid relationships: QS generates revenue through consulting services sold to universities seeking to improve their rankings. This commercial relationship creates real or perceived conflicts of interest.
- Limited scope: QS rankings tell students almost nothing about undergraduate teaching quality, campus experience, student support services, or graduate outcomes beyond employer reputation.
These criticisms do not make QS rankings useless — but they make it essential to use them alongside other sources of information rather than treating them as definitive verdicts on educational quality.