Overview of THE Rankings
The Times Higher Education World University Rankings (THE Rankings) are published annually by Times Higher Education, a specialist higher-education media company based in London. The rankings were published jointly with QS until 2009; THE then developed its own independent methodology in partnership with Thomson Reuters (and later Elsevier) to produce a more data-intensive alternative to the survey-heavy QS approach.
THE ranks over 1,900 universities globally and is widely regarded as particularly credible among research-focused institutions. Its methodology underwent a major revision, announced in 2022 and first applied in the 2024 edition, which introduced a new "Research Quality" pillar and changed how bibliometric data are weighted. THE publishes its full methodology in detail, which is comparatively unusual in the rankings industry.
The current THE methodology distributes weight across five pillars: Teaching (29.5%), Research Environment (29%), Research Quality (30%), Industry (4%), and International Outlook (7.5%).
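These weights sum to 100% and combine pillar scores linearly. A minimal sketch, assuming pillar scores are already normalised to a 0-100 scale (THE's actual cumulative z-score normalisation is not reproduced here):

```python
# Illustrative only: THE's published pillar weights, combined linearly.
# Pillar scores are assumed to be pre-normalised to a 0-100 scale
# (THE's real pipeline uses z-scoring, which this sketch omits).
PILLAR_WEIGHTS = {
    "teaching": 29.5,
    "research_environment": 29.0,
    "research_quality": 30.0,
    "industry": 4.0,
    "international_outlook": 7.5,
}

def overall_score(pillar_scores: dict) -> float:
    """Weighted average of pillar scores (each on a 0-100 scale)."""
    assert abs(sum(PILLAR_WEIGHTS.values()) - 100.0) < 1e-9  # weights sum to 100
    return sum(w * pillar_scores[p] for p, w in PILLAR_WEIGHTS.items()) / 100.0

# A hypothetical university scoring 80 on every pillar scores 80 overall.
print(overall_score({p: 80.0 for p in PILLAR_WEIGHTS}))  # → 80.0
```

A university that excelled only in Teaching (100) and scored zero elsewhere would reach just 29.5 overall, which is why no single pillar can carry an institution into the upper ranks.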
Teaching (29.5%)
THE's Teaching pillar is the most methodologically complex, drawing primarily from its annual Academic Reputation Survey but also incorporating structural data. The sub-indicators are:
- Reputation survey (15%): Academics worldwide are asked to assess universities' teaching environment and reputation for producing excellent graduates. This component shares the prestige-feedback limitations of all reputation surveys.
- Staff-to-student ratio (4.5%): The number of academic staff relative to enrolled students — analogous to QS's faculty-student ratio and subject to similar gaming risks.
- Doctorate-to-bachelor's ratio (2%): The proportion of postgraduate research students relative to undergraduates, as a proxy for research intensity and faculty engagement with advanced work.
- Doctorates awarded per academic staff (5.5%): How many PhD students a university graduates relative to its faculty, measuring the scale and efficiency of its doctoral training.
- Institutional income per academic staff (2.5%): A normalised measure of financial resources, though it conflates research income with teaching investment.
The teaching pillar's combination of survey data and structural metrics is frequently cited as a strength over pure reputation rankings, though critics note that none of these indicators directly measure the quality of instruction in a classroom.
Research Environment (29%)
The Research Environment pillar captures the infrastructure, funding, and culture that make research possible. Its sub-indicators are:
- Reputation survey (18%): THE's survey asks academics specifically about research reputation, separate from teaching. This is a significant source of path-dependence in the rankings.
- Research income (5.5%): External research funding received per academic staff member, normalised by purchasing power. Research funding is often a self-reinforcing indicator — universities with strong reputations attract more grants, which reinforces their rankings, which in turn attracts more grants.
- Research productivity (5.5%): The number of papers published per academic staff member, weighted by subject area to account for different publication norms across disciplines. Research output in fields like physics and biology, where multi-author papers are common, is handled differently from output in humanities fields.
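The per-staff normalisation behind research productivity can be sketched as follows. The subject weights and function name here are hypothetical illustrations, not THE's published values:

```python
# Hedged sketch of a subject-weighted papers-per-staff calculation.
# The subject weights below are invented for illustration; THE's actual
# field-normalisation factors are not published in this form.
SUBJECT_WEIGHTS = {"physics": 0.8, "history": 1.6}  # hypothetical values

def research_productivity(paper_subjects: list, staff_count: int) -> float:
    """Subject-weighted paper count per academic staff member."""
    weighted = sum(SUBJECT_WEIGHTS.get(s, 1.0) for s in paper_subjects)
    return weighted / staff_count

# Two physics papers and one history paper across two staff members:
# the history paper counts for more because its field publishes less.
print(research_productivity(["physics", "physics", "history"], 2))  # → 1.6
```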
Research Quality (30%)
Research Quality is THE's largest single pillar (30%) and the most data-intensive. It is calculated using bibliometric data from Elsevier's Scopus database:
- Citation impact (15%): Measured as field-weighted citation impact (FWCI) — how often a university's papers are cited relative to the global average for papers in the same field, year, and document type. Field-weighting is crucial: it allows a biology paper to be compared fairly with a chemistry paper despite very different typical citation rates.
- Research strength (5%): The volume of a university's output in high-impact journals — those in the top 10% globally by citation percentile.
- Research excellence (5%): The proportion of a university's papers that fall in the top 1% most-cited globally — identifying the very peak of research contribution.
- Research influence (5%): Measured by the h-index, which combines publication volume and citation frequency into a single number. A researcher (or institution) with an h-index of 100 has published at least 100 papers that have each been cited at least 100 times.
The field-weighting methodology is one of THE's genuine methodological advantages over simpler citation counts — it levels the playing field between different academic disciplines.
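The FWCI and h-index calculations described above are simple to state precisely. A hedged sketch, in which the function names and baseline figure are illustrative rather than THE's implementation (THE computes these from Scopus data at scale):

```python
# Illustrative sketches of two Research Quality calculations.
# Function names and example values are assumptions for illustration.

def fwci(citations: int, field_year_baseline: float) -> float:
    """Field-weighted citation impact: actual citations divided by the
    world-average citation count for papers of the same field, year,
    and document type. A value of 1.0 means exactly world average."""
    return citations / field_year_baseline

def h_index(citation_counts: list) -> int:
    """Largest h such that at least h papers have at least h citations."""
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# A paper with 12 citations in a field averaging 6 is twice world average.
print(fwci(citations=12, field_year_baseline=6.0))  # → 2.0
# Four papers have at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))                    # → 4
```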
Industry (4%)
The Industry pillar, worth only 4% of the total, attempts to capture universities' knowledge transfer and commercial engagement activities. It has two sub-indicators:
- Industry income (2%): Research income derived from industry sources, normalised per academic staff. This rewards universities that collaborate with the private sector.
- Patents (2%): A measure of patenting activity, capturing the commercial application of university research through intellectual property creation.
The Industry pillar is THE's smallest, reflecting ongoing debate about whether commercial engagement is an appropriate proxy for a university's social value. Universities specialising in basic rather than applied research, or those in countries with less developed knowledge economies, are structurally disadvantaged here.
International Outlook (7.5%)
International Outlook measures global engagement through three sub-indicators:
- International students ratio (2.5%): The proportion of enrolled students from outside the home country.
- International faculty ratio (2.5%): The proportion of academic staff recruited internationally.
- International co-authorship (2.5%): The proportion of a university's publications co-authored with researchers at international institutions. This is THE's most genuinely research-relevant internationalisation metric, as it captures global scientific collaboration rather than just enrolment geography.
THE's International Diversity Index calculations apply a ceiling to prevent any single over-represented nationality from dominating a university's international proportion — a methodological refinement that QS does not employ.
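The exact ceiling formula is not spelled out above, so the following is only an assumed sketch of how such a cap might work, with a hypothetical 25% per-nationality limit:

```python
# Assumed sketch only: the text states THE caps any single nationality's
# contribution, but the precise formula and cap value are not given.
# The 25% cap below is a hypothetical placeholder.

def capped_international_share(nationality_counts: dict,
                               home_country: str,
                               cap: float = 0.25) -> float:
    """Share of international students, with each foreign nationality's
    contribution capped at `cap` of total enrolment."""
    total = sum(nationality_counts.values())
    return sum(min(n / total, cap)
               for country, n in nationality_counts.items()
               if country != home_country)

# One dominant foreign nationality: 300 of 1,000 students would be a
# 0.30 share, but the cap limits its contribution to 0.25.
print(capped_international_share({"UK": 700, "CN": 300}, "UK"))  # → 0.25
```

The effect is that two universities with the same headline international percentage can score differently if one draws its international students from a single country.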
THE Impact Rankings
Separate from the main World University Rankings, THE publishes annual Impact Rankings that measure universities' contributions to the UN Sustainable Development Goals (SDGs). These rankings use entirely different methodology — research publications about each SDG, stewardship of that goal within the university, and outreach activities — and can produce dramatically different league tables. Universities outside the top 100 of the main THE rankings sometimes appear in the top 10 of specific SDG categories, which illustrates how differently "best university" can be defined depending on the measurement frame.
Comparison with QS
THE and QS share some structural similarities — both use reputation surveys, both use Scopus/Elsevier bibliometric data — but differ in important ways:
- Survey weight: QS gives 50% combined weight to reputation surveys; THE gives approximately 33% combined weight (teaching and research reputation sub-components).
- Citation methodology: THE uses field-weighted citation impact (FWCI), which is more methodologically sound for cross-disciplinary comparison. QS uses raw citations per faculty, which advantages science-heavy universities.
- International indicators: Both include international ratios, but THE's addition of international co-authorship is more research-relevant than pure enrolment demographics.
- Teaching proxies: THE's doctorate-related indicators capture research training; QS's faculty-student ratio is a cruder proxy for teaching quality.
Students should read both rankings while being aware that THE's methodology produces results that tend to favour large research-intensive universities in English-speaking countries, while QS rankings — partly because of the employer survey — sometimes give more credit to universities with strong professional school reputations.