The Ranking Game: How Universities Manipulate Numbers
University rankings were designed to evaluate institutions. In practice, they have partly become systems that institutions learn to optimise — sometimes in ways that improve real quality, and sometimes in ways that merely improve scores. The distinction between genuine improvement and strategic gaming is central to any serious assessment of what rankings tell us.
Gaming strategies are not theoretical concerns. They have been documented extensively by academic researchers, investigated by journalists, and acknowledged, at least partially, by ranking publishers themselves. The QS World University Rankings, [[term:times-higher-education-rankings]], and other major rankings have all introduced methodological safeguards specifically in response to gaming behaviour identified in their own data.
Understanding common gaming strategies helps students read rankings more critically, and helps institutions judge whether a competitor's rise reflects genuine improvement or mere metric optimisation.
English-Language and Western Bias
One of the most fundamental and widely discussed limitations of global university rankings is their structural bias toward English-language institutions and Western academic traditions. This bias operates through several channels:
- Bibliometric databases: Both Elsevier's Scopus and Clarivate's Web of Science index English-language journals more comprehensively than journals published in other languages. A high-quality paper published in a Chinese, German, Arabic, or Portuguese journal may receive less citation credit than an equivalent paper in an English-language journal, simply because fewer researchers find it through database searches.
- Reputation survey composition: The academic world is not evenly distributed across languages. English-language academics, particularly those trained at or employed by prominent English-language universities, disproportionately populate the respondent pools used by QS and THE. Their nominations naturally reflect familiarity with institutions in their own networks.
- Research norms: Western academic publishing norms — journal articles rather than books, natural sciences citation practices rather than humanities practices, individual rather than collective authorship — are built into the Citation Impact metrics in ways that systematically disadvantage universities where different research traditions predominate.
The consequence is that a Chinese university, a Brazilian university, or an Egyptian university may be producing research of equal quality to a top-100 British or American university while ranking much lower because its research circulates in different bibliometric ecosystems. Students from non-English-speaking countries should be aware that their home country's top universities may be substantially underranked relative to actual quality.
The Survey Problem
The Academic Reputation Score derived from reputation surveys, which long carried a 40% weight in the QS ranking and still carries 30% after the 2024 methodology revision, is subject to a fundamental epistemological problem: most academics cannot meaningfully evaluate the quality of institutions they have no direct experience with. When asked to nominate "excellent universities," respondents typically nominate universities they attended, visited, collaborated with, or know by historical reputation.
This creates a strong Matthew effect (the rich get richer): universities with high reputations in 1990 attract nominations that maintain high reputations in 2025, even if other institutions have overtaken them in measurable research quality. The survey essentially encodes historical prestige rather than current performance.
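A toy simulation makes the dynamic concrete. The sketch below is a minimal Python model, assuming a made-up nomination rule in which respondents choose mostly by existing reputation; the two institutions and all the numbers are hypothetical, and nothing here reflects QS's or THE's actual survey mechanics.

```python
import random

# Toy model of rich-get-richer dynamics in a reputation survey.
# Institutions, starting values, and the nomination rule are all
# hypothetical; this is not QS's or THE's actual survey mechanics.
random.seed(42)

# "Old Elite U" starts with a large stock of historical prestige;
# "Rising U" now has the higher measurable research quality.
reputation = {"Old Elite U": 100.0, "Rising U": 10.0}
quality = {"Old Elite U": 0.8, "Rising U": 1.0}

def run_survey_round(n_respondents: int = 1000) -> None:
    """One survey round: each respondent nominates a university with
    probability driven mostly by last round's reputation (familiarity)
    and only weakly by current quality."""
    names = list(reputation)
    weights = [0.9 * reputation[u] + 10.0 * quality[u] for u in names]
    for _ in range(n_respondents):
        choice = random.choices(names, weights=weights)[0]
        reputation[choice] += 1  # each nomination feeds next year's prestige

for year in range(1, 11):
    run_survey_round()
    total = sum(reputation.values())
    shares = {u: f"{100 * r / total:.1f}%" for u, r in reputation.items()}
    print(f"year {year}: reputation shares: {shares}")
```

Running it shows the incumbent's share of nominations settling at roughly its initial advantage: the lower-quality institution keeps collecting most nominations in every round, which is precisely the rich-get-richer pattern the surveys encode.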
Academic research by Ellen Hazelkorn, Philip Altbach, and others has shown that reputation surveys converge dramatically on a small set of historically elite institutions — the same universities appear at the top of reputation surveys across all major rankings because they are the ones most widely known, not necessarily because they are the most excellent. New institutions, non-English institutions, and institutions without large international alumni networks are systematically underrepresented in nomination counts.
Rewarding Prestige Over Teaching Quality
Perhaps the most consequential criticism of global university rankings is what they don't measure: the quality of undergraduate education. No major global ranking includes a direct measure of how well universities teach their students. Student satisfaction surveys, graduate learning outcomes, assessment quality, pedagogical innovation, and rates of academic support for struggling students are absent from every major ranking methodology.
The implication is perverse: a university can rank in the global top 10 while systematically neglecting its undergraduates, as long as it produces high volumes of citations, attracts elite researchers, and maintains its reputation with academic survey respondents. Journalistic investigations (most prominently around the US News rankings) and academic studies have documented high-ranked universities where undergraduate teaching quality and actual faculty-student contact hours are significantly lower than their ranking position implies.
Research Output — which is directly measured — may actually compete with teaching quality rather than complement it. Research-active faculty at research-intensive universities have strong incentives to prioritise their research and doctoral students over undergraduate teaching, particularly at institutions where promotion decisions depend primarily on publications.
Impact on University Behaviour
Rankings do not merely describe institutional quality — they actively shape institutional behaviour. University presidents and provosts make resource allocation decisions, hiring choices, and strategic priorities with ranking implications explicitly in mind. This is a significant unintended consequence of what began as consumer information.
Documented behavioural responses to ranking incentives include:
- Restricting undergraduate enrolment to improve student-to-faculty ratios, reducing access to higher education (the sketch after this list shows the arithmetic)
- Hiring "star" researchers primarily for citation or Nobel Prize value rather than teaching contribution
- Reducing investment in non-indexed research (arts, humanities, professional development) in favour of research that generates Scopus/WoS citations
- Merging departments or creating joint appointments to claim more "international faculty" for internationalisation indicators
- Launching aggressive lobbying campaigns to increase their academic reputation survey scores
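The enrolment-restriction tactic in the first bullet is pure arithmetic, as the following sketch shows. Everything in it is hypothetical: the 0-100 scoring function, the 20% composite weight, and the institution's figures are invented for illustration and do not correspond to any publisher's actual methodology.

```python
# Toy illustration of ratio gaming. The scoring function, the 20%
# composite weight, and the institution's figures are all invented;
# they do not mirror any real ranking's methodology.

def faculty_student_score(faculty: int, students: int,
                          best_ratio: float = 0.25) -> float:
    """Score the faculty-to-student ratio on a 0-100 scale, where a
    ratio of 1 faculty member per 4 students earns full marks."""
    return min(100.0, 100.0 * (faculty / students) / best_ratio)

faculty = 2_000

# Before: 30,000 undergraduates, a ratio of 1:15.
before = faculty_student_score(faculty, 30_000)
# After the cap: 20,000 undergraduates, a ratio of 1:10. Teaching
# capacity is unchanged; access has simply been restricted.
after = faculty_student_score(faculty, 20_000)

print(f"score before cap: {before:.1f}")  # 26.7
print(f"score after cap:  {after:.1f}")   # 40.0

# Under a hypothetical 20% weight in a composite ranking score, the
# cap alone adds (40.0 - 26.7) * 0.20, roughly 2.7 points, to the
# overall total without a single extra contact hour being taught.
```

The point of the example is that the score moves without any educational input changing: the same 2,000 faculty teach fewer students, and the composite total rises.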
Regional and National Alternatives
Partly in response to the limitations of global rankings, a growing number of regional and national ranking systems have emerged that attempt to measure institutional quality through locally relevant criteria:
- U-Multirank: A European Commission-funded platform that allows universities to be compared on up to 31 indicators across five dimensions, with users selecting which dimensions matter to them. Designed explicitly as an alternative to single-number league tables; a simplified sketch of this user-weighted approach follows this list.
- Quacquarelli Symonds regional rankings: QS publishes separate rankings for Asia, Latin America, the Arab Region, and the BRICS countries, with indicator weightings adjusted to each region's context.
- National rankings: Many countries maintain their own ranking systems — such as the UK's Times and Guardian university guides, the German CHE Ranking, or India's NIRF — that are calibrated to local educational contexts and student priorities.
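The U-Multirank entry above illustrates a different design philosophy: let users weight dimensions themselves rather than accept a fixed formula. The sketch below shows the idea in Python; the five dimension names follow U-Multirank's published dimensions, but the universities, scores, and scoring function are all invented for illustration.

```python
# Simplified sketch of user-weighted comparison. The five dimension
# names follow U-Multirank's published dimensions, but the universities,
# scores, and scoring function are invented for illustration.

profiles = {
    "Metropolitan U": {
        "teaching_and_learning": 85, "research": 55,
        "knowledge_transfer": 60, "international_orientation": 50,
        "regional_engagement": 90,
    },
    "Global Research U": {
        "teaching_and_learning": 60, "research": 95,
        "knowledge_transfer": 80, "international_orientation": 90,
        "regional_engagement": 40,
    },
}

def personalised_score(profile: dict[str, int],
                       weights: dict[str, float]) -> float:
    """Weighted average over only the dimensions this user cares about."""
    return sum(profile[d] * w for d, w in weights.items()) / sum(weights.values())

# A student who prioritises teaching and local engagement...
teaching_first = {"teaching_and_learning": 0.6, "regional_engagement": 0.4}
# ...versus one chasing research strength and international exposure.
research_first = {"research": 0.7, "international_orientation": 0.3}

for label, weights in [("teaching-first", teaching_first),
                       ("research-first", research_first)]:
    ranked = sorted(profiles,
                    key=lambda u: personalised_score(profiles[u], weights),
                    reverse=True)
    print(f"{label} student's order: {ranked}")
```

The two users get opposite orderings from the same underlying data, which is the argument for multidimensional tools: a single league-table position cannot serve students whose priorities differ.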
The Case For and Against Rankings
Despite their well-documented limitations, global university rankings are not simply harmful; the argument has substance on both sides:
For rankings: They provide an accessible starting point for international students navigating unfamiliar higher education systems. They create accountability pressure on institutions that might otherwise lack external performance evaluation. They have driven real improvements in research output and internationalisation at institutions motivated by ranking competition. And they provide a lingua franca for global higher education discussion — a common reference point across languages and cultures.
Against rankings: They systematically disadvantage non-English institutions regardless of quality. They measure what is easy to quantify rather than what matters most for student outcomes. They create perverse institutional incentives that may harm the core educational mission. They reinforce existing hierarchies of prestige and make it harder for genuinely excellent but less famous institutions to achieve recognition. And they mislead students into making decisions based on institutional prestige rather than educational fit.
The most honest position is that rankings are a flawed but useful tool — valuable when understood critically and used in combination with other information sources, harmful when treated as definitive verdicts on institutional quality.