Citation Engineering
Citation Impact metrics — which contribute 20–30% to most major global rankings — are among the most susceptible to manipulation because they operate through a complex, semi-opaque ecosystem of academic publishing. Citation engineering encompasses a range of practices through which universities attempt to inflate measured citation counts without proportionally increasing the genuine intellectual influence of their research.
The most common citation engineering practices documented in academic literature and journalism include:
- Strategic co-authorship networks: Arranging mutual citation agreements — sometimes informally — between faculty at the same or different institutions, where researchers include citations to each other's work regardless of relevance. At scale, these networks can meaningfully inflate citation counts for participating faculty.
- Authorship inflation on papers: Adding authors to papers who made minimal or no intellectual contribution, specifically to allow citation counting across more faculty members (improving the per-faculty citation average).
- Paying for authorship: In some countries and institutions, documented cases exist of researchers purchasing authorship positions on papers produced by publishing mills — companies that sell authorship for fees. This is considered academic misconduct but is difficult to detect at scale.
- Journal manipulation: Universities with faculty serving as editors of journals have been documented using editorial control to steer citations toward their own faculty's work through citation requirements in reviewer guidelines.
QS responded to citation gaming concerns in its 2023 methodology update by excluding self-citations from its Citations per Faculty calculation — a change that had significant effects on universities where self-citation rates were abnormally high.
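The mechanics of that exclusion can be illustrated with a small sketch. This is not QS's actual pipeline; the data layout and the definition of a self-citation (the citing work shares at least one author with the cited paper) are simplifying assumptions for illustration.

```python
# Hypothetical sketch of a citations-per-faculty calculation with and
# without self-citations. Data structures are invented for illustration.

def citations_per_faculty(papers, faculty_count, exclude_self=True):
    """papers: list of dicts with 'authors' (list of author ids) and
    'citing_authors' (one author-id list per citing work)."""
    total = 0
    for paper in papers:
        for citing in paper["citing_authors"]:
            # Treat a citation as a self-citation if the citing work
            # shares any author with the cited paper.
            is_self = bool(set(citing) & set(paper["authors"]))
            if exclude_self and is_self:
                continue
            total += 1
    return total / faculty_count

papers = [
    {"authors": ["a", "b"], "citing_authors": [["a"], ["c"], ["d", "b"]]},
    {"authors": ["e"],      "citing_authors": [["e"], ["e"], ["f"]]},
]
print(citations_per_faculty(papers, 2, exclude_self=False))  # 3.0
print(citations_per_faculty(papers, 2, exclude_self=True))   # 1.0
```

In this toy example, four of the six citations are self-citations, so the per-faculty figure drops by two thirds once they are excluded, which is the kind of correction that hit heavily self-citing institutions hardest.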
Strategic Hiring for Nobel Effect
ARWU rankings weight Nobel Prize winners and Fields Medal recipients at their affiliated institution at the time of the award. This creates a direct incentive for universities to recruit senior scientists who are credible Nobel candidates — or, more strategically, to recruit recently awarded laureates and claim their institutional affiliation.
Several institutions have been documented offering extremely generous packages to attract recently awarded Nobel laureates who would not otherwise have sought to move. The University of Chicago's economics department is famously described as a "Nobel factory," partly because of its history of retaining and attracting laureates, but other institutions have more recently pursued deliberate laureate-recruitment strategies to improve ARWU positioning.
The h-index of potential hires has similarly become a direct hiring criterion at some research-intensive institutions, with departments explicitly targeting researchers with h-index scores above specific thresholds as a strategy to improve their citation impact indicators across all ranking systems.
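For reference, the h-index is the largest number h such that the researcher has at least h papers each cited at least h times. A minimal sketch of the standard computation:

```python
# Compute the h-index from a list of per-paper citation counts.

def h_index(citations):
    """citations: list of citation counts, one per paper."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        # The rank-th best paper must itself have at least `rank` citations.
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 4, 3, 1]))  # 4: four papers each cited at least 4 times
```

A threshold-based hiring rule of the kind described above is then a one-line filter over candidates' citation records.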
Inflating International Student Numbers
The international student and international faculty ratio metrics in QS and THE reward universities for having high proportions of international students and faculty. This has generated a range of strategies to improve these proportions without necessarily increasing the genuine diversity or global engagement of the institution:
- Definition manipulation: Counting students who hold foreign passports but have been educated entirely domestically as "international students" — inflating the headline number without the genuine intercultural dimension the metric is meant to capture.
- Offshore campus enrollment: Some universities count students enrolled at offshore branch campuses as contributing to international ratios, even when these students have no contact with the home campus. The classification of branch campus students varies across ranking methodologies.
- Dual-affiliation faculty: Creating visiting or adjunct faculty positions for foreign-based researchers who rarely or never visit, specifically to inflate international faculty ratios. QS has introduced minimum time-on-campus thresholds in response to this practice.
- Reshaping visa policies: Institutions have lobbied their governments for more accessible student visas specifically to improve their international ratios, which is a legitimate policy advocacy but also reflects the distorting incentive power of rankings on higher education policy.
Manipulating Faculty-Student Ratios
The Faculty-Student Ratio indicator in QS (weight: 20%) is perhaps the most straightforward to manipulate through administrative classification decisions:
- Counting all research staff as faculty: Universities can count researchers on project-based contracts — who have no teaching responsibilities — as faculty members, improving the ratio without adding any teaching capacity.
- Enrolment management: Selectively restricting undergraduate intake while maintaining faculty numbers improves the ratio but may also exclude qualified applicants from educational opportunities.
- Reclassification of part-time positions: Reporting part-time faculty as full headcount rather than as fractional full-time equivalents (FTEs) inflates the faculty figure without adding teaching capacity, and can produce significant ratio improvements.
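The arithmetic behind the first of these tactics is simple enough to show directly. The numbers below are invented for illustration; the point is that the ratio improves purely through a classification decision, with teaching capacity unchanged.

```python
# Hypothetical illustration of how counting research-only staff as
# faculty moves a students-per-faculty ratio. All numbers are invented.

def students_per_faculty(students, teaching_faculty, research_staff=0,
                         count_research_staff=False):
    faculty = teaching_faculty + (research_staff if count_research_staff else 0)
    return students / faculty

# 20,000 students, 1,000 teaching faculty, 600 research-only staff.
print(students_per_faculty(20_000, 1_000, 600, False))  # 20.0
print(students_per_faculty(20_000, 1_000, 600, True))   # 12.5 (same teaching capacity)
```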
Columbia University's 2022 scandal, in which it admitted to providing inaccurate faculty and class-size data to U.S. News, ultimately saw it fall from #2 to #18 in the domestic rankings — demonstrating that data misreporting can survive for years before detection, and that the impact of correction can be dramatic. Other data integrity investigations are ongoing at institutions across multiple countries.
Self-Citation Policies
Self-citations — papers citing previous work by the same authors — are a legitimate part of scholarly communication when they establish context or acknowledge prior work. However, excessive self-citation artificially inflates individual and institutional citation counts. Structural self-citation practices documented at some institutions include:
- Journal editorial policies that require authors to cite a minimum number of papers from the same journal (increasing journal impact factor, which feeds into bibliometric rankings)
- Department cultures that implicitly or explicitly expect researchers to cite colleagues' work regardless of relevance
- Requiring graduate students to extensively cite their supervisors' earlier work in dissertations and publications
Elsevier and Clarivate both maintain watch lists of journals with abnormal self-citation rates and can exclude or flag suspicious citation patterns. The field-normalised metrics used by THE and QS provide some protection against the most egregious self-citation gaming, but the problem remains active enough that methodological responses continue to evolve.
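Field normalisation works by dividing each paper's citations by the expected citation count for comparable papers, so a heavily self-citing cluster in one field cannot dominate the aggregate. The sketch below captures the spirit of such metrics with field-level baselines only; real implementations such as Scopus's Field-Weighted Citation Impact also condition on publication year and document type, and the baseline figures here are invented.

```python
# Toy field-normalised impact: each paper's citations divided by the
# average for its field. Baselines are hypothetical illustration values.

field_baseline = {"materials science": 12.0, "history": 2.0}

def normalised_impact(papers):
    """papers: list of (field, citation_count) tuples.
    Returns the mean ratio of actual to field-expected citations."""
    scores = [cites / field_baseline[field] for field, cites in papers]
    return sum(scores) / len(scores)

papers = [("materials science", 24), ("history", 4)]
print(normalised_impact(papers))  # 2.0: both papers sit at twice their field average
```

Note that the history paper with 4 citations scores identically to the materials science paper with 24, which is exactly the correction that raw citation counts lack.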
Survey Campaign Strategies
The Academic Reputation Score component — worth up to 40% in QS — is susceptible to systematic influence through survey response campaigns. Strategies documented or alleged include:
- Alumni mobilisation: Some universities have sent communications to their academic alumni specifically encouraging them to participate in QS or THE surveys and nominate their alma mater, blurring the line between legitimate alumni engagement and coordinated survey manipulation.
- Conference diplomacy: Hosting major international conferences and ensuring that attendees have positive, memorable experiences at the institution generates goodwill and name recognition that influences survey nominations in subsequent years.
- Research partnerships: Establishing formal research collaboration agreements with universities in underrepresented survey regions generates nomination flows from those regions — both because researchers at partner institutions become more familiar with the university and because the partnerships are perceived as reflecting mutual academic respect.
QS attempts to detect coordinated survey responses through statistical analysis of nomination patterns, and has removed suspicious response clusters from calculations in some years. THE's academic survey methodology includes regional and disciplinary weighting that partially corrects for geographic biases in survey participation.
The Arms Race Effect
When universities collectively optimise for rankings, the result can be an arms race where institutional resources are diverted from educational mission to ranking performance. The societal costs of this dynamic include:
- Research investment shifting toward high-citation fields (life sciences, materials science) at the expense of equally valuable but less-cited domains (humanities, arts, social work)
- Faculty hiring prioritising citation productivity over teaching ability, disadvantaging undergraduate education quality
- Administrative resources spent on ranking analytics, data preparation, and reputation management rather than student support or curriculum development
- Institutional anxiety about ranking position influencing decisions that should be made on educational or ethical grounds
Academic researchers including Ellen Hazelkorn and Simon Marginson have documented cases of universities making policy decisions — curriculum changes, program closures, research priority shifts — explicitly motivated by ranking implications rather than academic merit. The distortion of institutional mission by ranking incentives is considered one of the most serious long-term consequences of the global rankings industry.
How Ranking Bodies Fight Gaming
All major ranking publishers have introduced methodological responses to known gaming strategies, though the cat-and-mouse nature of the problem means new strategies emerge as old ones are closed off:
- Self-citation exclusion: QS now excludes self-citations from its Citations per Faculty calculation. THE uses Scopus data with field-normalised citation impact (FWCI), which is more resistant to gross citation manipulation.
- Minimum on-campus time requirements: QS has introduced rules about minimum faculty presence requirements before staff count toward international faculty ratios.
- Data auditing: THE and QS both conduct spot audits of university-submitted data, and QS specifically excludes universities found to have submitted fraudulent data.
- Statistical anomaly detection: Both publishers use statistical methods to identify universities with citation patterns, survey nomination patterns, or ratio improvements that are statistically implausible without manipulation.
- Community reporting: Academic whistleblower mechanisms and journalism investigations have become an important external check, with several high-profile gaming cases surfaced through academic community reporting rather than ranking body detection.
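One simple flavour of the statistical anomaly detection listed above is an outlier test on year-over-year citation growth across a cohort of institutions. The leave-one-out z-score below is a minimal sketch under invented data and an assumed threshold; the publishers' actual models are not public and are certainly richer than this.

```python
# Flag institutions whose citation growth is an extreme outlier relative
# to the rest of the cohort. Data and threshold are hypothetical.
import statistics

def flag_outliers(growth_rates, z_threshold=3.0):
    """growth_rates: dict of institution -> fractional YoY citation growth.
    Each institution is scored against the mean/stdev of the *other*
    institutions (leave-one-out), so an extreme value cannot inflate the
    spread used to judge itself."""
    flagged = []
    for name, growth in growth_rates.items():
        others = [v for k, v in growth_rates.items() if k != name]
        mean = statistics.mean(others)
        sd = statistics.stdev(others)
        if sd > 0 and abs(growth - mean) / sd > z_threshold:
            flagged.append(name)
    return flagged

rates = {"Uni A": 0.05, "Uni B": 0.07, "Uni C": 0.04,
         "Uni D": 0.06, "Uni E": 1.90}  # E's citations nearly tripled
print(flag_outliers(rates))  # ['Uni E']
```

A flagged institution would then presumably trigger manual review rather than automatic exclusion, since legitimate explanations (a new medical school, a merger) can also produce abrupt jumps.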
The underlying tension remains: as long as university rankings carry significant consequences for funding, applications, recruitment, and institutional reputation, universities will have strong incentives to optimise for them. The most resilient response is for students, governments, and institutions themselves to use rankings as one input among many rather than as the primary — or sole — measure of institutional quality.