Principles of Research Ethics
Research ethics provides the normative framework within which scientific investigation must operate. These principles protect research participants from harm, ensure the integrity of scientific findings, and preserve public trust in the institutions of science.
The foundational principles of research ethics in human subjects research were articulated in the Belmont Report (1979), which identified three core principles: respect for persons (autonomy and informed consent), beneficence (maximizing benefit and minimizing harm), and justice (fair distribution of research burdens and benefits). These principles emerged directly from the historical catastrophes of Nazi medical experimentation and the Tuskegee Syphilis Study — episodes in which researchers violated basic ethical norms with devastating human consequences.
Research ethics extends beyond human subjects to encompass the integrity of scientific data, authorship practices, peer review conduct, conflicts of interest, and the responsible use of emerging technologies including artificial intelligence and gene editing. Ethical failures in these domains may not directly harm individual participants but distort the scientific record in ways that cause cumulative societal harm.
Universities are the primary institutional setting where research ethics norms are taught, enforced, and adjudicated. Research ethics training is mandatory for PhD students and faculty at most research-intensive universities, required by federal funders including NIH and NSF, and increasingly extended to undergraduate research programs.
Institutional Review Boards
Institutional Review Boards (IRBs) — called Research Ethics Committees (RECs) or Ethics Review Committees in many countries — are university-based committees charged with reviewing and approving research involving human participants before it begins. IRB review is required by US federal regulation for any research conducted at institutions receiving federal funding, regardless of whether the specific study is federally funded.
IRB review assesses whether the proposed research involves acceptable risks relative to expected benefits, whether informed consent procedures are adequate, whether participant privacy is protected, and whether recruitment is equitable. The level of review required ranges from exempt (minimal risk studies such as observation of public behavior) through expedited review to full board review for research involving more than minimal risk.
Informed consent is the ethical cornerstone of human subjects research. Research participants must be told the purpose of the research, the procedures they will undergo, any foreseeable risks and discomforts, any benefits to themselves or others, and their right to withdraw without penalty. Consent must be voluntary, free from coercion, and documented appropriately. Special protections apply for vulnerable populations including children, prisoners, and cognitively impaired individuals.
IRB review is sometimes criticized as bureaucratic and slow, creating obstacles for researchers conducting low-risk social science and behavioral research while providing insufficient protection against genuinely risky biomedical experimentation. The regulatory framework is under periodic review, and there have been sustained efforts to streamline review for research with minimal participant risk while strengthening oversight for high-risk research.
Data Management
Responsible data management is a core element of research integrity. The data underlying published findings must be managed, documented, and retained in ways that allow verification, correction, and reuse of scientific results.
Data management plans (DMPs) are now required by most major funders, including NIH and NSF, as part of grant applications. These plans specify how data will be collected, formatted, stored, protected, and ultimately shared or archived. The movement toward open data — making research data publicly available alongside publications — reflects growing recognition that data transparency is essential for reproducibility and efficient scientific progress.
Data fabrication and falsification — inventing data that were never collected, or manipulating data to produce desired results — are among the most serious research ethics violations. High-profile cases, including Diederik Stapel (social psychology), Hwang Woo-suk (stem cell research), and multiple cardiology researchers, have resulted in career-ending retractions and, in some cases, criminal prosecution.
Data storage and retention requirements vary by funder and institution but typically mandate retention of raw data for at least five to ten years after publication. This allows post-publication audits and responses to concerns about the validity of published findings. Access to original data is often the determining factor in whether misconduct investigations can reach definitive conclusions.
Plagiarism and Fabrication
The US Office of Research Integrity (ORI) defines three categories of research misconduct: fabrication (making up data or results), falsification (manipulating data, equipment, or processes), and plagiarism (appropriating another person's ideas, processes, results, or words without attribution). These FFP (Fabrication, Falsification, Plagiarism) categories represent the core serious violations in research integrity policy.
Plagiarism in research contexts includes copying text from other publications without citation, reproducing figures or data without attribution, and presenting others' ideas as one's own. Self-plagiarism — republishing substantially identical content from one's own prior publications without disclosure — violates norms of scientific communication even though it does not misappropriate others' work. The academic journal community has grown increasingly aware that text recycling and duplicate publication are distinct from plagiarism and require careful policy of their own.
Text detection software (Turnitin, iThenticate) is now routinely used by journals and universities to screen for verbatim copying. Image manipulation detection tools have become increasingly sophisticated in detecting duplicated or altered gel images, microscopy images, and clinical photographs. These technological tools have contributed to an increase in detected misconduct cases — reflecting better detection rather than necessarily increased misconduct rates.
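The algorithms behind commercial tools like Turnitin and iThenticate are proprietary, but the core idea of verbatim-copy screening — measuring how much phrasing two documents share — can be illustrated with a simple word n-gram comparison. The sketch below is a minimal, hypothetical illustration of that idea, not the actual method used by any commercial detector:

```python
def ngrams(text, n=5):
    """Split text into a set of lowercase word n-grams (shingles)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc_a, doc_b, n=5):
    """Jaccard similarity over word n-grams: 0.0 means no shared
    five-word phrases, 1.0 means identical phrasing throughout."""
    a, b = ngrams(doc_a, n), ngrams(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

original = "the quick brown fox jumps over the lazy dog near the river"
verbatim = "the quick brown fox jumps over the lazy dog near the river"
rewrite = "a fast auburn fox leapt across a sleepy hound by the water"

print(overlap_score(original, verbatim))  # identical text scores 1.0
print(overlap_score(original, rewrite))   # no shared 5-grams scores 0.0
```

Real screening systems add normalization, indexing against large corpora, and resistance to trivial edits, but the underlying signal — overlapping runs of identical words — is the same, which is why such tools catch verbatim copying far more reliably than close paraphrase.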
Gray areas abound in authorship attribution. Who deserves authorship on a paper? The International Committee of Medical Journal Editors (ICMJE) criteria require that authors have made substantial contributions to the conception or design of the work, or acquisition, analysis, or interpretation of data, and that they have approved the final version. Ghost authorship (unattributed contributors) and gift authorship (attribution without qualifying contributions) both violate these norms, yet both remain common in practice.
Retraction and Correction
When errors or misconduct are discovered in published research, the peer review and publication system has mechanisms for repair: corrections for minor errors that do not invalidate conclusions, and retractions for errors or misconduct serious enough to invalidate the paper's core findings.
The Retraction Watch database, launched in 2010, tracks retractions across scientific literature and has documented tens of thousands of retractions across all fields. High-profile cases — particularly in medicine, where retracted findings may have influenced clinical practice — have drawn public attention to the vulnerability of the scientific record.
The retraction process is often slow, contentious, and incomplete. Authors contest retraction notices; journals and publishers move cautiously to avoid legal action; editors may be reluctant to damage relationships with prominent authors. Studies show that retracted papers continue to be cited positively long after retraction, because citing researchers fail to notice retraction notices or lack convenient access to up-to-date retraction information.
Some institutions have developed more robust responses to misconduct findings. Universities that conduct thorough, transparent investigations, impose meaningful sanctions, and proactively notify journals of problems in affiliated researchers' publications set a higher standard than those that protect institutional reputation at the expense of scientific integrity. The culture of a research institution around misconduct investigation and response is itself an ethical indicator.
Training Programs
Responsible Conduct of Research (RCR) training is required for all researchers supported by NIH and NSF funding. Universities have developed a variety of program formats to meet this requirement and to instill ethical norms more broadly in their research communities.
Online training modules (most prominently the CITI Program — Collaborative Institutional Training Initiative) provide standardized instruction on research integrity, human subjects protection, laboratory animal care, data management, authorship, and conflict of interest. These modules are widely used as a baseline training requirement and must be completed before researchers can conduct IRB-regulated research.
More intensive training takes place through journal clubs, research group discussions, and graduate seminars that engage students in case-based ethical reasoning. Case studies based on real misconduct incidents — with institutional details appropriately anonymized — are particularly effective at helping researchers recognize how misconduct often begins with small compromises that escalate.
Mentorship is the most powerful vehicle for transmitting research ethics norms. Graduate students learn ethical conduct primarily by observing how their mentors respond to pressures — the temptation to include promising but unreplicated data, the ambiguity around authorship on a competitive paper, the conflict between publishing quickly and verifying results thoroughly. Principal investigators who model ethical behavior and discuss ethical challenges openly create laboratories where students develop genuine ethical judgment rather than merely formal compliance.