Math has a publishing problem that is easy to miss if you only look at citation counts and journal labels. A new analysis shows how shortcuts, paper mills, and pay-to-publish outlets have shaped where math papers appear and how people get rewarded.
The authors argue that the rules of the game, from rankings to citation targets, make it tempting to chase numbers instead of truth.
In a joint report they describe how this pressure plays out in mathematics and why it matters beyond campus walls.
Lead author Ilka Agricola, a professor of mathematics at the University of Marburg, coordinated the working group for the International Mathematical Union (IMU) and the International Council for Industrial and Applied Mathematics (ICIAM).
The team reviewed how bibliometrics shape hiring, promotion, and funding, then mapped the tricks that exploit them.
They looked at paper mills, predatory journals, and citation cartels, and they traced how these behaviors spread into normal practice.
“Fraudulent publishing is a worldwide problem of substantial size in all scientific fields,” wrote Agricola. The authors explain that the issue spans seniority levels and countries, and it is not confined to one specialty or system.
The most popular yardsticks are not neutral. The impact factor gives a single score to a journal, which then bleeds into judgments about authors, departments, and grants.
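To see how blunt that single score is, here is a minimal sketch of the standard two-year impact factor calculation; the citation and article counts are invented for illustration.

```python
# Two-year journal impact factor: citations received this year to items the
# journal published in the previous two years, divided by the number of
# citable items it published in those two years. Numbers below are invented.

def impact_factor(citations_to_prior_two_years: int, citable_items: int) -> float:
    return citations_to_prior_two_years / citable_items

# 180 citations in 2024 to the journal's 2022-2023 output of 150 citable items
print(round(impact_factor(180, 150), 2))  # 1.2
```

One division collapses an entire journal's output into a single number, which is then read back as a proxy for every author who publishes there.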
This critique lines up with the Leiden Manifesto, a widely cited set of principles that warns against using one number to judge research.
Mathematics is especially vulnerable because papers are fewer, author lists are shorter, and citation counts are lower. Small pushes can swing rankings a lot, and that makes gaming easier.
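A toy comparison makes that fragility concrete. The citation distributions below are invented, but they show how the same handful of extra citations lifts a paper far up the ranking in a low-citation field while barely registering in a high-citation one.

```python
import random

random.seed(0)
# Invented citation counts: a math-like field with few citations per paper
# versus a biomedicine-like field with many.
math_like = [random.randint(0, 20) for _ in range(1000)]
biomed_like = [random.randint(0, 400) for _ in range(1000)]

def percentile(counts, value):
    """Share of papers in the field with no more citations than `value`."""
    return 100 * sum(c <= value for c in counts) / len(counts)

# A paper with 10 citations picks up 15 more, e.g. through self-citation or trading.
for name, field in [("math-like", math_like), ("biomed-like", biomed_like)]:
    print(f"{name}: percentile rises from {percentile(field, 10):.0f} to {percentile(field, 25):.0f}")
```

In the low-citation field the paper jumps from the middle of the pack to the very top; in the high-citation field the same boost is barely visible.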
Clarivate selects Highly Cited Researchers from authors of papers in the top 1 percent by citations for their field and publication year, and then adds a qualitative screen. That label carries weight in university rankings and funding pitches.
The report recounts a striking episode from 2019. The institution with the highest number of math HCRs was China Medical University (CMU) in Taiwan, even though it does not have a math program.
The disconnect highlights how affiliation games and metric chasing can warp what counts as excellence.
The authors summarize Edward Dunne’s audit of the 2019 group. There were 89 math HCRs that year, but their citation patterns differed from those of prizewinning mathematicians.
The median self-citation score for the HCRs was more than twice the level seen among top-cited mathematicians and prizewinners combined.
They also note that low-quality mega-journals remain in major databases, which keeps the door open for salami slicing (splitting one substantial piece of research into several thin papers, each reporting only a slice of the findings) and citation trading.
When the database includes everything, it becomes harder to tell signal from noise.
The IMU and ICIAM issued practical recommendations to reset incentives, reduce the weight of raw counts, and steer evaluation back to expert reading. They call on funders, universities, and scholars to act together.
“The opportunity cost of misaligned incentives is immense,” wrote Agricola. The recommendations urge decision makers to base resource allocation on expert judgment rather than commercial rankings.
They suggest simple changes that add up. Read the papers, not just the profiles. Stop rewarding volume for its own sake. Avoid venues that exist to sell acceptance rather than advance knowledge.
This is not just a turf war over academic prestige. When junk research gets a free pass, it muddies the evidence that guides policy, industry, and public debate.
“Fraudulent publishing undermines trust in science and scientific results and therefore fuels antiscience movements,” said Agricola.
You can do basic checks even if you are not a specialist. Look for journals that are indexed in curated math databases like zbMATH Open or MathSciNet. If a journal is missing, be cautious and keep digging before you rely on a result.
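If you want to automate that first pass, a minimal sketch like the one below works, assuming you keep a local list of journal titles exported from the curated databases; the entries shown are placeholders, not an actual export.

```python
# Check a journal title against a locally maintained list of titles exported
# from curated databases such as zbMATH Open or MathSciNet.
# The entries below are placeholders, not a real export.
CURATED_TITLES = {
    "annals of mathematics",
    "journal of the american mathematical society",
    # ... extend with your own export ...
}

def looks_indexed(journal_title: str) -> bool:
    """True if the normalized title appears in the local curated list."""
    return journal_title.strip().lower() in CURATED_TITLES

print(looks_indexed("Annals of Mathematics"))          # True
print(looks_indexed("Global Journal of Everything"))   # False -> keep digging
```

A miss is not proof of a problem, only a prompt to keep digging before you rely on the result.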
Watch for red flags. Sudden special issues that publish hundreds of papers in a few weeks, editorial boards with unrelated expertise, and references stuffed with the authors’ own work are all warning signs.
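One of those flags is straightforward to quantify: the share of a paper’s references that point back at its own authors. Here is a rough sketch using made-up reference data.

```python
# Estimate how much of a reference list cites the paper's own authors.
# The author names and references are invented for illustration.
paper_authors = {"A. Author", "B. Coauthor"}

references = [
    {"title": "Earlier paper I", "authors": {"A. Author"}},
    {"title": "Earlier paper II", "authors": {"A. Author", "B. Coauthor"}},
    {"title": "Unrelated classic", "authors": {"C. Someone"}},
    {"title": "Earlier paper III", "authors": {"B. Coauthor", "D. Other"}},
]

self_cites = sum(1 for ref in references if ref["authors"] & paper_authors)
print(f"{self_cites / len(references):.0%} of references involve the paper's own authors")  # 75%
```

Some self-citation is normal; a reference list dominated by it is the kind of pattern the report flags.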
Be wary of generic solicitations and flattery. Real editors rarely invite strangers to submit or to guest edit out of the blue, and reputable journals do not pressure authors to cite them to boost the impact factor.
Remember how bibliometrics and rankings work. They compress a messy landscape into simple scores, and they often reward attention rather than substance. That may help a sales pitch, but it will not tell you whether a theorem is correct.
The message is not to abandon measurement, but to put it in its place. Read first, count second, and reward the work that holds up under scrutiny.
The authors point out that mathematics has tools and culture that make reform possible. Careful reviews, studies, and community databases already exist, so shifting emphasis back to reading and judgment is achievable.
If institutions change how they measure success, the market for junk will shrink. That would free up time and money for work that actually advances the subject.
The study is published in Notices of the American Mathematical Society.