Sex, Lies and Robots
We dip into the literature (Part 1 of 8)
As promised,⌘ we’ll now spend some time looking at deception.
That’s the perfect title, isn’t it? I’m using a little licence here. An article caught my eye, “A bibliometric analysis of publications on the ethical considerations of sex robots (2003‒2022)”, in Humanities and Social Sciences Communications. The journal has an impact factor of 3.7; they claim 3.7 million article downloads in 2021. Springer Nature is often considered a solid publisher. What could possibly go wrong?
Quite a lot, actually
All you’ll find today at Nature.com is a retraction notice. Look around really carefully and you might still dig up a copy of the offending article. It begins with such promise:
Robots play a crucial role in society, with pioneers like Joseph Engelberger envisioning their use beyond factory settings as early as 1989. The development of sex robots, stemming from the sex industry’s adoption of new technologies, represents one of the most ethically complex applications of robotics. Integrating them into society raises numerous moral challenges, intersecting with sociology, philosophy, and psychology.
There are just a few teensy problems with this peer-reviewed piece. It lists the ten people it claims are the most prolific authors on, well, sex robot research. One of them seems to have been made up. Then there is a cardiologist (!) and a professor of prosthodontics (the mind boggles); the remaining seven just don’t study sex robots either. Most of them are run-of-the-mill roboticists; some are surgeons.
Brief moments of hilarity
[Three inappropriately listed authors] were brought together by their mutual focus on robot-assisted radical cystectomy. The green cluster, comprised of [three more names], compared perioperative outcomes of retroperitoneal robot-assisted partial nephrectomy (RPRAPN) and transperitoneal robot-assisted partial nephrectomy.
The authors seem a little confused. One hopes that cutting out bladders and kidneys never, ever, ever has anything to do with sex. There’s collateral evidence that the article was made by a bot with the temperature set to ‘hot’:
The “H-index” is a metric that reflects an academic impact of a researcher’s work, as discussed by Wang et al. (2021). Influential factors (IF) has been considered a paramount metric for evaluating scholarly impact.
Did you spot the problem? The right term is ‘impact factor’, not ‘influential factor’. This sort of stuff is common in two circumstances, both bad. One is where the AI simply hallucinated weird shit; the other is where it was instructed to take plagiarised text and change some of the words, to defeat plagiarism detectors.1
Things come to a head
It all blew up when one of the authors fingered by the article complained to Retraction Watch. He didn’t want to acquire a reputation as a robot sex researcher. Er. Researcher into robot sex. Uhh. Researcher into (mostly male) humans having sex with robots.
Springer Nature then rapidly pulled the article out of circulation. Can we blame the editors? Should we? They might at least have read through the article once before publishing it! The biggest problem with this article is that it’s crap. And this is a burgeoning problem.
It’s not about sex (much)
Sex sells. It’s likely that the weirdly fucked-up nature of the robot sex article has engaged your attention. The problem is that everyone is competing for your attention. All the time. And they often succeed, transiently.
The above image is not an adult novelty toy. It’s an archetypal example of ‘AI slop’—this time a journal image so bizarre that Altmetric ranked it in the top 5% of all articles before it too was rapidly retracted. Everyone had to gawk at this issue of Frontiers. See how the authors didn’t even bother to fix the spelling of the labels; nor, it seems, did the editors. Our problem though is far, far bigger than a few rubbish papers. It’s almost overwhelming:
Predatory journals are on the rise. These journals will publish anything. Phallic mice, for a price.
Papermills are big business. They churn out apparently adequate ‘research papers’, ‘reviews’ and so on, selling authorship to the highest bidders. Some articles are AI-made, some are put together by smart-ish humans, and some are simply stolen.
Almost everyone is tempted to game the system, particularly the h-index, a measure combining how many articles a researcher has published with how often those articles are cited. In a future post, we’ll find out why this is happening (knowing Goodhart’s law,⌘ you’ve likely worked out the answer already). Even ‘good researchers’ are pressured to perform in unreasonable ways, exchanging quality research for larger quantities of dross.
Unscrupulous behaviour seems to be on the up. We’ve also discovered that a lot of historical ‘research’ has unsavoury aspects, including fake data and faked images. Often this is driven by personal greed.
Powerful medical companies have honed their skill at pulling the wool over the eyes of clinicians, institutions and even entire countries. They are now expert at magically making problems with new devices and drugs recede into the background. Even problems like “It doesn’t actually work”.
AI is rising. ‘Slop’ is becoming more sophisticated. Were you to glance extremely rapidly through the offending paper I started with, you could be forgiven for deciding it’s just an extremely boring compilation by inept authors, illustrated with dubious diagrams and suspicious statistics. You’d hope that an expert on robot sex would spot the problem in a heartbeat, though.2
Peer review is often piss-poor, and editors seem to be slacking off. Perhaps some are just overworked?
Large language models like ChatGPT can now confabulate⌘ very convincingly. There’s a flipside too: AI use by journals. Two years ago, the entire editorial board of the Journal of Human Evolution resigned in protest against Elsevier apparently imposing sloppy AI editing on the editorial staff.3
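For anyone who hasn’t met the metric being gamed above: the h-index is the largest number h such that a researcher has h papers with at least h citations each. A minimal sketch in Python (the citation counts below are invented):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Invented citation counts for one author's papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3 (three papers have >= 3 citations)
```

Because the index can only rise as citations accumulate, every extra citation helps, however it was obtained. That is precisely what makes it worth gaming.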
Looming over all of this is the question “How do we tease out the rubbish from the valuable research?” The anatomically implausible giant rat genitals are obviously fake, but a lot of the literature is more subtly wrong. This sort of thing is only going to get worse, so I’ll devote the next 7 posts to the 7 problems above. But to prepare, here's my dross filter…
How I read a paper
I believe that to do the best job we can of processing scientific papers, we need the following:
A deep, joined-up understanding of each subject.
Insight into our own biases, and how we use Science to combat these.
Years of practice writing, reading and critically reviewing manuscripts.
A solid knowledge of statistics, with the practical ability to test for the presence of errors and evil.
Working knowledge of medical deception, including that involving AI.
I have a few skills here and there, but I’m not a forensic statistician’s backside. It’s also impossible to be knowledgeable about everything you read. Here’s how I compensate. Steal the bits that work for you.
Is the paper in a predatory journal? Mostly don’t bother! (My next post covers these predators).
Has the paper been withdrawn? Remarkably, some people are still citing papers that were withdrawn ages ago.
Take a systematic approach. First, read the last line of the conclusion. If it says “more work is required”, then the chances are, there’s little or nothing of worth in the paper. Harsh, but useful.
Next, read the Abstract. Does it even make sense? If it doesn’t, bin it. Is it too good to be true? Flag this, and read on with great caution. Check the conclusion in the abstract. A common problem with bad papers is that the abstract doesn’t even sync with the body of the article.4
Next, the Discussion. The key part here is the ‘study limitations’. If these are frivolous or completely lacking, I furiously cast the paper aside. Time wasted! The more the authors self-criticise, and the more they highlight the limitations of their study, the more I tend to trust it. That’s how Science works.⌘
Is there advertising hype? Are the stats bad? A black art, but some signs are common and obvious. If numbers are only reported as relative values (e.g. 33% reduction in death rate) rather than absolute values (deaths decreased from 0.3% to 0.2%), I’m suspicious. If the limitations in the Discussion are nowhere present in the Abstract, <deep suspicion>.5
I then read the article carefully. This often hurts the brain, and takes some time. A common and obvious problem is authors who go beyond the bounds of the study. For example, if they’re making causal assertions,⌘ then we want either prospective, blinded randomisation, or a solid analysis using a directed acyclic graph.⌘ There’s a host of other bad things to look for, but fortunately, we’re not alone.
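The relative-versus-absolute trick is easy to check with a few lines of arithmetic. This sketch reuses the hypothetical death rates from my example:

```python
baseline, treated = 0.003, 0.002  # hypothetical death rates: 0.3% vs 0.2%

absolute_reduction = baseline - treated              # 0.1 percentage points
relative_reduction = absolute_reduction / baseline   # the headline '33% reduction'
number_needed_to_treat = 1 / absolute_reduction      # patients treated per death averted

print(f"absolute: {absolute_reduction:.1%}  "
      f"relative: {relative_reduction:.0%}  "
      f"NNT: {number_needed_to_treat:.0f}")
```

Both numbers are true; only one of them sells. A 33% relative reduction sounds dramatic, but here it means treating roughly a thousand patients to avert a single death.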
Help is at hand
Many of us haven’t come across terms like ‘seeding study’, ‘last observation carried forward’, ‘stealth corrections’, ‘salami slicing’, ‘Benford’s law’, and ‘p-hacking’. I’d struggle to spot a western blot faked using Photoshop. All is not lost. There are thousands of other people out there who are as pissed off as I am about bad studies. Angry about evil.
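Benford’s law, at least, is simple enough to try yourself: in many naturally occurring datasets, the leading digit d appears with probability log10(1 + 1/d), so about 30% of values start with a 1 and under 5% with a 9. Fabricated figures with uniformly spread first digits stand out. A rough sketch (function names and data are mine, and a serious test would use a proper goodness-of-fit statistic):

```python
import math
from collections import Counter

def leading_digit(x):
    """First significant digit of a positive number."""
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def first_digit_distribution(values):
    """Observed leading-digit frequencies as a dict {1: freq, ..., 9: freq}."""
    counts = Counter(leading_digit(v) for v in values if v > 0)
    total = sum(counts.values())
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

# Benford's expected frequencies for comparison:
expected = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
print(f"P(leading digit is 1) = {expected[1]:.3f}")  # prints 0.301
```

One caveat: Benford’s law only applies to data spanning several orders of magnitude, so it’s a screening tool, not a verdict. But even this crude version catches the laziest fabrication.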
These critics are also less quiet than they used to be. Here are a few resources:
PubPeer is magnificent. They currently have over 200,000 incisive analyses of publications. Many of these look for bad statistics, or compare an image to millions of others, hunting for duplicated or stolen pictures, and doctored western blots. They also have a browser plugin that will scan pages you read and warn you about flagged articles.6
Retraction Watch keeps its finger on the pulse. You can get a daily post if you want—they are often hugely informative. Some are a giggle-a-minute.
There are lists of predatory journals. For alt-med scams, there’s Quackwatch. Science-based Medicine is often incisive. For the connoisseur, there’s Data Colada.
Then we have people like Leonid Schneider who have devoted considerable energy to debunking. His For Better Science is not everyone’s cup of tea—My God he’s angry—but I find it a fun read. I don’t know where he gets the energy.
This may not be your scene. But if it is, there’s help at hand. The EQUATOR Network collects reporting checklists for most types of study. The list can be intimidating—about 600 at last count (CONSORT, STROBE, PRISMA, SPIRIT, STARD, CARE, AGREE, ARRIVE, SQUIRE, and so on). Much later, I’ll propose a solid solution, but first we must explore further.
A matter of trust
“You’ve a good heart. Sometimes that’s enough to see you safe wherever you go. But mostly, it’s not.”
Neil Gaiman,7 Neverwhere, 1996.
I’ve previously described⌘ how every single one of us is vulnerable to deception. We are most vulnerable when we think “I’ve got this. Common sense will see me through.”
But we have great power too. We know that Science⌘ singles itself out from everything else using self-criticism. This is our continual strength. And we are not alone. Thousands upon thousands of people still do good science.
My 2c, Dr Jo.
In my next post, I’ll look at predatory journals, how to spot them and what we can do about them.
The graphic of the fembot at the start of my post was AI-generated. Subsequently I did a fair bit of modifying in GIMP, including putting the cover of the offending publication on the journal the bot is reading. It seemed only fair.
⌘ This symbol indicates another of my Substack posts where I explore the topic in more detail.
A third possibility is really, really incompetent authors, I guess.
There is actually quite a large literature on sex robots. Some of it is rather disturbing, so I chose not to go there. Even in a footnote.
Elsevier denies this, instead blaming a “formatting glitch”. It would be unusual for the entire editorial board of a respected journal to resign over a formatting glitch, though! Such mass resignations seem to be a growing trend.
Sometimes this is just incompetence, or a rushed amendment at the last moment, under time pressure. Nobody is perfect, and not everyone knows how to write a paper well.
Most people only read the abstract. If you want your product to look good to the masses, don’t mention its limitations in the abstract. This also, however, makes the deception obvious.
Some of the flagging is benign; some far less so.
We should note that Gaiman has all sorts of accusations against him at present. I don’t know how this will pan out.




