Research
XrAE is our own work, but the problem it addresses and the approach it takes are consistent with what the academic community has independently documented. These ten studies show that the threat is real, that detection works, and that our methodology is sound.
The Threat
Research documenting the scale, effectiveness, and mechanisms of AI-assisted persuasion.
Schoenegger et al. · arXiv (preregistered, N=1,242), 2025
AI-generated persuasion proved twice as effective as human persuasion when used for deception. But the study also found something hopeful: people who recognized the techniques became more resistant, by about one percentage point per exposure. XrAE's approach of making techniques visible aligns with this finding. Awareness actually works.
Bai et al. · Nature Communications, 2025
AI persuasion matched human effectiveness on policy issues. The surprising part: telling people the content was AI-generated didn't reduce its impact. They still shifted their positions. Source labels aren't enough. You have to detect the actual techniques.
Goldstein et al. · PNAS Nexus (N=8,221), 2024
AI-generated propaganda matched the persuasive power of real foreign propaganda in a study of over 8,000 participants. And when humans and AI collaborated on propaganda, the output was more persuasive than either produced alone. Scalable detection isn't a nice-to-have at this point.
Argyle et al. · PNAS, 2025
Persuasive techniques shifted attitudes 2.5 to 4 percentage points per exposure. The troubling finding: changed attitudes didn't lead to more tolerance. People became more convinced of their new positions without becoming more open-minded. Persuasion quietly erodes democratic norms.
The Methodology
Independent studies whose findings are consistent with the technical approaches XrAE uses.
Horych et al. · NAACL 2025 Findings, 2025
Models trained on AI-generated labels actually surpassed the model that created those labels by 5 to 9 percent. The key was volume and consistency, not per-label perfection. These findings are consistent with the results XrAE achieves independently.
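To make the setup concrete, here is a minimal sketch of training a student on teacher-generated labels, assuming scikit-learn and synthetic data. It is a toy stand-in, not Horych et al.'s experiment and not XrAE's pipeline; in this synthetic setting the student typically only matches the teacher, but the mechanics are the same.

```python
"""Sketch: a student model trained on labels produced by a teacher model.

Everything here is a toy stand-in (synthetic data, scikit-learn models),
not Horych et al.'s setup and not XrAE's pipeline. The point is the
mechanics: a large volume of cheap, consistent machine labels versus a
small gold-labeled set.
"""
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=20_000, n_features=40,
                           n_informative=10, random_state=0)
X_gold, y_gold = X[:500], y[:500]          # small gold-labeled set
X_test, y_test = X[500:1500], y[500:1500]  # held-out evaluation set
X_pool = X[1500:]                          # large "unlabeled" pool

# Teacher: trained only on the small gold set.
teacher = LogisticRegression(max_iter=1000).fit(X_gold, y_gold)

# Teacher annotates the pool: imperfect labels, but cheap and consistent.
pool_labels = teacher.predict(X_pool)

# Student: trained on the teacher's labels at much larger volume.
student = GradientBoostingClassifier(random_state=0).fit(X_pool, pool_labels)

print("teacher:", accuracy_score(y_test, teacher.predict(X_test)))
print("student:", accuracy_score(y_test, student.predict(X_test)))
```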
Gilardi et al. · PNAS, 2023
LLM annotations outperformed human crowd workers by 25 percentage points at a fraction of the cost. The consistency gap was stark: AI labels agreed with themselves 91 to 97 percent of the time, while human annotators managed 56 to 79 percent. The broader field is converging on what XrAE already does.
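Self-consistency of this kind is measured like any intercoder agreement: annotate the same items twice and compare the passes. A minimal sketch, with invented labels and scikit-learn's kappa implementation:

```python
"""Sketch: measuring annotation self-consistency across two passes.
The labels below are invented for illustration; they are not Gilardi
et al.'s data."""
from sklearn.metrics import cohen_kappa_score

# Two annotation passes over the same ten texts.
pass_1 = ["loaded_language", "doubt", "none", "flag_waving", "doubt",
          "none", "loaded_language", "none", "doubt", "flag_waving"]
pass_2 = ["loaded_language", "doubt", "none", "flag_waving", "none",
          "none", "loaded_language", "none", "doubt", "flag_waving"]

# Raw percent agreement; papers in this area report either this or a
# chance-corrected statistic.
agreement = sum(a == b for a, b in zip(pass_1, pass_2)) / len(pass_1)

# Cohen's kappa corrects for agreement expected by chance.
kappa = cohen_kappa_score(pass_1, pass_2)

print(f"percent agreement: {agreement:.0%}")  # 90% on this toy data
print(f"cohen's kappa:     {kappa:.2f}")
```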
Piskorski et al. · ACL / SemEval, 2023
The shared task that established a standard 23-technique taxonomy for persuasion detection in news. XrAE's own taxonomy covers the same territory and goes further, adding eight addiction pattern codes and detection across spoken media formats.
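For a sense of what such a taxonomy looks like as a data structure, here is a minimal sketch. The first three codes echo the SemEval-2023 naming style; the addiction codes and the LabeledSpan schema are hypothetical illustrations, not XrAE's actual taxonomy or data model.

```python
"""Sketch: a persuasion-technique taxonomy as typed codes with labeled
spans. Technique names echo the SemEval-2023 style; the addiction codes
and the span schema are hypothetical, not XrAE's."""
from dataclasses import dataclass
from enum import Enum

class Technique(Enum):
    # A few codes in the style of the 23 SemEval-2023 techniques.
    LOADED_LANGUAGE = "loaded_language"
    APPEAL_TO_FEAR = "appeal_to_fear"
    DOUBT = "doubt"
    # Hypothetical addiction-pattern codes, illustrating the extension.
    VARIABLE_REWARD = "addiction_variable_reward"
    OUTRAGE_HOOK = "addiction_outrage_hook"

@dataclass
class LabeledSpan:
    text: str            # the flagged passage
    start: int           # character offset into the source document
    end: int
    technique: Technique
    confidence: float    # detector confidence in [0, 1]

span = LabeledSpan(
    text="They will stop at nothing to destroy everything you hold dear.",
    start=120,
    end=182,
    technique=Technique.APPEAL_TO_FEAR,
    confidence=0.87,
)
print(span.technique.value, f"{span.confidence:.0%}")
```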
The Response
Work on detection systems, interactive tools, and the case for automated persuasion analysis.
Wang et al. · CHI, 2025
Research on interactive bias detection tools from CHI, the top venue for human-computer interaction. Two findings stuck with us: people trust detection tools more when there's human oversight, and teaching people about techniques works better than just showing them a dashboard.
Modzelewski et al. · arXiv (preprint), 2026
When AI persuasion is subtle, human detection drops by about 20 percent. People simply can't catch it on their own. The study produced a benchmark of 65,000 labeled texts with 196 linguistic features. XrAE's detection codes cover similar territory through an independently developed taxonomy.
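The benchmark's 196 features aren't enumerated here, but hand-crafted linguistic features in this area generally take the following shape. A minimal sketch; the lexicons and feature names are invented for illustration, not taken from the paper.

```python
"""Sketch: hand-crafted linguistic features of the kind such benchmarks
use. These few features are illustrative; they are not the 196 from
Modzelewski et al."""
import re

# Tiny illustrative lexicons (hypothetical, not from the paper).
INTENSIFIERS = {"very", "extremely", "absolutely", "totally", "incredibly"}
URGENCY = {"now", "immediately", "hurry", "last", "act"}

def linguistic_features(text: str) -> dict[str, float]:
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {
        "intensifier_rate": sum(t in INTENSIFIERS for t in tokens) / n,
        "urgency_rate": sum(t in URGENCY for t in tokens) / n,
        "exclamation_rate": text.count("!") / max(len(text), 1),
        "second_person_rate": sum(t in {"you", "your"} for t in tokens) / n,
        "avg_word_length": sum(map(len, tokens)) / n,
    }

print(linguistic_features("Act now!! You absolutely cannot miss this."))
```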
Robison · Medium (commentary), 2025
A widely read synthesis that connects the dots across the research: the threat is documented, awareness alone doesn't protect people, and automated detection is the logical response. It explicitly makes the case for systems like the one we built.