
AI Tools vs AI Text: Detecting AI-Generated Writing

This study evaluates how well AI writing detection tools distinguish AI-generated abstracts from human-written ones in the field of foot and ankle surgery, revealing significant accuracy limitations and implications for medical publishing.

- Steven R. Cooperman & Roberto A. Brandão






This paper tested six AI writing detection tools (GPTZero, Writer, Scribbr, Undetectable.ai, Copyleaks, and ZeroGPT) and found an overall accuracy rate of 63% and a false positive rate of 25%.
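To make the two headline figures concrete, here is a minimal sketch of how accuracy and false positive rate relate to a detector's confusion-matrix counts. The counts are hypothetical, chosen only to reproduce the reported 63% and 25%; they are not the paper's actual data.

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all abstracts classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def false_positive_rate(fp, tn):
    """Fraction of human-written abstracts wrongly flagged as AI-generated."""
    return fp / (fp + tn)

# Hypothetical example: 100 AI-generated and 100 human-written abstracts.
tp, fn = 51, 49   # AI abstracts: correctly detected vs. missed
fp, tn = 25, 75   # human abstracts: falsely flagged vs. correctly cleared

print(accuracy(tp, tn, fp, fn))          # 0.63
print(false_positive_rate(fp, tn))       # 0.25
```

Note that a 25% false positive rate means one in four human-written abstracts would be wrongly flagged, which is the core reliability concern for editorial use.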

Key Points for Educators

  • Detection Accuracy: The study found that online AI text detectors achieved only 63% accuracy in identifying AI-generated abstracts, raising concerns about their reliability for editorial decisions.
  • Rapid Technological Advancement: Within three months, the performance of AI detection tools improved significantly, yet rewording of AI-generated content led to decreased detection rates, illustrating the arms race between AI generation and detection.
  • Ethical Responsibility: The findings underscore the necessity for researchers to disclose their use of AI tools, as current detection technologies cannot adequately vet for undisclosed AI involvement in research.
  • Generative AI's Potential: While there are risks associated with misuse, the study highlights that AI can enhance the clarity and efficiency of scientific writing when used responsibly.
  • Practical Application: The authors demonstrated the capabilities of AI by generating the abstract of the paper itself using ChatGPT, showcasing both the quality of AI writing and the importance of ethical considerations in its application.