In a moment that quickly became a global talking point on the use of artificial intelligence in journalism, an internal note generated by ChatGPT was accidentally published in the business section of the Dawn newspaper.
The slip-up, which quickly went viral under the hashtag #DawnGPT, has prompted a public explanation from the newspaper.
The unedited text, clearly a suggestion from the AI model to its user, appeared within an article discussing recent auto sales figures.
The note read:
“If you want, I can also create an even snappier ‘front-page style’ version with punchy one-line stats and a bold, infographic-ready layout — perfect for maximum reader impact. Do you want me to do that next?”
Screenshots of the error circulated almost instantly on platforms like X (formerly Twitter) and Facebook, generating a mix of amusement and reflection.
One user commented, “When you let ChatGPT write your news story and forget to proofread… AI, but make it too real.”
Another noted, “A cautionary tale for newsrooms everywhere. The last human editor is still the most vital.”
The incident has opened a wider conversation about the increasing reliance on large language models (LLMs) like ChatGPT in the media industry and the critical need for rigorous human editorial oversight.
Dawn Issues Public Explanation

Following the rapid viral spread, Dawn took to its official social media channels to address the gaffe.
The newspaper confirmed that the extraneous text was indeed an unintentional inclusion from an AI-assisted draft that was not fully proofread before publication.
The statement emphasized the paper’s commitment to editorial standards, even while exploring new tools:
“A newspaper report in today’s Dawn was originally edited using AI, which is in violation of Dawn’s current AI policy.
The Dawn AI policy is available on our website.
The original report also carried AI-generated artefact text from the editing process, which has been edited out in the digital version.
The matter is being investigated, and the violation of AI policy is regretted.
— Editor”