15 Million Illegal Telegram Channels Removed in 2024 Thanks to AI Moderation
Telegram has leveraged artificial intelligence (AI) to remove over 15 million suspect groups and channels as part of its crackdown on illegal activities. This effort, announced in late 2024, reflects a significant shift in the platform’s moderation strategy. AI tools, combined with a dedicated team of human moderators, have helped Telegram identify and restrict access to content that violates its terms of service, including material tied to illegal trade, explicit content, and other unlawful activity.
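Telegram has not published technical details of its moderation pipeline, but the pattern described above, automated flagging followed by human review, is common across large platforms. The sketch below is purely illustrative: the keyword-based scorer, the thresholds, and the class names are assumptions for the sake of the example, not Telegram's implementation.

```python
# Illustrative sketch of an AI-assisted moderation pipeline (NOT Telegram's
# actual system): an automated classifier scores channels, clear-cut cases
# are restricted automatically, and borderline cases go to human review.
from dataclasses import dataclass


@dataclass
class Channel:
    channel_id: str
    description: str


@dataclass
class ModerationDecision:
    channel_id: str
    score: float   # hypothetical model confidence that content violates the ToS
    action: str    # "restrict", "human_review", or "allow"


# Hypothetical keyword list standing in for a trained ML classifier.
BANNED_TERMS = {"illegal trade", "drug market", "stolen data"}


def score_channel(channel: Channel) -> float:
    """Return a 0..1 violation score. A real system would use a trained model."""
    text = channel.description.lower()
    hits = sum(term in text for term in BANNED_TERMS)
    return min(1.0, hits / 2)


def moderate(channels: list[Channel],
             restrict_threshold: float = 0.9,
             review_threshold: float = 0.5) -> list[ModerationDecision]:
    decisions = []
    for ch in channels:
        score = score_channel(ch)
        if score >= restrict_threshold:
            action = "restrict"      # hidden from search / blocked automatically
        elif score >= review_threshold:
            action = "human_review"  # queued for a human moderator
        else:
            action = "allow"
        decisions.append(ModerationDecision(ch.channel_id, score, action))
    return decisions


if __name__ == "__main__":
    sample = [
        Channel("c1", "Daily tech news and gadget reviews"),
        Channel("c2", "illegal trade and drug market listings"),
    ]
    for decision in moderate(sample):
        print(decision)
```

The two-threshold design mirrors the division of labor the announcement describes: automation handles unambiguous violations at scale, while human moderators resolve the ambiguous middle ground.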
Telegram CEO Pavel Durov emphasized that the platform’s search capabilities, previously exploited for illicit purposes, are now safer. He said problematic content flagged by AI is no longer accessible through search and urged users to report any remaining violations through the platform’s designated reporting channels. Durov also noted that the initiative aligns with Telegram’s commitment to privacy, balancing security needs against the platform’s privacy-focused values.
As part of these efforts, Telegram updated its terms of service to clarify that it may share user data, including IP addresses and phone numbers, with law enforcement upon receiving valid legal requests. This policy, previously limited to combating terrorism, now extends to broader criminal activities like drug trafficking and child exploitation. Durov highlighted that such measures have been in place since 2018 but were streamlined for clarity this year.
These changes come in response to growing criticism of Telegram’s role in facilitating illegal activity. While the platform has long been praised for its privacy features, its unmoderated groups and channels have also attracted abuse. AI-powered moderation, coupled with transparency reports, aims to deter misuse while preserving Telegram’s core principles of user freedom and security.
Critics, however, remain skeptical, questioning whether the platform can sustain this level of enforcement over the long term. The updates have also raised concerns about the potential misuse of shared data in regions with limited safeguards for political dissidents.
This milestone reflects a broader trend of tech platforms adopting AI to tackle harmful content. Telegram’s actions signal its readiness to address safety concerns while navigating the complexities of privacy and compliance.