Doctoral intern dismissed for disrupting ByteDance AI training



In response to a report that circulated on WeChat last Friday about a security lapse in ByteDance’s AI department, the company issued a statement saying that an intern had maliciously interfered with model training this summer in the course of AI commercialization work. The TikTok owner underlined that the incident had not affected online operations or any other commercial project.

Online speculation that the incident had affected “over 8,000 GPU cards, causing millions of dollars in losses” was exaggerated, the company said.

Why it matters: The incident reveals security management issues in ByteDance’s technical training processes. With interns often granted critical roles in company projects, there is an urgent need for robust security protocols at tech companies.



Details: After investigation, the person involved was found to have been interning with the commercialization tech team and had not worked with the AI Lab, according to ByteDance. The intern was dismissed in August.

Last Friday, a message spread in several groups on WeChat, China’s leading social media platform, claiming that a leading tech company’s large model training had been infiltrated by an intern who inserted destructive code to undermine the reliability of training results.

The message claimed that over 8,000 GPU cards were affected, with losses running to tens of millions of dollars. The incident took place in June, when a doctoral student interning with ByteDance’s commercialization tech team, frustrated with the team’s resource allocation, deployed attack code to disrupt its model training tasks, local media outlet Jiemian reported.

The intern exploited a vulnerability in AI app-building platform Hugging Face (HF) to inject destructive code into ByteDance’s shared models, leading to unstable training performance that failed to meet expectations, according to the same Jiemian report. The company’s AML (automated machine learning) team was initially unable to determine the cause of the issues. The intern targeted the team’s model training tasks rather than the company’s Doubao model, which was already on the market.

Context: In 2023, China’s market for large model platforms and related applications was worth RMB 1.765 billion ($250 million), according to market intelligence firm IDC. Last year, the three leading AI models in the Chinese market were Baidu AI Cloud, SenseRobot, and Zhipu AI.
