Huawei’s AI research division, Noah's Ark Lab, has denied allegations that its newly open-sourced Pangu Pro MoE model copied or plagiarized rival Alibaba’s Qwen series models.
The controversy began shortly after Pangu Pro MoE was publicly released as open source. A research paper posted on GitHub by @HonestAGI reported an "extraordinarily high correlation" in attention-parameter distributions between Pangu Pro MoE (a 72B-parameter model) and Alibaba's 14B-parameter Qwen-2.5 model. The findings suggested that Pangu Pro MoE may have been developed by "upcycling" Qwen rather than being trained independently.
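The general approach behind such claims can be illustrated with a short, hypothetical sketch. The snippet below is not @HonestAGI's actual code; it assumes Hugging Face-style parameter naming (self_attn, q_proj.weight, and so on) and simply shows the idea of collecting a per-layer statistic of each model's attention weights as a "fingerprint," then measuring how strongly the two fingerprints correlate.

```python
# Hypothetical illustration of parameter-fingerprint comparison; names and
# conventions are illustrative, not taken from the HonestAGI paper's code.
import torch
from scipy.stats import pearsonr

def attention_fingerprint(model):
    """Per-layer standard deviations of attention projection weights,
    collected as a 1-D fingerprint vector (assumes Hugging Face naming)."""
    stats = []
    for name, param in model.named_parameters():
        if "self_attn" in name and name.endswith("proj.weight"):
            stats.append(param.float().std().item())
    return torch.tensor(stats)

def fingerprint_correlation(model_a, model_b):
    """Pearson correlation between two fingerprints; a value near 1.0
    would indicate unusually similar weight distributions."""
    fa = attention_fingerprint(model_a)
    fb = attention_fingerprint(model_b)
    n = min(len(fa), len(fb))  # the models may differ in depth
    r, _ = pearsonr(fa[:n].numpy(), fb[:n].numpy())
    return r
```

A high correlation on such coarse statistics is suggestive rather than conclusive, which is part of why the findings remain contested.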
Huawei has publicly rejected the plagiarism allegations, stating that while the development team did reference open-source code from other large language models, in line with industry standards, it made no use of proprietary datasets and engaged in no unauthorized copying. According to Noah's Ark Lab:
"The code implementation of some foundational components in Pangu Pro MoE referenced industry open-source practices, including portions of open-source code from other open-source LLMs," Noah's Ark Lab stated. "We strictly adhered to open-source license requirements and clearly marked copyright declarations in the source code files. This is not only a common practice in the open-source community but also aligns with the collaborative spirit advocated by the industry."
Huawei added that Pangu Pro MoE was trained entirely on Huawei Ascend NPUs and is optimized to handle large-scale distributed training efficiently, which it describes as a technical step forward for the AI field.
Pangu Pro MoE delivers performance comparable to leading models such as Qwen 3 32B, and even exceeds some larger models, such as Llama 4 Scout, on certain benchmarks. Its technical documentation highlights a "Mixture of Grouped Experts" architecture designed for improved load balancing and more efficient inference.
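To make the grouped-experts idea concrete, here is a minimal, hypothetical PyTorch sketch of grouped top-k routing. The class name, dimensions, and hyperparameters are invented for illustration and are not Huawei's implementation: experts are partitioned into equal groups, and the router activates the same number of experts in every group, so each device hosting one group receives an identical share of the work.

```python
# Minimal sketch of grouped top-k expert routing; all sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedExpertRouter(nn.Module):
    def __init__(self, hidden_dim=1024, num_groups=8,
                 experts_per_group=8, top_k_per_group=1):
        super().__init__()
        self.num_groups = num_groups
        self.experts_per_group = experts_per_group
        self.top_k = top_k_per_group
        # A single linear gate scores every expert for every token.
        self.gate = nn.Linear(hidden_dim, num_groups * experts_per_group)

    def forward(self, x):                       # x: (tokens, hidden_dim)
        scores = self.gate(x).view(-1, self.num_groups, self.experts_per_group)
        probs = F.softmax(scores, dim=-1)       # normalize within each group
        # Top-k *within each group*: every group contributes the same
        # number of experts per token, which is what balances the load
        # across the devices hosting the groups.
        weights, idx = probs.topk(self.top_k, dim=-1)   # (tokens, G, k)
        # Convert per-group indices to global expert ids.
        offsets = torch.arange(self.num_groups, device=x.device)
        global_idx = idx + (offsets * self.experts_per_group).view(1, -1, 1)
        return weights, global_idx

router = GroupedExpertRouter()
w, ids = router(torch.randn(4, 1024))
print(w.shape, ids.shape)   # torch.Size([4, 8, 1]) torch.Size([4, 8, 1])
```

The design contrast is with conventional top-k routing over all experts at once, where a handful of popular experts can end up concentrated on a few devices and become a bottleneck.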
Despite the controversy, Huawei's open-source approach and detailed technical disclosures have drawn both scrutiny and support from segments of the developer community, many of whom emphasize transparency, legal compliance, and a collaborative spirit as essential to advancing artificial intelligence.