The latest draft of the law targeting academic fraud is the first to specifically mention AI. Photo: Reuters

Chinese students risk losing degrees if caught using artificial intelligence to write papers

  • Draft laws being considered by the country’s top legislative body address the use of the technology for the first time
  • Concerns about students using software such as ChatGPT have been growing, prompting some universities to issue guidelines on the use of the technology
Students in China who use artificial intelligence to write papers could lose their degrees under a draft law being considered by the country’s top legislative body.
The law would be the first of its kind in China and comes amid growing concerns about the use of AI in academic settings and the heightened risk of plagiarism.

An earlier draft of the same law targeting “individuals who have obtained academic degrees through fraudulent means”, which the Ministry of Education put out for public consultation in 2021, did not include any reference to the use of AI to write papers.

The updated version, which refers directly to AI for the first time, was submitted by the State Council, the country’s cabinet, and will be reviewed by the Standing Committee of the National People’s Congress during a five-day session that ends on Friday, according to China News Service.

The draft law also allows for degrees to be revoked when they have been obtained using stolen or forged identities or through bribes, according to state media reports.

Concerns about students using the technology to cheat have been growing since ChatGPT emerged as a market leader late last year.

Video: Is China’s technology falling behind in the race for its own ChatGPT? (08:54)
Although the software is not officially available in China, there are ways to access it and there are also a number of Chinese chatbots that operate along similar lines, including the Ernie Bot developed by the search engine Baidu and Spark from the voice and language recognition developer iFlyTek.

A quick search of Chinese social media turns up numerous posts on how to use generative AI tools to write papers, on platforms such as Weibo and Xiaohongshu.

There have also been media reports about how students have been using the technology to write papers or school work.

Shangguan News, a media outlet based in Shanghai, reported in March that research from East China Normal University had found that around 40 per cent of college students in Shanghai used ChatGPT to help with their research.

Universities and academic journals around the world have already been introducing rules and guidelines on the use of AI to prevent its misuse.


For example, China’s Jinan University has said its journal’s philosophy and social sciences section will not accept papers solely or partly authored by any large language model, including ChatGPT, and authors will need to explain the use of AI tools in their papers.

The latest guidelines from the University of Hong Kong, which imposed a temporary ban on generative AI tools earlier this year, now allow them to be used as teaching aids. The university has also given its students free access to ChatGPT, with a strict cap on the number of times it can be used each month.

“For a law that addresses a very new technology, it’s important to undertake consultation with relevant stakeholders to understand how it is currently being used, and the pros and cons of using such technology for student learning,” said Zhou Yongmei, a professor of practice in institutional development at Peking University.

Video: Is AI better at maths than mathematicians? (02:19)

She said a consultation would generate a better understanding of the broad impact on students, educators, school administrators and law enforcement agencies, and could identify ways to harness the technology’s benefits while mitigating the risk of misuse.

“Moreover, a law can only be effective if it is enforceable. Without clear definitions or ways to detect the use of generative AI, a law will not have the intended effect,” she added.
