Should we let AI take over mundane work? Not so fast
- It may seem that outsourcing tasks to generative AI tools like ChatGPT could increase productivity
- However, it might actually limit the supplementary learning that occurs when people do certain tasks, so AI must be deployed with care
For example, a study by researchers from US universities, published in May, found that when answering patient questions randomly drawn from a social media platform, ChatGPT outperformed physicians in terms of both quality of information and empathy. And OpenAI has said GPT-4, the state-of-the-art model underlying ChatGPT Plus, could score in the top 10 per cent in the SAT reading exam for college admissions, the GRE verbal test for graduate school admissions and the uniform bar exam for lawyers.
These examples raise an important question: how does generative AI affect human capability when people no longer need to develop logical arguments or verify facts to create content?
Using generative AI to produce work is akin to outsourcing it to other people. Thus, we can infer its impact on human capability from what we know about outsourcing.
We conducted research on how contributions to Python, the well-known programming language, changed after the introduction of a crowdsourcing program called the Internet Bug Bounty (IBB), which rewards people for finding security vulnerabilities in open-source software but excludes the official maintainers – the people who report and fix bugs, and make enhancements – from the reward.
The IBB program should have made the Python maintainers’ job easier, giving them more time to perform other tasks and hence increasing their productivity in those tasks.
Interestingly, we found the opposite. The official Python maintainers not only discovered fewer bugs after the IBB program was introduced, they also made fewer enhancements to Python. In other words, their overall productivity fell after bug reporting was outsourced.
We examined many possible reasons for the drop in productivity, including a loss of motivation from being excluded from the IBB rewards, and concluded that it was mainly due to a loss of inter-task learning. The drop in productivity was bigger among maintainers who had reported more bugs before the IBB program than among those who had reported fewer, and bigger when the bug-reporting and enhancement tasks were more closely related.
We surveyed a number of Python maintainers, most of whom said that finding vulnerabilities helped them learn and gave them ideas for how to enhance Python. This implies that when performing a task, we not only contribute to the outcome, we also learn at the same time.
This inter-task learning side-effect is illuminating. It suggests that learning and other capabilities may be sacrificed when we enjoy the power and convenience of generative AI. The short-term gain in productivity could be offset by the long-term loss in capability. Worse, this capability may be key to our contribution to future tasks, including, possibly, intelligent knowledge work.
So how should generative AI use be managed in an organisation? First, distinguish between objectives – are we seeking work output or learning and training? In most places, work output may be the priority, so it would be reasonable to actively deploy generative AI tools. Even so, we must be aware this may slow employees’ learning of how to perform tasks.
Second, classify tasks by their nature – in particular, whether they are interrelated. Generative AI tools may be better deployed for tasks that have little relation to other tasks, such as pure automation jobs. We might wish to discourage AI use for deeply intertwined tasks.
Third, more training should be provided once generative AI is deployed. People are intrinsically lazy. If we let them use generative AI without supplementing it with learning opportunities, their ability may deteriorate over time.
At school, the priority should be learning rather than work output. We must seek to prevent the unrestrained use of generative AI in homework or assignments. If students don’t do homework themselves but outsource it entirely to AI, they may soon lose the ability to reason, innovate and produce more advanced knowledge.
We must revisit the common question, “Why waste time doing mundane work when AI can do it for us?” This is a dangerous sentiment because we do work not only to produce an outcome, but to learn as well. Such learning builds the foundation for future knowledge production.
We should not get carried away by generative AI, thinking that only work outcomes matter. The learning itself is equally important, if not more so.
Kai-Lung Hui is Elman Family Professor of Business and senior associate dean at Hong Kong University of Science and Technology Business School
Jiali Zhou is an assistant professor at American University in Washington, DC
The views expressed here are the authors’ own