AI Copilots Are Changing How Coding Is Taught

Generative AI is transforming the software development industry. AI-powered coding tools are assisting programmers in their workflows, and jobs in AI continue to grow. But the shift is also evident in academia—one of the major avenues through which the next generation of software engineers learns how to code.

Computer science students are embracing the technology, using generative AI to help them understand complex concepts, summarize complicated research papers, brainstorm ways to solve a problem, come up with new research directions, and, of course, learn how to code.

“Students are early adopters and have been actively testing these tools,” says Johnny Chang, a teaching assistant at Stanford University pursuing a master’s degree in computer science. He also founded the AI x Education conference in 2023, a virtual gathering of students and educators to discuss the impact of AI on education.

So as not to be left behind, educators are also experimenting with generative AI. But they’re grappling with how to adopt the technology while still ensuring students learn the foundations of computer science.

“It’s a difficult balancing act,” says Ooi Wei Tsang, an associate professor in the School of Computing at the National University of Singapore. “Given that large language models are evolving rapidly, we are still learning how to do this.”

Less Emphasis on Syntax, More on Problem Solving

The fundamentals and skills themselves are evolving. Most introductory computer science courses focus on code syntax and getting programs to run. Knowing how to read and write code remains essential, but testing and debugging—which aren’t commonly part of the syllabus—now need to be taught more explicitly.

“We’re seeing a little upping of that skill, where students are getting code snippets from generative AI that they need to test for correctness,” says Jeanna Matthews, a professor of computer science at Clarkson University in Potsdam, N.Y.
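To see what that skill looks like in practice, consider a minimal, hypothetical version of such an exercise (the function and scenario below are illustrative, not drawn from any particular course). The snippet stands in for plausible-looking AI-generated code, and a handful of test cases is enough to expose its bug:

    # Hypothetical stand-in for an AI-generated snippet that a student
    # is asked to verify. It looks correct at a glance but omits the
    # 400-year rule of the Gregorian calendar.
    def is_leap_year(year: int) -> bool:
        return year % 4 == 0 and year % 100 != 0

    # A few simple test cases are enough to catch the error.
    assert is_leap_year(2024) is True
    assert is_leap_year(1900) is False
    assert is_leap_year(2000) is True  # Fails: 2000 is a leap year.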

Another vital expertise is problem decomposition. “This is a skill to know early on because you need to break a large problem into smaller pieces that an LLM can solve,” says Leo Porter, an associate teaching professor of computer science at the University of California, San Diego. “It’s hard to find where in the curriculum that’s taught—maybe in an algorithms or software engineering class, but those are advanced classes. Now, it becomes a priority in introductory classes.”
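As a rough illustration of the kind of decomposition Porter describes, a beginner-scale task—here, a hypothetical word-frequency report, not an example from his course—might be split into small functions, each narrow enough to hand to an LLM, or write by hand, and verify independently:

    # Hypothetical illustration of problem decomposition: a word-
    # frequency report broken into small, independently testable pieces.

    def load_text(path: str) -> str:
        """Read the raw text of a file."""
        with open(path, encoding="utf-8") as f:
            return f.read()

    def tokenize(text: str) -> list[str]:
        """Split text into lowercase words, stripping punctuation."""
        return [w.strip(".,;:!?\"'()") for w in text.lower().split()]

    def count_words(words: list[str]) -> dict[str, int]:
        """Count how often each word appears."""
        counts: dict[str, int] = {}
        for w in words:
            if w:
                counts[w] = counts.get(w, 0) + 1
        return counts

    def top_n(counts: dict[str, int], n: int = 10) -> list[tuple[str, int]]:
        """Return the n most frequent words."""
        return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:n]

Each piece has a clear contract, so a student can check an LLM’s output for one function without having to trust the whole program at once.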

“Given that large language models are evolving rapidly, we are still learning how to do this.” —Ooi Wei Tsang, National University of Singapore

As a result, educators are modifying their teaching strategies. “I used to have this singular focus on students writing code that they submit, and then I run test cases on the code to determine what their grade is,” says Daniel Zingaro, an associate professor of computer science at the University of Toronto Mississauga. “This is such a narrow view of what it means to be a software engineer, and I just felt that with generative AI, I’ve managed to overcome that restrictive view.”

Zingaro, who coauthored a book on AI-assisted Python programming with Porter, now has his students work in groups and submit a video explaining how their code works. Through these walk-throughs, he gets a sense of how students use AI to generate code, what they struggle with, and how they approach design, testing, and teamwork.

“It’s an opportunity for me to assess their learning process of the whole software development [life cycle]—not just code,” Zingaro says. “And I feel like my courses have opened up more and they’re much broader than they used to be. I can make students work on larger and more advanced projects.”

Wei Tsang echoes that sentiment, noting that generative AI tools “will free up time for us to teach higher-level thinking—for example, how to design software, what is the right problem to solve, and what are the solutions. Students can spend more time on optimization, ethical issues, and the user-friendliness of a system rather than focusing on the syntax of the code.”

Avoiding AI’s Coding Pitfalls

But educators are cautious given an LLM’s tendency to hallucinate, that is, to produce output that is plausible but wrong. “We need to be teaching students to be skeptical of the results and take ownership of verifying and validating them,” says Matthews.
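One concrete habit that supports this kind of verification is cross-checking generated code against a trusted reference. The sketch below is a hypothetical example, not Matthews’ classroom material: it compares a stand-in for a model-generated sort routine against Python’s built-in sorted() on randomized inputs.

    import random

    def ai_suggested_sort(xs: list[int]) -> list[int]:
        """Stand-in for a model-generated implementation (a bubble sort)."""
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    # Verify against a trusted oracle on many random inputs.
    for _ in range(1000):
        data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        assert ai_suggested_sort(data) == sorted(data), data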

Matthews adds that generative AI “can short-circuit the learning process of students relying on it too much.” Chang agrees that over-reliance is a pitfall and advises his fellow students to explore possible solutions on their own first, so they don’t lose out on critical thinking and effective learning. “We should be making AI a copilot—not the autopilot—for learning,” he says.

“We should be making AI a copilot—not the autopilot—for learning.” —Johnny Chang, Stanford University

Other drawbacks include copyright and bias. “I teach my students about the ethical constraints—that this is a model built off other people’s code and we’d recognize the ownership of that,” Porter says. “We also have to recognize that models are going to represent the bias that’s already in society.”

Adapting to the rise of generative AI involves students and educators working together and learning from each other. For her colleagues, Matthews’ advice is to “try to foster an environment where you encourage students to tell you when and how they’re using these tools. Ultimately, we are preparing our students for the real world, and the real world is shifting, so sticking with what you’ve always done may not be the recipe that best serves students in this transition.”

Porter is optimistic that the changes they’re applying now will serve students well in the future. “There’s this long history of a gap between what we teach in academia and what’s actually needed as skills when students arrive in the industry,” he says. “There’s hope on my part that we might help close the gap if we embrace LLMs.”
