Dumbing down students…

larcombe

ChatGPT, built on GPT-3 and soon on its successor, GPT-4, will be available to students for the rest of their lives. These are tools that we [we who have largely left the education system] could never have imagined while we were learning. Here are some thoughts about how AI is going to affect current students at any level, and what skills we should teach them to make them effective, contributing citizens in society.

First, we must educate them about the society they exist in now, not the one we would like to exist, nor the one that we as former students and parents grew up in. It is today’s society that includes tools like ChatGPT.

When the first handheld calculators appeared, the answer for teachers whose students suddenly had access to this new tool was to teach them how to use it, not to pretend they wouldn’t have access to it in the real world.

It is the same with ChatGPT and other AI tools. Teach students to think. Teach them to craft the right questions for the AI. Talk with them to confirm they understand the answers they are getting, and show them how to distinguish good AI answers from suspect or inadequate ones.

The problem with ChatGPT is that it doesn’t know when it’s wrong. It will be just as confident (and convincing) with incorrect answers as with correct ones.

The point of computer science classes and assignments is to teach students the fundamentals of coding that they’ll need to move on to more advanced topics.

The 4th-year classes I took as part of my Computer Science education built on the knowledge I gained in 3rd year, which built on 2nd year, and so on.

If you don’t put in the effort on the early work, you miss the fundamentals you need to succeed. If students don’t do those assignments themselves, they won’t recognise when the AI gives them an incorrect result, and they won’t know how to fix it.

To be honest, I don’t really like my calculator analogy, because AI is different from calculators. Calculators take over the grunt work of the process. AI takes over EVERYTHING. If you sit down to a calculus exam with a standard calculator but without understanding the material, you won’t pass, because you won’t know how to use the calculator to solve the problems.

ChatGPT is different. It will do EVERYTHING for you. You don’t need to know anything to use it, and so you have no way of knowing whether it’s being accurate or handing you well-presented garbage.

That’s what is so dangerous about this. There’s such an emphasis on students getting good grades that educators try to combat the desire to cheat by making cheating difficult. During the pandemic, when students learned from home, there was a massive uptick in cheating, and university programmes are now enrolling students who can’t complete basic high-school assignments.

The urge to use a tool that lets you skip the entire learning process, one that earns you an ‘A’ on your hardest assignment in a couple of seconds, is very dangerous, because kids won’t learn the basics of these topics. Using an AI to solve a problem you could solve yourself is very different from asking it to solve a problem beyond your understanding: in the latter case, you have no way to tell whether it has done so correctly.

However, reading the documentation and API information for OpenAI’s earlier GPT-3 models, you can see they definitely have the ability to report how likely each piece of output is. In the OpenAI Playground the output is colour-coded if you turn that option on, and through the API you get more detailed information. ChatGPT as exposed to the public right now is basically a tech demo, a toy version for us to play with while they use our input to better train the model. Just as with the current GPT-3 models, I’m certain that once this is fully available, they will expose ways to tell how confident the responses are.
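
As an illustration, the GPT-3 completions API will return per-token log probabilities if you ask for them; this is what the Playground’s colour-coding visualises. Here is a minimal sketch, assuming the GPT-3-era openai Python package and the text-davinci-003 model (the prompt is purely illustrative):

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="The capital of Australia is",
    max_tokens=5,
    logprobs=5,  # also return the 5 most likely alternatives per token
)

logprobs = response["choices"][0]["logprobs"]
for token, logprob in zip(logprobs["tokens"], logprobs["token_logprobs"]):
    # A log probability near 0 means the model was confident in this token;
    # a strongly negative value means it was much closer to guessing.
    print(f"{token!r}: {logprob:.2f}")
```

Token-level likelihood isn’t the same thing as factual correctness, but it is exactly the kind of signal a polished interface could surface to flag suspect answers.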

I think this will be even more disruptive than what I have already written above. When I was young, programmers had to know a LOT about how the hardware they were programming for worked. You’d have printed manuals with information such as the physical memory addresses of the graphics adapter, and you’d memorise common base addresses. You had to ensure that processor interrupt lines (IRQs) were available for your hardware and that direct memory access (DMA) channels were free for data transfer. When you needed performance, you used assembly language, telling the computer exactly how to move bytes of data around in memory and which machine-code instructions to run against them.

Today, universities churn out computer science graduates who have never learned, let alone used, these details, and who barely know such concepts exist. Those things are all still there, though. The need to address memory and execute machine code didn’t go away; a small percentage of engineers still know and work on such things, but it has mostly been abstracted away. That abstraction has been iterated upon heavily over the last 20 years. When I started working in IT, I would have a chap (probably Alex Robinson or Mark Lee) at the data-center put a CD in a server and manually respond to prompts, then download software manually, then type up config files, and so on. Today, all of that is automated. Data-center staff wheel in entire racks of servers, plug them in, and turn them on; the machines boot, are configured remotely and automatically using technologies such as Kubernetes and Ansible, and are added to a pool of resources. Another script detects they’re online and starts adding virtual systems to them as needed. Those systems pull down code or containers and start serving content to the world.

What I’m seeing here is that what those kids in college right now are learning about Python and C++ and Kubernetes will be redundant soon after they graduate. The use case of this technology isn’t that it can do your homework or grade your papers. The actual use case is that most programmers won’t be necessary anymore. Programmers will instead interact with an API for a GPT-type AI, using natural language to write code dramatically faster than they do now. Most of them simply won’t need to know exactly how the code works; there will be a small subset of engineers to escalate those problems to.
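
To make that concrete, here is a hypothetical sketch of such a workflow, again assuming the GPT-3-era openai Python package; the model name, prompt, and settings are illustrative assumptions, not a prescribed interface:

```python
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# A natural-language specification takes the place of hand-written code.
spec = (
    "Write a Python function called parse_iso_date that takes an "
    "ISO 8601 date string and returns a datetime.date object."
)

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=spec,
    max_tokens=200,
    temperature=0,  # low temperature keeps code generation close to deterministic
)

generated = response["choices"][0]["text"]
print(generated)  # a human (or a test suite) still has to verify this before it ships
```

In that loop, the engineer’s job shifts from writing the function to specifying it and verifying the result, which is exactly why the small subset who understand what the generated code actually does become the escalation point.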

I don’t think this will be limited to IT either. How much time does a professor spend making lesson plans and grading papers? How many people make a living writing content for websites? Providing customer support? This has the potential to disrupt virtually any profession where the primary task is language in / language out, whether that language is human or machine. If someone can do a job remotely today, this sort of technology will take over some of that work tomorrow: at first by people interacting with it directly, but then by automated processes interacting with it by API and spitting out the final product.

So, what are the best subjects/areas/concepts to focus on for someone about to enter university and get a computer science degree, in order to be best prepared for this ‘Brave New World’ ChatGPT will usher in? Languages!