First looks – GPT-4

larcombe

I first spoke about GPT-4 maybe a year ago. The scaling up of the model's size and complexity is almost mind-boggling – but does it make it any better?

Being on the early beta, I have been testing GPT-4 for a couple of days now, comparing it with GPT-3.5 in side-by-side tests. To be honest, I did not expect GPT-4 to be a substantial improvement over 3.5 for most of the things I currently use it for, however…

Some of my most common uses:

  • “Rewrite this paragraph and explain it like I am five”
  • “Create a bash shell script that does________”
  • “Write a Calc formula that:”
  • “Summarise this text into 6 lines:”
  • “Create a list of:”
  • “Give me a list of all the cities with:”

After comparing the 3.5 and 4.0 results, here are my conclusions:

  • GPT-4 is more concise and to the point
  • GPT-4 is better at summarising longer content (around 25k words)
  • GPT-4 is better at answering questions once they reach a certain level of complexity
  • GPT-4 is better in languages other than English
  • GPT-4 can accept image prompts (although it outputs only text)

For me, the most significant differences in GPT-4 are the conciseness and the summarisation of longer articles.

I haven’t noticed any difference in the range of training data for GPT-4. When asked how recent its information is, it says September 2021 (the same cutoff as GPT-3.5).

But if the rumours are true that GPT-4 will eventually be able to browse the current web, this will open up many more doors for users.

I can tell you that this version is no more intelligent than previous versions, and it is still appallingly bad at basic mathematics.