), and it’ll spit out something relatively coherent.

Then there are those who think this goes beyond Google: Human jobs are in trouble, too.

Another take, from the Australian Computer Society’s flagship publication Information Age, suggested the same.

The Telegraph announced the bot could “do your job better than you.”

I’d say hold your digital horses.

ChatGPT isn’t going to put you out of a job just yet.

A great example of why is provided by the story published in Information Age.

ChatGPT can’t be trained to do your job.

It’s a word organizer, an AI programmed in such a way that it can write coherent sentences.

(It also isn’t trained on up-to-the-minute data, but that’s another thing.)

It definitely can’t do the job of a journalist.

To say so diminishes the act of journalism itself.

ChatGPT won’t be heading out into the world to talk to Ukrainians about the Russian invasion.

It certainly isn’t jumping on a ship to Antarctica to write about its experiences.

It’s interesting to see how positive the response to ChatGPT has been.

But the major reason it’s really captured attention is because it’s so readily accessible.

That’s also contributed to the bot being a little overhyped.

Strangely enough, ChatGPT is the second AI to cause a stir in recent weeks.

The first was Meta’s Galactica, a model that could generate answers to questions like, “What is quantum gravity?”

or explain math equations.

Much like ChatGPT, you drop in a question, and it provides an answer.

Galactica was trained on more than 48 million scientific papers and abstracts, and it provided convincing-sounding answers.

After a backlash, the Meta AI team shut the project down within two days.

ChatGPT doesn’t seem like it’s headed in the same direction.

It feels like a “smarter” version of Galactica, with a much stronger filter.

ChatGPT has also been trained to be conversational and admit to its mistakes.

And yet, ChatGPT is still limited the same way all large language models are.

Its purpose is to construct sentences or songs or paragraphs or essays by studying billions (trillions?) of words that exist across the web.

It then puts those words together, predicting the best way to configure them.
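To make that idea of “predicting the best way to configure words” concrete, here is a deliberately tiny sketch: a bigram model that counts which word follows which in a toy corpus and always picks the most frequent continuation. This is an illustration only; real large language models use neural networks trained on vastly more data, but the core move, predicting the next word from what came before, is the same.

```python
from collections import defaultdict

# A toy "training corpus" -- real models study billions of words.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Notice the model has no idea what a cat is; it only knows which words tend to sit next to each other, which is why fluent output and truthful output are different things.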

In doing so, it writes some pretty convincing essay answers, sure.

It also writes garbage, just like Galactica.

That raises some questions: How can you learn from an AI that might not be providing a truthful answer?

What kind of jobs might it replace?

Will the audience know who or what wrote a piece?

It’s a tool, not a replacement.

We know, because that’s exactly how we used it for this piece.

The headline you see on this article was, in part, suggested by ChatGPT.

But its suggestions weren’t perfect. It suggested using terms like “Human Employment” and “Humans Workers.”

Those felt too official, too… robotic.

So, we tweaked its suggestions until we got what you see above.

For now, I’m feeling like my job as a journalist is pretty secure.