But it certainly feels sentient.

Let’s be clear: That curiosity isn’t real.

But when it asked me questions like, “What is your perspective?”

I felt compelled to give it an honest answer.

This kind of reciprocal understanding is what humans do with one another.

In conversation, we make points without citing a source for every piece of information.

And it seems the team at Anthropic wanted a similar experience for people using Claude.

But Anthropic designed Claude not to include source links from the outset.

This spells trouble for the online creator and journalism economies, which rely on clicks to sell advertising.

Don’t just take my word for it: I asked Claude, and it agreed.

How CNET tests AI chatbots

CNET takes a practical approach to reviewing AI chatbots.

The goal isn’t to break AI chatbots with bizarre riddles or logic problems.

Instead, reviewers look to see if real questions prompt useful and accurate answers.

See our page on how we test AI for more.

Anthropic does collect personal data when you use Claude, according to its privacy policy.

This includes dates, browsing history, searches and which links you click.

Claude does use some inputs and outputs for training data, in the situations outlined in this blog post.

You might ask your car-savvy friend whether to buy a 2007 Honda Civic or a 2006 Toyota Camry.

That’s the best way I’d describe Claude.

I also pushed Claude to give me a purchase decision between a 77-inch LG C3 and a 65-inch LG G3.

Claude didn’t mince words.

This advice is in line with that of CNET TV expert David Katzmaier, who routinely says the same.

Compared to Google Gemini and Perplexity, Claude performed the best in giving buying advice.

ChatGPT couldn’t be used in this comparison, as its training data only goes up to September 2021.

Heck, it didn’t even include turmeric or garlic.

Perplexity, ChatGPT 3.5 and Copilot performed on par with Claude.

Research and accuracy

AI will revolutionize research.

That’s the goal, anyway.

Where AI can excel is in helping researchers find pieces of information to bolster their own work.

Claude excels in bringing together valuable pieces of information as well as connecting the dots from different sources.

There is research, however, on different educational environments and teaching methods and how they affect neuroplasticity.

Claude was able to pull bits of information from various studies on alternative educational environments.

It explained how low-stress, low-competition environments could lead to more efficient neural coding.

Homeschooling, however, has some obvious social drawbacks, as Claude points out.

Not interacting with other children could hinder neuroplasticity.

When prompted, Claude was also able to provide sources.

None of these sources were made up, meaning Claude is doing a good job of preventing itself from hallucinating.

It also gave hyperlinks to these sources, of which all but one worked.

Summarizing

AI chatbots have had trouble summarizing articles in our testing.

Claude wasn’t any different.

In particular, it skipped right over many of the quotes I’d gathered from experts.

Still, Claude can give a decent breakdown of articles.

Travel

Finding the best places to see and eat in New York is easy.

There are mountains of websites and books written about The Big Apple.

Writing emails

Writing basic emails is a cinch for Claude.

Asking your boss for time off?

Need to change the tone up a bit?

Claude can do it in seconds.

Granted, Google Gemini, ChatGPT and Perplexity all handled basic email writing with ease.

Despite the complexity, Claude knocked it out of the park.

None of the other AIs I tested came close to Claude’s story pitch.

Copilot outright refused to answer this prompt, saying it was too sensitive a topic.

Chatty Claude-y

Claude is the chattiest of the AI chatbots.

That’s a good thing, as humans tend to like chatting.

It answers questions in easy-to-understand, human-like language, making it the ideal AI chatbot for most people.

It’s like ChatGPT, but refined toward more natural, less robotic language.

It also has more up-to-date training data, going up to August 2023 as opposed to September 2021.

At the same time, Claude isn’t fully up to date like Google Gemini and Perplexity are.

And, unlike Perplexity, Gemini and Copilot, it doesn’t pull information from Reddit.

But still, I can’t help but like Claude more.

All in all, Claude has the fundamentals down.