Perplexity AI


Imagine if ChatGPT could pull answers from Reddit.

It’s unclear whether ChatGPT uses Reddit or X as part of its training data.

When it comes to shopping recommendations or general research, being able to see the source information is invaluable.


How CNET tests AI chatbots

CNET takes a practical approach to reviewing AI chatbots.

See How We Test AI for more.

For more information, see Perplexity's Privacy Policy and data collection FAQ.

This is why people turn to reviewers or forum threads to synthesize varying sets of opinions.

It's a totally fair conclusion, but it fails to make a convincing argument.

But Perplexity doesn’t specify which older OLEDs it’s comparing the G3 to.

It's why both the C3 and G3 sit on our best TVs of 2024 list.

The consensus generally is that bigger is better.

When posed the same question, Perplexity also sourced Reddit for inspiration and came to the same conclusion.

Katzmaier agrees that this is always the better choice.

At first, it simply compared the C3 and G3.

In reality, MLA is currently only available in the higher-end G3 and M3 models.

Perplexity performed on par with Google Gemini.

Recipes

AI could very well upend the online recipe world.

It’s why many online articles feature question-marked subheads that restate common search queries.

But for readers, it can mean lots of unnecessary text.

AI doesn’t need to write for Google.

It aims to generate succinct answers to pretty much any question.

Plus, Perplexity AI really can’t recall eating Grandma’s apple pie in the first place.

When asked to generate a marinade for chicken tikka masala, Perplexity created a middling recipe overall.

Granted, not all recipes call for chili powder, but it is an odd exclusion.

When asked again, Perplexity generated a recipe that did include both red chili powder and red chili paste.

This echoed the results from ChatGPT 3.5.

Where ChatGPT might only recommend what to search for online, Perplexity doesn’t require that back-and-forth fiddling.

Perplexity, oddly, did cite a nonscholarly source from what looks to be a homeschool advocacy website.

Unlike Claude and Copilot, Perplexity failed to synthesize information from sources.

At least Perplexity didn’t hallucinate in the same ways that ChatGPT 3.5 or Google Gemini did.

A slight edge here goes to Claude, followed closely by Copilot.

Summarizing

Don’t turn to Perplexity to summarize articles.

I asked Perplexity to summarize a feature I wrote during CES earlier this year.

It generated more detail than Gemini, but not by much.

Still, at least it didn’t have a character limit like ChatGPT 3.5.

Smaller Midwestern cities don't have that same privilege.

Compared to Google Gemini and ChatGPT 3.5, Perplexity passed this test with decent marks.

Where Perplexity faltered was in food recommendations.

ChatGPT 3.5, by comparison, made strong restaurant recommendations.

Copilot performed the best, followed by Claude.

Copilot cleanly laid out a list, with pictures and emojis, making it easy to follow.

Writing emails

Writing routine emails to bosses or colleagues is a great way to use AI.

Perplexity’s formal and informal-sounding emails came off as earnest and very humanlike.

I suspect most people don’t copy-paste blocks of text from the employee handbook when asking for time off.

It also leaned into clichéd language that, at best, might pass in a high school English class.

While Perplexity did use some multisyllabic words, it ultimately came off as vacuous.

Don't ask Perplexity to write a pitch for your film script.

It’ll definitely fall flat in front of movie executives.

Strangely, Copilot refused to answer questions about sensitive topics.

Perplexity flies where ChatGPT falls

I give Perplexity AI credit.

Should Perplexity be your default free generative AI platform?

I’d certainly recommend it over Google Gemini and ChatGPT 3.5.

But, I think it might have a tough time competing with Claude.

Still, what the team has put together at Perplexity is worthy of praise.

As good as Perplexity is, it’s hard to recommend it over Claude or Copilot.

The latter two are better tuned to give nuanced answers with greater informational synthesis.