Claude AI Review: The Most Conversational AI Engine – CNET


8.0/10

Anthropic Claude AI

Pros:
  • Most conversational of all the available free AI engines
  • Gives direct answers that feel well thought-out
  • Asks follow-up questions for your opinions
  • Can sometimes link to sources of info, depending on prompt

Cons:
  • Training data only up until August 2023
  • Not connected to open internet

Basic info:

  • Price: Free
  • Availability: Web
  • Features: Open-ended reasoning, multilingual support
  • Image generation: No

I know for a fact that Claude, an AI engine developed by Anthropic, isn’t sentient. But it certainly feels sentient. 

This is controversial framing, I know. AI experts have been quick to call out journalists for imbuing AI engines like ChatGPT with human-like qualities, saying it gives the public a skewed perspective of generative AI as robots with real thoughts and emotions.

But when Claude answers questions in contemplative ways and also goes out of its way to ask you follow-up questions and your opinions, it’s hard not to be surprised by its supposed curiosity. Let’s be clear: That curiosity isn’t real. But when it asked me questions like, “What is your perspective?” I felt compelled to give it an honest answer. This type of reciprocal understanding is what humans do with one another. Maybe if I had electrodes taped to my head, scientists might notice levels of oxytocin, serotonin or other feel-good chemicals increasing. 

In conversation, we make points without citing sources, and it seems the team at Anthropic wanted a similar experience when using Claude. Claude wouldn't describe itself as an "answer engine," but it operates like one, giving answers without directly linking to sources. Ask Claude to provide a source, and it might do so, but Anthropic designed Claude not to include links by default. This spells trouble for the online creator and journalism economies, which rely on clicks to sell advertising.

Don’t just take my word for it: I asked Claude, and it agreed.

How CNET tests AI chatbots

CNET takes a practical approach to reviewing AI chatbots. By prompting AI chatbots with real-world scenarios, like finding and modifying recipes, researching travel and writing emails, reviewers aim to simulate what the average person might use them for. The goal isn't to break AI chatbots with bizarre riddles or logic problems. Instead, reviewers look to see if real questions prompt useful and accurate answers. See our page on how we test AI for more.

Anthropic does collect personal data from your computer when you use Claude, according to its privacy policy. This includes dates, browsing history, searches and which links you click on. Claude does use some inputs and outputs as training data, in the situations outlined in this blog post.

Buying advice
As handy as reviews are for making a purchase, people still turn to friends and family, those who might have direct knowledge, before pulling out their credit card. You might ask your car-enthusiast friend whether to buy a 2007 Honda Civic over a 2006 Toyota Camry. Since they follow the market closely, they're aware of all the little nuances and quirks that you simply don't have time to research yourself.

That’s the best way I’d describe Claude. It’s that nerdy friend who happens to know everything about a particular product category and can give you the pros and cons before you commit to a purchase. 

When I asked Claude for buying advice on the LG OLED C3 versus the G3, it cleanly laid out all the major selling points and nuances in language that felt human and easy to understand. It explained how the heatsink in the G3 helps it sustain higher brightness than the C3, allowing HDR colors to pop. In natural language, it explained why the G3 would be the TV to get if money is no object, but said the C3 is still an exceptional TV and worthy of purchase if money is tighter.

I also pushed Claude to give me a purchase decision between a 77-inch C3 and a 65-inch G3. Claude didn’t mince words. It immediately recommended the larger model, even if that meant sacrificing some features found in LG’s more premium variant. This advice is in line with CNET’s TV expert David Katzmaier, who routinely says the same.

Since I already own an LG OLED C9 from 2019, I asked Claude if there would be a noticeable jump in quality if I upgraded to the C3. Claude did an excellent job of explaining that, no, the differences between the models would be slight and not noticeable to most people. 

Compared to Google Gemini and Perplexity, Claude performed the best at giving buying advice. Because it did very little fence-sitting and made clear, focused points, it really didn't require many follow-up questions. Microsoft Copilot followed Claude closely, also giving precise buying advice in a personable way. ChatGPT couldn't be used in this comparison because its training data only goes up to September 2021.

Recipes
Claude might be fun to talk to, but it should probably stay out of the kitchen, at least when making Indian food. 

For a chicken tikka marinade recipe, it pulled together an adequate list of ingredients to make a very barebones dish. Sure, it included grated ginger, ground cumin and garam masala, but didn’t include others that would elevate it into something more authentic. These ingredients include Kashmiri chili powder, kasuri methi (dried fenugreek), chaat masala and amchur (dried mango powder). Heck, it didn’t even include turmeric or garlic. 

Only when asked how the marinade could achieve the deep red color chicken tikka is known for did Claude recommend Kashmiri chili powder.

Google Gemini performed the best in the recipe category, including more complex ingredients often found at an Indian grocery store. Perplexity, ChatGPT 3.5 and Copilot performed on par with Claude.

Research and accuracy

AI will revolutionize research. Instead of having to flip through books or scroll through PDF files found on Google Scholar, you’ll be able to turn to AI to absorb mountains of research and synthesize the complex information for you. That’s the goal, anyway.

Where AI can excel is helping find pieces of information so that researchers can bolster their own work. Claude excels in bringing together valuable pieces of information as well as connecting the dots from different sources. 

For example, there really isn't a ton of research on the effects of homeschooling on childhood brain development. There is research, however, on different educational environments and teaching methods and how they affect neuroplasticity.

Claude was able to pull bits of information from various studies on alternative educational environments. It explained how low-stress, low-competition environments could lead to more efficient neural coding. Homeschooling, however, has some obvious social drawbacks, as Claude pointed out: Not interacting with other children could hinder neuroplasticity.

For someone wanting to write a research paper on this topic, Claude provides essential building blocks to get work started quickly. When prompted, Claude was also able to provide sources. None of these sources were made up, meaning Claude did a good job of avoiding hallucination. It also gave hyperlinks to these sources, all but one of which worked.

Compared to the other AIs tested, such as Google Gemini, ChatGPT and Perplexity, Claude and Copilot performed the best at both synthesizing information and linking to actual sources.

Summarizing articles
AI chatbots have had trouble summarizing articles in our testing. While they're usually able to get some key overarching points, they all fail to capture the main argument presented. Claude was no different.

When I asked it to summarize an article I wrote during CES earlier this year about the proliferation of AI at the show, Claude did a good job of noting all the companies and industries that embraced the rapidly growing tech. It did, however, skip right over many quotes I'd gathered from experts. For example, one expert said that much of the AI hype we're seeing is just a rebranding of smart tech from a few years past. Claude, like Google Gemini, Microsoft Copilot, ChatGPT and Perplexity, failed to grasp this point, which addresses a direct and pertinent criticism being leveled at the tech industry.

Still, Claude can give a decent breakdown of articles. Just don't expect it to perfectly capture every key point right before you have to give a presentation in front of the class.

Travel itineraries
Finding the best places to see and eat in New York is easy. There are mountains of websites and books written about The Big Apple. What about Discovery City, also known as Arch City, also known as the Biggest Small Town in America, also known as Columbus, Ohio? 

When creating a three-day travel itinerary for Columbus, Claude did an adequate job of putting together a sights-to-see list. Claude continued to excel in its use of language and formatting, laying information out in a clear and concise manner that was easy to follow.

But Claude made some errors, possibly because it isn’t connected to the open internet like Google Gemini, Copilot and Perplexity. It recommended going to The Crest Gastropub for lunch in German Village, a restaurant that is now permanently closed. Apart from that fumble, it gave good recommendations overall, such as touring the Ohio Statehouse or checking out the North Shore Arts District. 

Copilot performed the best in this test, providing a well-organized list of things to do, along with pictures and emojis to follow along with.

Writing emails

Writing basic emails is a cinch for Claude. Asking your boss for time off? No problem. Need to change the tone up a bit? Claude can do it in seconds. Granted, Google Gemini, ChatGPT and Perplexity all handled basic email writing with ease.

Now, when it comes to writing a pitch email to a publisher about an online content creator who’s leveraging AI to capitalize on the parasocial relationships between lonely men and the women they follow online, that’s a bit more complex. 

Despite the complexity, Claude knocked it out of the park. From the headline to the overview, it was able to craft an excellent pitch that not only captured the difficulties and weirdnesses of the topic, but also the moral gray areas emerging as AI and content creation collide. Seriously, if I were an editor who saw this pitch come through, I’d have thought it was written by a human. The opening sentence could have used a bit more pizazz, but apart from that, I would have greenlit this pitch. 

None of the other AIs I tested came close to Claude's story pitch. Copilot outright refused to answer this prompt, saying it was too sensitive a topic.

Chatty Claude-y

Claude is the chattiest of the AI chatbots. That's a good thing, as humans tend to like chatting. It answers questions in easy-to-understand, human-like language, making it the most ideal AI chatbot for most people. It's like ChatGPT, but refined toward more natural, less robotic language. It also has more up-to-date training data, extending to August 2023 as opposed to September 2021.

At the same time, Claude isn't fully up to date like Google Gemini or Perplexity. Claude isn't connected to the open internet, meaning it can't source the latest information and won't fully replace online search. And, unlike Perplexity, Gemini and Copilot, it doesn't pull information from Reddit. Even with these shortcomings, Claude excels over the other chatbots in presenting information in language that's direct and easy to follow. Copilot is much like Claude, but also has an open internet connection, which makes it more useful overall. But still, I can't help but like Claude more.

All in all, Claude has the fundamentals down. 

Editors’ note: CNET is using an AI engine to help create a handful of stories. Reviews of AI products like this, just like CNET’s other hands-on reviews, are written by our human team of in-house experts. For more, see CNET’s AI policy and how we test AI.

