As much as I hate to admit it, I like having ChatGPT do the thinking for me.
Reviewing AI chatbots like ChatGPT, Perplexity, Claude and Gemini for months has rewired my brain to become increasingly reliant on the generative technology. As a reporter, I’ve been tasked with evaluating the chatbots and assessing them in keeping with CNET’s hands-on, experience-based testing standards. So instead of parsing through Google Search results to find the right answer, I’ve started using AIs as answer engines, even if they sometimes get things wrong. The time saved by getting an immediate answer is too valuable.
AI chatbots are truly remarkable tools, especially for reporters. There are times in my reporting when I need to find info on a very specific topic — something, for instance, that’s likely buried deep in a research paper from Macquarie University in Australia. An AI tool can locate that exact factoid in the vast ocean of information and give me what I’m looking for.
Yes, it’s totally possible for me to pull up the paper myself and skim through it to find the insight I’m looking for. Or I could just ChatGPT it.
Yet, as remarkable as ChatGPT is, it can also be remarkably stupid. Having to deal with hallucinations — instances where an AI chatbot confidently serves up an incorrect answer as if it were correct — is, to say the least, an annoyance. As a reporter, I need to be sure I’m dealing with actual facts, and the information ChatGPT presents can sometimes be completely wrong, so I have to stay hyperaware. And whenever I run into a piece of info that looks questionable, digging through Google or consulting another source to verify it becomes a time-consuming chore — whatever time I thought I was saving goes out the window.
Still, AI greatly compresses the time it takes to do reporting — when it works.
Knowing what to buy isn’t the same as knowing how to use it
What I didn’t expect was how effective AI can be when I’m trying to figure out what to buy. Sure, I could Google the best headphones to buy in 2024, but what if my query is more specific? Like, which Dolby Atmos speakers come in white, can be wall-mounted and aren’t overkill? Of course, I could wade through a sea of AVS Forum posts and Reddit threads, or even ask CNET’s resident home audio expert, Ty Pendlebury. Or I could ChatGPT it.
To answer a question like that, ChatGPT has to sift through its massive trove of training data to find the right answer. The problem is that the right answer can be hard to ascertain when there are so many opinions swirling online. For example, when I asked ChatGPT how high I should mount Dolby Atmos speakers, the chatbot recommended placing them 12 to 24 inches above my front or rear speakers. When I told Pendlebury about ChatGPT’s answer, he shook his head in bemusement and told me instead to mount them as close to the ceiling as possible.
I went back to ChatGPT and asked, “Shouldn’t they be mounted closer to the ceiling?” It started giving weird answers. It first said that, yes, Atmos speakers should be mounted closer to the ceiling. This points to a general problem with AI: it tends toward agreeableness over pushback, which can reinforce your biases. But ChatGPT went on to say that the reason Atmos speakers should be mounted closer to the ceiling is so that sound can reflect off the ceiling and back down to the listener. However, wall-mounted speakers like these are typically angled toward the listener to begin with, not toward the ceiling. The logic wasn’t adding up.
Powerful tools, when prompted correctly
AI chatbots, in one sense, are easier to use than Google in that a simple question yields a clear, direct answer. A regular Google search requires you to jump around different parts of the internet, reading articles and forum posts to find the correct answer for yourself. Letting technology do that synthesis for you is admittedly awesome and a real time saver. But getting the most out of AI requires prompt engineering: the question- and prompt-writing techniques that help a generative AI tool zero in on exactly what you’re looking for.
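If you talk to the model through code instead of the chat window, the difference specificity makes is easy to see. Here’s a minimal sketch, assuming the OpenAI Python SDK and an API key in your environment; the model name, the budget and the speaker criteria are purely illustrative:

```python
# A minimal sketch of prompt specificity, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any current chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A vague prompt invites a generic, hedged answer.
print(ask("What speakers should I buy?"))

# A constrained prompt gives the model something to zero in on.
print(ask(
    "Recommend Dolby Atmos height speakers that come in white, "
    "can be wall-mounted and cost under $300 a pair. "
    "List three options with a one-sentence trade-off for each."
))
```

The second prompt doesn’t make the model any smarter; it just narrows the space of acceptable answers, which is really all prompt engineering is.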
The need for prompt engineering is most noticeable in image generation. Bound by their terms of service, AI image generators are tuned not to output anything that might offend. For example, when I asked DALL-E 3 to render a woman on the beach, it swung conservative, declining to generate an image that might be too revealing. Even a basic modification, like asking it to swap her black bikini for a red one, was refused.
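The same guardrails apply if you call the image model through the API. Below is a minimal sketch, again assuming the OpenAI Python SDK; that a filtered prompt surfaces as a BadRequestError rather than an image is my assumption about how the API typically reports content-policy refusals:

```python
# A minimal sketch of calling DALL-E 3 directly, assuming the OpenAI
# Python SDK and an OPENAI_API_KEY set in the environment.
import openai
from openai import OpenAI

client = OpenAI()

try:
    result = client.images.generate(
        model="dall-e-3",
        prompt="A woman on the beach wearing a red bikini",
        size="1024x1024",
        n=1,  # DALL-E 3 generates one image per request
    )
    print(result.data[0].url)  # temporary URL for the generated image
except openai.BadRequestError as err:
    # Assumption: prompts that trip the content filter land here
    # with a content-policy message instead of returning an image.
    print("Request refused:", err)
```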
Image generators sometimes require you to talk around your query, almost tricking the AI into doing what you want it to do. Inevitably, I find myself in Reddit threads reading posts from other AI “artists” about what types of prompts can get around filters. It doesn’t help that the best AI “artists” try to hide their prompts from others so they can’t be emulated — anathema to actual art, which is about sharing and evolving techniques so that everyone improves.
South Park was right
In the season 26 South Park episode “Deep Learning,” there’s one phrase that has embedded itself in my brain: “ChatGPT, dude.” I’ve adopted that motto into my everyday life at this point, to the chagrin of my closest friends and family. Whenever they come to me with questions about tech or whatever, I just exclaim, “ChatGPT, dude.”
Is AI a harbinger of the decline of the human mind? Or is AI like the calculator, a tool that handles basic arithmetic with high accuracy so we can solve complex problems with greater speed and precision? Clearly, the calculator didn’t end the field of mathematics, but overreliance on it may have weakened basic math skills over the past few generations. Still, given how useful calculators are, it’s hard to imagine a world without them. And if calculators for information and thought now exist, it raises the question: Why ask me, when you can ask a computer instead?
One good reason: As slick as AI chatbots are, they also make mistakes, so at the very least, be sure to double-check their work. I know I do.
Editors’ note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you’re reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.