The point is that you should probably be skeptical even of your most trusted sources.
But what if, instead of your mother, it’s a generative AI model like OpenAI’s ChatGPT telling you something?
Should you trust the computer?
This week, the Austin, Texas, conference has spotlighted artificial intelligence.
Experts discussed the future and the big picture, with talks on trust, the changing workplace and more.
Here are five bits of advice on how to be smarter than the AI.
A human conversation partner can infer what you really mean from context. An LLM doesn’t.
The solution, Wu said, is to be more specific and structured with your prompts, and to verify that the model knows what you’re asking it to produce.
Focus on what exactly you want, and don’t assume the LLM will extrapolate your actual question.
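To make that advice concrete, here is a small, purely illustrative sketch (the prompts below are invented examples, not from Wu’s talk) contrasting a vague request with a structured one that spells out the task, audience, format and limits:

```python
# Hypothetical example of vague vs. structured prompting (illustrative only).

vague_prompt = "Tell me about quarterly sales."

# A structured prompt states the task, the audience, the format and the limits,
# so the model doesn't have to extrapolate what you actually want.
structured_prompt = (
    "Task: Summarize the Q3 sales report below in exactly three bullet points.\n"
    "Audience: Non-technical executives.\n"
    "Format: Plain text, one sentence per bullet, no jargon.\n"
    "Report: <paste report text here>"
)

print(structured_prompt)
```

The structured version leaves far less for the model to guess, which is exactly the failure mode the advice is meant to avoid.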
The problem goes beyond just getting things wrong.
Sap said chatbots can appear confident in an answer while being completely wrong.
The solution to this is simple: Check the LLM’s answers.
Ask the same question twice and you might see different outputs.
The most important thing is to verify with external sources.
That also means you should be careful about asking questions to which you don’t know the answer.
“Make conscious decisions about when to rely on a model and when not to,” she said.
“Do not trust a model when it tells you it is very confident.”
AI can’t keep a secret
The privacy concerns with LLMs are abundant.
Don’t share sensitive or personal data with an LLM, Wu said.
But it’s all mimicry; it’s not truly human, Sap said.
“The way that we use language, these words all imply cognition,” Sap said.
“It implies that the language model imagines things, that it has an internal world.”
Thinking of AI models as human can be dangerous: it can lead to misplaced trust.
“Humans are much more likely to over-attribute human-likeness or consciousness to AI systems,” he said.