How GPTs are Shaping our Perception of Truth

How are AI models shaping our perception of truth in a digital world? In this video, we dive deep into how LLMs like GPTs influence what we believe to be true and where the boundaries of AI’s “knowledge” really lie. We’ll see why these models are powerful yet fallible, and what you can do to engage critically with AI-generated information. From “hallucinations” to biases and crowd-consensus effects, this video explores the risks of letting these models shape (and sometimes distort) our understanding of reality and truth.

00:00 - From Google to AI answers
01:14 - Why GPTs are reshaping truth
03:37 - How GPTs seem “intelligent”
09:02 - The unexpected biases of GPTs
10:28 - The crowdsource effect
13:18 - Epistemic trust and AI’s truth
16:51 - Guardrails and limitations
18:15 - The potential risks of GPTs
19:37 - Practical insights

📺 WATCH NEXT

   • Embrace AI in your Career and Life (t...  

   • AI will NOT replace Software Engineer...  

🌍 RELATED CONTENT

→ The article in ACM Queue: "GPTs and Hallucination: Why do large language models hallucinate?" 🌐 https://queue.acm.org/detail.cfm?id=3...

→ The related paper: "The Hallucinations Leaderboard — An Open Effort to Measure Hallucinations in Large Language Models" 🌐 https://arxiv.org/pdf/2404.05904

WHO AM I?

Hi 👋 I’m César, a computer scientist, software engineer, and educator. On this channel, I share my PhD experiences and provide science-based strategies and tools to help you become a computer scientist.

🌍 My Blog: https://www.cesarsotovalero.com/blog
👨‍💻 My GitHub: https://github.com/cesarsotovalero
🧑‍💼 My LinkedIn:   / cesarsotovalero  
🐦 My Twitter:   / cesarsotovalero  

#AITruth #GPTModels #ArtificialIntelligence #AIHallucinations #TrustInAI
