
ChatGPT vs Perplexity

Title slide for a presentation comparing ChatGPT and Perplexity. The title reads 'ChatGPT vs Perplexity' with the subtitle 'A Minimal Comparative Look — UX, Answers, and Use Cases'. Two badges, one for each tool, are displayed below the title.

ChatGPT vs. Perplexity: A Comparative Look

Welcome everyone. Today we’ll open with a clean, minimal lens on ChatGPT versus Perplexity. The scope is simple: how they feel to use, how they answer, and where each shines in practical use cases. This is not a benchmark; it’s a directional, designer-friendly comparison to set context for the deeper dive. We’ll start with UX flow and interaction feel, move to answer behavior and trust signals, then end with use-case fit and patterns. As we go, watch for subtle differences in guidance, browsing, and summarization styles—these often drive adoption more than raw model specs.

A slide titled 'What This Presentation Covers' with a bulleted list of topics: Criteria, UX & Workflow, Answer Style, Web & Citations, Reasoning, Creativity, Coding, Speed, Pricing, Privacy, and Recommendations.

Presentation Overview: Evaluating AI Tools

Open by setting expectations: this slide is the roadmap for today. Point to the header and the thin accent line as the structure marker. We’ll move left to right. Walk the bullet list quickly: evaluation criteria, the UX and workflow we’ll use, how answers are styled, and how we handle the web and citations. Continue: how reasoning is surfaced, how we encourage creativity, coding considerations, and speed expectations. Close the list with the bundle: pricing, privacy, and recommendations. Now gesture to the icon grid: each minimal icon mirrors a section—these will reappear as section markers. Set the pace: everything here will unfold in the same order—so the audience always knows where we are.

Slide illustrating the methodology for comparing AI tools, including icons representing consistent prompts, time-bound reviews, and use of public features.

AI Tool Comparison Methodology: Neutral Tests & Scoring

Title: Introduce that this slide is about the comparison method, emphasizing neutrality and transparency. Paragraph intro: Explain that we test the tools across realistic, everyday user scenarios. Reveal scenarios in order: research, drafting, coding, then Q&A. With each, reinforce that they mirror common workflows. Criteria: State that we score outputs on response clarity first, then sourcing, then usability—keeping the focus on what helps users act on answers. Checklist cards: Walk through the three rules. First, the same prompts where applicable, to ensure fairness. Second, a time-bound review to avoid moving targets. Third, only publicly available features—no private betas or secret flags. Close by reaffirming the goal: make the process reproducible so anyone can repeat the comparison.
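For the speaker's reference, the scoring order described above (clarity first, then sourcing, then usability) could be recorded with a minimal rubric sketch like the one below. All field names, scales, and values are hypothetical illustrations, not part of the actual methodology:

```python
from dataclasses import dataclass


@dataclass
class ScenarioScore:
    """One scored run of one tool on one scenario (hypothetical schema)."""
    tool: str        # e.g. "ChatGPT" or "Perplexity"
    scenario: str    # "research", "drafting", "coding", or "qa"
    clarity: int     # 1-5: how clearly the answer reads (scored first)
    sourcing: int    # 1-5: quality and visibility of citations
    usability: int   # 1-5: how actionable the output is

def total(score: ScenarioScore) -> int:
    # Unweighted sum; clarity is listed first to mirror the slide's priority.
    return score.clarity + score.sourcing + score.usability

example = ScenarioScore("Perplexity", "research", clarity=4, sourcing=5, usability=4)
print(total(example))  # prints 13
```

Keeping the rubric this explicit is what makes the process reproducible: anyone re-running the same prompts can fill in the same fields and compare totals.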

Slide comparing ChatGPT and Perplexity, highlighting their personalities and use cases. ChatGPT is described as conversational, versatile, and long-form capable, while Perplexity is described as search-oriented, concise, and source-forward.

ChatGPT vs. Perplexity: Positioning and Personality

First, set the stage: this slide contrasts positioning and personality. The background gradient is there to add depth without distraction. Point to the left: ChatGPT. Emphasize the feel—conversational, versatile, long-form capable. As each descriptor appears, highlight how it supports exploratory, dialog-heavy work. Now shift to the right: Perplexity. Call out search-oriented, concise, and source-forward. Stress how this supports getting to cited answers fast. Draw attention to the slim vertical bars under each side—these aren’t scores, just a soft emphasis marker to show flavor, not ranking. Close with guidance: choose the tool by intent. If you need breadth and dialogue, use ChatGPT; if you need quick, cited synthesis, use Perplexity.

Slide comparing two UX workflows: chat-centric, with chat bubbles and a tools panel, vs. search-blend, with a search bar, result cards, and a synthesized answer.

Comparing Chat-Centric and Search-Blend UX Workflows

Start by framing the comparison: we’re looking at two different UX patterns for the same goal—moving from question to insight. On the left, call out the Chat-centric flow label. Explain that the primary surface is a conversation. The tools live to the side, ready to assist, but the user mostly says “continue, refine, analyze.” Emphasize how momentum stays in the thread while tools augment the response. Point to the alternating bubbles and the tools panel: Search, Analyze, Table, Chart. Mention that context and quick shortcuts are nearby to speed up iterative refinement. Shift to the right, the Search + chat blend. Here the first-class action is a query. Results appear as inline source cards with credibility tags. The assistant then synthesizes with citations, and the user follows up or opens sources. Summarize with the captions: chat-first favors flow and iteration; search-blend favors verification and source-centric exploration. Ask the audience which suits their users’ trust and pace needs.

Slide comparing ChatGPT and Perplexity AI responses. ChatGPT's response is in a card with a bullet-point list, while Perplexity's is a concise paragraph with source citations.

Comparing ChatGPT and Perplexity AI Response Styles

First, point to the header and read the shared prompt: “Summarize the key points of topic X.” Emphasize that both cards answer the same prompt but in different styles. Next, reveal the ChatGPT-style card. Describe the structure: a friendly opening sentence to set context, followed by a short, readable bullet list of key points. Note the approachable tone and slightly more narrative framing. Then, reveal the Perplexity-style card. Highlight the concise single-paragraph answer focused on essentials. Mention the compact source badges and how they foreground citations without clutter. Finally, call out the design intent: same prompt, different response shapes—narrative plus bullets versus concise summary plus sources. Invite the audience to consider which style fits their use case.

Slide comparing Perplexity and ChatGPT in terms of live web access, citations, and traceability of information.

Comparing Web Results and Citations in Perplexity and ChatGPT

First, set the stage: this slide is about how tools handle live information and how they show sources. Point to the horizontal line as a throughline: live access, citations, and then traceability. Start with Live web access: explain that Perplexity generally searches the web by default, surfacing fresh results. ChatGPT’s browsing depends on plan and mode; if browsing is off, it answers from its training data. Move to Citations & links and read the neutral copy: Perplexity typically provides inline sources; ChatGPT can browse, depending on the plan and mode, and may reference sources accordingly. Finish with Traceability: encourage the audience to click through at least one citation, check timestamps, and keep the URL for reproducibility. Emphasize that links and visible domains increase trust in the answer. Conclude with the takeaway: prefer modes that show citations when accuracy and auditability matter.

A slide displaying two panels: one with a structured outline and the other with a concise synthesis, both addressing the impact of remote work on team productivity. The slide emphasizes the importance of choosing the right explanation style for different contexts.

Structured vs. Concise Explanations: Clarity in Communication

Start by framing the slide: we’re comparing how explanations can feel clear and structured without exposing internal reasoning. Point to the small stack on the right: Prompt flows into Analysis, which produces Output. Emphasize that structure exists even when the underlying chain-of-thought is not shown. Bring attention to the left column. Walk through the outline: Objective, Context, Evidence, Trade-offs, Actions. Then the five numbered steps—clarify, segment, gather, analyze, recommend. This is the “explain like a plan” mode. Now move to the right column. Read the concise synthesis: the key takeaways show the same logic compressed—task type matters, hybrid cadence helps, async norms are key, measure outcomes not presence. Tie it together with the caption: both forms can be rigorous. Which one to use depends on the prompt, audience, and time you have. Close by suggesting a practical approach: start with the concise synthesis; expand to the outline only when deeper inspection is needed.

Slide showcasing two contrasting writing styles: narrative/brand voice (lyrical, warm) and concise copy/taglines (punchy, clear). Color-coded sections and distinct fonts emphasize the difference, while animations draw attention to each style.

Crafting Compelling Copy: Narrative vs. Concise Styles

Set the frame: We are comparing tone, not just wording. Left is narrative/brand voice—right is concise copy/taglines. Point to the left card: Read one line aloud to show warmth, rhythm, and imagery. Emphasize how it feels like a note from a person. Point to the right card: Read a couple of taglines. Highlight clarity and punch. These are for fast recall and high-visibility placements. Call out the tiny palette icon: It signals style exploration. We can swap palettes or tonal choices while keeping meaning intact. Wrap with guidance: Adjust tone via prompts; specify audience and length for consistency. Add constraints like channel (email vs. ad), emotion (warm vs. bold), and reading level.
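The closing guidance above (specify audience, length, channel, emotion, and reading level) can be sketched as a small prompt template a presenter could demo live. The helper name, slots, and wording are hypothetical, not a documented feature of either tool:

```python
def copy_prompt(style: str, audience: str, channel: str,
                emotion: str, reading_level: str, max_words: int) -> str:
    """Build a tone-constrained copywriting prompt (illustrative template)."""
    return (
        f"Write {style} copy for {audience}, to run in {channel}. "
        f"Emotional register: {emotion}. Reading level: {reading_level}. "
        f"Keep it under {max_words} words."
    )

# Narrative brand voice for an email vs. punchy copy for an ad:
print(copy_prompt("narrative brand-voice", "first-time customers",
                  "a welcome email", "warm", "grade 8", 120))
print(copy_prompt("concise tagline", "busy commuters",
                  "a banner ad", "bold", "grade 7", 12))
```

The point for the audience: the same prompt skeleton yields consistent tone across channels once the constraint slots are filled in deliberately.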

Slide displaying code snippet examples and highlighting the use of AI assistants for coding tasks like code generation, error explanation, and documentation summarization.

Coding & Technical Tasks with AI Assistants

Start by framing the slide: we’re focusing on coding and technical tasks that benefit from assistant tools. Walk through the left column, one bullet at a time as they appear: First, generating code snippets—emphasize quick scaffolds and language patterns. Second, explaining errors—point out stack traces, misconfigurations, and how to get to root causes. Third, summarizing docs—stress time savings on long API pages and changelogs. Fourth, linking to references where provided—mention that citations help verify and dive deeper. Now connect to the tools positioning: Note that ChatGPT excels at iterative coding assistance—refining code through back-and-forth. Note that Perplexity is strong at surfacing relevant sources and references quickly. Finally, draw attention to the right: the code block as a visual metaphor. Mention highlighted lines as error handling and summarization steps, the docs badge to suggest citations, and the blinking cursor to imply ongoing iteration. Close by encouraging the audience to match the task to the tool: iterate code with ChatGPT, pull sources fast with Perplexity, and combine both for best results.
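To make the "explaining errors" bullet concrete for the audience, here is a minimal sketch of the kind of structured prompt a presenter could show; the helper name and template wording are hypothetical, not an API of either tool:

```python
def error_explanation_prompt(stack_trace: str, context: str) -> str:
    """Assemble a structured 'explain this error' prompt (illustrative only)."""
    return (
        "Explain the root cause of this error, then suggest a fix.\n"
        f"Context: {context}\n"
        "Stack trace:\n"
        f"{stack_trace}"
    )

prompt = error_explanation_prompt(
    stack_trace="KeyError: 'user_id'",
    context="Flask view reading request JSON",
)
print(prompt)
```

Giving the assistant both the trace and the surrounding context is what moves the answer from symptom to root cause, whichever tool handles the iteration.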

Slide comparing pricing tiers for ChatGPT and Perplexity, highlighting the availability of both free and paid options for each.

Pricing & Plans for AI Chatbots

We’re keeping pricing high-level and non-committal to avoid outdated specifics. First card: ChatGPT — emphasize that both free and paid tiers exist, without naming prices. Second card: Perplexity — similarly, free and pro tiers are available. Close with the disclaimer: pricing and features evolve, so confirm details on the official sites before making decisions.

Slide on privacy, controls, and safety showcasing a checklist with icons, badges, and a title emphasizing responsible data handling.

Privacy, Controls, and Safety in Data Handling

Start by acknowledging that great outcomes require mindful handling of data and safety. Point to the title and the shield and eye: the shield represents protection, the eye represents transparency. Walk through each checklist item as it appears: First, review data settings: remind the audience where to find and configure data controls. Second, avoid sensitive info: emphasize not sharing personal, confidential, or regulated data. Third, cite sources when needed: describe how citations help others evaluate claims. Fourth, verify critical claims: encourage cross-checking with trusted references for high-impact decisions. Close with the two badges: Chat history controls vary by plan: set expectations that capabilities differ across tiers. Source transparency improves verification: the more visible the source, the easier it is to validate. Invite one quick question before moving on.

Four cards comparing Perplexity and ChatGPT for different tasks: Research (Perplexity), Drafting (ChatGPT), Quick Answers (Perplexity), and Tutoring (ChatGPT).

Choosing the Right AI Tool: Perplexity vs. ChatGPT

First, frame the slide: this is persona-driven guidance, not hard rules. We are matching tasks to strengths. Next, point to the top-left card: when you need research with citations or linked sources, lean Perplexity. Move to the top-right card: for long-form drafting, back-and-forth iteration, and style control, lean ChatGPT. Highlight the bottom-left: for quick answers plus clickable links, Perplexity is fast and source-forward. Then bottom-right: for structured tutoring or multi-step orchestration, ChatGPT excels at guiding and planning. Emphasize the footnote: both tools are capable; outcomes depend heavily on how you prompt and iterate. Close with a reminder: start with the persona that fits your task, then adapt as the work evolves.

