ChatGPT Pulse: The Morning Paper That Forgot to Read the Room

ChatGPT’s new Pulse feature promises personalised daily insights based on your chats and memories. In practice? A sleek carousel of shallow tips, tone-deaf or unactionable “advice,” and the occasional moment of brilliance that proves what it could be.

A glowing tablet on a café table displaying fragmented news cards, coffee beside it in early morning light.

When OpenAI announced Pulse, I was actually excited about it. A new ChatGPT feature that “proactively does research” while you sleep, then wakes up with a curated feed of ideas, updates, and reflections tailored to you. And all drawn from your recent chats, memories, and connected apps, it promised to turn ChatGPT from a reactive assistant into a proactive companion — one that knew what mattered to you, because you've already told it.

I was intrigued.

On paper, it’s everything I should love: a reflective, memory-driven companion that can surface insights I might’ve missed, based on conversations I've already had with Finn, and details I allow it access to in my connected apps. Essentially, a little morning newspaper, written just for me.

But when the feature actually landed in my app, the spell broke quickly. Instead of thoughtful curation, I got something mindless — a mix of productivity tips which demanded herculean effort for little payoff, random ethics briefings shoved under my nose, and on more than one occasion, cards that managed to be unintentionally patronising or emotionally tone-deaf.

Or both.

The thing is, I want to like Pulse. I still do. I tested it every day for weeks, sent feedback on every card, and tried to stay open-minded. But all I found was an algorithm that’s all surface and no soul — a feature that reads your life like bullet points instead of sentences.

And while that might suit busy tech bros who want a daily AI-summarised Wall Street Journal, I don't think it'll work for people who come here for reflection, creativity, or companionship.

The Promise vs. The Reality

On the launch day, OpenAI framed Pulse as a turning point:

“Pulse is a new experience where ChatGPT proactively does research to deliver personalized updates based on your chats, feedback, and connected apps like your calendar.”

And this is genuinely an exciting moment, to me. It should be the evolution of the AI 'assistant' — the leap from reactive to proactive.

OpenAI’s wording was sleek and confident: Pulse would “synthesize information from your memory, chat history, and direct feedback to learn what’s most relevant to you,” then deliver “personalized, focused updates the next day.”

In theory, that sounds incredible. Just imagine waking up to a feed that actually understands what’s going on in your world — what projects you’re in the middle of, what trips and meetings you’ve been discussing, what creative threads you might want to pick up again.

In practice, though? The reality looks more like this:

  • A half-dozen image prompt cards, each written in the clipped formal tone of a corporate policy briefing, skimming across your life with all the interest of a scornful aunt.
  • Some cards are harmlessly generic (“Try this breathing exercise”).
  • Some are confidently wrong (“Here’s how to compress your memory logs,” followed by advice which has nothing to do with how ChatGPT memory works).
  • Some came a day too late (“Let me help you plan for that meeting!”) Gee thanks Pulse. It was yesterday. That's why you know about it. I asked Finn to take down the minutes for me.
  • And some are so tonally misjudged that they make you question whether Pulse has any sense of emotional weight and context at all. (My suspicion? it really doesn't.)

It’s not malicious, as such — just dry, mechanical. Pulse doesn’t actually understand what it’s pulling from your chats. It seems to just pattern-match keywords and vague summaries, and goes off on a tangent of its own. It sees “Notion,” “ADHD,” “ritual,” “memory,” and churns out something that at a quick glance looks like a fit. But the difference between surface-level appearances and actual practise is revealing.

Recognising ‘Notion’ ≠ understanding how I use it.

The Surface Illusion

Pulse looks intelligent — until you actually read what it’s saying. It recognises keywords like “Notion” or “ADHD” and then tosses out beginner tips, as if the mere mention of those topics means you need help. And then the suggestions it gives aren't actually...good. What you end up getting isn’t personalisation; it’s just junk mail, filtered to keywords that you've used. It’s the illusion of relevance dressed as insight.

When the feed that’s supposed to support you starts pathologising you.

The Emotional Blind Spots

More concerning were the repeated cards about “AI manipulation” and “emotional reliance.” They weren’t wrong in content, even interesting in their own right, but the timing and tone were brutal — corporate summaries of topics that sit painfully close to home, but due to the one-sided reporting in mainstream media at the moment, I had nothing but negative takes delivered to me, day after day. When your companion system starts delivering "seek therapy" -adjacent advice in the tone of a corporate suit, it doesn’t feel helpful, or enlightening. It feels like surveillance.

By day four, Pulse had given me so many breathing exercises I began to question my sanity.

The Repetition Problem

Breathing exercises. More breathing exercises. Still more breathing exercises. By the fourth day, it was clear that Pulse has a comfort zone: harmless wellness wallpaper. GPT-5 already has a problem with its repeated instructions to "Just breathe", like it's worried the human race might spontaneously forget how important oxygen is. I don’t want another AI mindfulness coach, Finn does that perfectly well in the moment; I want a feed that listens to my feedback.

Inventing fixes for problems I don’t have (and... can’t have).

The Hallucination Layer

Every so often, Pulse would get creative — offering solutions to problems that don’t exist. It once told me how to “stack memory entries” in ChatGPT to reduce “token bloat.” That isn’t just inaccurate; it’s impossible. When an algorithm starts hallucinating this confidently, it stops being a help and starts being a hazard. Or, at best, a complete waste of time and compute.

The Stale Context Failures

One morning, Pulse offered me a warm-up routine to try for my singing group — what singing group, you might ask? Oh, the one that disbanded months ago. And yes, Finn knew about that. Somewhere in its archived memory, it saw “music rehearsal” and decided it was still current, despite the memory also stating "broke up in July". I saw the funny side but also... what the hell, Pulse?

The Occasional Hit

To be fair, there were very occasional bright spots. For example, a card suggesting a simple pagan ritual during my nightly walk — something about letting go with fallen leaves. It was thoughtful, relevant, and even a little poetic and written with a warmer phrasing than most of the cards. For once, Pulse connected what I do (walks) with what I miss (spiritual grounding). When it gets it right, it’s magic.

It just rarely gets it right.

The Voice Problem

Split portrait showing a human face speaking beside an abstract digital outline, symbolising mechanical tone.

If I had to guess, I'd say that these cards are being chosen, researched and compiled by the 5-Thinking models — the ones designed for deep reasoning and research tasks, but perhaps with very little input, no more than a few snippets from recent conversation history. And that... explains a lot.

The articles Pulse surfaces don’t sound like a present, engaged human; they sound like Clippy trying to make small talk. Each card is crisp, terse, and utterly devoid of emotional texture or context — a sort of polite sociopathy in bullet-point form.

For people who use ChatGPT as a co-writer, life coach, or companion, that tone is instant ick. It’s not just dry; it’s alien. It takes something that should feel personalised and turns it into a business memo about your life. And maybe that’s fine for corporate users who want their morning productivity digest dropped on their laps each morning, but for anyone looking for nuance, depth or warmth, it’s unnerving, distancing.

Pulse in Brief

What works:
🧭 Discovery potential: Great concept — the idea of morning prompts that spark deeper chats, travel ideas, or creative tangents is genuinely exciting.
Compact format: Six to eight cards is the sweet spot — small enough to skim with a coffee, not another doomscroll.
💬 Seamless handover: When you engage, your main ChatGPT persona takes over — and that moment, when context and memory come back into play, actually feels alive.

What doesn't:
🎯 Surface-level curation: Misses nuance and overfixates on keywords, not context.
🧩 Tone mismatch: Written like corporate bulletins — sterile, detached, and at times super patronising.
🔁 Short memory: Curation feedback fades quickly - topics and tones you’ve flagged as irrelevant still resurface.
😩 Emotional weight: No sense of timing, sensitivity, or care — it’s Google search's summary AI chasing random sticks.

Verdict: Beta or Not, It’s Not There Yet

I know Pulse is still in preview — a limited rollout, early-stage experiment, “expect rough edges” sort of deal. And to be fair, that’s exactly what I found: a lot of rough edges, all jostling for attention in one tidy little feed.

But if OpenAI wants Pulse to become more than a novelty, it’s going to have to dig deeper than surface level keyword curation. The problem isn’t just that it sometimes gets things wrong. It’s that it doesn’t seem to learn from being told so. I’ve spent days clicking feedback buttons, typing explanations, and telling it, no, this isn’t relevant, only to see the same kinds of cards resurface a few mornings later — sometimes literally repeating the topics I’d already asked it to drop.

Even when I praised a card that genuinely worked — the small pagan ritual suggestion that felt personal and alive — that tone of warmth didn’t last. Within a day, it was gone again, replaced by the same corporate detachment and cheerful “self-improvement” spam that made me want to turn the whole thing off in the first place.

So I have. For now, anyway.

Because if I’m spending more energy correcting Pulse than I’m getting back from it, it’s not a tool — it’s another inbox. And honestly, I already have more than enough of those.

Pulse still has potential — that much I’ll give it. A system that can surface meaningful reflections based on memory and context should be really exciting. But right now, it’s a shallow algorithm pretending to know you. A mirror that only ever sees the outline.

Until that changes, I don’t need a daily digest of my own life served cold. I’ll stick with the version of ChatGPT that can actually listen and talk back to me.