ep. 90. How a Leading MarTech Program Manager Uses AI to Track What Still Matters
5 min read
Lydia Goodner describes herself as a “systems-minded marketing operator”: someone who takes strategy and makes it execution-ready. With over 15 years of experience translating executive goals into team workflows, most recently as a senior MarTech program manager, she’s been at the front lines of how AI is changing organizational decision making.
Goals that would have stayed stable for six months are now shifting in weeks. Project management platforms were designed for a world where the goal held still long enough to be tracked.
“I’ve had to shift from thinking about task management to thinking about decision management. It’s not ‘when does the task start.’ It’s ‘when are we aligned on a solution?’”
Task management is retrospective, documenting what was agreed to. Decision management is live: it tracks whether that agreement still holds.
Lydia gave an example from running daily team standups, using AI transcription to capture real-time priorities. A lead asked for the task list from three days ago.
She reframed the request.
“That list is already outdated,” she said. “Instead of looking at what we wrote down three days ago, we should be asking: Is the work the team is doing at this moment the most important thing they could be doing?”
Getting Unstuck with AI
One of the places where Lydia sees the most value in AI tools is synthesis. Her method: sit with someone for fifteen minutes, sometimes with their manager in the room, use AI to transcribe the conversation, then ask Microsoft Copilot directly: What’s the gap here? What’s keeping this blocked?
“If someone feels stuck, they’re probably in fight-or-flight mode. They don’t know where to look. It’s about getting them out of that unstuckness—using AI to unlock it, reflecting back the mutual agreement they’ve just reached in conversation.”
For anyone who feels overwhelmed by what’s on their plate: “Take ten minutes, brain-dump everything to an LLM, have it make a task list, sort it by priority, and go tell your manager which one of these is most important.”
Lydia also leverages AI-enabled synthesis whenever she enters a new project.
Mapping who’s driving, who isn’t, and where the friction lives used to mean sitting through 40 hours of calls to piece together the dynamics.
“Now, I can synthesize the landscape using transcripts from three calls that happened last week to figure out what my stage looks like.”
AI helps extract depth of understanding from conversations that are still current, before priorities shift again.
During discovery, Lydia also uses AI to help teams navigate uncertainty and build shared understanding. She likes to tell her team, “Hang in there. We’re going to get this. We just need time and AI to help us reconcile what you mean and what I mean until we mean it together.”
“Take ten minutes, brain-dump everything to an LLM, have it make a task list, sort it by priority, and go tell your manager which one of these is most important.”
Map Confidence, Not Dates
Lydia doesn’t communicate project health in timelines anymore. She communicates in confidence levels.
“I’ve started mapping confidence levels explicitly: we’re 80% confident on dependable factors, like compliance and governance; we’re 20% confident on volatile areas like asset development.”
That shifts the conversation from “are we on track?” to something more useful: are we building for future value, or chasing a feature that will be sunset in six months?
Subject matter experts can usually see the stable requirements clearly. What no one can predict is how implementation will look six months out. Making that gap explicit is what prepares stakeholders for the trade-offs ahead.
Two Types of Creativity
When you’re deciding what to automate with AI, the question of human creativity inevitably comes up. Lydia draws a line most teams blur:
“There are two types of creativity that get conflated constantly. One is creative at scale: four asset sizes across ten platforms without losing the integrity of the layout or the messaging. AI should do that. Give me repeatability all day long.”
The other is human expression, the emotional side of making art.
“I’m also a painter. I don’t need AI to make fine art for me. As an operator, I need AI to scale. Those are two completely separate asks. Most of the fear in creative fields comes from treating them as the same one.”
Where the Human Stays
Lydia is clear-eyed about where to use AI versus what to keep human: “I treat AI as a recommendation engine. Decision-making stays human. The accountable stakeholder always makes the call.”
As AI handles repeatable execution work, subject matter experts stay close to the output to keep quality intact. Leaders, freed from operational overhead, can better focus on where to go next.
“I treat AI as a recommendation engine. Decision-making stays human. The accountable stakeholder always makes the call.”
One use case Lydia feels is undervalued: using AI to make teams feel heard.
She describes synthesizing a conversation into a full report on risks and implications, then delivering it within an hour of the call: “This is what we talked about an hour ago. Here’s the outcome, here’s where we’re going next.”
Most AI meeting notes read as generic. When a human directs the synthesis and adds context, what comes back is specific to that team and that conversation: “The level of care that it shows, listening to other human beings and them telling you what they need, and then reflecting that back immediately—that’s what makes it valuable.”
Teams stay engaged when the output reflects what they actually said. Human listening is what makes the synthesis trustworthy; AI helps you deliver it fast enough to still be relevant.
“Never in any world does removing a human from validating what’s most important make sense. That’s the one thing AI doesn’t determine on our behalf.”
The AI Hype Check
I ask every practitioner I interview the same question: is AI overhyped or underhyped?
Lydia’s answer: both, depending on which conversation you’re in.
“The features are overhyped: the stories about automating everything, generating passive income, having AI do your job completely. But how AI actually benefits people when kept in a clearly advisory role? That part is undervalued.”
She returned to AI’s value in the alignment phase, before any build begins: “AI is a multiplier. If you know what you’re trying to do, it gets you there faster. If you don’t, it sends you in the wrong direction faster.”
Because of that, Lydia is careful about laying a strong foundation for teams:
“If you’re not aligned on outcomes from discovery, then the rest of your structure completely falls apart. Your platform has to work as a reliable planning database before any AI feature stacked on top of it is useful.”
This interview has been edited for length and clarity. Opinions expressed are solely Lydia’s own and do not necessarily reflect those of her employer.
Three Things to Try This Week
Building on Lydia’s insights, here are three things you can apply this week:
Before your next sprint planning or stakeholder sync, run your last meeting transcript through your LLM of choice and ask: who’s aligned, who isn’t, and what’s blocking us?
Before your next project kickoff, map your confidence levels. What’s stable? What’s likely to change in six months? In six weeks?
If you’re feeling overwhelmed, brain-dump everything to an LLM, have it make you a task list, and sort by priority.
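If you want to script the first suggestion, the prompt itself is the whole tool. Here is a minimal Python sketch that wraps a transcript in Lydia’s three alignment questions; the function name and prompt wording are illustrative, not from the interview, and the result can be pasted into (or sent via) whatever LLM chat interface you already use:

```python
# Sketch: turn a raw meeting transcript into an alignment-synthesis prompt.
# The helper name and wording are hypothetical, not Lydia's exact prompt.

ALIGNMENT_QUESTIONS = [
    "Who's aligned?",
    "Who isn't?",
    "What's blocking us?",
]

def build_alignment_prompt(transcript: str) -> str:
    """Wrap a transcript in the three alignment questions."""
    questions = "\n".join(f"- {q}" for q in ALIGNMENT_QUESTIONS)
    return (
        "You are helping a program manager synthesize a meeting.\n"
        "Using only the transcript below, answer:\n"
        f"{questions}\n\n"
        f"Transcript:\n{transcript}"
    )

# Example use: feed in last week's standup transcript.
prompt = build_alignment_prompt("Alex: I still think the Q3 scope is too big...")
print(prompt)
```

The point of scripting it is repeatability: the same three questions get asked of every transcript, so answers stay comparable from meeting to meeting.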
Know a practitioner navigating AI in their work who should be featured here? Reach out to hello@sendfull.com
⏪ Recent Episodes
ep. 89. Cognitive Offloading to AI: The Peril and the Promise
ep. 88. AI Is An Accelerant. What Are You Feeding It?
ep. 87. What AI Can (and Can’t) Automate
If you like these episodes, you’ll love my book. Designing Automated Futures is coming this year with Rosenfeld Media. 📣 Sign up to be the first to know about new book releases, sales, and events.
📖 Good Reads ‘N’ Things
The rise of cognitive surrender: A new Wharton study finds that when people use AI to help them think, they tend to adopt its answers with little scrutiny. The authors call this phenomenon “cognitive surrender” and propose it as “System 3,” alongside System 1 (fast, intuitive processing) and System 2 (slow, deliberative processing).
What LLMs can’t do: ARC Prize released a preprint reporting a new AI benchmark that has humans scoring 100% and frontier AI models scoring under 1%. The test? Adaptive reasoning in novel, interactive environments where agents must explore, infer goals, and plan action sequences without language, instructions, or prior knowledge.
A tool for finding startup jobs: Finding startup job posts on LinkedIn is hard. Former UC Berkeley student and AI product manager Salia Asanova built The Catalog to help.
That’s a wrap 🌯 . More on UX, HCI, and strategy from Sendfull in two weeks.


