Prep for AI by Auditing My Job
What I found when I rated every task for AI exposure
I spent 30 minutes last week doing something uncomfortable. I sat down with a notebook and listed every task I do at work — then marked each one based on whether AI could already do it, could assist with it, or couldn’t touch it.
The results didn’t make me more concerned. But they made the exposure very, very concrete.
What I do
I’m in CX (customer experience) at a large insurance company. My job is to design and manage the experiences customers have when they interact with us — from filing a claim to updating a policy to calling in with a question. I’m an individual contributor on paper, but in practice I lead cross-functional teams and manage a matrixed direct report. The work spans research, strategy, design, storytelling, and a lot of organizational navigation.
It’s the kind of role that feels safe from AI. It’s creative. It’s strategic. It requires reading people and rooms, not just data.
I wanted to test that feeling.
How I did the audit
I broke my job into four buckets — Strategy & Planning, Research & Analysis, Design & Delivery, and Leadership & Collaboration — and listed every recurring task in each. Then I rated each one:
● AI could do this well right now. I could hand it off today and get a usable result.
◐ AI could assist, but needs me to drive. I set the direction; AI does the heavy lifting.
○ AI can’t really do this yet. The task fundamentally requires me — my judgment, my relationships, my presence in a room.
What I found
Of 26 recurring tasks: 8 AI could do now. 12 where AI assists. 6 it can’t touch.
That ratio — roughly 30% fully automatable, 45% AI-assisted, and only 25% still fully mine — is not what I expected walking in.
The pattern was clear: the further a task gets from paper and the closer it gets to people, the safer it is. My automatable tasks are all about documents and data. My AI-assisted tasks are about plans and strategy. My fully-human tasks are about being in a room with other humans. Paper → plans → people.
The ●’s: the tasks AI can already handle
The research and analysis work? Almost all automatable today. Analyzing customer feedback and survey data. Reviewing NPS and CSAT scores. Pulling insights from call center transcripts. Many companies already have internal AI tools that make this nearly effortless. You don’t paste in transcripts and wait. You just ask a question and get an answer. The barrier to automating these tasks isn’t technical anymore. It’s organizational.
I want to be clear: I’m talking about tasks, not people. Automating a task doesn’t mean the person who currently does it has less to offer — it means they could be doing higher-value work instead. The question is whether organizations redeploy that talent or just cut it. (More on that in a moment.)
Writing customer-facing communications — emails, letters, scripts? AI can draft these today. Documenting requirements for tech teams? Same. Competitive benchmarking? I suspect I could hand an AI research tool this task and get 80% of the way there in an hour.
The ◐’s: I’m still driving, but for how long?
Most of my strategy and planning work landed here. Identifying which journeys need redesign. Mapping current-state experiences. Defining the future state. Building the business case. In each case, I’m still the one providing the initial ideas, the context, the “this matters because…” framing. But AI can take that seed and, with a bit more back-and-forth, do an enormous amount of the work.
It reminds me of an example Aytekin Tank shared recently in Fast Company.¹ A journalist asked AI to generate interview questions for a political profile. AI produced 30 competent questions in under a minute. The editor rewrote most of them — because years of editorial judgment told her what was missing. AI didn’t know the senator’s backstory, the political context, the angle that would make the piece matter. That’s exactly what my ◐ tasks feel like: AI can do the work, but someone with contextual judgment still needs to know what to ask for and whether the output actually serves the goal.
The honest version is that some of these are trending toward ●. If AI has access to our data — and increasingly it does — it can draft a journey map or build a business case that I’d tweak, not rewrite. The line between “AI assists” and “AI does it” is blurrier than I’d like. And the question that haunts this category is: how long does the “driving” part stay mine? If AI gets better at understanding organizational context — and it will — my ◐’s become someone else’s ●’s.
The ○’s: the work that’s still mine
Leading cross-functional team meetings. Getting buy-in from stakeholders who have competing priorities. Mentoring others and managing my dotted-line report. Navigating internal politics to move work forward. Facilitating workshops. Communicating progress and blockers to leadership.
Every single one of these is fundamentally about being a human in a room (or on a call) with other humans, reading dynamics, building trust, and making judgment calls that require organizational context AI simply doesn’t have. AI can help me prepare for these moments — organize my thoughts, serve as a sounding board, draft an agenda. But I’m the agent. The human in the loop isn’t optional here; the human is the work.
A caveat: these are the safest tasks right now. But “right now” might have a shorter shelf life than I’d like. If organizations come to be staffed mostly by AI agents coordinating with other AI agents, the “human among humans” skill set becomes less central. And there’s an even subtler risk: what if the humans still in the room start preferring to work with AI over each other? As Ross Douthat recently pointed out, AI is unlike any prior technology because it simulates the human — and the real question isn’t just whether it can replace us, but whether we get so used to it that we want it to.² I don’t think we’re there yet — but even these tasks may not be safe forever.
What surprised me
Honestly? Not the overall number. I walked in feeling about a 4 out of 5 on how exposed I am, and I walked out feeling the same. What changed was the specificity. I can now point to exactly which tasks are at risk, which ones aren’t, and — most importantly — where I should be spending my energy.
The audit showed me three things clearly. First, for the ● tasks: I need to understand the tools that can do parts of my job — not to surrender that work, but to make informed choices about what I advocate for, what I adapt to, and where I push back. Pretending AI can’t do this stuff isn’t a strategy. Knowing what it can do is.
Second, for the ◐ tasks: the skill that keeps me in the driver’s seat isn’t the work itself — it’s the ability to direct the work. Knowing what to ask for, evaluating whether the output is good enough, and understanding the context that AI misses. As Ethan Mollick has argued, that’s essentially management — and it may be the defining professional skill of the next decade, whether you’re directing people or AI.³
Third, for the ○ tasks: I should spend way more time getting good at — and getting known for being good at — the relational and political work. The facilitating, the stakeholder navigation, the mentoring. That’s the most AI-resistant work I do.

And honestly, I’m good at it — I’ve been called a “masterful facilitator” more than once, and I have a real knack for gaining alignment across groups that don’t naturally agree. But it’s not what lights me up most. I love the creative stuff — the visual problem-solving, stepping up to a whiteboard and reimagining what could be. That tension is worth sitting with: I’m excellent at the safest thing, and energized by a less-safe thing. AI is getting better at creative ideation and visual mockups faster than it’s getting better at reading a room full of people with competing agendas. My safest work isn’t my favorite work. It’s the work that requires me to be a human among humans.
The uncomfortable question
Here’s what keeps me up: the tasks AI can do or nearly do — the research, the analysis, the writing, the documentation — represent a significant part of my working hours. When those tasks get fully automated (not if — when), my employer is going to make a choice. Do they use AI to get more out of the people they already have? Or do they get the same amount of work done with fewer people?
I don’t get to choose which one they pick. Goldman Sachs research found that unemployment among 20- to 30-year-olds in tech-exposed occupations has already risen nearly 3 percentage points since early 2025.⁴ The World Economic Forum estimates that 59% of workers will need training by 2030 — and that roughly 1 in 10 workers is unlikely to get it.⁵ And those new jobs the optimists point to? They don’t require the same skills, aren’t in the same places, and won’t go to the same people.
That’s the real exposure. Not replacement — compression. My role might go from five days a week to three days’ worth of value. And whether that means I get repurposed, promoted, or let go depends on choices being made right now — by my employer and by me.
Should you bother?
Yes. This took me 30 minutes and it was the most useful career exercise I’ve done in a while. Not because it gave me a neat answer, but because it made everything concrete — where I’m exposed, where I’m safe, and where I need to double down.
Here’s what I’d tell you: do this audit for your own job. Break it into buckets. Rate every task. Be honest — not optimistic, not catastrophizing, just honest. Then look at the pattern. Where are your ●’s clustered? Where are your ○’s? Is the work that’s safest also the work you’re known for?
And one tip: use AI for the process itself. Here’s how I did it. First, I asked AI to generate an audit template tailored to my specific role — it broke my job into the right buckets and pre-populated tasks I might have forgotten. Then I filled it out myself with honest gut ratings. Then I brought my completed audit back to AI and asked it to pressure-test my self-assessment — where was I right, where was I kidding myself, where was I too pessimistic? That three-step process (AI builds the scaffolding, you do the honest work, AI checks your homework) took 30 minutes total and gave me a sharper picture than I’d have gotten on my own.
This is the second field report from Prep for AI — a public experiment where I test one practical tactic at a time and report what happens. If you want to follow along, subscribe. Next up: I’m going AI-first for everything I can for a month. We’ll see what happens.
I use AI (Claude by Anthropic, at the moment) to help produce this newsletter. The audit, the feelings, and the uncomfortable questions are mine. More on how and why on the Framework & Transparency page.
Your turn: Have you audited your job? I’d love to hear what you found — reply to this email or leave a comment. What surprised you? And if you think I’ve got something wrong — about my own ratings, about what AI can or can’t do, about any of this — tell me. I’m an amateur running experiments, not an expert handing down wisdom. The whole point is to get smarter together.
The tactic, rated:
⏱️ Time required: 30 minutes
🔧 Difficulty: Easy (just be honest)
😬 Discomfort level: Medium-high
💪 Resilience payoff: High — you can’t prepare for what you won’t look at
Sources:
1. Aytekin Tank. “Here’s the Leadership Skill AI Can’t Replace.” Fast Company, 2026.
2. Ross Douthat. “How Fast Can A.I. Change the Workplace?” New York Times, 2026.
3. Ethan Mollick. “Management as AI Superpower.” One Useful Thing, January 2026.
4. Goldman Sachs Research. “How Will AI Affect the Global Workforce?” August 2025.
5. World Economic Forum. “Future of Jobs Report 2025.”
Changelog:
v1.0 — 04.04.2026 — Initial post.