Claude Code for Non-Developers
Thinking in Conversations

When Conversations Go Wrong

A worked example: cleaning up a customer contacts spreadsheet, catching mistakes, and steering Claude Code back on track.

A real conversation, start to finish

Giving instructions, reviewing output, iterating. We've covered each piece separately. Now let's see how they fit together when you're actually doing something.

This section walks through a complete scenario. You'll see the initial prompt, what Claude Code does, where it stumbles, and how to steer it back. Pay attention to the process more than the specifics. The pattern here is one you'll repeat constantly.

The scenario

You're an operations manager at a small company. Your team keeps a spreadsheet of customer contacts, about 300 rows, and it's a mess.

Duplicate entries. Phone numbers in five different formats. Some rows have city and state, others have a zip code but no city. Several contacts appear twice because one entry says "Johnson & Associates" and the other says "Johnson and Associates."

You've been meaning to clean this up for months. You'd normally spend a full afternoon doing it by hand, or ask someone on the data team to write a script.

Instead, you're going to ask Claude Code.

The first prompt

You open your terminal, navigate to the folder with your spreadsheet, and start Claude Code. Here's what you type:

I have a CSV file called customer_contacts.csv in this folder. It's messy. Can you:
1. Find and merge duplicate entries (some companies appear twice with slightly different names)
2. Standardize all phone numbers to (555) 123-4567 format
3. Fill in missing city names based on the zip code
4. Give me a clean version saved as customer_contacts_clean.csv

This is a solid prompt. It describes the outcome, lists specific tasks, and tells Claude Code where to save the result.
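To make one of these tasks concrete: phone-number standardization is the kind of step Claude Code typically handles with a few lines of code behind the scenes. Here's a minimal sketch of what that might look like in Python (the function name and approach are illustrative, not what Claude Code literally writes):

```python
import re

def standardize_phone(raw):
    """Reduce a US phone number to its digits, then reformat as (XXX) XXX-XXXX."""
    digits = re.sub(r"\D", "", raw)        # strip everything except digits
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                # drop a leading country code
    if len(digits) != 10:
        return raw                         # leave anything unusual untouched
    return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"

print(standardize_phone("555.123.4567"))    # (555) 123-4567
print(standardize_phone("1-555-123-4567"))  # (555) 123-4567
```

Notice the last guard: a number that doesn't reduce to ten digits is passed through unchanged rather than mangled. That "leave it alone if unsure" instinct is a theme we'll return to later in this walkthrough.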

What Claude Code does

Claude Code reads the CSV, analyzes the data, and gets to work. When it finishes, it shows you a summary:

Found 312 rows in customer_contacts.csv.
- Identified 23 likely duplicate pairs based on company name similarity
- Merged duplicates, keeping the most complete entry from each pair
- Standardized 287 phone numbers to (555) 123-4567 format
- Filled in 45 missing city names from zip codes
- Saved cleaned file as customer_contacts_clean.csv (289 rows)

That looks great. Twenty-three duplicates found, phone numbers standardized, cities filled in. Clean file saved.

If you stopped here, you'd have a tidy-looking spreadsheet.

But remember the lesson from the last two pages: always check.

Spot-checking the result

You ask Claude Code to show you a few of the merged duplicates so you can verify:

Show me 5 of the duplicate pairs you merged, with the original entries and what you kept.

Claude Code shows you the comparisons. Four of them look right: "Johnson & Associates" merged with "Johnson and Associates," "Smith Corp" merged with "Smith Corp." (trailing period removed).

But the fifth one catches your eye:

Merged: "Pacific Northwest Design" + "Pacific Design Group"
Kept: Pacific Northwest Design, Portland OR
Removed: Pacific Design Group, Seattle WA

Those aren't the same company. One is in Portland, one is in Seattle. Claude Code saw "Pacific" and "Design" in both names and assumed they were duplicates.

Catching the mistake

This is the trust-then-verify gap in action. The output looked clean and confident. The summary said "23 likely duplicate pairs" without flagging any uncertainty. But one of those merges was wrong, and if you hadn't checked, you'd have lost a customer record.

Here's what to do.

First, don't panic. Your original file is untouched because you told Claude Code to save the result as a new file. Nothing is lost.

Second, don't ask Claude Code to "fix the duplicates" without saying more. That's too vague, and it's the kind of correction that leads into the two-correction spiral we talked about.

Instead, be specific about the problem and what you want done differently:

You merged "Pacific Northwest Design" and "Pacific Design Group" but those are different companies. Can you:
1. Unmerge those two and keep both entries in the clean file
2. Show me any other merges where the companies are in different cities — those are probably wrong too

The corrected result

Claude Code goes back through its work and finds two more bad merges where the companies were in different cities. It unmerges all three and shows you the updated list.

Unmerged 3 entries where companies were in different cities:
- Pacific Northwest Design (Portland OR) / Pacific Design Group (Seattle WA)
- Mountain View Analytics (Denver CO) / Mountain Analytics (Mountain View CA)
- Lakeside Marketing (Chicago IL) / Lake Marketing Group (Lake Oswego OR)

Updated customer_contacts_clean.csv now has 292 rows.

The second one is worth a closer look. Claude Code had matched "Mountain View Analytics" with "Mountain Analytics" without noticing that "Mountain View" in the first company's name is a place, the very city where the second company is located.

This tells you something about how Claude Code thinks: it's good at fuzzy matching, but it can be overconfident about what counts as a match. For names that share common words like Pacific, Mountain, or Lake, it sometimes groups things that only share a geography-related word.
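You can see why this happens with a plain string-similarity score. The sketch below uses Python's standard-library difflib, one common fuzzy-matching approach (Claude Code may well use something different), to score the two pairs from this scenario:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Return a 0-1 similarity score between two company names."""
    return SequenceMatcher(None, a, b).ratio()

# A genuinely bad match still scores fairly high, because "Pacific"
# and "Design" are shared words
print(similarity("Pacific Northwest Design", "Pacific Design Group"))   # ~0.64

# The name-vs-city collision scores even higher
print(similarity("Mountain View Analytics", "Mountain Analytics"))      # ~0.88
```

A score alone can't tell a shared brand word from a shared company, which is exactly why the location check you're about to add matters.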

Getting smarter about it

Here's where the feedback gets really useful. Instead of fixing each bad merge one by one, you can give Claude Code a rule:

Good catch on those. New rule for the duplicate detection: never merge two entries if they're in different cities or different states. Can you rerun the full dedup with that rule?

Claude Code reruns the duplicate detection with the new constraint. This time it finds 19 duplicates instead of 23. Those four extra "duplicates" from the first pass were all false matches, different companies that happened to share a word.

This is iteration at its best. You didn't patch individual mistakes. You found the underlying problem (the matching was too aggressive across locations) and gave Claude Code a rule to prevent it everywhere.
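The location rule translates directly into code. Here's a sketch of what the guarded duplicate check might look like after your feedback, assuming each row is a dict with hypothetical name, city, and state fields:

```python
from difflib import SequenceMatcher

def is_duplicate(row_a, row_b, threshold=0.85):
    """Treat two rows as duplicates only if the names are very similar
    AND the rows are in the same city and state."""
    if row_a["city"] != row_b["city"] or row_a["state"] != row_b["state"]:
        return False  # the new rule: never merge across locations
    score = SequenceMatcher(None, row_a["name"].lower(),
                            row_b["name"].lower()).ratio()
    return score >= threshold

a = {"name": "Johnson & Associates",   "city": "Portland", "state": "OR"}
b = {"name": "Johnson and Associates", "city": "Portland", "state": "OR"}
c = {"name": "Pacific Design Group",   "city": "Seattle",  "state": "WA"}

print(is_duplicate(a, b))  # True  — same place, very similar names
print(is_duplicate(a, c))  # False — different city, never merged
```

The rule is a hard filter that runs before any similarity scoring, so no amount of name overlap can override it. That's what makes it a fix to the approach rather than a patch to one result.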

The zip code question

Now you check the city fill-ins. You ask Claude Code to show you a few:

Show me 5 of the rows where you filled in the city from the zip code.

Most look right. But you notice that Claude Code filled in "New York" for a zip code that actually covers parts of both New York and New Jersey. Zip codes near state borders can be ambiguous.

Rather than having Claude Code guess, you change the approach:

Actually, for any zip code where you're not 100% certain of the city, don't fill it in. Instead, add a column called "needs_review" and put "yes" for any row where the city is still missing or uncertain.

This is a smarter strategy. Instead of asking Claude Code to be right about everything, you're asking it to be honest about what it doesn't know.

Claude Code reruns and marks 12 entries for review. The 45 cities it filled in originally split into 33 confident fills and 12 uncertain ones.

You can review those 12 by hand in about five minutes. That's a lot faster than reviewing all 300 rows, or worse, trusting 45 fill-ins that included some wrong answers.
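Flag-don't-guess is easy to express in code, too. Here's a sketch of the idea, assuming a hypothetical zip-to-city lookup in which ambiguous border zips map to more than one possible city:

```python
# Hypothetical lookup table: a border zip can map to multiple cities
ZIP_TO_CITIES = {
    "97201": ["Portland"],
    "10004": ["New York"],
    "07302": ["Jersey City", "New York"],  # ambiguous: don't guess
}

def fill_city(row):
    """Fill in a missing city only when the zip maps to exactly one city;
    otherwise mark the row for human review."""
    if row.get("city"):
        row["needs_review"] = "no"
        return row
    candidates = ZIP_TO_CITIES.get(row.get("zip", ""), [])
    if len(candidates) == 1:
        row["city"] = candidates[0]
        row["needs_review"] = "no"
    else:
        row["needs_review"] = "yes"  # missing or ambiguous: a human decides
    return row

print(fill_city({"name": "Acme",           "city": "", "zip": "97201"}))
print(fill_city({"name": "Borderline LLC", "city": "", "zip": "07302"}))
```

The structure is what matters: one branch for "certain," one branch for "ask a human," and nothing in between where the code silently guesses.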

What this walkthrough teaches

This scenario shows the full loop: ask, review, catch a problem, give specific feedback, improve the approach.

Here's what matters most.

Check a sample, not the summary. Claude Code's summary said "23 duplicates merged" and that sounded great. It was only by looking at specific examples that you caught the bad merge. Always ask to see a few real results before accepting the whole batch.

Name the problem, then fix the rule. "Some of those merges were wrong" doesn't help much. "Don't merge entries in different cities" does. The more specifically you describe why something is wrong, the better Claude Code can prevent it next time.

Let Claude Code be uncertain. Asking it to flag entries it's unsure about, instead of guessing, is one of the most useful techniques you'll pick up. You're not asking for perfection. You're asking it to tell you where it needs your judgment.

Keep your original files safe. In this scenario, you told Claude Code to save to a new file, which meant the original data was never at risk. For data tasks, always specify a new filename rather than overwriting the original.
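That safety habit has a direct code-level counterpart: open the source read-only, write everything to a new file. A sketch with hypothetical filenames and an illustrative per-row transform:

```python
import csv

def clean_copy(src, dst, transform):
    """Read src, apply transform to each row, write the results to dst.
    The source file is opened read-only and is never modified."""
    with open(src, newline="") as f_in:
        rows = [transform(dict(row)) for row in csv.DictReader(f_in)]
    with open(dst, "w", newline="") as f_out:
        writer = csv.DictWriter(f_out, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

# e.g. clean_copy("customer_contacts.csv",
#                 "customer_contacts_clean.csv",
#                 lambda row: row)  # swap the lambda for real cleaning steps
```

Because `dst` is a different path from `src`, a bad cleaning run costs you nothing but a rerun.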

When to start over vs. keep going

This scenario worked because the corrections were specific and the overall approach was sound. But sometimes a conversation goes further off track. Claude Code misunderstands the entire task, or you realize after several rounds that you described the wrong thing.

Here's the rule of thumb from the previous page: if you've corrected the same type of mistake more than twice, start a new conversation with a better prompt. In this scenario, we needed one major correction (the dedup rule) and one approach change (flagging uncertain cities). That's healthy iteration.

If Claude Code kept merging wrong entries even after the rule, or kept guessing at cities despite being told to flag them, that's a sign to /clear and rewrite your original prompt with those constraints built in from the start.

What's next

You've seen a full conversation, good parts and messy parts included. This is what real work with Claude Code looks like. Not perfect output on the first try. Reviewing, catching issues, giving feedback that makes each round better.

Next, we'll look at managing conversations over time: when to start fresh, when to keep going, and how to organize your sessions.
