
Exercise: Automate a weekly feedback report

Put your automation skills to work by categorizing customer feedback, generating a summary report, and saving the workflow as a reusable slash command

What you'll practice

This exercise ties together the whole module. You'll spot a task worth automating, run it once interactively, then save it as a slash command you can reuse every week.

The scenario: you have a folder of customer feedback files. You ask Claude Code to categorize and summarize them, then turn that process into a one-word command. Next week, you drop new files in the folder and type /feedback-summary. Done.

Before you start

You need two things:

  1. Claude Code installed and working.
  2. A project folder. You'll create the sample feedback files in the next step.

Since you're working with sample data here, there's nothing at risk. In your real work, remember the golden rule from this module: always work on a copy.

Tip: This exercise builds on Module 3's data skills and Module 4's automation concepts. If something feels unfamiliar, flip back to those pages. The exercise walks you through each step.

Step 1: Set up your project folder

Create a folder and start Claude Code:

Mac or Linux:

mkdir ~/Documents/feedback-exercise
cd ~/Documents/feedback-exercise
claude

Windows (PowerShell):

mkdir ~\Documents\feedback-exercise
cd ~\Documents\feedback-exercise
claude

Now ask Claude Code to create sample feedback files:

Create a folder called "feedback" with 20 text files, each containing a realistic customer feedback message. Name them feedback-001.txt through feedback-020.txt.

Make the feedback varied and realistic:
- About 5 bug reports (things that are broken or not working)
- About 5 feature requests (things customers wish existed)
- About 5 praise messages (customers saying what they love)
- About 5 complaints (frustration about experience, pricing, or support)

Each file should be 2-4 sentences of natural-sounding customer feedback. Don't include labels or categories in the files — just the raw feedback text, as if a customer wrote it.

Claude Code will create the folder and all 20 files. Verify what you have:

How many files are in the feedback folder? Show me the contents of three random files.

You should see 20 files with realistic feedback messages. The content shouldn't include any category labels — that's Claude Code's job in the next step.
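
To give you a sense of what to expect, a single file might read something like this (your generated files will differ):

The export to PDF button has stopped working since the last update. Every time I click it, the page just reloads and nothing downloads. I've tried two different browsers with no luck.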

Step 2: Categorize and summarize

This is the core task you'll eventually automate. For now, do it interactively so you can see how it works:

Read all the files in the feedback folder. Categorize each one as either:
- Bug Report
- Feature Request
- Praise
- Complaint

Then generate a summary report that includes:
1. Total number of feedback items
2. Count for each category
3. Two representative quotes from each category
4. A brief one-paragraph overall summary of what customers are saying

Show me the categorization of each file first, before generating the report.

Claude Code will read through all 20 files and show you its categorization. This is your chance to review.
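
The listing will look something like this, though the exact file numbers and categories will vary:

feedback-001.txt: Bug Report
feedback-002.txt: Praise
feedback-003.txt: Feature Request
...
feedback-020.txt: Complaint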

Look through the list:

  • Does each category assignment make sense?
  • Did anything get miscategorized? A complaint about a bug could be either "Bug Report" or "Complaint." There's no single right answer, but the choice should be consistent.

If something looks wrong, tell Claude Code: "File feedback-007.txt is more of a feature request than a complaint. Recategorize it."

Once you're satisfied, let it generate the summary report.

Save that report as feedback-report.md

Review the report:

Show me the contents of feedback-report.md

Does the report look like something you'd actually send to your team? If the formatting needs work, ask for changes: "Use bullet points instead of numbered lists" or "Add a section header for each category."
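
As a rough guide, a solid report follows a shape like this (the headings and counts below are placeholders, not the output you'll get):

# Customer Feedback Summary

Total feedback items: 20

## Counts by category
- Bug Report: 5
- Feature Request: 5
- Praise: 5
- Complaint: 5

## Representative quotes
(two quotes per category)

## Overall summary
(one paragraph)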

Heads up: Check the numbers. If Claude Code says there are six bug reports but you only created five, that's worth catching. Count the files in each category against the total.

Step 3: Save it as a slash command

You've done the task once interactively. Now save it so you never have to explain it again.

Ask Claude Code to create the command for you:

Create a slash command called /feedback-summary that does what we just did. It should:
1. Read all .txt files in the feedback folder
2. Categorize each file as Bug Report, Feature Request, Praise, or Complaint
3. Generate a summary report with counts, representative quotes, and an overall summary
4. Save the report as feedback-report.md

Use $ARGUMENTS so I can optionally specify a different folder. If no folder is given, default to the "feedback" folder.

Claude Code will create a file at .claude/commands/feedback-summary.md. It's a plain text file with instructions — readable English, not code.

Check what it created:

Show me the contents of .claude/commands/feedback-summary.md

You should see a Markdown file with clear instructions and a $ARGUMENTS placeholder. Look for two things: does it mention the default folder, and does it include your four category names? If you want to adjust the instructions later, you can edit this file directly or ask Claude Code to update it.
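
For reference, the file Claude Code writes will read roughly like the sketch below. The exact wording will differ, and that's fine as long as the steps match:

Read all .txt files in the folder named in $ARGUMENTS. If no folder is given, use the "feedback" folder.

Categorize each file as Bug Report, Feature Request, Praise, or Complaint.

Generate a summary report with the total number of feedback items, a count for each category, two representative quotes per category, and a one-paragraph overall summary.

Save the report as feedback-report.md.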

Step 4: Test the slash command

Now test it. First, create a fresh batch of feedback files to simulate next week:

Create 5 new feedback files in the feedback folder, named feedback-021.txt through feedback-025.txt. Make them a mix of categories, with content different from the first batch.

Clear your conversation so Claude Code starts fresh, the way it would next week:

/clear

Now run your new slash command:

/feedback-summary

Claude Code reads the instructions from your slash command file and runs the whole workflow. It should read all 25 files (the original 20 plus the five new ones), categorize each one, and save a fresh feedback-report.md.

Check the results:

Show me the report. How many files did it process? Are the five new files included?

The report should cover all 25 files. If it only processed the original 20, the slash command might need tweaking. Tell Claude Code: "The report is missing the five new files. Fix the /feedback-summary command to pick up all .txt files in the folder."

You can also test the $ARGUMENTS feature. Create a second folder with different feedback:

Create a folder called "support-tickets" with 5 sample support ticket text files.

Then run:

/feedback-summary support-tickets

Claude Code should process the support-tickets folder instead of the default feedback folder. If it doesn't, the $ARGUMENTS handling in the command file needs fixing — ask Claude Code to update it.

Step 5: Verify your setup

One final check. Make sure everything is where it should be:

Show me the folder structure of this project, including the .claude folder

You should see something like:

feedback-exercise/
├── .claude/
│   └── commands/
│       └── feedback-summary.md
├── feedback/
│   ├── feedback-001.txt
│   ├── feedback-002.txt
│   ├── ... (22 more files)
│   └── feedback-025.txt
├── support-tickets/
│   └── ... (5 files)
└── feedback-report.md

The key piece is .claude/commands/feedback-summary.md. That file is your automation. Next week, you drop new feedback files in the folder, start Claude Code, type /feedback-summary, and get a fresh report.

What you just did

You went through the full automation cycle from this module:

  1. Did the task interactively — categorized feedback and generated a report by describing what you wanted.
  2. Reviewed and refined — checked the categorizations, adjusted the output format.
  3. Saved it as a slash command — turned a one-time task into a reusable /feedback-summary command.
  4. Tested the command — ran it on new data to confirm it works without re-explaining everything.

That's the pattern. Any task you do more than twice with Claude Code is a candidate for this cycle: do it once, refine it, save it.

If something went wrong

Slash command didn't work? Make sure the file is at .claude/commands/feedback-summary.md (not somewhere else). The .claude/commands/ folder must be inside your project folder — the one you started Claude Code from.

Claude Code missed some files? Check that all files end in .txt and are in the right folder. Ask: "List every file in the feedback folder and tell me which ones you read."

Categories seem wrong after using the slash command? The command file might need clearer category definitions. Ask Claude Code to update it:

Update /feedback-summary to include definitions for each category. A bug report is about something broken. A feature request is about something new the customer wants. Praise is positive feedback. A complaint is frustration about experience, pricing, or support.

Report overwrote your previous one? The command saves to feedback-report.md each time, replacing the old version. If you want to keep a history, ask Claude Code:

Update /feedback-summary to include today's date in the report filename, like feedback-report-2026-02-08.md.

Going further (optional)

If you finished quickly and want to extend this:

  • Add a chart. Ask Claude Code: "Update /feedback-summary to also create a bar chart showing the count for each category. Save it as feedback-chart.png."
  • Write a CLAUDE.md. Create a CLAUDE.md in your project folder with category definitions, preferred report format, and any rules (like "always include exact quotes, never paraphrase"). Run /feedback-summary again and see if the output improves. There's a sample sketch after this list.
  • Export as HTML. Ask Claude Code: "Update /feedback-summary to save the report as a styled HTML page I can open in my browser, with color-coded sections for each category."
  • Process real feedback. If you have actual customer feedback from your work, copy the files into the feedback folder and run /feedback-summary on real data. That's the whole point.
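
If you try the CLAUDE.md option, a minimal version might look like this; adjust the definitions and rules to your own needs:

# Project notes for Claude Code

Category definitions:
- Bug Report: something is broken or not working as intended
- Feature Request: the customer asks for something that doesn't exist yet
- Praise: positive feedback about the product or experience
- Complaint: frustration about experience, pricing, or support

Report rules:
- Always include exact quotes, never paraphrase
- Use bullet points, not numbered lists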

What's next

That's Module 4. You can spot tasks worth automating, run bulk operations safely, process files and documents, and save your workflows as slash commands.

Module 5 takes the next step: building tools you and your team can actually use. Dashboards, calculators, internal utilities — things you open in a browser, not in the terminal.
