Manuals Over Magicians: Why Note-Taking Beats Know-It-Alls
Part 1: Stop Hiring Experts, Start Hiring Learners
I’m interviewing again. After years of freelance work—underpaid and with little say in product development—I’m reminded that interviewing is a nightmare. You never know what they want.
Yesterday, I interviewed for a support role and was asked about sorting algorithms. I promise customers don’t care whether support knows algorithmic efficiency. They care whether the product works for their use case and whether someone can explain complex issues to their mom. Nobody cares if we switch from bubble to heap sort.
My bad interview instinct is to focus on what matters: work ethic. You’re supposed to humble-brag about trivial details until time runs out. I want to tell interviewers, “Look, I may not know every answer, but I know how to get them. I’m first in, last out, friendly, and I document everything so no one else has to reinvent the wheel.” But what gets me the next interview is a monologue about SAML or DNS in Simple English Wikipedia language.
The real difference between senior and non-senior workers isn’t how many frameworks they know; it’s consistency. Many new hires have sharper technical skills than their managers but no grasp of the product, use cases, history, or where the money comes from. They may know new programming languages the company can’t afford to adopt or refactor toward. Technical fluency does not equate to competence on the job.
Sometimes the smartest grads make the worst employees, eager to “improve processes” before understanding the ones that exist. They want to skip steps. Since interviews rarely focus on goals or learning, candidates are rewarded for sounding bored, as if experience and curiosity can’t coexist. Companies want people who can do the job today, even if it means overlooking those who could learn it faster and do it better. The irony is that learning, not static expertise, is what defines a capable employee. The Peter Principle is alive and well.
If I could redirect the conversation, I’d talk about why there’s no “voodoo” in software, so the most vital discipline for any company is documentation. Everything else comes second. [Hardware is another story!]
Part 2: Boring Technical Writing Is Everything
My wheelhouse is getting ink to paper. I standardize procedures, define use cases, clarify job roles. Again, there’s no magic or voodoo. If you document things clearly, anyone can perform any task.
Once written down, patterns appear: gaps in training, product inconsistencies, and areas ripe for automation.
I’ve seen the cost of poor documentation. One company I worked with had sixteen support employees purely because of bad management and worse writing. On day one, I flagged mistakes in their installation docs; two months later, they were still there. Half that team and a few weeks of focused documentation could have cut tickets, improved response time, raised satisfaction (of customers and employees), and driven sales.
Technical issues happen. When they repeat, and everyone rushes to blame everyone else, the real problem is unclear ownership. If employees can’t define their product’s use cases and company roles, customers can’t.
Across nearly ten companies I worked with, the same issue repeats: no knowledge sharing. The “solution” is always a Slack channel—a black hole where people ping the same product managers for answers. Those answers should live in documentation or, better, in the product. Every unanswered question should trigger a follow-up: update a doc, record a walkthrough, do something. Sometimes the info exists but needs better tags or callouts so AI can find it. Sometimes the product needs clearer tooltips or error messages. Either way, the answers must be shared more widely.
Product and documentation teams should treat support questions as design feedback. Every repeated question reveals a naming flaw, UX gap, or missing visual cue. Good documentation and good design evolve together. They both aim to remove friction.
How to Start Documentation
Start with a glossary. Define every branded term once and use it everywhere. Never let one thing have three names. This is why we “Google” things rather than search for them, and put on “Band-Aids,” not adhesive bandages.
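A glossary is also enforceable. Here’s a minimal sketch of a terminology lint, assuming docs live in a docs/ folder and inventing a product term, “Scan,” with banned synonyms:

    # Flag any doc that uses a banned synonym instead of the glossary term.
    # "Scan" and its synonyms are made-up examples; swap in your own terms.
    grep -rniE 'sweep|system check' docs/ && echo 'Use the glossary term: Scan'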
Make sure marketing knows the glossary by heart and sales understands the definitions.
Follow a model. Decide how to separate docs—by guide type, by feature, or both. Some companies keep “step-by-step guides,” “marketing material,” and “troubleshooting” separate; others group customer docs and marketing docs. Consistency matters, because each grouping can have its own conventions.
Keep verbs consistent. Repetition is good. A solid formula: Create single, Create multiple, Edit, Delete, Troubleshoot. Parallel verbs help readers scan.
Name everything consistently. Use the glossary (step 1) and verbs (step 3). For instance, image “scan_create_3callout” shows it’s the callout after step 3. Consistent naming makes audits painless.
I recently interviewed with a company that uses a verb_name convention. I prefer name_verb for sorting.
Name and tag images consistently too. Maybe the image names should reveal the doc endpoint they appear in. Maybe they should include dates, because the product gets a facelift every few months.
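Consistent names also make those audits scriptable. A minimal sketch, assuming images live in docs/images and follow the name_verb_step pattern from above:

    # Print image files that break the name_verb_step convention
    # (e.g., scan_create_3callout.png). Anything listed needs renaming.
    find docs/images -type f \
      | grep -Ev '/[a-z]+_[a-z]+_[0-9]+[a-z]*\.(png|jpg|gif)$'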
Split by platform and role. Label content by OS, version, permission level, and date. Never make users guess. A Mac user shouldn’t wade through Linux setup. Ideal setup or support guides let users skip all the irrelevant fluff.
It is especially easy to make customizable onboarding guides using AI!
Add metadata for intent. Tag articles by motivation, not just the words used on the page and their synonyms. You may be explaining settings or email report features, but the metadata should explain the why: security, fraud prevention, analytics, compliance. If you have an AI chatbot that answers customer questions, tagging is as important as the article itself.
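What that looks like depends on your tooling. A minimal sketch, assuming each article opens with simple key: value front matter (the field names here are hypothetical):

    # Surface every article written for a security-motivated question,
    # no matter which feature it happens to describe.
    grep -rl 'intent: security' docs/

    # Combine with platform labels so a Mac user never sees Linux setup.
    grep -rl 'os: macos' docs/setup/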
Write examples. Most users learn by copying before customizing. They don’t want to think until they’ve managed to replicate your example.
Schedule audits. Check regularly for accuracy, screenshots, and terminology drift.
Define a house style. Set tone, grammar, and inclusivity rules that match your product voice.
Close the loop. Every internal question should lead to a doc update or an explanation of how to find the info. Some also lead to product improvements. Using Slack to answer a question is fine once in a while, not all the time. Constant repeat questions point to a clear failure in the product education process.
When companies apply these principles, documentation becomes infrastructure. When they don’t, confusion multiplies, and entire departments solve the same problem in parallel.
Part 3: Supporting Support
When a ticket comes in, most companies lean on that veteran who “knows everything” or the R&D lead who thinks everyone else is lazy. They should rely on a manual. Troubleshooting is where documentation proves its worth.
Any tech person can frantically ‘right click > inspect’, check network calls, or click around a dashboard. Few can trace a symptom to its root quickly. A structured process is the difference between guessing and diagnosing. I’ve seen support teams waste hours gathering data that better automation or documentation could surface in seconds.
That bloated support team I mentioned earlier? Each agent had their own way of diagnosing issues. I promise only one of those methods was efficient and should have been written down. One. The others were “job security.”
Most support guides should follow the same basic steps, which I’ll outline:
The first step in handling any ticket is getting minimum details. Why? Because you cannot begin to investigate without them. Every strong ticket procedure requires two basics: affected users’ emails and approximate incident time. Without those, diagnosis is guesswork. Most orgs require additional mandatory fields. Sometimes a timestamp alone is enough to say, “We had an outage at that time”; sometimes a long investigation is still needed.
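That minimum is easy to enforce. A minimal sketch, assuming tickets are exported as text files with labeled fields (the field names are hypothetical):

    #!/usr/bin/env bash
    # Refuse to start triage until the two mandatory basics are present.
    ticket="$1"
    for field in 'Affected users:' 'Incident time:'; do
      if ! grep -q "$field" "$ticket"; then
        echo "Missing mandatory field: $field" >&2
        exit 1
      fi
    done
    echo 'Minimum details present; triage can begin.'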
The next step is defining scope and severity. In 9 out of 10 companies I’ve worked with, triage was done internally.
Triage involves gathering context. If a customer reports high CPU usage, ask not just why but when. Does it spike when the product opens, during a specific function, or randomly? Pinpointing the trigger is half the battle because “it doesn’t work” really means “it doesn’t work on Internet Explorer” or “it fails when the internet cuts for a second.”
Next, determine who and how many are affected, another part of scope and severity. This might tell us:
Single user: likely local issues. Check drivers, background software, resources. It’s almost always user error, nicknamed PEBCAK or ID-10-T, meaning the fix is better education!
Organization: likely product or cybersecurity configuration, or permission settings.
Multiple regions: service dependencies like Twilio or AWS might be affected.
Documentation should teach how to recognize each pattern and provide examples. Every unique troubleshooting path should generate a doc update. The first time troubleshooting a third-party plug-in may be exciting, like a treasure hunt. The 10th time should involve reading docs that get you straight to the solution.
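For example, the doc for the single-user-versus-organization call might include a query like this one, a sketch that invents both the error code and the log format (assume the user’s email is the fifth field):

    # Count distinct users hitting the error. One email suggests a local
    # issue; dozens across domains suggest a shared service issue.
    grep 'ERR_SCAN_TIMEOUT' /var/log/product.log \
      | awk '{print $5}' | sort -u | wc -l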
Only now, if the info provided is insufficient for a fix, do we actually investigate. Responsible orgs work hard to make this step rare: QA and self-service education should resolve most issues before anyone has to dig.
When observation is needed, the commands and their context must be documented:
Windows: Task Manager for CPU and memory.
macOS: Activity Monitor.
Linux: Terminal with commands like ‘top -o %CPU’ and ‘ps aux | grep {software}’.
These reveal whether the process misbehaves or another program is the culprit. Good documentation explains what outputs mean: “If CPU spikes without I/O, suspect recursive scans; if both rise, check backups or antivirus.” It may also provide followups, suggesting the user terminate some tasks and then do additional checks.
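The doc can pair those interpretations with the exact commands that produce the readings. A minimal sketch for the Linux case, assuming a process called myproduct and the sysstat package for pidstat:

    # Snapshot the heaviest processes, then watch the product's disk I/O,
    # so the CPU and I/O readings can actually be compared.
    top -b -n 1 -o %CPU | head -n 15
    pidstat -d -p "$(pgrep -f myproduct | head -n 1)" 5 3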
Then the guide may move to environment checks:
Logs (eventvwr.msc, /var/log/syslog). Great docs make it easier to query logs! There shouldn’t be guessing during queries.
Version alignment between client and server
Time sync (timedatectl status)
Disk usage (df -h or Disk Usage Analyzer)
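A guide might bundle these into one copy-paste snapshot; a minimal sketch assuming a Debian-style Linux host:

    # One-shot environment snapshot to attach to the ticket.
    timedatectl status            # clock drift breaks log correlation
    df -h                         # full disks cause strange failures
    tail -n 100 /var/log/syslog   # recent system-level errors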
Each of these checks should have an SOP defining goals, commands, expected outputs, and escalation criteria. For example: if CPU >90% for over 10 minutes without I/O, gather logs A and B and escalate to Engineering.
Good SOPs include query examples and data limits to avoid excessive resource use. Without them, two engineers will gather different data—or none at all—wasting hours and missing root causes.
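Written as a script, that escalation rule might look like this sketch. It samples once (the 10-minute window belongs to monitoring), bounds the evidence it gathers, and assumes a hypothetical process name:

    # Implement the rule: CPU above 90% -> gather bounded evidence, escalate.
    cpu=$(ps -o %cpu= -C myproduct | head -n 1 | tr -d ' ')
    if [ "${cpu%%.*}" -gt 90 ] 2>/dev/null; then
      tail -n 200 /var/log/product.log > /tmp/escalation_evidence.log
      echo 'Threshold tripped: attach evidence and escalate to Engineering.'
    fi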
Yes, over time you develop intuition from patterns. Senior workers know an issue before the customer finishes their sentence. But even those patterns should be written down and handled in guides.
Troubleshooting must also connect to monitoring. Persistent issues should trigger alerts with documented thresholds. A command like: ‘grep -i "cpu" /var/log/product.log | tail -n 50’ might appear in a guide titled Collecting CPU Diagnostics. The point is consistency—everyone follows the same steps and produces comparable evidence.
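The same documented query can feed the alert. A minimal sketch, with the threshold and on-call address as placeholders:

    # Cron-friendly check: alert when CPU warnings exceed the documented
    # threshold, reusing the exact grep from the guide.
    count=$(grep -ci 'cpu' /var/log/product.log)
    if [ "$count" -gt 100 ]; then
      echo "product.log CPU warnings: $count (threshold 100)" \
        | mail -s 'ALERT: Collecting CPU Diagnostics' oncall@example.com
    fi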
Each new command and “cheat code” is written for others to use. Each monitor has a description.
Each closed ticket should refine documentation with new examples or tags like “regional outage” or “third-party delay.” That’s how troubleshooting becomes a science. When documentation leads, even HR and marketing can solve what once required senior engineers.
Hire Motivated People, Not Know-It-Alls
Interviews are broken. Too many companies hire based on experience matches instead of potential. They choose people who have done the same job in the same industry, not those who can learn. That rewards timing, not growth, and fills companies with static experts who maintain instead of improve.
At that messy company, a few employees told me, “We want to be gatekeepers. People should come to us when they have problems.” I nearly vomited at the thought of re-explaining something a hundred times instead of writing it once and letting an AI chatbot handle the rest.
Hiring should focus on work ethic, not trivia. Test whether candidates are motivated and curious, not whether they can recite acronyms or KPIs. In professional services, hire people who understand customer needs and can reason, adapt, and document—not those who memorize your blog.