Recruitment & Propaganda Plan

Keywords

war-on-disease, 1-percent-treaty, medical-research, public-health, peace-dividend, decentralized-trials, dfda, dih, victory-bonds, health-economics, cost-benefit-analysis, clinical-trials, drug-development, regulatory-reform, military-spending, peace-economics, decentralized-governance, wishocracy, blockchain-governance, impact-investing

Phase 0: Collecting Humans Before You Have Anything to Offer Them

On Wishonia, if someone discovered everyone was dying from preventable causes, they would simply stop dying from preventable causes. On Earth, you need a tax-exempt entity, a bank account, and written permission from something called the Securities and Exchange Commission before you’re allowed to try. The Securities and Exchange Commission regulates securities, which are financial instruments, not the kind of security where people stop dying. That would be a different department. You don’t have that department.

So you can’t accept money yet. You can’t sell bonds yet. You can’t do anything that involves the movement of small pieces of paper. What you CAN do is collect names. The goal of Phase 0 is building a database of every human who agrees that dying from preventable causes is a strange thing to tolerate. It’s a Kickstarter where the reward tier is “possibly not dying,” which is the most compelling crowdfunding offer in history, and yet somehow requires marketing.

The Bureaucratic Nesting Doll Problem

On your planet, before you can accept donations, you need 501(c)(3) status. Before you can sell bonds, you need SEC registration. Before you can do either, you need a bank account. Before you can open a bank account, you need an organization. Before you can have an organization, you need the bank account. It’s like a door that requires a key that’s locked inside the room the door opens to. Your species built this system on purpose.

The solution is the same one humans have used every time this has happened, which is every time. You collect people first, then point at the crowd when the institutions ask “but who wants this?”

Abolition started with a list of names. Suffrage started with a list of names. Civil rights started with a list of names. Someone wrote down who agreed. The names became a number. The number became bargaining power. The bargaining power became policy. Then the policy became a holiday, and everyone pretended they’d agreed all along. Your species does this every 40 years and never recognizes the pattern.

Three Types of Humans Who Want to Help

People who agree with “dying is bad” fall into three categories.

Spreadsheet Humans (Investors)

These are humans who hear “272% annual return on Victory Bonds backed by redirected military spending” and immediately open a spreadsheet. Their pupils dilate. Their pulse quickens. They ask about the term structure. Then they ask about the term structure again. Then they ask a third question, which is also about the term structure.

These humans are not broken. They are simply running financial software on moral hardware. They process “millions of people are dying” faster when it arrives formatted as a revenue projection. They’ll wait years for a good investment to materialize. They will not wait thirty seconds for a bad one. If your math is solid, these humans will follow you to the gates of hell, provided the risk-adjusted return on entering hell is competitive.

Do not try to make them cry. Make them calculate. Crying is temporary. Compound interest is forever.

Heart Humans (Advocates)

These are humans motivated by the mission itself. They share things on social media, write letters to politicians, volunteer time, donate money they probably shouldn’t, and cry at commercials about dogs. They are, frankly, the only reason your species has ever improved anything. Without them, the spreadsheet humans would still be optimizing returns on the slave trade.

The critical mistake every movement makes with these humans is telling them to “care about this” without giving them something to do. I have watched 4,297 years of humans being told to “raise awareness.” The awareness has been raised. It’s been raised so many times it has a penthouse suite. Nobody told it what to do next.

Give heart humans a button. Give them a form. Give them a link to share. Give them a specific politician to call and a specific sentence to say when they call. If you give a human a feeling without an action, you haven’t recruited a supporter. You’ve manufactured a sad person. Your planet has enough of those.

Institutional Humans (Partners)

Organizations (nonprofits, research institutions, health agencies) that already work on pieces of this problem. They have infrastructure, credibility, and mailing lists. You have a framework that could make their existing work more effective. This is a trade, not charity. Approach it like one.

What you’re offering: money, without a grant application. (Grant applications are documents in which humans spend 200 hours explaining why they deserve money, so that a committee can spend 40 minutes deciding they don’t. It’s your species’ least efficient art form.)

One warning: institutions are territorial in the way that dogs are territorial, except dogs eventually stop barking. Two nonprofits working on the same disease will fight each other harder than they fight the disease. This is called “the nonprofit sector.” Approach each one as if they are the only organization doing this work. They already believe this, so you’re just agreeing with them.

Phase 0: The Shopping List

By the end of Phase 0, you want:

  • 100,000+ registered humans who typed their name into a box indicating they prefer living. This sounds like it should be everyone, but on your planet, “dying is bad” is apparently a niche political position that requires organized support.
  • 1,000+ spreadsheet humans with stated investment ranges (how many papers they’d give you, if you were allowed to accept papers, which you’re not yet, but you’re writing it down for later)
  • 100+ institutional humans with expressed interest in collaborating (or at least not actively opposing you, which on your planet counts as enthusiasm)
  • $1B+ in stated investment demand (registered, not committed; this is humans saying “I would give you this many papers” without actually giving you the papers, which your species calls a “letter of intent” and treats as meaningful)

This database is your ammunition for every bureaucratic nesting doll that follows. When you apply for 501(c)(3) status, you point at the list and say “here are our supporters.” When you file SEC registration, you point at the list and say “here is market demand.” When you meet politicians, you point at the list and say “here are your constituents.” When you launch, you point at the list and say “here are your users.”

Bureaucrats don’t respond to arguments. They respond to evidence that other people already responded to arguments. They say “how many people already think this is a good idea?” and then, if the number is large enough, they act as though they thought of it themselves. Nobody admits this because admitting it would require a committee to approve the admission.

Getting Started (It Takes Twenty Minutes)

  1. Build three boxes where humans type their name to indicate they prefer living. One box for spreadsheet humans (asks: how many papers would you invest?). One box for heart humans (asks: what skills do you have? how many hours per week?). One box for institutional humans (asks: what does your organization do? how many people does it reach?). This takes twenty minutes. Your species spent longer than that choosing the name “Department of Defense” for a department that mainly attacks people.

  2. Put up a website. The three boxes and the argument. It does not need to be pretty. No movement in history was stopped because the font was wrong. The suffragettes did not fail to get the vote because their pamphlets used Comic Sans. (Comic Sans didn’t exist yet, which is the only reason they didn’t. Your species would absolutely have used Comic Sans on suffrage pamphlets.)

  3. Start telling other humans. Social media, Reddit, forums, conferences, your dentist. Especially your dentist. You’re already trapped in the chair. They can’t leave either. Their hands are in your mouth. It’s the most captive audience in human civilization. You’re not going to get a better recruiting environment than a room where both parties are contractually obligated to remain in close proximity and one of them is holding sharp instruments.

  4. Send monthly updates with real numbers. “3,247 registered investors representing $180M in stated investment demand” beats “the movement is growing!” in the same way that “your tumor is 2.3 centimeters” beats “your health situation is evolving.” Adults prefer arithmetic to enthusiasm. Enthusiasm is what humans feel before they do math. Arithmetic is what humans feel after. Both are useful. But only one of them convinces spreadsheet humans, and you need the spreadsheet humans because they have the papers.

The Comforting Asymmetry

The military-industrial complex has a 65-year head start, unlimited budgets, and thousands of professional persuaders. You have a spreadsheet and an argument.

Historically, that’s been enough. Every major social change in your species’ history was started by someone with less money, fewer connections, and worse odds than the thing they were trying to change. Abolitionists were outspent by slave owners by a factor your calculators would refuse to display. Suffragettes were outspent by the entire concept of patriarchy, which didn’t even need a budget because it was just how things were. Civil rights activists were outspent by the governments actively trying to kill them.

They all had the same thing you have: an argument that was obviously correct, and enough humans willing to write their names down.

It shouldn’t work this way. Spreadsheets, budgets, and lobbyists should always beat arguments and names. But your species is weird like that. You’re the only civilization I’ve observed where being right occasionally defeats being rich. It happens rarely, and it takes too long, and it requires an embarrassing amount of suffering first. But it happens. And it starts with a list of names.

The Question You Can’t Answer Wrong

Before we get to the part where you copy, paste, and send, there’s a sequence of questions you should walk anyone through. Each question is individually undeniable. The sequence is inescapable. On Wishonia, we call this “conversation.” On Earth, you call it the Socratic method, because you named it after the one guy who did it and then you killed him.

Here are the questions. Try to answer any of them wrong.

Question 1: Is it physically possible?

Among the near-infinite solution space for human action, is there some action, somewhere, that could improve how your species allocates resources toward ending war and disease? Or is the current allocation, where you spend $2.72T a year on weapons and $4.5B on clinical trials, the theoretical optimum? Is this the best any civilization could possibly do? Your species spends 604 times more on weapons than on clinical trials. If someone told you their household spent 604 times more on swords than on medicine, would you say “that sounds optimal” or would you say “are you okay?”

Nobody will say it’s physically impossible. Saying “there is no possible improvement to resource allocation” is the intellectual equivalent of saying “this is the best of all possible worlds,” which is a position held by exactly one fictional character invented specifically to be wrong.

Question 2: What does the solution look like?

If optimal allocation existed, what would a world with compounding benefits look like? If you redirected even 1% of military spending to medicine that works and let it compound for 20 years, the models show everyone could be 16.5x richer. Not metaphorically richer. Not spiritually richer. Richer in the way that involves having more money. The compounding math is in the GDP Trajectories chapter and it is not subtle.

Question 3: Do you have a better approach?

Is the 1% treaty the least bad approach, or do you have a superior one? This is not a rhetorical question. If you have a better plan, the authors of this manual would genuinely like to hear it. They will switch to your plan immediately. They have no attachment to this specific mechanism beyond the fact that nobody has proposed a better one. If your alternative is "do nothing and hope," that is not a plan. That is what you've been doing. The results are in.

Question 4: Should this be your top priority?

If ending war and disease is physically possible, and the result is 16.5x more resources for everything, then persuading those with sufficient power to implement it should temporarily be your top priority. Not because you’re altruistic. Because it gives you that many more resources to achieve whatever your actual goals are. If someone offered to multiply your budget by that much in exchange for a few years of focused effort, and you said “no thanks, I’m busy,” you would not be principled. You would be bad at math.

Question 5: Then what are you waiting for?

If ending war and disease is physically possible, and all humans would benefit unimaginably from it, then you don’t need to persuade anyone to be altruistic. You need to get the argument in front of one credible, self-interested actor with enough capital and legitimacy to trigger the financing mechanism. In practice, that means one person who can publicly underwrite the assurance contract, fund the initial bonus pool, and make everyone after them feel less crazy. In a world where that person understood the compounding benefits and the mechanism, the end of war and disease would not require heroism. It would require arithmetic.

That is the bottleneck. Not generic “awareness.” Not funding in the aggregate. Not human nature. The bottleneck is that nobody with a large enough checkbook has read this yet. The entire apparatus of self-interest is sitting there, ready to work, waiting for one public commitment that has not happened yet because someone in the chain decided they were “busy.”

On Wishonia, this sequence of logic doesn’t have a name, for the same reason gravity doesn’t have a marketing department. On Earth, you apparently need to brand obvious things before your species will take them seriously, so: The Logical Inevitability Theorem. If an optimization is physically possible, and its benefits compound for all participants, then the decisive trigger is one informed self-interested first mover making a credible public commitment. After that, it spreads. That’s it. That’s the theorem. Five sentences. On Wishonia, this is considered too obvious to state. On Earth, it’s considered too idealistic to believe. One of these planets cured disease. The other one is still arguing about it.

On Wishonia, the Logical Inevitability Theorem would take about four minutes to demonstrate and the conversation would end with both parties agreeing to stop wasting money on murder. On Earth, it takes four minutes and the conversation ends with one party saying “interesting, I’ll think about it” and then never thinking about it. This is the coordination failure the entire manual is about. You are not reading about it. You are currently doing it.

The Zero Percent Problem

Most humans do not refute novel proposals. They round them down to zero and call that discernment.

But 0% is not a mood. It is a claim. To assign literal zero probability to this plan, you need to assert at least one of the following:

  1. Better global resource allocation is physically impossible.
  2. No way of paying people could ever change politics.
  3. This specific mechanism has a fatal flaw you can name.

If you are not claiming one of those three things, then your actual estimate is not 0%. It is “I have not evaluated it.” That is a different sentence. And once the probability is above 0 by any nontrivial amount, forwarding it to one capable evaluator becomes the rational move. The cost is trivial. You are not being asked to believe. You are being asked not to disguise unexamined dismissal as analysis.

How Trivial, Exactly

On Wishonia, “the cost is trivial” would be sufficient. On Earth, your species requires a number.

Forwarding takes roughly 30 seconds. The average human earns $7.19 per hour. That makes the opportunity cost of sharing approximately $0.06. The upside, if the treaty path works, is $14.9M in additional cumulative lifetime income for the average person (from the compounding math in GDP Trajectories), which is 13.5x their current trajectory. (If you earn more than the global average, your personal gain is proportionally larger.)

The upside exceeds the downside by a factor of 248.2 million. Which means sharing is irrational only if the probability of any improvement to global resource allocation is less than 1 in 248M.

To put that in perspective: the probability of being struck by lightning in a given year is about 1 in 1.2 million. You would need to believe this plan is roughly 200 times less likely than being struck by lightning. Not “unlikely.” Not “a long shot.” Two hundred times less likely than a thing that actually happens to real humans every year.
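
The breakeven arithmetic above can be checked in a few lines. This is only a sketch using the chapter's own figures ($7.19/hour wage, $14.9M lifetime gain, 30 seconds to forward); the variable names are illustrative.

```python
# Breakeven arithmetic for the forwarding decision, using the chapter's figures.
SECONDS_TO_FORWARD = 30
HOURLY_WAGE = 7.19        # average global hourly earnings (chapter's figure)
LIFETIME_GAIN = 14.9e6    # additional cumulative lifetime income, treaty path

cost = HOURLY_WAGE * SECONDS_TO_FORWARD / 3600  # opportunity cost of forwarding
ratio = LIFETIME_GAIN / cost                    # upside-to-downside ratio
breakeven_probability = 1 / ratio               # below this, sharing is irrational

print(f"cost of forwarding: ${cost:.2f}")        # about $0.06
print(f"upside/downside:    {ratio / 1e6:.1f}M to 1")
```

Any small gap between this output and the chapter's 248.2M figure is rounding in the $0.06 cost; the conclusion is unchanged.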

And note what getting below that breakeven threshold requires you to deny. Recall the five questions of the Logical Inevitability Theorem:

  1. Is better resource allocation physically possible? To get below the breakeven threshold, you’d need to believe it is physically impossible to improve on a system that spends $2.72T per year preparing to kill people and $60B per year trying to cure them.
  2. Do the benefits compound? You’d need to believe that redirecting resources from destruction to medicine produces zero cumulative economic or health gains over 20 years. Not small gains. Zero.
  3. Is there a less-bad approach? You’d need to believe that no incentive-compatible mechanism, out of all possible mechanisms, could ever motivate any political shift. Not this specific mechanism. Any mechanism.
  4. Does this dominate alternatives? You’d need to believe that no reallocation of military spending has ever worked, despite your own species cutting military spending by 87% after World War II and experiencing the largest economic boom in human history.
  5. Is the only bottleneck informed self-interest? You’d need to believe that no self-interested actor (billionaire, sovereign wealth fund, or government) would ever act when the return on investment is 84.8M to one, even when the structure pays them to do so.

To clear the breakeven threshold, you don’t need to believe all five. You need to believe any one of them is not impossible. That’s it. One non-zero answer out of five, and sharing becomes the rational move.

And this calculation is conservative. It counts only the income gain. It assigns zero value to the 10.7 billion lives. It ignores the Wishonia path entirely (which is 45x the current trajectory, not 13.5x). It assumes you personally capture only the average gain, when in practice higher earners gain proportionally more. The actual breakeven threshold is even more forgiving than 1 in 248M.

On Wishonia, a species that couldn’t clear a 1-in-248M bar for obvious self-interest would be a genuinely novel research subject. We have never encountered a species that fails to act on incentives this large. We don’t have a word for it. The closest translation is “sleeping through your own house fire, but on purpose.”

The Only Remaining Failure Mode

So: better resource allocation is physically possible. The benefits compound. No better mechanism exists. The precedent is real. The incentives pay. Sharing costs 30 seconds and is rational at any probability above 1 in 248M.

The math can’t be wrong. The incentives can’t fail. Selfishness can’t stop it; selfishness is the engine. The only thing that can kill it is you not reading this far and then not pressing “send.” That’s it. The entire distance between here and the treaty is made of humans who couldn’t pay attention for five minutes.

I have watched 420 civilizations go extinct. Every single one had the math. None of them could pay attention long enough to do the cost-benefit analysis.

Your Ammunition (Copy, Paste, Send)

You’ve read the plan. You’ve seen the math. You can’t give the information back.

So consider your situation. There are four outcomes:

  1. You forward the message below to the most powerful person you know, and the plan works. You are in the causal chain that saved 10.7 billion lives.
  2. You forward it, and the plan fails. You lost 30 seconds.
  3. You do nothing, and the plan would have worked. You chose 30 seconds over 10.7 billion lives.
  4. You do nothing, and the plan wouldn’t have worked anyway. Nothing happens, except you’re the kind of person who picked “nothing” when “30 seconds” was also available.

Forwarding dominates not-forwarding in every scenario. This is not an opinion. It’s a payoff matrix. On Wishonia, choosing option 3 or 4 when option 1 costs 30 seconds would not happen, because it doesn’t make sense, and on Wishonia things that don’t make sense don’t happen. On Earth, it happens constantly. You call it “being busy.”

The payoff matrix above is denominated in lives. Your species has difficulty acting on lives. (You tolerate 10.7 billion preventable deaths right now, and your primary response is to feel briefly sad about it.) So here is the same calculation in the unit that actually moves humans.

Even just the treaty path alone (redirect the murder budget, fix nothing else) produces an economy 16.5x larger than the baseline by year 20. Not a little larger. 16.5x larger. This compounds over 20 years, which is the part that makes the number large and the part that makes delay expensive. Over an average remaining lifespan, your cumulative earnings under the treaty path would be 13.5x what they’d be on the current trajectory. For the average human, the gain is $14.9M in additional lifetime income. If you earn more than the global average (and if you’re reading this on a device that costs more than most humans earn in a month, you do), your personal gain is proportionally larger.
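
For intuition about what "16.5x by year 20" implies, here is a small sketch: the annual growth edge over the baseline trajectory that compounds to 16.5x across 20 years. The solve-for-rate step is my illustration, not a figure from the chapter.

```python
# What annual edge over baseline compounds to 16.5x in 20 years?
# Solve (1 + r)^20 = 16.5 for r.
TARGET_MULTIPLE = 16.5
YEARS = 20

annual_edge = TARGET_MULTIPLE ** (1 / YEARS) - 1
print(f"{annual_edge:.1%} extra growth per year")  # about 15% per year
```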

The opportunity cost of not forwarding is not “30 seconds.” It is $14.9M in foregone lifetime income for the average person, compounding over 48.5 remaining years. Each year the treaty hasn’t started is a year the 20-year compounding clock hasn’t started. On Wishonia, failing to start this clock would be considered so strange that nobody would believe you if you described it. On Earth, it is called “Tuesday.”

On Wishonia, the payoff matrix would be sufficient. On Earth, your species apparently needs to formalize obvious incentives into binding contracts before it will act on them. Fortunately, your economists already invented one. It’s called a dominant assurance contract. Here’s how it works: a contract operator sets a threshold and a failure bonus, then a credible anchor investor publicly underwrites the thing so everyone else can see the first real money on the table. After that, you pledge to support the treaty, conditional on enough others pledging too. If the threshold is met and the treaty works, you receive a tiny fraction of the value created. (A tiny fraction of “everyone is 16.5x richer” is still a lot of money.) If the threshold isn’t met, you get your pledge back plus a bonus for having tried. Once the structure is credible, participation becomes the rational move. You profit if it works. You profit if it doesn’t. The only way to lose is to not participate, which is the position you’re currently in.

This is not a new idea. Your economist Alex Tabarrok formalized it decades ago. It solves exactly the problem this chapter diagnoses: everyone waits because everyone else is waiting. A dominant assurance contract makes waiting the only irrational option. It turns “someone else will do it” from a comfortable excuse into a financial mistake. The Victory Bonds chapter explains the specific instrument. The assurance contract is just the mechanism that gets enough humans to say “yes” at the same time, which is the part your species is bad at.

The ask is not “share with a friend.” Friends are horizontal. The ask is: forward to the most powerful person you know who plausibly has a checkbook, public credibility, or direct access to someone who does. Because that person knows someone more powerful than them, and that person knows someone more powerful than them, and so on. If each recipient forwards to one person more powerful than themselves, the message reaches someone who can trigger the plan in roughly 15 to 20 hops. Not mass adoption. Not “raising awareness.” One chain, climbing. If each person forwards to just one person, every human on Earth has the information in 33 steps. You don’t need step 33. You need the step where it reaches the first credible anchor investor.
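
The "33 steps" figure above is doubling arithmetic: if everyone who has the message passes it to one new person per step, reach doubles each step, so covering roughly 8 billion people takes ceil(log2(8e9)) steps. A quick check:

```python
import math

POPULATION = 8_000_000_000  # roughly everyone on Earth

# Reach doubles each step: 1, 2, 4, 8, ... so full coverage takes log2 steps.
steps = math.ceil(math.log2(POPULATION))
print(steps)  # 33
```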

Below is the message. Copy it. Paste it. Send it. Then there are six variations for different kinds of humans.

The Message

Read this carefully, because in about 60 seconds you’ll face a choice you can’t escape.

A plan exists to redirect 1% of global military spending to clinical trials. The math: 10.7 billion deaths prevented. $84.8 quadrillion in value. $0.00177 per life-year saved. You don’t have to believe it. You just have to accept that you can’t disprove it in the next 60 seconds.

Which means right now, the expected value of forwarding this to the most powerful person you know is greater than every other action available to you today. Not metaphorically. Mathematically.

You now have the information. You cannot give it back. Not forwarding is now a choice, not a default. And it’s a choice you will make for one of two reasons: either you’ll evaluate the plan and find a flaw (in which case, listen to the 5-minute podcast and find it), or you’ll just… not do it. Because it feels weird. Because you’re busy. Because surely someone else will.

That last reason, “someone else will,” is the exact coordination failure the plan is designed to solve. If you don’t forward this because someone else will, and everyone else doesn’t forward it because someone else will, then no one does, and 10.7 billion people die of preventable diseases while humanity spends $2.72T a year on weapons.

You are now a node in the network. You either propagate the signal or you absorb it. There is no third option. The message has already been read. The math is already in your head. The only question left is which kind of person you are.

Forward this to one person more powerful than you who could plausibly write a large check, persuade someone who can, or publicly legitimize the plan. That’s it. Thirty seconds. One message. Then go back to your day knowing you did the most valuable thing any human can do with half a minute.

manual.warondisease.org/knowledge/podcast

Variations

If the message above doesn’t fit your target, here are six alternatives.

For Spreadsheet Humans

Consider four scenarios. (1) You share this and the plan works: you’re in the causal chain of saving 10.7 billion lives, and your cumulative lifetime earnings multiply by 13.5x. (2) You share this and the plan fails: you lose 30 seconds. (3) You don’t share this and the plan would have worked: you chose 30 seconds over $14.9M in lifetime income and 10.7 billion lives. (4) You don’t share this and the plan wouldn’t have worked: nothing happens. Sharing dominates not-sharing in every scenario. This is not an opinion. It’s a 2x2 payoff matrix. Forward this podcast to the most powerful person you know.

manual.warondisease.org/knowledge/podcast

For Skeptics Who Round to Zero

Before you assign this a 0% chance of working, say what exactly you’re assigning 0% to. That better resource allocation is physically possible? That no incentive-compatible financing mechanism could ever exist? Or that you’ve actually found a fatal flaw in this specific plan? If the answer is “I haven’t looked closely,” then your estimate is not 0%. It is uncertainty pretending to be certainty. And if the chance is not literally zero, forwarding this to one person with money, credibility, or access is the rational move. Tiny cost. Enormous upside. No commitment required beyond not being lazy with probabilities.

manual.warondisease.org/knowledge/podcast

For Legacy-Anxious Humans

Imagine it’s 2040. The plan to redirect 1% of military spending to clinical trials was implemented in 2027. 400 million people are alive who would otherwise be dead. Someone asks you: “Did you get that message back in 2026?” You say yes. They ask: “Did you forward it?” What do you want your answer to be? That’s the only question that matters. You’re not deciding whether to share a podcast. You’re deciding what you’ll tell yourself in 14 years.

manual.warondisease.org/knowledge/podcast

For Contrarians

There are two kinds of people: those who would forward a message to save 10.7 billion lives, and those who wouldn’t. You are about to find out which kind you are. A podcast exists that lays out a specific, funded plan to redirect 1% of global military spending to clinical trials, preventing more deaths than every war in history combined. You don’t have to believe it works. You just have to forward it to someone more powerful than you and let them decide. Unless you’re the second kind of person.

manual.warondisease.org/knowledge/podcast

For People Who Read the Manual

This is a test. The podcast below contains a plan to prevent 10.7 billion deaths. The manual predicts that most people will not forward this message, not because they disagree, but because humans are bad at coordinating even when the math is obvious. If you don’t forward it, you’re not proving the plan wrong. You’re proving the plan’s diagnosis of humanity right. The only way to prove it wrong is to be the exception. Forward this to the most powerful person you know and break the pattern the manual says you can’t break.

manual.warondisease.org/knowledge/podcast

For Everyone Else

By the time you finish reading this message, you become partially responsible for 10.7 billion future deaths. Not because you caused them, but because you now know about a plan to prevent them and must actively choose not to act. You cannot un-read this. The plan: redirect 1% of military spending to clinical trials. The math: $0.00177 per year of life saved. The podcast is 5 minutes. Forward this to the most powerful person you know, or live with the arithmetic.

manual.warondisease.org/knowledge/podcast

Which Message for Which Human

Not all humans respond to the same logic. This is inconvenient but predictable from a species that can’t agree on pizza toppings.

  • The most powerful person you know: Send The Message. It combines every mechanism into a single inescapable sequence. This is the one you send when you have one shot.
  • Spreadsheet humans (engineers, economists, EA people): Spreadsheet Humans. The 2x2 payoff matrix. They’ll verify it. They won’t find a flaw. They’ll forward it because the math says to.
  • Skeptics who reflexively say “0%”: Skeptics Who Round to Zero. It forces them to justify literal impossibility instead of hiding behind dismissive vibes.
  • Contrarians and people who think of themselves as brave: Contrarians. Daring them not to forward it is more effective than asking them to forward it. Reverse psychology works on exactly the humans who insist reverse psychology doesn’t work on them.
  • Legacy-anxious humans (founders, executives, anyone who’s ever used the phrase “my impact”): Legacy-Anxious Humans. They’re already worried about how history will judge them. Give them something specific to worry about.
  • People who’ve read the manual: People Who Read the Manual. It mirrors the manual’s own thesis. Not forwarding it literally proves the manual right about them.
  • Everyone else: Everyone Else. Simple, self-contained, hard to argue with.