You find a prompt on a forum thread. Four hundred upvotes. Someone says it transformed their code review workflow. You copy it, paste it into your setup, run it — and get results that are… fine. Not bad. Not what they described. You tweak some words. You add context. Still mediocre. You move on, quietly concluding that the hype was overblown. But the prompt wasn’t broken. You just ran an npm install that succeeds and then breaks at runtime. The dependency manifest was never written down.

The dependency nobody declares

Every prompt encodes a set of assumptions the author never articulated, because to them, those assumptions were invisible. They were part of the air they breathed while writing it. In code, we’ve learned — sometimes painfully — to declare our dependencies. You know what Python version is expected. You know what environment variables need to exist. You can run pip list and see the full picture. You can’t do any of that with a prompt. What you’re actually inheriting when you copy someone’s prompt is a crystallized set of decisions made in a specific context. And that context has at least three layers.

Layer 1: Context assumptions

The author wrote the prompt against the backdrop of their specific situation — their tech stack, their team’s conventions, their workflow. These shape the prompt in ways that are almost impossible to see from the outside. Take a prompt for “write a ticket.” That phrase carries a world of assumptions: what a ticket means in their process, how granular it should be, who reads it, what fields exist in their tracker. A developer working in a tight-knit four-person startup and a developer working inside an enterprise JIRA workflow might both call the output “a ticket” while meaning entirely different things. The prompt is optimized for one of those worlds. When you run it in the other, it’s not wrong — it just has the wrong priors.

Layer 2: Model assumptions

Prompts are also optimized for a specific model’s behavior, including its failure modes. A lot of the instruction in high-quality prompts isn’t telling the model what to do — it’s compensating for what the model tends to do wrong. “Don’t include preamble” exists because a particular model loved preamble. “Think step by step before answering” was a meaningful intervention at a specific point in model development. Some of those instructions are still load-bearing; others are now inert, or actively counterproductive on a different model. The model the author used shaped the prompt as much as their intentions did. When you switch models — even to a newer version of the same one — you’re running code compiled against a different runtime. Some prompts survive the transition. Others quietly degrade.

Layer 3: Author assumptions

This one is the most overlooked: the author’s own writing style is baked into the prompt. Language models infer intent from phrasing, formality, vocabulary, and structure. A prompt written by someone who writes terse, technical prose signals something different than the same instructions written by someone who writes in full paragraphs with hedges and qualifications. The model picks up on those signals and calibrates accordingly. When you copy a prompt written in someone else’s voice, you’re running it in yours — and those two things interact in ways that are hard to predict. The gap between what the author got and what you get can be almost entirely stylistic, with nothing explicitly wrong.

The problem compounds in agent systems

For individual prompts, these hidden dependencies are an annoyance. For agent systems, they’re an architectural hazard. A complex agent setup — skills, sub-agents, orchestration logic — is a stack of prompts, each carrying its own embedded assumptions, all interacting with each other. When you borrow someone’s agent setup, you’re not copying a function. You’re cloning a codebase, minus the README, minus the architecture docs, minus the incident history that shaped the current design. Each component was tuned against the others. The routing logic assumes certain output formats from upstream agents. The summarization step was calibrated to the verbosity of a specific model. Pull one piece out and drop it into a different system, and what breaks may not be obvious or immediate. This is why “it worked in their demo” is not a strong signal that it’ll work in your setup. The demo is a controlled environment. Your system is a different runtime.

Read prompts like code

The prescription isn’t “don’t share prompts.” Sharing is valuable. The prescription is to read shared prompts critically — the same way you’d read someone else’s code before integrating it. Look for load-bearing phrases. When you see an unusual instruction, ask: what is this compensating for? What problem was the author solving? If you removed this sentence, what would degrade? Treat shared prompts as illustrations of principles, not as portable solutions. A prompt that handles ambiguous user input in a specific way is showing you an approach. Extract the approach. Rewrite the implementation in your context, against your model, in your voice. The prompt is the output of a reasoning process. The reasoning is what you actually want.

What good sharing looks like

The missing artifact in most prompt sharing is the README. Not a heavy document — just the few sentences that declare the dependencies. What model was this built for? What does the input look like? What conventions does it assume? What did you try that didn’t work? Some communities are starting to develop norms around this. But for the most part, prompts get shared the way code was shared before package managers: as raw files, trusting the consumer to figure out the rest. A simple norm shift would help: when you share a prompt that worked well, add a short context block. “Built for Claude Sonnet, internal code review workflow, assumes tickets are already written.” Ten seconds of annotation that saves the next person an hour of debugging.
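To make the idea concrete, here is a minimal sketch of what such a context block could look like if written as a front-matter header on the shared prompt file. There is no standard for this; every field name below is an illustration, and the values are the ones from the example above:

```yaml
# Hypothetical dependency manifest for a shared prompt.
# All field names are illustrative — no tooling reads this; it's for the next human.
model: claude-sonnet                  # model the prompt was tuned against
context: internal code review workflow # the situation the prompt was written for
input: a pull-request diff             # what the prompt expects to be fed
assumes:
  - tickets are already written        # upstream state the prompt relies on
tried_and_rejected: []                 # dead ends worth warning the next person about
```

A comment block at the top of the prompt would serve just as well — the point is not the format, but that the dependencies get written down at all.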

The mental model shift

The underlying reframe is this: prompts are not reusable artifacts. They are crystallized decisions, made in a specific context, by a specific person, against a specific model, for a specific purpose. That doesn’t make sharing useless — it makes understanding essential. The goal when you encounter a well-crafted prompt isn’t to copy it. It’s to understand the decisions well enough to make your own. When you do that, you stop wondering why the four-hundred-upvote prompt underdelivered in your setup. You start asking the right question instead: what problem was this person solving, and how would I solve mine?