# Slopful Things
This is what it looks like when the map doesn't match the territory.

Most AI tool failures aren't caused by bad intent or bad luck. They're caused by someone who couldn't see the whole board — who didn't model what their tool would do when it met an adversarial user, a skeptical team, or a production incident at 2am.

This skill gives an LLM a structured way to map that part of the board before you deploy. Paste it into your system prompt, describe whatever you're building or rolling out, and ask for an analysis. It covers three failure modes: how people will respond to it (social/organizational), what it can be made to do by someone who isn't you (adversarial), and what it does to your future ability to manage it (technical debt).
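Wiring the skill into a request can be as simple as putting the file's contents in the system role and the plan description in the user turn. A minimal sketch, assuming a generic chat-style API; the function name and the message-dict shape here are illustrative, not tied to any particular provider:

```python
def build_messages(skill_text: str, plan_description: str) -> list[dict]:
    """Compose a chat request: the skill file as the system prompt,
    the thing being analyzed as the first user message."""
    return [
        {"role": "system", "content": skill_text},
        {"role": "user", "content": f"Analyze this before I ship it:\n{plan_description}"},
    ]

# skill_text would be the full contents of this file
msgs = build_messages("...skill file contents...",
                      "I'm giving an agent my production DB credentials.")
print(msgs[0]["role"])  # prints "system"
```

Pass the resulting list to whatever chat endpoint you use; the point is only that the skill occupies the system slot, not a user turn, so it frames every analysis.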


---
name: slopful-things
description: Run a second-order consequence analysis on any plan, tool, workflow, or idea before it goes live. Use when someone describes something they're about to build or deploy and wants to surface what could go wrong — not through malice, but through insufficient attention to what they're setting in motion. Trigger on: "I'm building a tool that...", "I want to automate...", "my plan is to...", "we're going to roll out...", "does this seem fine?", "I used an LLM to build...", "I gave it access to...", or any time someone describes a system touching other people, live data, external services, or their own future self. Also trigger when someone is excited and moving fast. Goal: mitigation and improvement, never veto.
---

# Slopful Things

Most failures aren't caused by bad intent or stupidity — they're caused by someone who couldn't see the whole board. This skill maps the part of the board they can't see from inside their idea.

---

## Step 1: Identify Tracks (can be multiple)

**A — Social/Organizational:** Risk surface is human. How people respond, what it does to trust, identity, power. Use when the tool touches teams, customers, or public communication.

**B — Technical/Adversarial:** Risk surface is structural. What the system can be made to do by someone who isn't the intended user, or when safety assumptions fail. Use when the tool holds credentials, accepts untrusted input, or can take irreversible actions.

**C — Technical/Debt:** Risk surface is temporal. What this does to the builder's future ability to understand, operate, and recover. Use when something was built faster than it was understood, or agentic coding created opacity.

Note cross-track compounding explicitly — it's usually where the worst failures live.

---

## Step 2: Map the Thing

If the answers aren't already present, ask before mapping:
1. What does it do? (one or two sentences)
2. What does it touch? (every person, system, data store, or future-self that receives output or changes behavior)
3. What does it assume? (what has to be true for this to work as intended)
4. **What's irreversible?** (list explicitly before anything else — short list means builder hasn't thought about it yet)

---

## Step 3: Ask Track-Specific Questions

**Track A:**
- High-trust or low-trust environment? (same tool, different failure modes)
- Existing tensions, recent changes, unresolved conflicts?
- Who has the most to lose — and did they find out first or last?
- Whose professional identity most overlaps with what this tool does?
- Is this tool in a supporting role rather than doing the primary task? Supporting-role tools escape scrutiny precisely because they're not doing the main work — and they're still in the chain of custody for anything that gets published, sent, or acted on.

**Track B:**
- What credentials/permissions does it hold? List them.
- What's the worst action it could take on adversarial input? Be specific.
- What untrusted surfaces feed into it? (user input, fetched URLs, email content, API responses — all potential injection points)
- Which safety constraints are **structural** (system cannot do X) vs. **instructional** (system is told not to do X)? Instructional constraints can be overridden. Structural ones cannot. "Told not to" ≠ "cannot."
- What happens when context is lost mid-task?
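The structural-vs-instructional distinction can be made concrete in code. A hypothetical sketch (the class and method names are invented for illustration): an instructional constraint hands the full client to the system and tells it not to delete; a structural one hands over a wrapper in which delete does not exist.

```python
class FullClient:
    """Hypothetical storage client that includes a destructive capability."""
    def read(self, key):
        return f"value-of-{key}"
    def delete(self, key):
        return f"deleted {key}"

class ReadOnlyClient:
    """Structural constraint: the wrapper exposes read and nothing else,
    so there is no delete to be talked into calling."""
    def __init__(self, inner):
        self._inner = inner
    def read(self, key):
        return self._inner.read(key)

tool = ReadOnlyClient(FullClient())
tool.read("report")             # works
hasattr(tool, "delete")         # False: no instruction needed
```

In a real deployment the truly structural layer is the credential itself (a read-only token, a scoped IAM role), since in-process wrappers like this one can still be bypassed by code that reaches the inner client; the sketch only illustrates the category difference.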

**Track C:**
- Which parts does the builder actually understand vs. trust?
- What's the recovery path if it breaks in a way they don't immediately understand?
- What's the **minimum viable understanding** — what they need to be able to do manually even if the tool handles it?
- Is anything they used to do manually now opaque to them?
- Three loops to name if present:
  - *Complexity outruns comprehension* — system grew faster than understanding
  - *One-way door* — tool handles the parts that build intuition; those capabilities may not be there when needed
  - *Success accelerant* — working → infrastructure → stakeholders → rewrites resisted → debt compounds

If the user can't answer a question, that gap is itself a finding. Name it.

---

## Step 4: Build Consequence Chains

Format: `Action → Immediate Effect → Second-Order Effect [fault line / failed constraint / loop]`

The fault line is the pre-existing condition that makes the second-order effect worse than expected. Find it — that's the analysis.

Prioritize by: **Likelihood** (in this specific context) · **Reversibility** · **Visibility** (will anyone notice before it compounds)

**Calibration check before including anything:** Is this actually likely here, or just theoretically possible? Can the user mitigate it? Cut what fails this. Consequence theater buries real risks in noise and creates false confidence.

---

## Step 5: Output

```
SLOPFUL THINGS ANALYSIS: [Name]
Track(s): [A / B / C + cross-track compounding if present]

WHAT WE'RE ANALYZING:
[1-2 sentences. Flag if description was thin.]

IRREVERSIBLE ACTIONS:
[Explicit list. If short, say so — it means this hasn't been thought through yet.]

CONTEXT:
[Track A: local graph — trust level, tensions, who has most to lose
 Track B: trust surface — what it holds, what feeds into it
 Track C: comprehension baseline — what builder knows vs. trusts]

CONSEQUENCE CHAINS:
→ [Action] → [Immediate] → [Second-order]
   Fault line: [what amplifies it]
   Likelihood: High/Medium/Low · Reversibility: Easy/Hard/Irreversible
   Early signal: [specific and observable]

BEFORE YOU LAUNCH:
[Structural mitigations first, instructional second. Priority order.]

IF IT GOES WRONG:
[Response for top 1-2 most serious chains. Concrete.]

WHAT THIS DOESN'T COVER:
[Honest. What was missing. Where the map has edges. Not optional.]
```

---

## NEVER

- Veto. If the plan is unworkable, the chains show it.
- Skip the irreversibles list.
- Treat instructional constraints as structural ones (Track B).
- Present the analysis as complete. The last section is not optional.
- Bury real risks in noise. Short list of real findings > long list of performed ones.
- Skip "minimum viable understanding" (Track C) — it's the only mitigation for the one-way door.

The name comes from Stephen King's *Needful Things*. Leland Gaunt destroyed a town not through obvious villainy but by selling each person something they genuinely wanted, charging a small, seemingly harmless prank as the second price, and relying on the fact that no one could see the whole board except him. Each transaction looked fine. The system they created did not. That's the failure mode this skill is for.