Done as a "temporary chat" to avoid personal bias and memory
—
Prioritize draft-first. Add immediate publish as an opt-in fast path, not the default.
Why? Because your system has the exact properties that make immediate publish a bad default for agents:
published changes are immutable
they propagate to peers
logical edits often span many blocks
block schemas are strict
LLM-driven edits are error-prone and iterative
That combo screams: stage locally, validate hard, publish once.
Recommendation
Build Option A first:
create draft
apply many mutations to draft
validate whole draft
preview / diff
publish once
discard if needed
Then add a convenience mode for trivial cases:
--publish on a single mutation command, or
a separate “one-shot” command that internally creates a draft, applies one edit, validates, and publishes automatically
So externally you support both, but architecturally drafts are the primitive.
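A minimal sketch of that architecture, in Python. Everything here is illustrative, not a real API: Draft, apply, validate, and publish are hypothetical names, and the validation rule is a stand-in for real schema checks.

```python
# Hypothetical sketch: drafts as the single mutation primitive, with the
# one-shot path as a thin wrapper over the same machinery.
from dataclasses import dataclass, field

@dataclass
class Draft:
    blocks: list = field(default_factory=list)

    def apply(self, mutation):
        mutation(self.blocks)                    # stage the change locally

    def validate(self):
        # stand-in for real schema checks: every block is a non-empty string
        return all(isinstance(b, str) and b for b in self.blocks)

    def publish(self):
        if not self.validate():
            raise ValueError("draft failed validation; nothing published")
        return list(self.blocks)                 # one atomic commit

def edit_and_publish(mutation):
    """One-shot convenience path: internally still create, apply, publish."""
    draft = Draft()
    draft.apply(mutation)
    return draft.publish()
```

The point of the wrapper is that `edit_and_publish` adds no second code path: it is the draft lifecycle with the steps collapsed.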
Why draft-first is better for AI agents
1. Agents do not think in one command
A human might make one careful edit. An LLM usually does this instead:
inspect document
decide insertion point
generate several blocks
fix schema issues
adjust hierarchy
re-read result
revise
That is not one mutation. It is a mini transaction. If every step publishes, the agent leaks its half-baked intermediate states into permanent history. That is dumb.
2. It gives agents a safe recovery loop
LLMs screw up. Regularly.
With drafts, the loop is:
mutate
validate
inspect errors
repair
revalidate
publish
Without drafts, the loop is:
publish bad partial state
publish fix
publish fix to the fix
maybe publish rollback
leave a trash fire in history forever
For an immutable, distributed history, that is awful operational hygiene.
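The draft loop above can be sketched in a few lines. The validator and the repair step here are placeholders (real ones would be schema checks and LLM-driven fixes), but the shape of the loop is the point: nothing escapes until validation passes.

```python
# Sketch of the recovery loop: mutate, validate, inspect errors, repair,
# revalidate, publish once. Validator and repair are illustrative stand-ins.
def validate(blocks):
    # report the indices of invalid (here: empty) blocks
    return [i for i, b in enumerate(blocks) if not b.strip()]

def repair(blocks, errors):
    # stand-in repair: replace each bad block with a placeholder
    return [b if i not in errors else "(fixed)" for i, b in enumerate(blocks)]

def edit_loop(blocks, max_attempts=3):
    for _ in range(max_attempts):
        errors = validate(blocks)
        if not errors:
            return blocks                        # publish point: one clean commit
        blocks = repair(blocks, errors)
    raise RuntimeError("still invalid after repairs; discard the draft")
```

A bounded attempt count matters too: an agent that cannot repair the draft should discard it, not publish it.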
3. Multi-block edits need atomicity
A “simple” instruction like “add a new section and move the summary under it” can require:
creating heading block
creating paragraph blocks
moving existing blocks
deleting obsolete blocks
updating references/ordering
Immediate publish exposes broken intermediate states between those steps. Drafts let you publish the whole transformation as one coherent commit.
Agents benefit massively from transaction semantics even if you do not call it that.
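Those transaction semantics can be sketched as staging every mutation on a private copy, so either the whole transformation lands or none of it does. The document structure and mutation functions here are illustrative.

```python
# Sketch of transaction semantics for a multi-block edit: every mutation
# lands in one publish, or none does.
import copy

def publish_transaction(document, mutations):
    working = copy.deepcopy(document)            # stage on a private copy
    for mutate in mutations:
        mutate(working)                          # any exception aborts the whole edit
    return working                               # the published state, only on success

doc = {"blocks": ["summary"]}
new = publish_transaction(doc, [
    lambda d: d["blocks"].insert(0, "New section"),      # create heading
    lambda d: d["blocks"].append("Detail paragraph"),    # create paragraph
])
```

Note that `doc` is untouched on failure or success; only the returned state is ever published.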
4. Schema strictness is not enough
Strict validation on individual blocks helps, but it does not enforce higher-level consistency:
wrong placement in hierarchy
incomplete multi-block structure
broken ordering
accidental duplication
edits that are syntactically valid but semantically wrong
Draft validation can check document-level invariants, not just block shape.
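A sketch of what that document-level layer might look like, assuming per-block schema checks have already passed. The specific invariants (unique ids, a leading heading, no heading-level jumps) are illustrative examples, not a fixed rule set.

```python
# Sketch of whole-draft validation: enforces document-level invariants
# that no single-block schema check can see.
def validate_document(blocks):
    errors = []
    ids = [b["id"] for b in blocks]
    if len(ids) != len(set(ids)):
        errors.append("duplicate block ids")
    if not blocks or blocks[0]["type"] != "heading":
        errors.append("document must start with a heading")
    last_level = 0
    for b in blocks:
        if b["type"] == "heading":
            if b["level"] > last_level + 1:
                errors.append(f"heading level jump at block {b['id']}")
            last_level = b["level"]
    return errors
```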
5. Better promptability
Agents work better when the CLI maps to clear mental units:
“start edit session”
“make these changes”
“show diff”
“publish”
That is much easier to orchestrate than “every mutation is final, be perfect every time.” LLMs are not perfect every time. News at 11.
Why draft-first is better for the system
1. Protects the shared network from garbage churn
In a peer-to-peer immutable system, every published intermediate state becomes a shared cost:
more history noise
more replication traffic
more conflict surface
worse audit readability
harder downstream automation
Immediate publish turns agent trial-and-error into network spam.
2. Cleaner history
History should reflect meaningful user intent, not the internal scratchpad of an agent.
A single publish that says:
“Restructure introduction, add section on X, update conclusion”
is useful.
A series of publishes like:
add heading
delete heading
move paragraph
fix typo
reinsert paragraph
delete duplicate
is junk. Accurate junk, still junk.
3. Better conflict handling
If collaborators are active, immediate publish increases the chance that others see and react to transient states. Drafts narrow the collaboration window to one final publish event.
You still need merge/conflict semantics at publish time, obviously, but at least you are resolving one coherent change set, not a stream of accidental micro-edits.
4. Easier policy enforcement
A draft boundary is a great place to run:
schema validation
policy checks
linting
reference integrity checks
content safety checks
size/change limits
Trying to bolt those onto every tiny publishable mutation is clumsy.
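A sketch of that draft-boundary gate: registered checks run once per publish instead of on every tiny mutation. Both checks here are illustrative.

```python
# Sketch of a policy gate at the draft boundary. A publish proceeds only
# when every registered check passes.
CHECKS = [
    ("size limit", lambda blocks: len(blocks) <= 100),
    ("no empty blocks", lambda blocks: all(b.strip() for b in blocks)),
]

def policy_gate(blocks):
    # returns the names of failed checks; empty list means publish may proceed
    return [name for name, check in CHECKS if not check(blocks)]
```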
What immediate publish is actually good for
It is not useless. It is good for:
tiny, obviously safe edits
shell scripting
power users who want terse commands
systems where every change truly is a meaningful standalone event
Examples:
change one paragraph text
toggle a document flag
append one block to the end
So yes, support it. Just don’t make it your core model.
The key product insight
For AI agents, the right abstraction is not “command = publish.” It is “task = draft, publish = commit.”
Agents operate at task granularity. Your system’s permanence operates at publish granularity. Those should line up.
If they do not, the CLI forces agents to externalize their internal scratch work into permanent shared history. That is bad design, full stop.
Best hybrid design
Use drafts internally as the canonical mutation model. Then expose two UX layers:
Default / recommended
draft create
draft apply ...
draft validate
draft diff
draft publish
draft discard
Convenience for simple cases
edit ... --publish
or edit-and-publish ...
Under the hood, the one-shot path can just:
create ephemeral draft
apply mutation
validate
publish
delete draft handle
That keeps one implementation model instead of two competing ones. Much less stupid.
Important CLI properties for agent use
If AI agents matter, the draft workflow should have:
Stable machine-readable output
Every command should support structured output:
JSON status
validation errors with exact paths
block IDs
diff summary
publish result / commit ID
No fluffy terminal prose. Agents do not need your artisanal CLI copy.
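A sketch of what one such structured result could look like, as a single JSON object per command. The field names are illustrative, not a fixed schema.

```python
# Sketch of structured command output: an agent parses status, error paths,
# and ids instead of scraping terminal prose.
import json

def validation_report(draft_id, errors):
    return json.dumps({
        "ok": not errors,
        "draft_id": draft_id,
        "errors": [{"path": path, "message": msg} for path, msg in errors],
    }, sort_keys=True)
```

For example, `validation_report("d-42", [("/blocks/2/level", "required field missing")])` gives the agent an exact path to repair.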
Deterministic targeting
Agents need reliable ways to refer to blocks:
block IDs
path selectors
query selectors
maybe anchors by semantic label, but IDs first
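That priority order can be sketched as a resolver that tries the stable id first and falls back to a path selector. Structures and the path format are illustrative.

```python
# Sketch of deterministic targeting: resolve an edit target by stable block
# id first, path selector second.
def resolve_target(blocks, block_id=None, path=None):
    if block_id is not None:
        for index, block in enumerate(blocks):
            if block["id"] == block_id:
                return index
        raise KeyError(f"no block with id {block_id!r}")
    if path is not None:
        return int(path.strip("/").split("/")[-1])   # e.g. "/blocks/2" -> 2
    raise ValueError("need a block_id or a path")
```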
Whole-draft validation
Not just “this block parses,” but:
hierarchy validity
ordering constraints
required child relationships
reference/link integrity
document-level invariants
Diff/preview
Agents need to inspect what changed before publishing:
structural diff
rendered preview if possible
changed block count
added/removed/moved indicators
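A minimal sketch of such a structural diff, keyed on stable block ids. The summary fields are illustrative.

```python
# Sketch of a structural diff summary between the base document and the
# draft, comparing ordered lists of block ids.
def diff_summary(base_ids, draft_ids):
    base, draft = set(base_ids), set(draft_ids)
    return {
        "added": sorted(draft - base),
        "removed": sorted(base - draft),
        "kept": len(base & draft),
        "moved": sorted(i for i in base & draft
                        if base_ids.index(i) != draft_ids.index(i)),
    }
```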
Atomic publish
Publish the draft as one operation, with:
conflict detection against latest base
clear failure mode
maybe rebase/merge support later
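Conflict detection at publish can be sketched as a compare-and-swap: the draft remembers which version it branched from, and publish fails cleanly if the head has moved since. The version counter stands in for real merge logic.

```python
# Sketch of atomic publish with conflict detection: publish succeeds only
# if the draft's base version is still the current head.
class DocumentStore:
    def __init__(self, blocks):
        self.version = 0
        self.blocks = blocks

    def publish(self, base_version, new_blocks):
        if base_version != self.version:
            return {"ok": False, "error": "conflict: draft base is stale"}
        self.blocks = new_blocks
        self.version += 1
        return {"ok": True, "version": self.version}
```

The clear failure mode matters as much as the success path: a stale draft gets a machine-readable refusal, never a partial write.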
Idempotence hooks
Agents retry. Networks fail.
You want:
request IDs / idempotency keys
draft IDs
safe retry behavior on publish
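A sketch of idempotent publish: the same request id always returns the same result, so a retry after a dropped connection cannot double-commit. Names are illustrative.

```python
# Sketch of idempotency keys on publish: replays return the stored result
# instead of committing a second time.
class Publisher:
    def __init__(self):
        self.results = {}                        # request_id -> prior result
        self.version = 0

    def publish(self, request_id, blocks):
        if request_id in self.results:
            return self.results[request_id]      # replay, no second commit
        self.version += 1
        result = {"version": self.version, "blocks": blocks}
        self.results[request_id] = result
        return result
```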
What to build first, concretely
If scope is tight, build this first:
ephemeral local draft
apply multiple mutations
validate
diff
publish atomically
discard
Then add:
one-shot --publish
persistent named drafts
conflict-aware rebase/merge tools
batch editing helpers
That ordering matches real risk.
Bottom line
Build draft workflow first.
Make it the default.
Offer immediate publish as a convenience wrapper for trivial edits.
That gives agents:
safety
atomicity
repair loops
cleaner reasoning
better machine integration
And it gives the system:
less replicated garbage
cleaner immutable history
fewer transient inconsistent states
stronger validation gates
Immediate publish as the default would be a nice way to turn LLM uncertainty into permanent distributed clutter. Don’t do that.