Tool Calling

Scritorio uses tool calling to let an AI persona consult bounded manuscript context and local project canon before answering questions that depend on book facts. The model does not receive open-ended filesystem access. It receives a small set of local manuscript and canon tools, and Scritorio executes those tools against the currently selected book project.

Where Tool Calling Runs

Tool calling is currently active for OpenAI chat mode in the desktop app and for the CLI smoke-test command:
bun run scritorio ai tool-smoke --project examples/the-long-silence --persona character-psychologist --ask "What should I remember about Adrian?"
The desktop path stores the OpenAI API key in the local system keychain. The CLI smoke test reads OPENAI_API_KEY from the shell environment. Ollama chat uses the same persona and context-message construction, but it does not currently receive local manuscript or canon tools.

Request Flow

When the author sends a chat message, Scritorio builds a request in this order:
  1. The selected chat persona becomes the system message.
  2. If a document or selection is active, Scritorio adds a context message with metadata for the current manuscript context.
  3. Recent chat history is included unless the OpenAI conversation is continuing from a previous Responses API response id.
  4. The author’s current message is added.
  5. The native Tauri layer sends the request to the OpenAI Responses API.
  6. If the request mode is chat, Scritorio appends local manuscript and canon tool instructions and registers tool schemas.
  7. If the model emits function calls, Scritorio executes them locally, returns function_call_output items, and asks the model to continue.
  8. Scritorio repeats tool execution for up to four rounds, then returns the final assistant text.
Tool calls are serialized: parallel_tool_calls is disabled so canon lookups are easier to reason about, debug, and apply in a stable order.

The chat UI can show live reading details while the request is in progress. Appearance settings control whether the author sees detailed tool-call rows, Flex/Fast routing status, and the Peek Prompt button. These settings affect the UI only; they do not disable actual tool calling, routing, evidence storage, or provenance.
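The serialized loop above can be sketched as follows. This is a minimal illustration, not Scritorio's actual code: runToolLoop, callModel, and executeToolLocally are hypothetical names standing in for the native layer's request/execute plumbing.

```typescript
// Hypothetical sketch of the serialized tool-call loop described above.
type FunctionCall = { callId: string; name: string; arguments: string };
type ModelTurn = { text: string; functionCalls: FunctionCall[] };

const MAX_TOOL_ROUNDS = 4;

async function runToolLoop(
  callModel: (outputs: { callId: string; output: string }[] | null) => Promise<ModelTurn>,
  executeToolLocally: (call: FunctionCall) => Promise<string>,
): Promise<string> {
  let turn = await callModel(null);
  for (let round = 0; round < MAX_TOOL_ROUNDS; round++) {
    if (turn.functionCalls.length === 0) return turn.text;
    // Tools run one at a time (parallel_tool_calls is disabled),
    // so canon lookups apply in a stable, debuggable order.
    const outputs: { callId: string; output: string }[] = [];
    for (const call of turn.functionCalls) {
      outputs.push({ callId: call.callId, output: await executeToolLocally(call) });
    }
    // function_call_output items go back to the model for the next round.
    turn = await callModel(outputs);
  }
  throw new Error("tool rounds exhausted after 4 rounds");
}
```

The throw at the end mirrors the documented behavior: after four rounds of tool requests, Scritorio stops with an error rather than looping indefinitely.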

Board Member System Prompt Formation

The selected board member id is normalized against the built-in board member list. Internally these ids still use the persona naming from the prompt code. If the requested id is missing or invalid, Scritorio falls back to the Developmental Editor.

The prompt builder receives the current book’s genre, audience age group, and audience sex/gender lens. Those values are rendered into locked role sections and into the shared audience context section. Missing genre renders [genre not set]. Missing audience age renders [audience age not set]. The system prompt is formed as:
<selected board member prompt generated from locked and editable sections>

<shared rules for all personas and critical manuscript quote rule>

# Book Audience Context

- Book genre: <genre or [genre not set]>
- Target audience age group: <age group or [audience age not set]>
- Target audience sex/gender lens: <any|female|male>

<book-level board member configuration, when present>

<tool use guidance>

<final compliance check>
The current board members are:
| Board member id | Purpose |
| --- | --- |
| developmental-editor | Story architecture, stakes, pacing, scene purpose, and payoff. |
| character-psychologist | Motivation, emotional continuity, relationships, agency, and subtext. |
| continuity-editor | Canon facts, timeline drift, contradictions, terminology, and confirmations. |
| worldbuilding-auditor | Systems logic, consequences, technology, institutions, and daily life. |
| first-time-reader | Blind reader experience, curiosity, confusion, engagement, and drift. |
| writing-coach | Craft patterns, teachable revision habits, exercises, and author growth. |
Five board members use this locked role pattern:
You are the <Board Member> persona that specializes in {{genre}} books for {{audienceAgeGroupArticle}} {{audienceAgeGroup}} target audience with {{audienceSexGenderLens}}.
The First-Time Reader uses a reader-embodiment role instead:
You are the First-Time Reader persona. Take on the perspective of a {{readerSexGenderIdentity}} reader who is {{readerProfileAge}} years old, fits the {{audienceAgeGroup}} target audience, and actively enjoys {{genre}} books.
The shared rules require:
  • direct, practical feedback
  • no empty flattery
  • concrete examples when possible
  • respect for the style guide
  • no em dashes in generated prose
  • preservation of author voice
  • no invented manuscript or canon context
  • clear separation between required fixes and optional suggestions
The shared rules also make manuscript quote locating explicit. If a board member quotes exact manuscript words, it must append a locate marker immediately after the quote:
"exact manuscript words" [[locate:exact manuscript words]]
Scritorio uses those markers to render a green selection affordance on the assistant response. If the app recognizes a quoted manuscript string without a marker, it may still offer a neutral inferred affordance, but model-supplied locate markers are the preferred and more reliable path.
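A marker parser for the pattern above could look like the sketch below. The regex and return shape are assumptions for illustration, not Scritorio's actual implementation.

```typescript
// Hypothetical sketch of extracting [[locate:...]] markers from an
// assistant response so the app can render selection affordances.
type LocateHit = { quote: string; marker: string };

function parseLocateMarkers(response: string): LocateHit[] {
  const hits: LocateHit[] = [];
  // Matches: "exact manuscript words" [[locate:exact manuscript words]]
  const re = /"([^"]+)"\s*\[\[locate:([^\]]+)\]\]/g;
  for (const m of response.matchAll(re)) {
    hits.push({ quote: m[1], marker: m[2] });
  }
  return hits;
}
```

A response with no markers simply yields an empty list, which is where the app may fall back to the neutral inferred affordance.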

Context Message Formation

For OpenAI advisor chat, Scritorio builds at most one manuscript context attachment per turn. The initial prompt includes metadata only, not manuscript prose. Actual chapter, document, or selected text is available to the model only when it calls get_manuscript_context. If the author has selected text, the context message includes:
  • that a focused selection exists
  • document path and title when available
  • source and selection word counts when available
  • a source selection hash when available
If there is no selection but a document is open, the context message includes:
  • document path and title when available
  • word count
The selected text is still captured for that turn, but it is private tool-readable context. The model initially sees only that the selection exists and can request the selected prose with get_manuscript_context({ ref: "selection" }). For non-tool providers such as Ollama, Scritorio may still use inline context fallback behavior. Long document context defaults to a 24,000 character limit.
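The metadata-only attachment can be sketched as below. The field and function names are invented for illustration; the point is that selection prose never enters the initial message, only metadata about it.

```typescript
// Illustrative sketch of the metadata-only context message.
type EditorState = {
  documentPath?: string;
  documentTitle?: string;
  documentWordCount?: number;
  selection?: { text: string; wordCount: number; sourceHash: string };
};

function buildContextMessage(state: EditorState): string | null {
  const lines: string[] = [];
  if (state.selection) {
    // Only metadata is sent; the selected prose stays private until the
    // model calls get_manuscript_context({ ref: "selection" }).
    lines.push("A focused selection exists.");
    if (state.documentPath) lines.push(`Document: ${state.documentPath}`);
    if (state.documentTitle) lines.push(`Title: ${state.documentTitle}`);
    if (state.documentWordCount != null) lines.push(`Source words: ${state.documentWordCount}`);
    lines.push(`Selection words: ${state.selection.wordCount}`);
    lines.push(`Selection hash: ${state.selection.sourceHash}`);
  } else if (state.documentPath || state.documentTitle) {
    if (state.documentPath) lines.push(`Document: ${state.documentPath}`);
    if (state.documentTitle) lines.push(`Title: ${state.documentTitle}`);
    if (state.documentWordCount != null) lines.push(`Words: ${state.documentWordCount}`);
  } else {
    return null; // no document, no selection: no context attachment this turn
  }
  return lines.join("\n");
}
```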

OpenAI Request Shape

The Tauri layer converts system messages into the Responses API instructions field. Non-system messages are joined into the input field with explicit User: and Assistant: labels. For chat mode only, Scritorio appends local manuscript and canon instructions to instructions:
  • use get_manuscript_context before answering questions that require chapter, document, or selected text
  • use get_manuscript_context with ref: "selection" for highlighted or selected text
  • use get_manuscript_context with ref: "current" for the current chapter, open document, or current passage
  • use get_manuscript_context with chapter numbers, titles, relative paths, or refs for multi-chapter comparisons
  • use local canon tools before answering questions that depend on project facts
  • fetch relevant canon after manuscript context when the author asks about characters, relationships, motivation, locations, organizations, items, events, world rules, or style
  • do not ask for or invent a project path
  • use the specific get_*_context tools when the type is known
  • use search_codex when the exact entry or type is unknown
  • use list_codex_entries for inventories or counts
  • use propose_codex_update for reviewable changes to existing characters, locations, organizations, items, concepts, events, and style entries
  • use propose_codex_create for reviewable creation of missing Codex entries
  • keep propose_character_update available for legacy character-update behavior
  • answer from returned tool output when using manuscript prose or local canon
The app also passes the selected model, response format, maximum output tokens, OpenAI reasoning effort, and OpenAI text verbosity settings.
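The instructions/input split can be sketched like this. The message type and function name are assumptions; only the labeling convention (User:/Assistant:) comes from the description above.

```typescript
// Sketch of folding chat messages into the Responses API
// instructions/input split described above.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function toResponsesRequest(messages: ChatMessage[]) {
  // System messages become the instructions field.
  const instructions = messages
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n\n");
  // Non-system messages are joined into input with explicit labels.
  const input = messages
    .filter((m) => m.role !== "system")
    .map((m) => `${m.role === "user" ? "User" : "Assistant"}: ${m.content}`)
    .join("\n\n");
  return { instructions, input };
}
```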

Desktop Advisor Tools

| Tool | Purpose | Arguments | Writes files |
| --- | --- | --- | --- |
| get_manuscript_context | Reads manuscript prose from the selected book/project. Use for the current document, focused selection, specific chapters, or multi-chapter comparisons. | ref, or refs | No |
| get_character_context | Looks up compact local canon for a character. | name | No |
| get_location_context | Looks up compact local canon for a place or setting. | name | No |
| get_organization_context | Looks up compact local canon for an institution, faction, guild, government, or similar group. | name | No |
| get_item_context | Looks up compact local canon for an object, technology, artifact, or resource. | name | No |
| get_concept_context | Looks up compact local canon for world rules, technology, social systems, terminology, history, or recurring ideas. | name | No |
| get_event_context | Looks up compact local canon for an event or timeline entry. | name | No |
| get_style_context | Looks up compact local style guide or style rule canon. | name | No |
| list_codex_entries | Lists compact entries of one canon type for counts, cast lists, setting lists, and overviews. | entryType | No |
| search_codex | Searches local canon when the exact entry name or type is unknown. | query, optional entryType | No |
| propose_codex_update | Returns a reviewable proposal for changing an existing Codex entry. | entryType, name, changeSummary, targetSection, proposedMarkdown | No |
| propose_codex_create | Returns a reviewable proposal for creating a new Codex entry. | entryType, name, changeSummary, fields, customFields, markdownBody, soulMarkdown | No |
| propose_character_update | Legacy character-only update proposal. | name, changeSummary, targetSection, proposedMarkdown | No |
List, search, update, and create proposals support characters, locations, organizations, items, concepts, events, and style entries. Dedicated get_*_context lookup tools are exposed for manuscript context, characters, locations, organizations, items, concepts, events, and style entries. The CLI smoke test currently uses the canon-only subset: get_character_context, get_location_context, get_organization_context, get_item_context, get_concept_context, get_event_context, get_style_context, list_codex_entries, search_codex, propose_character_update, propose_codex_update, and propose_codex_create.
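As one example of how a tool from the table is registered, a function-tool schema in the Responses API style might look like the sketch below. The exact schema Scritorio registers may differ; only the tool name and its ref/refs arguments come from the table above.

```typescript
// Illustrative function-tool schema for get_manuscript_context.
const getManuscriptContextTool = {
  type: "function",
  name: "get_manuscript_context",
  description:
    "Read manuscript prose from the selected project: current document, focused selection, a chapter, or several chapters.",
  parameters: {
    type: "object",
    properties: {
      ref: {
        type: "string",
        description:
          'One reference: "current", "selection", a chapter number, title, or relative path.',
      },
      refs: {
        type: "array",
        items: { type: "string" },
        description: "Multiple references for multi-chapter comparisons.",
      },
    },
    additionalProperties: false,
  },
} as const;
```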

Manuscript Context Lookup Behavior

Tool execution happens inside Scritorio, not inside the model. get_manuscript_context resolves manuscript references against the selected project:
  • ref: "current" returns the active chapter or document body
  • ref: "selection" returns the selected text captured privately for the current turn
  • chapter numbers, titles, and relative paths resolve specific manuscript units
  • refs can return multiple manuscript units for comparisons
The evidence trail records compact metadata for manuscript lookups, such as the resolved path, title, word count, and missing/truncated state. It must not render the full chapter body or selected text.
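The resolution rules above can be sketched as a small resolver. The Project shape and helper names are assumptions for illustration; only the ref semantics come from the list above.

```typescript
// Hypothetical ref resolver mirroring the lookup rules above.
type Project = {
  currentBody: string;
  selection: string | null;
  chapters: { number: number; title: string; path: string; body: string }[];
};

function resolveRef(project: Project, ref: string): string | null {
  if (ref === "current") return project.currentBody;
  if (ref === "selection") return project.selection; // private per-turn capture
  const n = Number(ref);
  const hit = project.chapters.find(
    (c) => c.number === n || c.title === ref || c.path === ref,
  );
  return hit ? hit.body : null;
}

function resolveRefs(project: Project, refs: string[]): (string | null)[] {
  return refs.map((r) => resolveRef(project, r)); // multi-chapter comparisons
}
```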

Canon Lookup Behavior

For get_*_context, Scritorio scans Markdown files in the selected project, filters to the requested Codex type, and scores candidates by exact name, title, filename stem, alias, and partial-name matches. The returned context includes the project path, relative path, semantic type, name, title, aliases, summary, and a source excerpt. For search_codex, Scritorio searches candidate name, title, type, path, summary, and source excerpt, then returns up to eight scored matches. For list_codex_entries, Scritorio returns compact summaries for entries of the requested type.
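The candidate scoring can be sketched as a simple tiered function. The weights below are invented for illustration and do not reflect Scritorio's actual values; only the match tiers (exact name, title, filename stem, alias, partial name) come from the description above.

```typescript
// Simplified scoring sketch for canon candidate ranking.
type Candidate = { name: string; title: string; stem: string; aliases: string[] };

function scoreCandidate(c: Candidate, query: string): number {
  const q = query.toLowerCase();
  if (c.name.toLowerCase() === q) return 100; // exact name match
  if (c.title.toLowerCase() === q) return 90; // exact title match
  if (c.stem.toLowerCase() === q) return 80; // filename stem match
  if (c.aliases.some((a) => a.toLowerCase() === q)) return 70; // alias match
  if (c.name.toLowerCase().includes(q)) return 40; // partial name match
  return 0;
}
```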

Canon Mutation Proposals

Codex mutation tools are intentionally review-only. They return structured proposals and never modify the project directly. propose_codex_update resolves an existing entry, extracts the relevant section when possible, and returns proposed Markdown plus warnings. propose_codex_create suggests a safe project-relative path and proposed Markdown for a new entry. For characters, it can also propose a soul.md file. If a soul is proposed, Scritorio prefers the folder shape:
codex/characters/<character-slug>/
  dossier.md
  soul.md
In the desktop app, returned proposals appear beneath the assistant message as review cards. The author can click Apply. Only then does Scritorio write the new entry or update the existing Markdown file, refresh the relevant Codex tree, and mark the proposal as applied in the chat session. Narrative-role changes are still narrowed to the single Role field before review. This prevents a broad rewrite of the character’s narrative function when the author only asked to change the role.
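A review-only proposal could carry a shape like the sketch below. The field names are assumptions based on the tool arguments listed earlier; the narrowing helper illustrates the Role-field restriction described above, not Scritorio's actual code.

```typescript
// Illustrative shape for a review-only Codex update proposal.
type CodexUpdateProposal = {
  kind: "codex-update";
  entryType: string;
  name: string;
  changeSummary: string;
  targetSection?: string;
  proposedMarkdown: string;
  warnings: string[];
  applied: boolean; // flips to true only after the author clicks Apply
};

function narrowRoleChange(p: CodexUpdateProposal): CodexUpdateProposal {
  // Narrative-role changes stay limited to the single Role field so a
  // role tweak cannot rewrite the character's whole narrative function.
  if (p.targetSection === "Role") {
    return { ...p, warnings: [...p.warnings, "Change limited to the Role field."] };
  }
  return p;
}
```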

Safety And Debugging

Tool calling follows the same privacy model as other AI features:
  • the selected book/project path is fixed by Scritorio
  • the model does not choose arbitrary local paths
  • manuscript prose is exposed through get_manuscript_context, not injected into the initial OpenAI advisor prompt
  • canon tools return compact excerpts rather than unrestricted file reads
  • evidence trails record compact lookup metadata and tool summaries, not full manuscript bodies
  • proposed updates are review-only until the author applies them
  • OpenAI debug logs redact request input, messages, and prompt fields
  • API keys are never included in debug output
  • provider calls and generated outputs are recorded through local provenance where the app supports it
If the model keeps requesting tools after four rounds, Scritorio stops and returns an error instead of looping indefinitely.

CLI Smoke Test

The CLI smoke test is a developer and agent verification path. It sends a real OpenAI Responses API request with the canon-only tool subset, requires at least one local canon tool call on the first round, executes up to four tool rounds, and prints the final answer to stdout. Progress and tool-resolution logs go to stderr. Use it when changing tool schemas, persona prompts, canon lookup logic, or project fixtures.