# ingest
Scans a directory tree for supported document files, computes SHA-256 content hashes, and registers new documents in the database. Duplicate files (by content hash) are automatically skipped.
Supported formats: PDF, DOCX, XLSX, PPTX, TXT, CSV, Markdown (.md), SQL (.sql), RTF, and legacy DOC/XLS/PPT (registered but marked as unsupported).
## Usage

```bash
unsterwerx ingest [OPTIONS] <SOURCE>
```
## Arguments

| Argument | Required | Description |
|---|---|---|
| SOURCE | Yes | Source directory or file to ingest |
## Options

| Option | Short | Type | Default | Description |
|---|---|---|---|---|
| --dry-run | | flag | | Show what would be ingested without writing to the database |
| --extension | -e | string | all supported | Only process files with this extension |
| --max-size | | string | 500MB | Maximum scan/discovery file size (e.g., 100MB) |
| --max-size-file | | string | 100MB | Maximum parse-stage file size for in-memory parsers (e.g., 100MB) |
| --follow-symlinks | | flag | | Follow symbolic links during directory traversal |
| --include-hidden | | flag | | Include hidden files (names starting with `.`) |
| --scope | | string | | Scope path for ingested documents (e.g., acme/sales/alice) |
| --retry-errors | | flag | | Re-attempt canonical extraction for documents in error status |
| --background | | flag | | Run in background (returns immediately with a job ID) |
| --foreground | | flag | | Force foreground execution (overrides the background_default config) |
| --resume | | string | | Resume a stale/stopped/failed run (replays the stored execution spec) |
| --json | | flag | | Output as JSON |
## Examples

### Preview what would be ingested

```bash
unsterwerx ingest --dry-run /path/to/documents
```

```
Dry Run
══════════════════════════════════
Files found: 2873
Already ingested: 0
Errors: 0
Candidates (new): 2873
══════════════════════════════════
```
### Ingest only PDF files

```bash
unsterwerx ingest --dry-run -e pdf /path/to/documents
```

```
Dry Run
══════════════════════════════════
Files found: 1184
Already ingested: 0
Errors: 0
Candidates (new): 1184
══════════════════════════════════
```
### Limit file size

```bash
unsterwerx ingest --dry-run --max-size 10MB /path/to/documents
```

```
Dry Run
══════════════════════════════════
Files found: 2816
Already ingested: 0
Errors: 0
Candidates (new): 2816
══════════════════════════════════
```
### Raise the parse-stage PDF limit

```bash
unsterwerx ingest --max-size 1GB --max-size-file 1GB /path/to/documents
```
### Include hidden files and follow symlinks

```bash
unsterwerx ingest --dry-run --follow-symlinks --include-hidden /path/to/documents
```

```
Dry Run
══════════════════════════════════
Files found: 2879
Already ingested: 0
Errors: 0
Candidates (new): 2879
══════════════════════════════════
```
### Ingest with scope assignment

```bash
unsterwerx ingest --scope acme/sales /path/to/documents
```

All ingested documents are assigned to the `acme/sales` division scope. Scoped documents receive only the policies and classification rules applicable to their scope chain.
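To illustrate the scope-chain idea, a document scoped to `acme/sales/alice` would match rules defined at that scope and at each ancestor. The helper below is a hypothetical sketch of that expansion, not part of unsterwerx:

```python
def scope_chain(scope: str) -> list[str]:
    """Expand a scope path into itself plus each ancestor scope.

    'acme/sales/alice' -> ['acme/sales/alice', 'acme/sales', 'acme']
    """
    parts = scope.split("/")
    return ["/".join(parts[:i]) for i in range(len(parts), 0, -1)]
```

A policy engine could then apply any rule whose scope appears anywhere in this chain.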
### Retry failed documents

```bash
unsterwerx ingest --retry-errors
```

Re-attempts canonical extraction for documents stuck in `error` status. No source path is needed, since the command works from the database. Run `unsterwerx status errors` first to review which documents are eligible.
### Full ingest

```bash
unsterwerx ingest /path/to/documents
```
## Notes
- Files are hashed with streaming SHA-256 (8 KB buffer), so large files are never loaded fully into memory during the hash phase.
- Duplicate files (same content hash) are automatically skipped and counted as duplicates.
- `--max-size` controls what gets scanned; `--max-size-file` controls the parser's in-memory guard for formats like PDF.
- If both are set and `--max-size` is lower, scan filtering happens first.
- Legacy formats (`.doc`, `.xls`, `.ppt`) are registered in the database but marked as `unsupported` since no parser is available.
- The `--scope` flag assigns all ingested documents to a scope (e.g., `acme/sales`). Scope assignment is one-way: once set, it cannot be changed to a different value.
- `--retry-errors` cannot be combined with a source path or scan flags. It operates only on existing error documents.
- After ingestion, run `unsterwerx similarity` to trigger canonical extraction and find duplicates.
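
The streaming-hash and duplicate-skip behavior described above can be sketched in Python. This is a minimal illustration of the approach (8 KB buffered SHA-256, skip on repeated digest), not the tool's actual implementation:

```python
import hashlib

def stream_sha256(path: str, bufsize: int = 8 * 1024) -> str:
    """Hash a file in 8 KB chunks so it is never fully loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def dedupe(paths):
    """Yield only paths whose content hash has not been seen before."""
    seen = set()
    for p in paths:
        digest = stream_sha256(p)
        if digest not in seen:
            seen.add(digest)
            yield p
```

Ingestion would register only the paths `dedupe` yields; any file whose digest was already seen counts as a duplicate and is skipped.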