Troubleshoot Early Problems

Most first-run Atlas failures fall into a small number of categories. This page is meant to shorten the time between “something failed” and “I know which layer is wrong.”

Early Failure Map

```mermaid
flowchart TD
    A[Failure] --> B[Build problem]
    A --> C[Fixture path problem]
    A --> D[Artifact root problem]
    A --> E[Server startup problem]
    A --> F[Query problem]
```

This failure map is here to shorten diagnosis time. Atlas first-run issues usually belong to one layer at a time, and readers get unstuck faster when they identify the layer before changing multiple things.

If cargo run Fails Before the Command Starts

Focus on build and workspace issues first:

  • confirm you are at the repository root
  • confirm the workspace compiles
  • re-run the exact command with --verbose or --trace

Do not debug dataset paths or server flags before the binary can even start. That usually wastes time in the wrong layer.
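Before anything else, it helps to confirm you are actually at the workspace root. A minimal sketch, assuming the root holds Cargo.toml and the crates/ directory used by the fixture paths on this page:

```shell
# Sanity-check a directory before debugging deeper layers.
# Assumption: the workspace root holds Cargo.toml and a crates/ directory.
check_root() {
  if [ -f "$1/Cargo.toml" ] && [ -d "$1/crates" ]; then
    echo "looks like the workspace root: $1"
  else
    echo "not the workspace root: $1"
  fi
}
check_root .   # run this from wherever you invoked cargo run
```

If this reports the wrong directory, fix that first; no amount of flag tweaking helps a binary that never builds.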

If Fixture Paths Cannot Be Found

Check that these exist:

```shell
ls crates/bijux-atlas/tests/fixtures/tiny/genes.gff3
ls crates/bijux-atlas/tests/fixtures/tiny/genome.fa
ls crates/bijux-atlas/tests/fixtures/tiny/genome.fa.fai
```

If any of them are missing, you are likely not at the workspace root, or the worktree is incomplete.
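The three checks above can be collapsed into one loop that reports every missing file at once, run from the workspace root:

```shell
# Report the status of the tiny fixture trio in one pass.
check_fixtures() {
  for f in \
    crates/bijux-atlas/tests/fixtures/tiny/genes.gff3 \
    crates/bijux-atlas/tests/fixtures/tiny/genome.fa \
    crates/bijux-atlas/tests/fixtures/tiny/genome.fa.fai
  do
    if [ -e "$f" ]; then echo "present: $f"; else echo "missing: $f"; fi
  done
}
check_fixtures
```

Seeing all three statuses together distinguishes "wrong directory" (all missing) from "incomplete worktree" (some missing).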

If Ingest Fails

```mermaid
flowchart LR
    IngestFail[Ingest failure] --> Inputs[Check gff3/fasta/fai paths]
    Inputs --> Output[Check output-root writable]
    Output --> Flags[Check release/species/assembly flags]
    Flags --> Logs[Re-run with --trace]
```

This ingest triage order keeps the likely causes practical and local. Most early ingest failures are input, path, or identity mismatches rather than deep product defects.

Common causes:

  • wrong fixture path
  • build root not writable
  • mismatched flags for release, species, or assembly
  • trying to skip the FAI or other required inputs

The right recovery pattern is to fix one concrete input problem and rerun the same ingest command. Do not change multiple identity flags and paths at once unless you enjoy losing the root cause.
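One concrete check worth running in isolation is build-root writability. A minimal sketch, using the tiny-build path from the publish example later on this page:

```shell
# Confirm the build root is writable before rerunning the same ingest command.
root=artifacts/getting-started/tiny-build
if mkdir -p "$root" && touch "$root/.write-test" 2>/dev/null; then
  rm -f "$root/.write-test"
  echo "writable: $root"
else
  echo "not writable: $root"
fi
```

If this says "not writable", fix permissions or pick a different --output-root before touching any identity flags.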

If Dataset Validation Fails

The usual causes are:

  • ingest never completed successfully
  • validation is pointed at the wrong build root
  • release identity flags do not match the built output

Always validate the same root you passed as --output-root during ingest.

If validation fails, do not move on to publish or startup. That only spreads uncertainty into later layers.

If the Server Fails Even Though Ingest Succeeded

One common reason is using the ingest build root as if it were the serving store. Atlas serving expects published artifacts plus a catalog.

Run these steps before startup:

```shell
cargo run -p bijux-atlas --bin bijux-atlas -- dataset publish \
  --source-root artifacts/getting-started/tiny-build \
  --store-root artifacts/getting-started/tiny-store \
  --release 110 \
  --species homo_sapiens \
  --assembly GRCh38

cargo run -p bijux-atlas --bin bijux-atlas -- catalog promote \
  --store-root artifacts/getting-started/tiny-store \
  --release 110 \
  --species homo_sapiens \
  --assembly GRCh38
```

If the Server Does Not Start

```mermaid
flowchart TD
    StartupFail[Server startup failure] --> StoreRoot[Check --store-root]
    StartupFail --> CacheRoot[Check --cache-root]
    StartupFail --> Config[Run --validate-config]
    Config --> Retry[Retry startup]
```

This startup decision tree exists because server failures often get overcomplicated. Atlas startup problems are usually explained by serving-store shape, cache-root setup, or resolved runtime config.

Use:

```shell
cargo run -p bijux-atlas --bin bijux-atlas-server -- \
  --store-root artifacts/getting-started/tiny-store \
  --cache-root artifacts/getting-started/server-cache \
  --validate-config
```

If Health Works but Queries Fail

That usually means the runtime started, but the store or dataset resolution path is not returning the state you expect.

Check:

  • curl -s http://127.0.0.1:8080/v1/version
  • curl -s http://127.0.0.1:8080/v1/datasets
  • your query parameters for release, species, and assembly

This is the classic point where people confuse "the server is up" with "the expected dataset is published and discoverable." Atlas keeps those as separate questions on purpose.
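When checking query parameters, it can help to build the URL explicitly so an identity mismatch is visible in the request itself. A sketch, where the endpoint path and parameter names are illustrative assumptions (check the API reference for the exact shape); the identity values match the publish example on this page:

```shell
# Pin the dataset identity explicitly so a "server is up but wrong dataset"
# mismatch shows up in the URL itself.
# Assumption: /v1/datasets accepts release/species/assembly query parameters.
base=http://127.0.0.1:8080
release=110
species=homo_sapiens
assembly=GRCh38
echo "$base/v1/datasets?release=$release&species=$species&assembly=$assembly"
```

Compare the three identity values in the URL against what /v1/datasets actually lists before blaming the server.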

Fast Diagnosis Order

  1. Can --help run?
  2. Can the fixture files be listed?
  3. Did ingest complete?
  4. Did dataset validation pass?
  5. Does server config validation pass?
  6. Does /v1/version work?
  7. Does /v1/datasets work?

If you answer “no” at one step, fix that layer before you continue. Working through the questions in order narrows the failure boundary instead of pushing uncertainty forward through the workflow, and usually isolates the failing layer quickly.
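The checklist discipline can be sketched as a loop that stops at the first “no”. The true/false commands below are placeholders for illustration; substitute the real checks from this page:

```shell
# Walk the checklist in order and stop at the first failing layer, so later
# layers are never debugged on top of an earlier failure.
stopped=""
run_step() {
  [ -n "$stopped" ] && return 0
  name=$1; shift
  if "$@"; then
    echo "pass: $name"
  else
    echo "stop and fix layer: $name"
    stopped=yes
  fi
}
run_step "help runs" true
run_step "fixtures listed" true
run_step "ingest completed" false   # placeholder failure to show the early exit
run_step "validation passed" true   # silently skipped once an earlier step fails
```

The point of the early exit is the same as the prose above: one failing layer at a time, never several at once.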

Purpose

This page explains the Atlas material for troubleshooting early problems and points readers to the canonical checked-in workflow or boundary for this topic.

Stability

This page is part of the canonical Atlas docs spine. Keep it aligned with the current repository behavior and adjacent contract pages.