Use this skill when the user asks to install, set up, start, verify, repair, or troubleshoot OpenStoryline on the current machine, including when the task is to install or repair a local source checkout of FireRed-OpenStoryline.
Keep the workflow deterministic:
- Prefer a repo-local venv install unless the user explicitly asks for Docker or conda.
- Download the .storyline models and resource/ assets.
- Fill in the config.toml model settings.

Check these first:
- git
- Python >= 3.11
- ffmpeg
- wget
- unzip

Optional:
- docker
- conda

If ffmpeg, wget, or unzip are missing, install them through the OS package manager before continuing.
Examples:
macOS with Homebrew:
brew install ffmpeg wget unzip
Debian/Ubuntu:
sudo apt-get update
sudo apt-get install -y ffmpeg wget unzip
If no supported package manager or permission is available, stop and report the missing system dependency clearly.
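When reporting a missing dependency, it helps to name every absent tool at once rather than failing on the first. A minimal sketch, assuming a POSIX shell; `check_tools` is a hypothetical helper, not part of the repo:

```shell
# Hypothetical helper: collect every tool from the argument list that is
# not found on PATH. Empty output means everything is present.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  printf '%s' "$missing"
}

# Report clearly if anything is missing before continuing.
missing="$(check_tools git ffmpeg wget unzip)"
if [ -n "$missing" ]; then
  echo "Missing system dependencies:$missing" >&2
fi
```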
First prefer any interpreter that already exists and passes version checks:
- every candidate must be Python >= 3.11
- a candidate counts only if basic stdlib modules work

Validate candidate interpreters before using them:
/path/to/python -c "import ssl, sqlite3, venv; print('stdlib_ok')"
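The stdlib probe can be paired with an explicit version gate. A sketch; `python_is_311_plus` is a hypothetical helper, and the interpreter path remains a placeholder:

```shell
# Hypothetical helper: exit 0 only if the given interpreter is Python >= 3.11.
python_is_311_plus() {
  "$1" -c 'import sys; sys.exit(0 if sys.version_info >= (3, 11) else 1)'
}

# Usage (the path is a placeholder, as above):
# python_is_311_plus /path/to/python && echo "version_ok"
```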
If no supported interpreter already exists, prefer the conda fallback:
conda create -y -n openstoryline-py311 python=3.11
conda run -n openstoryline-py311 python --version
conda run -n openstoryline-py311 python -m venv .venv
After a supported interpreter is found, always create a repo-local .venv and continue using .venv/bin/python for install, config validation, and service startup.
Do not duplicate the rest of the workflow for pyenv or conda unless the user explicitly asks to stay inside a conda environment.
From the repo root:
/path/to/python -m venv .venv
.venv/bin/python -m pip install --upgrade pip
.venv/bin/python -m pip install -r requirements.txt
bash download.sh
Notes:
download.sh pulls both model weights and a large resource archive. It can take a long time and may resume after network drops.

Before starting the app, update config.toml.
You can use scripts/update_config.py.
At minimum, fill:
.venv/bin/python scripts/update_config.py --config ./config.toml --set llm.model=REPLACE_WITH_REAL_MODEL
.venv/bin/python scripts/update_config.py --config ./config.toml --set llm.base_url=REPLACE_WITH_REAL_URL
.venv/bin/python scripts/update_config.py --config ./config.toml --set llm.api_key=sk-REPLACE_WITH_REAL_KEY
.venv/bin/python scripts/update_config.py --config ./config.toml --set vlm.model=REPLACE_WITH_REAL_MODEL
.venv/bin/python scripts/update_config.py --config ./config.toml --set vlm.base_url=REPLACE_WITH_REAL_URL
.venv/bin/python scripts/update_config.py --config ./config.toml --set vlm.api_key=sk-REPLACE_WITH_REAL_KEY
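The six invocations above can be collapsed into one loop. A sketch; `apply_config` is a hypothetical wrapper that echoes each command rather than executing it (drop the `echo` to run for real):

```shell
# Hypothetical wrapper: apply several --set key=value pairs to one config.
# It echoes the commands so the sketch is inspectable without side effects.
apply_config() {
  config="$1"; shift
  for kv in "$@"; do
    echo .venv/bin/python scripts/update_config.py --config "$config" --set "$kv"
  done
}

apply_config ./config.toml \
  llm.model=REPLACE_WITH_REAL_MODEL \
  llm.base_url=REPLACE_WITH_REAL_URL \
  llm.api_key=sk-REPLACE_WITH_REAL_KEY \
  vlm.model=REPLACE_WITH_REAL_MODEL \
  vlm.base_url=REPLACE_WITH_REAL_URL \
  vlm.api_key=sk-REPLACE_WITH_REAL_KEY
```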
Optional but common:
- [search_media] pexels_api_key
- [generate_voiceover.providers.*]

After editing config, validate it with:
PYTHONPATH=src .venv/bin/python -c "from open_storyline.config import load_settings; s=load_settings('config.toml'); print(s.llm.model, s.vlm.model)"
Run these checks before saying installation is complete:
.venv/bin/pip check
PYTHONPATH=src .venv/bin/python -c "import agent_fastapi; print('fastapi_app_ok')"
PYTHONPATH=src .venv/bin/python -c "from open_storyline.config import load_settings; load_settings('config.toml'); print('config_ok')"
Also confirm key resources exist:
test -f .storyline/models/transnetv2-pytorch-weights.pth
test -f .storyline/models/all-MiniLM-L6-v2/model.safetensors
test -d resource/bgms
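The three checks above can be wrapped so a single missing artifact is reported by name. A sketch; `check_paths` is a hypothetical helper:

```shell
# Hypothetical helper: verify every path exists; on the first missing one,
# name it on stderr and fail.
check_paths() {
  for p in "$@"; do
    [ -e "$p" ] || { echo "missing: $p" >&2; return 1; }
  done
  echo "resources_ok"
}

check_paths \
  .storyline/models/transnetv2-pytorch-weights.pth \
  .storyline/models/all-MiniLM-L6-v2/model.safetensors \
  resource/bgms || echo "resource check failed" >&2
```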
There are two common startup paths. These are long-running processes: do not wait for them to exit normally. Treat successful startup log lines or confirmed listening ports as success, and keep the services running in separate shells/sessions as needed.
Manual start:
PYTHONPATH=src .venv/bin/python -m open_storyline.mcp.server
In a second shell:
PYTHONPATH=src .venv/bin/python -m uvicorn agent_fastapi:app --host 127.0.0.1 --port 8005
Combined start:
HOST=127.0.0.1 PORT=8005 PATH="$(pwd)/.venv/bin:$PATH" bash run.sh
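The "confirmed listening ports" success criterion can be checked mechanically. A sketch assuming bash (it relies on bash's /dev/tcp redirection); `wait_for_port` is a hypothetical helper:

```shell
# Hypothetical helper: poll until host:port accepts a TCP connection,
# up to `tries` one-second attempts. Uses bash's /dev/tcp redirection.
wait_for_port() {
  host="$1"; port="$2"; tries="${3:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "listening on $host:$port"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up waiting for $host:$port" >&2
  return 1
}

# Example: confirm the FastAPI app came up before reporting success.
# wait_for_port 127.0.0.1 8005
```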
After a successful install:
After a successful install:
- .venv/ exists
- the MCP server is listening (127.0.0.1:8001)
- the FastAPI app is listening (127.0.0.1:8005, though run.sh defaults may differ)

download.sh is slow or interrupted
Symptom: the download stalls or stops partway through.
Fix: re-run download.sh and let wget continue; it supports resume behavior here.

Symptom: "operation not permitted" while binding 127.0.0.1 or 0.0.0.0.
Fix: prefer 127.0.0.1 over 0.0.0.0 unless external access is required.

When reporting status to the user, separate the state of the Python packages, the resource bundle, the config, and the running services. Do not say "installation complete" if only the Python packages are installed but the resource bundle is still missing.
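That final rule can be encoded as a simple gate. A sketch; `install_complete` is a hypothetical helper fed by the results of the verification steps:

```shell
# Hypothetical helper: only claim completion when both the Python
# environment and the resource bundle checks passed.
install_complete() {
  pip_ok="$1"        # "yes" if `.venv/bin/pip check` passed
  resources_ok="$2"  # "yes" if the .storyline/ and resource/ checks passed
  if [ "$pip_ok" = yes ] && [ "$resources_ok" = yes ]; then
    echo "installation complete"
  else
    echo "installation incomplete"
  fi
}
```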