Unified interface for all providers and all modalities: use one nous-genai CLI/SDK flow to run text/image/audio/video/embedding across OpenAI, Gemini, Claude...
IMPORTANT: If you rely on .env.* files, run commands in the directory that contains those files (typically this skill base directory). If you pass runtime env vars (inline/export), working directory is not restricted.
# 1) Create `.env.local` in this skill directory
(cd "<SKILL_BASE_DIR>" && { test -f .env.local || touch .env.local; })
# 2) Edit `<SKILL_BASE_DIR>/.env.local` and set at least one provider key (see "Supported Environment Variables").
# Example (OpenAI):
# NOUS_GENAI_OPENAI_API_KEY=...
# 3) Text
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai --model openai:gpt-4o-mini --prompt "Hello")
# 4) See what you can use (requires at least one provider key configured)
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai model available --all)
If uvx is unavailable, install once and use genai directly:
python -m pip install --upgrade nous-genai
(cd "<SKILL_BASE_DIR>" && genai --model openai:gpt-4o-mini --prompt "Hello")
Configuration is managed via environment variables.
You can set env vars in two ways:
- Runtime env vars (inline on the command, or export in the shell)
- Env files (.env.local, .env.production, .env.development, .env.test)
Recommended for this skill: <SKILL_BASE_DIR>/.env.local
Runtime example (inline):
(cd "<SKILL_BASE_DIR>" && NOUS_GENAI_OPENAI_API_KEY=... uvx --from nous-genai genai --model openai:gpt-4o-mini --prompt "Hello")
When env files are used, the SDK/CLI/MCP server loads them automatically with this priority (high -> low):
.env.local > .env.production > .env.development > .env.test
Process env vars override .env.* values (the SDK loads env files with os.environ.setdefault()).
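The override rule can be sketched in a few lines. This is a simplified model of the loader's behavior, not the SDK's actual code: values already present in the process environment win because file values are applied with setdefault().

```python
import os

# Start clean for the demo, then simulate a caller-provided env var.
os.environ.pop("NOUS_GENAI_OPENAI_API_KEY", None)
os.environ["NOUS_GENAI_TIMEOUT_MS"] = "5000"  # set by the caller (process env)

# Values as they might come from .env.local.
env_file_values = {
    "NOUS_GENAI_TIMEOUT_MS": "120000",
    "NOUS_GENAI_OPENAI_API_KEY": "sk-from-file",
}
for key, value in env_file_values.items():
    os.environ.setdefault(key, value)  # never overwrites existing process vars

print(os.environ["NOUS_GENAI_TIMEOUT_MS"])      # 5000  (process value wins)
print(os.environ["NOUS_GENAI_OPENAI_API_KEY"])  # sk-from-file (file fills the gap)
```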
Minimal .env.local (OpenAI text only):
NOUS_GENAI_OPENAI_API_KEY=...
NOUS_GENAI_TIMEOUT_MS=120000
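For reference, .env files are simple KEY=VALUE lines. A minimal parser sketch follows; the SDK's real loader may handle more syntax (quoting, export prefixes), so treat this only as a mental model of the format:

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines; skip blanks and '#' comment lines."""
    values: dict[str, str] = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

sample = (
    "# OpenAI text only\n"
    "NOUS_GENAI_OPENAI_API_KEY=sk-example\n"
    "NOUS_GENAI_TIMEOUT_MS=120000\n"
)
print(parse_env(sample)["NOUS_GENAI_TIMEOUT_MS"])  # 120000
```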
Notes:
- Keep secrets in .env.local (add it to .gitignore if needed).
- Plain provider keys such as OPENAI_API_KEY also work, but prefer NOUS_GENAI_* for clarity.

Supported Environment Variables:
- NOUS_GENAI_TIMEOUT_MS (default: 120000)
- NOUS_GENAI_URL_DOWNLOAD_MAX_BYTES (default: 134217728)
- NOUS_GENAI_ALLOW_PRIVATE_URLS (1/true/yes to allow private/loopback URL download)
- NOUS_GENAI_TRANSPORT (internal transport marker; MCP server uses mcp, legacy sse is accepted)
- NOUS_GENAI_OPENAI_API_KEY or OPENAI_API_KEY
- NOUS_GENAI_GOOGLE_API_KEY or GOOGLE_API_KEY
- NOUS_GENAI_ANTHROPIC_API_KEY or ANTHROPIC_API_KEY
- NOUS_GENAI_ALIYUN_API_KEY or ALIYUN_API_KEY
- NOUS_GENAI_VOLCENGINE_API_KEY or VOLCENGINE_API_KEY
- ALIYUN_OAI_BASE_URL (default: https://dashscope.aliyuncs.com/compatible-mode/v1)
- VOLCENGINE_OAI_BASE_URL (default: https://ark.cn-beijing.volces.com/api/v3)
- TUZI_BASE_URL (default: https://api.tu-zi.com)
- TUZI_OAI_BASE_URL (optional override)
- TUZI_GOOGLE_BASE_URL (optional override)
- TUZI_ANTHROPIC_BASE_URL (optional override)
- NOUS_GENAI_TUZI_WEB_API_KEY or TUZI_WEB_API_KEY
- NOUS_GENAI_TUZI_OPENAI_API_KEY or TUZI_OPENAI_API_KEY
- NOUS_GENAI_TUZI_GOOGLE_API_KEY or TUZI_GOOGLE_API_KEY
- NOUS_GENAI_TUZI_ANTHROPIC_API_KEY or TUZI_ANTHROPIC_API_KEY
- NOUS_GENAI_MCP_HOST (default: 127.0.0.1)
- NOUS_GENAI_MCP_PORT (default: 6001)
- NOUS_GENAI_MCP_PUBLIC_BASE_URL
- NOUS_GENAI_MCP_BEARER_TOKEN
- NOUS_GENAI_MCP_TOKEN_RULES

Model string is {provider}:{model_id} (example: openai:gpt-4o-mini).
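Since the model string splits on the first ':', a small validation helper can be written like this. The helper is hypothetical (not part of the SDK), shown only to make the format concrete:

```python
def parse_model(model: str) -> tuple[str, str]:
    """Split a '{provider}:{model_id}' string into (provider, model_id).

    partition() splits on the first ':' only, so model ids that themselves
    contain ':' stay intact.
    """
    provider, sep, model_id = model.partition(":")
    if not sep or not provider or not model_id:
        raise ValueError(f"expected '{{provider}}:{{model_id}}', got {model!r}")
    return provider, model_id

print(parse_model("openai:gpt-4o-mini"))  # ('openai', 'gpt-4o-mini')
```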
Use this to pick a model by output modality:
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai model available --all)
# Look for: out=text / out=image / out=audio / out=video / out=embedding
If you have not configured any keys yet, you can still view the SDK curated list:
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai model sdk)
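To pick models by modality programmatically, you can filter the listing for the out= markers. The listing below is illustrative only; the real `genai model available --all` output format may differ, so adapt the token matching to what you actually see:

```python
# Illustrative listing (NOT real output -- columns are assumptions).
listing = """\
openai:gpt-4o-mini  in=text,image  out=text
openai:gpt-image-1  in=text  out=image
openai:whisper-1  in=audio  out=text
openai:tts-1  in=text  out=audio
"""

def models_with_output(listing: str, modality: str) -> list[str]:
    """Return model strings whose line carries an exact out=<modality> token."""
    return [
        line.split()[0]
        for line in listing.splitlines()
        if f"out={modality}" in line.split()
    ]

print(models_with_output(listing, "image"))  # ['openai:gpt-image-1']
```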
# Image understanding (image in, text out)
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai --model openai:gpt-4o-mini --prompt "Describe this image" --image-path "/path/to/image.png")
# Image generation (text in, image out)
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai --model openai:gpt-image-1 --prompt "A red square, minimal" --output-path "/tmp/out.png")
# Audio transcription (audio in, text out)
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai --model openai:whisper-1 --audio-path "/path/to/audio.wav")
# Text-to-speech (text in, audio out)
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai --model openai:tts-1 --prompt "Hello" --output-path "/tmp/tts.mp3")
Install:
python -m pip install --upgrade nous-genai
Minimal example:
from nous.genai import Client, GenerateRequest, Message, OutputSpec, Part

client = Client()
resp = client.generate(
    GenerateRequest(
        model="openai:gpt-4o-mini",
        input=[Message(role="user", content=[Part.from_text("Hello")])],
        output=OutputSpec(modalities=["text"]),
    )
)
print(resp.output[0].content[0].text)
Note: Client() loads .env.* from the current working directory; run your script in the directory that contains
.env.local, or export env vars in the process environment.
Start server (Streamable HTTP: /mcp, SSE: /sse):
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai-mcp-server)
Recommended: set auth via runtime env vars or .env.local before exposing the server:
# NOUS_GENAI_MCP_BEARER_TOKEN=sk-...
Debug with MCP CLI:
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai-mcp-cli env)
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai-mcp-cli tools)
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai-mcp-cli call list_providers)
(cd "<SKILL_BASE_DIR>" && uvx --from nous-genai genai-mcp-cli call generate --args '{"request":{"model":"openai:gpt-4o-mini","input":"Hello","output":{"modalities":["text"]}}}')
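If you want to talk to the Streamable HTTP endpoint directly instead of using genai-mcp-cli, a request can be assembled as below. This is a sketch: the host, port, and /mcp path follow the defaults documented above, the JSON-RPC method name tools/list comes from the MCP specification, and actually sending the request requires a running genai-mcp-server (so the send step is left to you):

```python
import json
import urllib.request

token = "sk-example"  # must match NOUS_GENAI_MCP_BEARER_TOKEN on the server
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Build (but do not send) the authenticated POST to the Streamable HTTP endpoint.
req = urllib.request.Request(
    "http://127.0.0.1:6001/mcp",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
)
print(req.get_header("Authorization"))  # Bearer sk-example
# To send: urllib.request.urlopen(req) -- needs a running genai-mcp-server.
```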
If a request fails because a provider key is missing or invalid: set provider credentials via runtime env vars or in <SKILL_BASE_DIR>/.env.local (see "Supported Environment Variables"), then retry.
If you see cannot detect ... mime type, verify the path exists and is a valid image/audio/video file.
If requests time out: increase NOUS_GENAI_TIMEOUT_MS (runtime env var or .env.local) and retry.
Binary outputs may be returned as URLs. Private/loopback URLs are rejected by default. Only if you understand the risk, set NOUS_GENAI_ALLOW_PRIVATE_URLS=1.
If the MCP server rejects requests as unauthorized: set NOUS_GENAI_MCP_BEARER_TOKEN (or NOUS_GENAI_MCP_TOKEN_RULES) via runtime env var or .env.local, and ensure genai-mcp-cli uses the same token.