SchemaSmith Documentation

LLM Providers

Configure Ask Forge with any of 10 supported LLM providers — from free local models to cloud APIs.

API Key Security

API keys in the config file support ${ENV_VAR} substitution so secrets never need to be stored in plain text. All providers support this syntax in their apiKey field:

{
  "apiKey": "${OPENAI_API_KEY}"
}

At runtime, Ask Forge resolves the environment variable and substitutes its value. This works for any provider that accepts an apiKey field.
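As an illustration only (this is not Ask Forge's actual resolver, and the helper name is hypothetical), the ${ENV_VAR} substitution pattern can be sketched in a few lines of Python:

```python
import os
import re

# Matches ${VAR} where VAR is a valid environment variable name.
_PLACEHOLDER = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_placeholders(value: str) -> str:
    """Replace each ${VAR} in value with the environment variable VAR."""
    def _sub(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name} is not set")
        return os.environ[name]
    return _PLACEHOLDER.sub(_sub, value)

os.environ["OPENAI_API_KEY"] = "sk-demo"          # stand-in secret for the demo
print(resolve_placeholders("${OPENAI_API_KEY}"))  # prints sk-demo
```

Strings without a placeholder pass through unchanged, and an unset variable raises an error rather than silently producing an empty key.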

Configuration

Ask Forge reads LLM provider settings from the config file. The llm section uses the active field to select a provider and the providers object to define each provider's configuration.

{
  "llm": {
    "active": "ollama",
    "providers": {
      "ollama": {
        "type": "ollama",
        "endpoint": "http://localhost:11434",
        "model": "llama3:8b"
      }
    }
  }
}

Switching the Active Provider

Set the active field to the name of the provider you want to use. You can define multiple providers and switch between them by changing a single value:

{
  "llm": {
    "active": "anthropic",
    "providers": {
      "ollama": {
        "type": "ollama",
        "endpoint": "http://localhost:11434",
        "model": "llama3:8b"
      },
      "anthropic": {
        "type": "anthropic",
        "apiKey": "${ANTHROPIC_API_KEY}",
        "model": "claude-sonnet-4-20250514"
      }
    }
  }
}

To override the active provider without editing the config file, set the FORGE_LLM_ACTIVE environment variable to the provider name. Set active to null to disable LLM integration entirely.

Quick Setup via Environment Variables

Set these environment variables to configure a provider without editing config files.

Windows (cmd):

set FORGE_LLM_PROVIDER=OpenAI
set FORGE_LLM_API_KEY=sk-your-key-here
set FORGE_LLM_MODEL=gpt-4o
REM optional:
set FORGE_LLM_ENDPOINT=https://api.openai.com/v1

macOS / Linux (bash):

export FORGE_LLM_PROVIDER=OpenAI
export FORGE_LLM_API_KEY=sk-your-key-here
export FORGE_LLM_MODEL=gpt-4o
export FORGE_LLM_ENDPOINT=https://api.openai.com/v1  # optional

This creates an env-override provider and sets it as active, taking precedence over config file settings.

Provider Reference

Select a provider below to see its configuration. Each subsection shows the provider entry within the llm.providers object of the config file.

Ollama (Free / Local)

Run models locally with no API key required. Install Ollama from ollama.com, then pull a model: ollama pull llama3:8b.

{
  "llm": {
    "active": "ollama",
    "providers": {
      "ollama": {
        "type": "ollama",
        "endpoint": "http://localhost:11434",
        "model": "llama3:8b"
      }
    }
  }
}
Field     Required  Default
type      Yes       "ollama"
endpoint  Yes       http://localhost:11434
model     Yes       llama3:8b

Environment variable overrides: FORGE_OLLAMA_ENDPOINT, FORGE_OLLAMA_MODEL

OpenAI

Use GPT-4o and other OpenAI models. Requires an API key from platform.openai.com.

{
  "llm": {
    "active": "openai",
    "providers": {
      "openai": {
        "type": "openAI",
        "apiKey": "${OPENAI_API_KEY}",
        "model": "gpt-4o",
        "organization": "org-optional"
      }
    }
  }
}
Field         Required  Default
type          Yes       --
apiKey        Yes       --
model         Yes       gpt-4o
organization  No        --

Default endpoint: https://api.openai.com/v1 (not configurable separately)
Environment variable overrides: FORGE_OPENAI_API_KEY, FORGE_OPENAI_MODEL

Anthropic

Use Claude models from Anthropic. Requires an API key from console.anthropic.com.

{
  "llm": {
    "active": "anthropic",
    "providers": {
      "anthropic": {
        "type": "anthropic",
        "apiKey": "${ANTHROPIC_API_KEY}",
        "model": "claude-sonnet-4-20250514"
      }
    }
  }
}
Field     Required  Default
type      Yes       --
apiKey    Yes       --
model     Yes       claude-sonnet-4-20250514
endpoint  No        https://api.anthropic.com

Environment variable overrides: FORGE_ANTHROPIC_API_KEY, FORGE_ANTHROPIC_MODEL

Azure OpenAI

Use OpenAI models hosted on Azure. Requires an Azure OpenAI deployment, endpoint, and API key.

{
  "llm": {
    "active": "azure",
    "providers": {
      "azure": {
        "type": "azureOpenAI",
        "endpoint": "https://your-resource.openai.azure.com",
        "apiKey": "${AZURE_OPENAI_API_KEY}",
        "model": "gpt-4",
        "deployment": "your-deployment-name"
      }
    }
  }
}
Field       Required  Default
type        Yes       --
endpoint    Yes       --
apiKey      Yes       --
model       Yes       --
deployment  Yes       --

Note: The deployment field is the Azure deployment name and is required for Azure OpenAI.

Google Gemini

Use Gemini models from Google. Requires a Google API key from aistudio.google.com.

{
  "llm": {
    "active": "gemini",
    "providers": {
      "gemini": {
        "type": "gemini",
        "apiKey": "${GOOGLE_API_KEY}",
        "model": "gemini-pro"
      }
    }
  }
}
Field     Required  Default
type      Yes       --
apiKey    Yes       --
model     Yes       --
endpoint  No        https://generativelanguage.googleapis.com/v1beta

Note: Uses a Google API key, not a Bearer token.

Groq (OpenAI-compatible)

Ultra-fast inference with Groq's LPU hardware. Requires an API key from console.groq.com.

{
  "llm": {
    "active": "groq",
    "providers": {
      "groq": {
        "type": "groq",
        "apiKey": "${GROQ_API_KEY}",
        "model": "mixtral-8x7b-32768"
      }
    }
  }
}
Field   Required  Default
type    Yes       --
apiKey  Yes       --
model   Yes       --

Default endpoint: https://api.groq.com/openai/v1

Mistral (OpenAI-compatible)

Use Mistral's models via their API. Requires an API key from console.mistral.ai.

{
  "llm": {
    "active": "mistral",
    "providers": {
      "mistral": {
        "type": "mistral",
        "apiKey": "${MISTRAL_API_KEY}",
        "model": "mistral-large-latest"
      }
    }
  }
}
Field   Required  Default
type    Yes       --
apiKey  Yes       --
model   Yes       --

Default endpoint: https://api.mistral.ai/v1

DeepSeek (OpenAI-compatible)

Use DeepSeek models for code-focused tasks. Requires an API key from platform.deepseek.com.

{
  "llm": {
    "active": "deepseek",
    "providers": {
      "deepseek": {
        "type": "deepSeek",
        "apiKey": "${DEEPSEEK_API_KEY}",
        "model": "deepseek-chat"
      }
    }
  }
}
Field   Required  Default
type    Yes       --
apiKey  Yes       --
model   Yes       --

Default endpoint: https://api.deepseek.com/v1

xAI / Grok (OpenAI-compatible)

Use Grok models from xAI. Requires an API key from console.x.ai.

{
  "llm": {
    "active": "xai",
    "providers": {
      "xai": {
        "type": "xAi",
        "apiKey": "${XAI_API_KEY}",
        "model": "grok-2-latest"
      }
    }
  }
}
Field   Required  Default
type    Yes       --
apiKey  Yes       --
model   Yes       --

Default endpoint: https://api.x.ai/v1

Custom OpenAI-Compatible

Connect to any endpoint that implements the OpenAI API format — LM Studio, text-generation-webui, vLLM, and more.

{
  "llm": {
    "active": "my-custom",
    "providers": {
      "my-custom": {
        "type": "custom",
        "endpoint": "https://your-endpoint.com/v1",
        "apiKey": "${CUSTOM_API_KEY}",
        "model": "your-model-name"
      }
    }
  }
}
Field     Required  Default
type      Yes       --
endpoint  Yes       --
apiKey    Yes       --
model     Yes       --

Provider Type Values

Use these exact values in the type field when configuring a provider:

Config Value  Provider
ollama        Ollama
openAI        OpenAI
anthropic     Anthropic
azureOpenAI   Azure OpenAI
gemini        Google Gemini
groq          Groq
mistral       Mistral
deepSeek      DeepSeek
xAi           xAI (Grok)
custom        OpenAI-Compatible

Environment Variables

Environment variables override config file values. Use them for container environments, scripting, or quick testing.

Global Overrides

These create an env-override provider that takes precedence over all config file settings:

Variable            Description                        Example
FORGE_LLM_ACTIVE    Override the active provider name  anthropic
FORGE_LLM_PROVIDER  Provider type for env-override     OpenAI, Anthropic
FORGE_LLM_API_KEY   API key for env-override           sk-...
FORGE_LLM_MODEL     Model for env-override             gpt-4o
FORGE_LLM_ENDPOINT  Endpoint for env-override          https://api.openai.com/v1

Provider-Specific Overrides

These override individual fields within a specific provider's config:

Variable                 Provider   Overrides
FORGE_OLLAMA_ENDPOINT    Ollama     endpoint
FORGE_OLLAMA_MODEL       Ollama     model
FORGE_OPENAI_API_KEY     OpenAI     apiKey
FORGE_OPENAI_MODEL       OpenAI     model
FORGE_ANTHROPIC_API_KEY  Anthropic  apiKey
FORGE_ANTHROPIC_MODEL    Anthropic  model

Example: Quick Switch with Environment Variables

Windows (cmd):

REM Switch to OpenAI for one session
set FORGE_LLM_PROVIDER=OpenAI
set FORGE_LLM_API_KEY=sk-your-key-here
set FORGE_LLM_MODEL=gpt-4o
ask-forge

macOS / Linux (bash):

# Switch to OpenAI for one session
export FORGE_LLM_PROVIDER=OpenAI
export FORGE_LLM_API_KEY=sk-your-key-here
export FORGE_LLM_MODEL=gpt-4o
ask-forge

Validation

Ask Forge validates provider configuration at startup. Common validation errors:

  • Missing apiKey for providers that require one
  • Missing endpoint for Ollama or Azure OpenAI
  • Missing deployment for Azure OpenAI
  • Missing model for any provider
  • active pointing to a provider name that does not exist in providers
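A validator covering the checks above might look like the following sketch. It is hypothetical (Ask Forge's actual validation code is not shown here), and it assumes Ollama is the only provider type that needs no API key, per the tables above:

```python
# Provider types that run locally and need no API key (assumption: Ollama only).
NO_KEY_TYPES = {"ollama"}
# Provider types whose endpoint must be supplied explicitly.
ENDPOINT_REQUIRED = {"ollama", "azureOpenAI", "custom"}

def validate_llm(llm: dict) -> list[str]:
    """Return human-readable validation errors (empty list if valid)."""
    errors = []
    active = llm.get("active")
    providers = llm.get("providers", {})
    if active is not None and active not in providers:
        errors.append(f"active provider '{active}' not defined in providers")
    for name, cfg in providers.items():
        ptype = cfg.get("type", "")
        if not cfg.get("model"):
            errors.append(f"{name}: missing model")
        if ptype not in NO_KEY_TYPES and not cfg.get("apiKey"):
            errors.append(f"{name}: missing apiKey")
        if ptype in ENDPOINT_REQUIRED and not cfg.get("endpoint"):
            errors.append(f"{name}: missing endpoint")
        if ptype == "azureOpenAI" and not cfg.get("deployment"):
            errors.append(f"{name}: missing deployment")
    return errors

bad = {"active": "openai", "providers": {"openai": {"type": "openAI"}}}
print(validate_llm(bad))
# ['openai: missing model', 'openai: missing apiKey']
```

Running the checks in one pass and collecting every error, rather than stopping at the first, lets a user fix a misconfigured file in a single edit.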

Tips

Best for Speed

Use Groq for the fastest response times. Their LPU hardware delivers near-instant inference on open-source models.

Best for Free

Use Ollama to run models locally at zero cost. No API key, no usage limits, and your data stays on your machine.

Best for Privacy

Use Ollama or any Custom local server to keep all schema data on your local machine — nothing leaves your network.

Best for Enterprise

Use Azure OpenAI for enterprise compliance, data residency, and integration with your existing Azure infrastructure.