# LLM Providers
GPTLint supports any chat LLM that exposes an OpenAI-compatible chat completions API. Specific instructions for the most popular LLM providers and for local, open-source models are included below.
## OpenAI
OpenAI is the default provider. Just export an `OPENAI_API_KEY` environment variable, either via your environment, a local `.env` file, or the CLI `--apiKey` flag.
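For example, any one of the following works (the key value below is a placeholder, and the CLI invocation assumes the `gptlint` binary is installed):

```sh
# Option 1: export the key in your shell environment
export OPENAI_API_KEY='sk-...'

# Option 2: store the key in a local .env file
echo "OPENAI_API_KEY='sk-...'" >> .env

# Option 3: pass the key directly via the CLI flag
gptlint --apiKey 'sk-...'
```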
The default model is `gpt-4o`. The default `weakModel` is `gpt-4o-mini`, which is used for two-pass linting.
If you have access to `gpt-4-turbo-preview`, for instance, you can use it as the strong model by adding a config file to your project. For example:
```js
import { recommendedConfig } from 'gptlint'

/** @type {import('gptlint').GPTLintConfig} */
export default [
  ...recommendedConfig,
  {
    llmOptions: {
      model: 'gpt-4-turbo-preview',
      weakModel: 'gpt-4o-mini'
    }
  }
]
```
## Anthropic
Anthropic Claude is supported via a proxy such as OpenRouter. The Claude 3 family includes:
- Claude 3 Opus (powerful, but very expensive)
- Claude 3 Sonnet (balanced)
- Claude 3 Haiku (fast and inexpensive)
Export your OpenRouter API key as an `OPENAI_API_KEY` environment variable, either via your environment, a local `.env` file, or the CLI `--apiKey` flag. Then configure GPTLint to route requests through OpenRouter:
```js
import { recommendedConfig } from 'gptlint'

/** @type {import('gptlint').GPTLintConfig} */
export default [
  ...recommendedConfig,
  {
    llmOptions: {
      apiBaseUrl: 'https://openrouter.ai/api/v1',
      model: 'anthropic/claude-3-opus:beta',
      weakModel: 'anthropic/claude-3-haiku:beta',
      // Optional: extra options passed to the underlying HTTP client
      kyOptions: {
        headers: {
          // Optional, for including your app on openrouter.ai rankings
          'HTTP-Referer': 'https://gptlint.dev',
          // Optional, shows in rankings on openrouter.ai
          'X-Title': 'gptlint'
        }
      }
    }
  }
]
```
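The `:beta` suffix in these model IDs is OpenRouter's convention for the self-moderated variants of the Claude models; omit the suffix to use the standard versions.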
## Local Models
- ollama supports exposing a local OpenAI-compatible server
- vLLM supports exposing a local OpenAI-compatible server
Use the `apiBaseUrl` and `apiKey` config / CLI params to point GPTLint at your local model server.
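As a concrete example, here is a minimal config sketch for pointing GPTLint at a local ollama server. It assumes ollama's OpenAI-compatible server is running on its default port (`11434`), and the `llama3` model name is just an example; substitute whichever model you have pulled locally:

```js
import { recommendedConfig } from 'gptlint'

/** @type {import('gptlint').GPTLintConfig} */
export default [
  ...recommendedConfig,
  {
    llmOptions: {
      // ollama exposes an OpenAI-compatible API at this URL by default
      apiBaseUrl: 'http://localhost:11434/v1',
      // ollama ignores the API key, but the client requires a non-empty value
      apiKey: 'ollama',
      // example model name; use any model you've pulled locally
      model: 'llama3'
    }
  }
]
```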
In production, you may want to consider using a cloud provider that offers hosted inference and fine-tuning APIs.