LLM Providers

GPTLint supports any chat LLM that exposes an OpenAI-compatible chat completions API. Specific instructions for the most popular LLM providers and for running local, open source models are included below.

OpenAI

This is the default. Just export an OPENAI_API_KEY environment variable either via your environment, a local .env file, or via the CLI --apiKey flag.
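For example, any of the following approaches will work (a sketch; replace the placeholder with your real key):

```shell
# Option 1: export the key in your shell environment
export OPENAI_API_KEY='sk-...'

# Option 2: add it to a local .env file in your project root
echo 'OPENAI_API_KEY=sk-...' >> .env

# Option 3: pass it directly via the CLI flag
npx gptlint --apiKey 'sk-...'
```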

The default model is gpt-4. We're not using gpt-4-turbo-preview as the default because some developers don't have access to it. The default weakModel is gpt-3.5-turbo, which is used for two-pass linting.

If you have access to gpt-4-turbo-preview, you can use it as the strong model by adding a config file to your project. For example:

gptlint.config.js
import { recommendedConfig } from 'gptlint'
 
/** @type {import('gptlint').GPTLintConfig} */
export default [
  ...recommendedConfig,
  {
    llmOptions: {
      model: 'gpt-4-turbo-preview',
      weakModel: 'gpt-3.5-turbo'
    }
  }
]

Anthropic

Anthropic Claude is supported by using a proxy such as OpenRouter.

Export your OpenRouter API key as an OPENAI_API_KEY environment variable either via your environment, a local .env file, or via the CLI --apiKey flag. Then point apiBaseUrl at OpenRouter and select your preferred Claude models in your project config:

gptlint.config.js
import { recommendedConfig } from 'gptlint'
 
/** @type {import('gptlint').GPTLintConfig} */
export default [
  ...recommendedConfig,
  {
    llmOptions: {
      apiBaseUrl: 'https://openrouter.ai/api/v1',
      model: 'anthropic/claude-3-opus:beta',
      weakModel: 'anthropic/claude-3-haiku:beta',
      // Optional
      kyOptions: {
        headers: {
          // Optional, for including your app on openrouter.ai rankings
          'HTTP-Referer': 'https://gptlint.dev',
          // Optional, shows in rankings on openrouter.ai
          'X-Title': 'gptlint'
        }
      }
    }
  }
]

Local Models

Use the apiBaseUrl and apiKey config / CLI params to point GPTLint to your local model server.
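As a sketch, here is what this might look like with a local Ollama server, which by default exposes an OpenAI-compatible API at http://localhost:11434/v1 (the model names below are assumptions; use whichever models you have pulled locally):

```javascript
// gptlint.config.js
import { recommendedConfig } from 'gptlint'

/** @type {import('gptlint').GPTLintConfig} */
export default [
  ...recommendedConfig,
  {
    llmOptions: {
      // Ollama's OpenAI-compatible endpoint
      apiBaseUrl: 'http://localhost:11434/v1',
      // Local servers generally ignore the key, but a value must be set
      apiKey: 'ollama',
      model: 'llama3',
      weakModel: 'llama3'
    }
  }
]
```

Note that lint quality depends heavily on the underlying model, so expect weaker results from small local models than from frontier hosted models.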

In production, consider using a cloud provider that offers inference and fine-tuning APIs, such as: