Roadmap

MVP Public Release

  • rules
    • rewrite liberal-accept-strict-produce to be less verbose and have better examples (WIP)
    • rewrite prefer-types-always-valid-states
    • finish effective-tsconfig
  • project
    • stress test w/ real-world repos
    • finish usage.md doc
    • social image for docs and launch
      • Twitter thread
    • public launch! 🚀
    • fix confusing behavior when rules are given an empty config
    • fix the cache being stored in .gptlint
    • ensure dry-run results are never cached

Post-MVP

  • cross-file linting (likely using tree-sitter; see my initial exploration)
    • add embedding support for files and functions (PoC)
    • add a DRY rule for detecting near duplicates (see the embedding sketch below)
  • add support for different programming languages
  • add support for applying autofixes to linter errors
    • add support for bad ⇒ good examples for autofixing
  • also track positive instances of rule conformance?
    • could help paint a clearer picture of overall code health
  • fine-tuning pipeline
    • for core linting engine
    • for individual rules
  • explore reinforcement learning with continuous fine-tuning so rule accuracy improves over time
  • explore generating rule definitions from an existing repo (PRs, unique code patterns, etc)
  • experiment with ways of making the number of LLM calls sublinear w.r.t. the number of files
    • experiment with bin packing to optimize context usage, though total token usage would still be O(tokens) (see the bin-packing sketch below)
  • basic eval graphs and blog post
  • demo video
  • SARIF support (GitHub notes; see the SARIF example below)
  • linter engine
    • add support for git diffs
    • track eval results across multiple LLM configs during CI
    • add --dry-run support for non-OpenAI LLM providers
    • move built-in configs into a separate package
    • improve error reporting to include approx line numbers
    • gracefully respect rate limits
    • add support for OpenAI's seed and system_fingerprint to help make results more deterministic (see the sketch below)
    • handle context overflow properly depending on selected model
    • add support for comments explaining why it’s okay to break a rule
    • improve evals
    • double-check against OpenAI best practices
    • add additional unit tests to evals for edge cases
  • rules
    • add new rules
    • finish effective-eslint-config
  • config
    • reconsider rule scope now that we have overrides
    • support rule-specific settings like ESLint does (see the settings example below)
    • ensure precheck tasks are properly reported as cached
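
The embedding PoC for the DRY / near-duplicate work could look roughly like the sketch below. It assumes the official openai Node SDK and the text-embedding-3-small model; the chunking granularity (function bodies) and the 0.95 similarity threshold are placeholder assumptions, not a final design.

```ts
import OpenAI from 'openai'

const openai = new OpenAI()

// Embed each code chunk (e.g. a function body) with the OpenAI embeddings API.
async function embedChunks(chunks: string[]): Promise<number[][]> {
  const res = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: chunks
  })
  return res.data.map((d) => d.embedding)
}

// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Flag pairs of chunks whose embeddings are nearly identical (candidate DRY violations).
// The 0.95 threshold is a placeholder that would need tuning against real repos.
async function findNearDuplicates(chunks: string[], threshold = 0.95) {
  const embeddings = await embedChunks(chunks)
  const pairs: Array<[number, number, number]> = []
  for (let i = 0; i < embeddings.length; i++) {
    for (let j = i + 1; j < embeddings.length; j++) {
      const sim = cosineSimilarity(embeddings[i], embeddings[j])
      if (sim >= threshold) pairs.push([i, j, sim])
    }
  }
  return pairs
}
```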
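
For the sublinear-LLM-calls item, bin packing only reduces the number of calls, not the total tokens processed. A minimal first-fit-decreasing sketch, assuming a crude ~4 characters-per-token estimate and an arbitrary 8,000-token budget per batch:

```ts
interface SourceFile {
  path: string
  content: string
}

// Crude token estimate (~4 characters per token); a real implementation would
// use the tokenizer for the selected model.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}

// First-fit-decreasing bin packing: sort files by estimated size, then place
// each file into the first batch with enough remaining budget, opening a new
// batch when none fits. Each batch then becomes a single LLM call.
function packFilesIntoBatches(
  files: SourceFile[],
  maxTokensPerBatch = 8000
): SourceFile[][] {
  const sorted = [...files].sort(
    (a, b) => estimateTokens(b.content) - estimateTokens(a.content)
  )
  const batches: { files: SourceFile[]; tokens: number }[] = []

  for (const file of sorted) {
    const tokens = estimateTokens(file.content)
    const batch = batches.find((b) => b.tokens + tokens <= maxTokensPerBatch)
    if (batch) {
      batch.files.push(file)
      batch.tokens += tokens
    } else {
      // Oversized files get their own batch and would need chunking separately.
      batches.push({ files: [file], tokens })
    }
  }

  return batches.map((b) => b.files)
}
```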
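
For SARIF support, the target output is a standard SARIF 2.1.0 log that GitHub code scanning can ingest. A trimmed sketch of the shape (the rule, message, and location values are illustrative):

```ts
// Minimal SARIF 2.1.0 log for a single lint result (illustrative values).
const sarifLog = {
  version: '2.1.0',
  runs: [
    {
      tool: {
        driver: {
          name: 'gptlint',
          rules: [{ id: 'liberal-accept-strict-produce' }]
        }
      },
      results: [
        {
          ruleId: 'liberal-accept-strict-produce',
          level: 'warning',
          message: { text: 'Function accepts an overly strict input type.' },
          locations: [
            {
              physicalLocation: {
                artifactLocation: { uri: 'src/example.ts' },
                region: { startLine: 42 }
              }
            }
          ]
        }
      ]
    }
  ]
}
```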
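
For the determinism item, the openai SDK already exposes a seed request parameter and returns a system_fingerprint on each chat completion; a sketch of how the engine might use them (the model and prompt are placeholders):

```ts
import OpenAI from 'openai'

const openai = new OpenAI()

// A fixed `seed` asks OpenAI to sample (mostly) deterministically, and
// `system_fingerprint` identifies the backend configuration so we can detect
// changes that would invalidate cached results.
const completion = await openai.chat.completions.create({
  model: 'gpt-4o', // placeholder model
  seed: 42,
  temperature: 0,
  messages: [
    { role: 'system', content: 'You are a code linter.' },
    { role: 'user', content: 'Does this source file violate the given rule?' }
  ]
})

console.log(completion.system_fingerprint)
console.log(completion.choices[0].message.content)
```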
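
For rule-specific settings, ESLint's convention (a severity plus an options object per rule) is the pattern to emulate; how that maps onto gptlint's own config is still open. The snippet below only shows the ESLint side:

```ts
// ESLint flat-config style: each rule pairs a severity with optional settings.
// A gptlint equivalent would let rules accept their own options in a similar
// shape (exact format TBD).
export default {
  rules: {
    'no-unused-vars': ['error', { args: 'none' }],
    'max-lines': ['warn', { max: 400, skipBlankLines: true }]
  }
}
```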