What this plugin is.
Why an LSP plugin instead of an MCP bridge — and what's deliberately out of scope.
Why LSP and not an MCP server with a "lint this file" tool?
LSP diagnostics are pushed into the editor automatically by the language server every time a file changes — no agent action required. An MCP tool would need the agent to remember to call it after each edit. With LSP, parse errors and lint findings reach the agent through the same channel as a human's red squiggles, immediately and unconditionally.
Practically: zero round-trips, zero forgotten checks, and the agent's reasoning sits on top of the same data the human sees.
Why Verible specifically?
Verible is the most actively maintained open-source SystemVerilog parser, ships a real LSP, has a documented and stable rule corpus with citable IDs, and is used by Google and ChipsAlliance internally. The rule IDs matter because sv-reviewer uses them as evidence — a finding with no rule ID is an opinion, a finding with one is a citation.
The team tracked the alternatives (slang-server, veridian, svlangserver); each is either too new to commit to or stale upstream. slang-server is the most likely v1.1 opt-in.
Why not also wire in Yosys / Verilator / Quartus?
Synthesis feedback is a different shape of problem — slower, longer-lived, with build artifacts. Mixing it into an LSP plugin would muddle both. That work has its own home: the fpga-flow companion plugin, reserved for v2 in the same marketplace. v1 of fpga-lsp ships only the LSP integration so the contract stays small and demonstrable.
Does the plugin parse .qsf or .xpr project files?
Not in v1 — explicitly deferred. If your repo commits a verible.filelist, the plugin uses it byte-for-byte. Otherwise the session-start hook walks the workspace and writes one. Project-file parsing for Quartus and Vivado is on the table for fpga-flow, where the synthesis context already justifies it.
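For reference, a committed verible.filelist is just one source path per line, relative to the project root. The file names below are illustrative, not a required layout:

```
rtl/pkg/common_pkg.sv
rtl/core/alu.sv
rtl/top.sv
```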
What runs where.
Linux x64 is the only auto-install in v1. The support matrix and the manual-install steps are documented; the failure mode on unsupported platforms is loud by design.
Why is Linux x64 the only auto-install target?
v1 ships one auto-install target so the install path stays short and verifiable. The pinned Verible binary has a known SHA256 for that platform, the download is one tarball, and the wrapper has one happy path to test. macOS, Windows, and Linux arm64 are all on the list — they need their own SHA pinning and integration tests, which adds scope.
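The verification step the wrapper performs can be sketched like this. The function is a sketch, not the plugin's actual code, and the tarball name and hash in the usage comment are placeholders:

```shell
# Verify a downloaded tarball against a pinned SHA256 before unpacking.
# (Illustrative sketch; the real wrapper's URL and hash are not shown here.)
verify_sha256() {
  file="$1"; expected="$2"
  actual=$(sha256sum "$file" | cut -d' ' -f1)
  if [ "$actual" = "$expected" ]; then
    echo "ok: sha256 matches"
  else
    echo "FAIL: sha256 mismatch for $file (got $actual)" >&2
    return 1
  fi
}

# Usage after downloading the pinned release:
# verify_sha256 verible-linux-x64.tar.gz "<pinned sha256>"
```

A mismatch aborts the install rather than running an unverified binary, which is the whole point of pinning one platform first.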
What happens on an unsupported platform?
The LSP wrapper prints a clear, README-pointing error on first launch, not a generic "command not found". The failure mode is deliberately obvious so users don't waste time guessing. Once a manual install puts verible-verilog-ls on $PATH, the wrapper picks it up automatically.
Can I pin a different Verible version?
Yes — install whatever version you want manually. The wrapper resolves $PATH first and falls back to the plugin-managed download. The pin in the plugin exists so everyone on a default install sees identical behaviour; it's a floor, not a ceiling.
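The resolution order can be sketched in a few lines of shell. This is an illustration of the described behaviour, not the wrapper's real code, and the ~/.fpga-lsp/bin location for the plugin-managed download is a hypothetical path:

```shell
# Resolve which verible-verilog-ls to launch:
#   1. a manual install on $PATH wins,
#   2. otherwise fall back to the plugin-managed download (path hypothetical),
#   3. otherwise fail loudly with a README pointer.
resolve_verible_ls() {
  if command -v verible-verilog-ls >/dev/null 2>&1; then
    command -v verible-verilog-ls
  elif [ -x "$HOME/.fpga-lsp/bin/verible-verilog-ls" ]; then
    echo "$HOME/.fpga-lsp/bin/verible-verilog-ls"
  else
    echo "verible-verilog-ls not found; see the README's manual-install steps" >&2
    return 1
  fi
}
```

Because $PATH is checked first, pinning your own version is just a normal install; the plugin's pin only ever applies when nothing else is present.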
How it fits in.
What the plugin adds to a Claude Code session and how the surfaces relate.
Do I need to invoke a skill for the agent to see lint errors?
No. Diagnostics flow through LSP, which the platform feeds to the agent automatically. The skills (sv-lint, sv-format, sv-diff) exist for explicit invocation — when you want to lint a tree, reformat a file, or compare two files structurally — not as the path through which routine diagnostics reach the agent.
What does sv-reviewer do that sv-lint doesn't?
sv-lint returns Verible's findings, raw. sv-reviewer runs the same lint first, then layers HDL-specific judgment that Verible's rule corpus doesn't cover — inferred latches, blocking-vs-nonblocking in always_ff, sensitivity-list drift, X-propagation, clock-domain hygiene. Every finding cites a Verible rule ID where one exists; judgment beyond the rule set is labelled as such.
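For illustration, here is the classic inferred-latch case such a judgment layer flags: a combinational block with an incomplete assignment. The signal names are hypothetical:

```systemverilog
// Incomplete assignment in a combinational block: when 'en' is low,
// 'q' must hold its previous value, so synthesis infers a latch.
always_comb begin
  if (en)
    q = d;
  // missing else branch -- this is the inferred latch
end

// Latch-free version: every path through the block assigns 'q'.
always_comb begin
  q = 1'b0;   // default assignment closes the latch
  if (en)
    q = d;
end
```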
Will format-on-save fight my .editorconfig or pre-commit hook?
The hook runs verible-verilog-format with project defaults, so its output is whatever Verible considers canonical. If you have a strong house style that Verible doesn't match, disable the hook in .lsp.json and run sv-format manually, or use your own formatter in pre-commit.
When something is off.
The most common failure modes and what to check first.
Diagnostics aren't showing up after I edit a .sv file.
Three things to check, in order. First, run verible-verilog-ls --version: does the binary exist? If not, finish step 4 of the install. Second, does the file use a known HDL extension? The plugin attaches on .sv .svh .v .vh .vhd .vhdl. Third, look at the LSP log in Claude Code: the wrapper prints its resolved binary path and the filelist it's using.
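The first two checks are scriptable. This sketch mirrors them; the extension list comes from this FAQ, and the helper names are illustrative:

```shell
# Check 1: is the Verible language server on $PATH and runnable?
check_binary() {
  if verible-verilog-ls --version >/dev/null 2>&1; then
    echo "binary: ok"
  else
    echo "binary: missing (finish the manual install)"
  fi
}

# Check 2: would the plugin attach to this file at all?
is_hdl_file() {
  case "$1" in
    *.sv|*.svh|*.v|*.vh|*.vhd|*.vhdl) echo "attach: yes" ;;
    *) echo "attach: no ($1 is not a recognised HDL extension)" ;;
  esac
}
```

If both checks pass and diagnostics still don't appear, the LSP log (check three) is where the answer lives.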
Every VHDL file errors on its library ieee; line.
That's the canonical "no vhdl_ls.toml" symptom. cargo install vhdl_ls alone is not enough — the binary needs a config that points it at the standard libraries. Drop a vhdl_ls.toml at your project root, or set VHDL_LS_CONFIG to a global config. Step 3 of the install walks through both.
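A minimal vhdl_ls.toml looks like the sketch below. The library name and glob are illustrative; vhdl_ls resolves relative paths against the directory containing the file:

```toml
# Paths are relative to the directory containing this vhdl_ls.toml.
[libraries]
my_design.files = [
  "src/*.vhd",
]
```

With that in place (or VHDL_LS_CONFIG pointing at an equivalent global file), the library ieee; errors disappear because the server knows where its libraries live.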
Cross-file go-to-definition isn't resolving.
Verible needs a filelist. The plugin's session-start hook writes one at .fpga-lsp/verible.filelist after walking the workspace. If your repo commits its own verible.filelist, the plugin respects that one byte-for-byte — make sure the paths in it are correct relative to the project root.
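The hook's walk can be sketched as a one-function shell script. The output path matches this FAQ; the extension set, hidden-directory exclusion, and sorted ordering are assumptions, not the hook's documented behaviour:

```shell
# Walk the workspace and write .fpga-lsp/verible.filelist, one path per line.
# Hidden directories (.git, .fpga-lsp itself, ...) are pruned; sort keeps
# the list stable across runs.
generate_filelist() {
  root="${1:-.}"
  mkdir -p "$root/.fpga-lsp"
  find "$root" -path '*/.*' -prune -o \
       -type f \( -name '*.sv' -o -name '*.svh' -o -name '*.v' -o -name '*.vh' \) -print \
    | sort > "$root/.fpga-lsp/verible.filelist"
}
```

A committed verible.filelist at the repo root short-circuits all of this, which is why stale or wrong paths in a committed file are the first thing to rule out.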
What's next.
v1 is small on purpose. The deferred list is small on purpose too.
What's planned for v1.1?
Auto-install on macOS, Windows, and Linux arm64; slang-server opt-in once dogfooding shows where Verible's parser limits matter; and the first batch of sv-reviewer judgments graduating from "experimental" to "stable" once they earn their keep against nyavana/pvz-fpga.
What's fpga-flow?
The companion plugin in the same marketplace, reserved for v2. It will cover synthesis feedback (Yosys, Verilator, Quartus) — the build-shaped problems that don't belong in an LSP. Today it ships only a placeholder README; no manifest, not registered in marketplace.json, not installable.
How is the success of v1 measured?
Four end-to-end criteria checked against nyavana/pvz-fpga: diagnostics on edit without a tool call, cross-file go-to-definition through the resolved filelist, all three skills invocable, and the sv-reviewer agent running Verible lint first and citing rule IDs in its findings. v1 is "done" when those four hold.