Autonomous · Local-first · MIT

An autonomous pentest agent that runs on your hardware.

Dark Wire is a local-inference pentesting suite. Point a capable model at an authorized target, and the agent drives the full loop — recon, vulnerability research, command execution over SSH — without sending a single prompt or finding to a cloud API.

  • 🛰️ Recon
  • 🔍 CVE research
  • ⚙️ SSH execution
  • 🧠 Reasoning models
  • 🔐 Encrypted history
Authorized testing only. Dark Wire is built for sanctioned engagements — your own labs, bug bounty programs in scope, and clients who've signed a rules-of-engagement document. Don't point it at infrastructure you don't have permission to touch.
  • 100% local inference
  • 14 agent tools
  • 5 tool-call rounds per turn
  • AES-256 encrypted engagement history
The agent loop

Recon → research → action — on autopilot.

A capable model decides which tool to call next based on what it just learned. No LangChain, no external orchestrator. Up to 5 tool-call rounds per question for chained reasoning.

  1. Operator prompt

    You give the agent a goal in plain English: "Map the perimeter of target.lab and tell me what's exploitable."

  2. Recon

    Agent invokes dns_lookup, red_team_recon (RDAP/WHOIS), get_ip_info, and ssh_run for active probes — all decided autonomously.

  3. Vulnerability research

    Discovered services feed into search_cve against the NIST NVD, plus web_search and url_fetch for write-ups and proofs of concept.

  4. Execution

    ssh_run runs the tools you'd normally type by hand — nmap, nikto, gobuster, sqlmap, dirb, tcpdump, etc. — against the host you authorized.

  5. Synthesis

    Agent collates the evidence, prioritizes findings, and writes a structured assessment with concrete next steps. Export to HTML or PDF for the report.
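
The five steps above reduce to one small loop. A minimal sketch, assuming an OpenAI-style `tool_calls` response shape — helper names like `callModel` and `runTool` are illustrative, not Dark Wire's actual API:

```javascript
const MAX_ROUNDS = 5; // tool-call rounds before a forced final answer

async function agentTurn(messages, tools, callModel, runTool) {
  for (let round = 0; round < MAX_ROUNDS; round++) {
    const reply = await callModel(messages, tools);
    // No tool calls means the model is done reasoning: return its answer.
    if (!reply.tool_calls || reply.tool_calls.length === 0) return reply.content;
    messages.push(reply);
    // Execute each requested tool and feed the result back into context.
    for (const call of reply.tool_calls) {
      const result = await runTool(call.name, call.arguments);
      messages.push({ role: 'tool', tool_call_id: call.id, content: result });
    }
  }
  // Round budget exhausted: one last pass with no tools forces synthesis.
  const final = await callModel(messages, []);
  return final.content;
}
```

The model never executes anything itself — it only emits structured requests, and the host decides whether and how to run them.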

Why local inference

Don't ship your engagement data to a cloud LLM.

Pentest output is some of the most sensitive material a team produces. Most cloud chat APIs are the wrong place to put it.

🔒

OPSEC by default

Recon output, target IPs, credential snippets, internal hostnames — they all stay between Dark Wire and your local model. Nothing transits the public internet unless a tool explicitly does (and you can disable Web Tools entirely).

📜

ToS-compatible

Most provider terms forbid using their hosted models for offensive security work, even on authorized targets. Local models you run yourself sidestep that whole conversation.

🛜

Air-gap friendly

With Web Tools off, Dark Wire only needs your local Ollama / LM Studio / llama.cpp endpoint. Run engagements from inside a segmented lab, a jump box, or a flight without Wi-Fi.

📋

Reproducible reporting

Pin a model, set a seed, encrypt the history file, and you have a forensically traceable engagement transcript that you can hand to the client without DLP concerns.

🪪

Cloud as opt-in

Need a frontier model for one tough analysis step? Switch to OpenAI Cloud or Claude in the dropdown — the rest of the engagement still runs locally. Granular by design.

No per-token bill

Long recon transcripts, big nmap dumps, and multi-round CVE chasing add up fast on metered APIs. Local inference is free at the margin — burn as many tokens as you need.

Agent tools

14 built-in tools, grouped for the engagement workflow.

Capable models call these autonomously. Toggle them all off in Settings if you'd rather stay strictly local.

🛰️ Recon & OSINT

  • dns_lookup: Google DoH · A/AAAA/MX/CNAME/TXT/NS/SOA
  • red_team_recon: WHOIS / RDAP for domain or IP
  • get_ip_info: geolocation, ISP, ASN
  • web_search: DuckDuckGo Lite — top 5 results
  • url_fetch: strip-to-text · SSRF-guarded
  • get_news: public mentions of a target / CVE
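
A dns_lookup-style query against Google DoH needs nothing beyond a URL and a fetch. A sketch — the endpoint is Google's public JSON resolver; the helper name is illustrative:

```javascript
// Build a Google DNS-over-HTTPS JSON query for the given record type.
function buildDohUrl(name, type = 'A') {
  const url = new URL('https://dns.google/resolve');
  url.searchParams.set('name', name);
  url.searchParams.set('type', type);
  return url.toString();
}

// Node 18+ has global fetch; the response JSON carries an `Answer` array:
// const { Answer } = await (await fetch(buildDohUrl('target.lab', 'MX'))).json();
```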

🔍 Vulnerability research

  • search_cve: NIST NVD by keyword or product
  • url_fetch: pull write-ups, PoCs, advisories
  • get_definition: disambiguate technical terms
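
A search_cve-style lookup hits the public NVD 2.0 REST API. A sketch — the endpoint and `keywordSearch` parameter come from NVD's published API; the helper name and default limit are illustrative:

```javascript
// Build an NVD CVE keyword search, capped to a handful of results.
function buildNvdUrl(keyword, limit = 5) {
  const url = new URL('https://services.nvd.nist.gov/rest/json/cves/2.0');
  url.searchParams.set('keywordSearch', keyword);
  url.searchParams.set('resultsPerPage', String(limit));
  return url.toString();
}

// Fetching it yields JSON whose `vulnerabilities` array holds the CVE records.
```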

⚙️ Execution & analysis

  • ssh_run: remote shell — nmap, nikto, gobuster, sqlmap, …
  • calculate: sandboxed math — subnet sizes, conversions

📦 Operational context

  • get_time: timezone-aware engagement timestamps
  • get_weather: site-survey context for physical jobs
  • get_crypto_price: wallet / ransom-note pivots
  • get_stock_quote: market-event correlation

Tool definitions are only sent to models with the 🔧 capability marker. The agent loop runs up to 5 rounds for chained calls — recon → CVE lookup → command → re-analysis.

Remote execution

Wire your jump box in once. The agent does the typing.

Configure host + user + key in Settings → SSH Configuration. The agent calls ssh_run with whatever shell command makes sense for the moment.

🗝️

Two key modes

Point at an existing key file (~/.ssh/id_ed25519) or paste an OpenSSH private key directly. Pasted keys are written with 0600 perms, locked down with Windows ACLs, used for the command, then deleted. A Test Connection button confirms auth before you trust the agent with it.

⛓️

Sandboxed command shape

Hostname, username, and command are validated for control characters before execution. BatchMode=yes is forced — no interactive password fallback, no surprise host-key prompts.

⏱️

Timeouts & output capture

15 s for the connection probe, 120 s per command. Stdout and stderr are merged, returned to the model, and fed back into the agent loop for the next decision.

🛡️

System prompt that knows the rules

The default Dark Wire persona is a security-analysis system prompt: it knows to prepend sudo for privileged tools, to prefer ssh_run over web_search for active probes, and to never paraphrase a tool call as plain text.

Providers

Six interchangeable backends. Local first, cloud optional.

Switch the dropdown — URL and API-key fields update automatically. API keys are encrypted at rest with the OS keystore (DPAPI / Keychain / libsecret).

Ollama recommended

localhost:11434

Native API, NDJSON streaming, capability detection via /api/show, in-app model search & pull.

no key

LM Studio

localhost:1234

OpenAI-compatible /v1/*. Use any model loaded in LM Studio.

no key

llama.cpp

localhost:8080

OpenAI-compatible. /props probe detects tools, vision, and <think> reasoning.

no key

OpenAI API local

your endpoint

Any drop-in OpenAI-compatible server (vLLM, TGI, custom).

key optional

OpenAI Cloud ☁

api.openai.com

For one-off frontier-model analysis. Be mindful of ToS for offensive use.

key required

Anthropic Claude ☁

api.anthropic.com

Native Messages API. Server-side tool-format conversion.

key required

In the field

Built for long sessions and clear evidence trails.

  • Main chat view: streaming with token stats and a live context-window meter.
  • Conversation tabs: keep recon, exploitation, and reporting in separate threads.
  • Settings sidebar: SSH config, sampling controls, theme switching, and history mode.
Tradecraft

Designed assuming your engagement data matters.

Sandboxed renderer

contextIsolation: true, nodeIntegration: false. Every privileged action goes through preload IPC — the chat UI cannot touch the file system or the network directly.
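
The hardened window setup above uses standard Electron options. A sketch with illustrative file names (Dark Wire's exact flags beyond `contextIsolation` and `nodeIntegration` are an assumption here):

```javascript
const { app, BrowserWindow } = require('electron');
const path = require('node:path');

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      contextIsolation: true,                      // renderer runs in an isolated world
      nodeIntegration: false,                      // no Node APIs in the chat UI
      sandbox: true,                               // illustrative extra hardening
      preload: path.join(__dirname, 'preload.js'), // the only privileged bridge
    },
  });
  win.loadFile('index.html');
});

// preload.js — expose only narrow, named channels to the renderer:
// contextBridge.exposeInMainWorld('darkwire', {
//   runTool: (name, args) => ipcRenderer.invoke('tool:run', name, args),
// });
```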

OS-level secret storage

API keys and pasted SSH private keys are encrypted at rest via Electron safeStorage — DPAPI on Windows, Keychain on macOS, libsecret on Linux.

AES-256-GCM history

Optional encrypted disk history with scrypt key derivation. Unique salt & IV per save. Passphrase lives in memory only — never written, never sent to the model.

SSRF-guarded fetches

url_fetch rejects file://, loopback, link-local, and RFC1918 private addresses — even if the model is convinced to ask for them.

Hardened SSH key handling

Pasted keys are written with 0600 permissions, ACL-locked to the current user on Windows, and deleted as soon as the command completes. BatchMode=yes blocks interactive password fallbacks.

Sanitized math eval

calculate strips everything outside math characters before evaluating in strict mode. No JS injection vector via the agent's own arithmetic.

Install

Up and running in three steps.

01

Clone

git clone https://github.com/MuchDevSuchCode/DarkWire.git
cd DarkWire
02

Install

npm install

Node 18+ required.

03

Launch

npm start

Auto-connects to localhost:11434.

Build standalone binaries

# Windows portable
npm run build:win

# Linux AppImage / deb
npm run build:linux

Output lands in dist/. Carry the portable .exe on a USB stick to your engagement laptop.

Recommended models for offensive work

You want tools 🔧 and ideally reasoning 🧠. Start with one of these in Ollama:

  • qwen2.5:14b — solid tool calling, runs on a 16 GB GPU.
  • qwen2.5:32b — better at multi-step recon chains, needs ~24 GB VRAM.
  • llama3.1:8b — fast, reliable tool calls, fits in 12 GB.
  • qwq:32b — reasoning model; slow but the agent plans much further ahead.

Smaller models (≤ 3 B) advertise tool support but make poor tool calls — fine for chat, not for autonomous engagements.

FAQ

Quick answers.

Is this an actual exploit framework?

No — Dark Wire is a driver, not an exploit collection. The exploits, scanners, and post-exploitation tools live on your jump box (Kali, Parrot, your own toolkit). Dark Wire's job is to plan, sequence, and run them via SSH, then reason over the output.

Does Dark Wire send my engagement data anywhere?

By default, no. Dark Wire talks to whatever endpoint you point it at — typically localhost. The 14 tools call public APIs (DNS, NVD, DuckDuckGo, etc.) only when the model invokes them, and you can flip Web Tools off in Settings to disable that entirely. Cloud LLM providers (OpenAI, Claude) only see traffic if you choose them in the dropdown.

Where is engagement history stored?

If History Mode is Memory, nowhere — it dies with the window. If it's Disk, it goes to your OS user data directory under chat_history/current.json. Enable Encrypt History to write current.enc instead, AES-256-GCM with a passphrase that lives only in memory.

How does the agent decide what to do next?

Each turn, the model sees its tool definitions and the conversation so far. It emits a tool_calls request, Dark Wire executes the tool in the main process, and feeds the result back. The loop runs up to 5 rounds before the model has to write a final response — enough for chains like dns_lookup → ssh_run nmap → search_cve → recommendation.

Can I run this fully offline?

Yes — pair it with Ollama, LM Studio, or llama.cpp and disable Web Tools. Configure SSH to a target on your air-gapped lab network, and the only outbound traffic is from your jump box to the target.

What about safety / refusals?

The default Dark Wire system prompt frames the model as a sanctioned security analyst with tool access. Model refusals depend entirely on the model you load — community fine-tunes and base models behave very differently here. If you need looser refusals, load an uncensored or DPO'd offensive-security model in Ollama and point Dark Wire at it.

Will this work on a flight / in a SCIF / in a client's segmented network?

Yes. Bundle a portable build, an Ollama install with one of the recommended models, and the OpenSSH client. With Web Tools off, the only network traffic Dark Wire generates is to your local model and your authorized SSH target.

Can I use it for non-pentest work?

Absolutely — Dark Wire is also just a great desktop chat client. Switch to the Dark Wire — Light theme for a professional general-purpose persona without the offensive-security framing.

Bring Dark Wire to your next engagement.

MIT licensed. Cross-platform. Built for people who care where their engagement data lives.