Remote AI Workstation
Run Claude or Copilot on your own laptop, keep the files and credentials local, and still give yourself or a teammate a clean remote entry point. NFLTR handles the public edge; the AI session and everything it touches stay on the machine you control.
What This Pattern Gives You
| Need | NFLTR pattern | Why it works |
|---|---|---|
| Remote Claude or Copilot chat | nfltr ai ... | Publishes an interactive PTY session with browser chat or terminal UI. |
| Tool-enabled AI that can work inside a repo | nfltr ai --cwd ... or nfltr command ... copilot -- --allow-all-tools | The model runs on the laptop next to the real files, shells, and browser tools. |
| Manual fallback when the AI gets stuck | nfltr terminal, nfltr tcp 22, or nfltr tcp 3389 | You can jump onto the same machine without opening a separate VPN. |
| Short-lived reviewer link | Dashboard Share Workflow | Issue a public, review, or webhook link with TTL and access controls. |
| Stable hostname for a recurring workstation | Dashboard Managed Routes | Bind a real hostname and path prefix to the laptop without making every session a one-off share link. |
Treat NFLTR as the control plane and your laptop as the execution plane. The model, repo checkout, browser profile, SSH agent, and desktop apps stay local. NFLTR only handles reachability, access control, and route management.
1. Install the Agent on the Laptop
- Download the right binary from the NFLTR download page.
- Move it into your PATH as nfltr.
- Store an API key or fleet token in the local config.
# one-time setup on the laptop
nfltr config add-api-key YOUR_TOKEN_OR_API_KEY
# optional: keep the laptop grouped under a fleet selector
export NFLTR_LABELS=device=laptop,role=ai,owner=alice
If you manage many laptops, create a fleet token in the dashboard and copy the generated bootstrap commands. The new Fleet Token reveal flow writes the selector contract and ready-to-run join commands for HTTP, TCP, and AI sessions.
2. Expose Claude or Copilot from the Laptop
Use nfltr ai when you want a live AI UI backed by the laptop itself.
# Claude session rooted in the current repo
nfltr ai claude \
--cwd ~/projects/customer-portal \
--mode chat \
--basic-auth review:secret \
--name laptop-claude \
--labels device=laptop,role=ai,tool=claude
# Copilot chat with the same model, same machine, different route
nfltr ai copilot \
--cwd ~/projects/customer-portal \
--mode chat \
--basic-auth review:secret \
--name laptop-copilot \
--labels device=laptop,role=ai,tool=copilot
Open the share URL or stable route in a browser and you are now talking to the AI process running on that laptop. The repo never leaves the machine unless the tool itself sends data elsewhere.
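Both sessions above sit behind Basic Auth. If you want to sanity-check the gate before handing out a link, you can reproduce the header a browser would send; only the review:secret credentials come from the commands above, and the share URL in the comment is a placeholder.

```shell
# Build the Authorization header a browser sends for review:secret.
AUTH="$(printf 'review:secret' | base64)"
echo "Authorization: Basic ${AUTH}"

# Probe the share URL with the same header (hypothetical host):
# curl -H "Authorization: Basic ${AUTH}" https://laptop-claude.example.com/
```

A request without that header should get a 401 from the gate; a request with it should reach the AI UI.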
3. Give the AI Computer-Use / Tool Access
When the AI needs shell access, repo context, or browser automation, keep the process on the laptop and publish the interface. There are two useful shapes:
Option A — native AI UI
# Good when the CLI already exposes tool use in its own UI
nfltr ai claude \
--cwd ~/projects/customer-portal \
--name laptop-claude-tools \
--basic-auth review:secret \
--e2ee
Option B — command wrapper for Copilot CLI or other tool-driven CLIs
# Remote Copilot CLI with tool permissions enabled
nfltr command --name copilot-tools --basic-auth 'review:secret' \
--body-as-arg -p --timeout 600 \
--cwd ~/projects/customer-portal \
copilot -- --allow-all-tools
# Split by repo so each service keeps its own working directory
nfltr command --name copilot-frontend --body-as-arg -p \
--cwd ~/projects/customer-portal/frontend \
copilot -- --allow-all-tools
nfltr command --name copilot-backend --body-as-arg -p \
--cwd ~/projects/customer-portal/backend \
copilot -- --allow-all-tools
This pattern is the cleanest way to let the model operate on the real workstation state: local repo checkout, package manager cache, browser profile, SSH config, and test tools all stay on the laptop.
4. Add Manual Control of the Same Machine
AI control is useful until it is not. Give yourself an operator fallback on the same laptop:
| Fallback | Command | Use when |
|---|---|---|
| Browser shell | nfltr terminal --basic-auth review:secret --name laptop-terminal | You need quick command-line intervention from a browser. |
| Native SSH | nfltr tcp 22 --name laptop-ssh | You want your normal SSH client, tmux, editor, or agent forwarding. |
| Windows desktop | nfltr tcp 3389 --name laptop-rdp | You need full remote desktop instead of an AI or terminal surface. |
That gives you a tight loop: ask the AI to make the change, then drop into the terminal or desktop tunnel if you want to verify, fix, or supervise directly.
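For the native SSH fallback, nfltr tcp 22 gives you a TCP endpoint you can wire into your SSH config once instead of retyping it. The relay host and port below are placeholders; copy the real values from the share output.

```
# ~/.ssh/config — HostName and Port are placeholders from your nfltr tcp 22 share
Host laptop-via-nfltr
    HostName tcp.example.com   # relay host from the share output
    Port 40022                 # forwarded port from the share output
    User alice
```

After that, `ssh laptop-via-nfltr` works with your usual tmux, editor, and agent-forwarding setup.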
5. Publish the Session Safely
The dashboard now has two controls that matter for remote AI workstations:
- Share Workflow — issue a short-lived public link, a review link with Basic Auth, or a webhook link with generated header/token gates.
- Managed Routes — bind a hostname like claude.example.com or ops.example.com/copilot to the laptop's advertised route prefix so the entry point stays stable.
For a personal workstation, start with the review preset. For a recurring team workflow, move the AI session behind a managed route and keep the share workflow for temporary reviewers.
# Recommended production-ish posture for a long-running laptop AI session
nfltr ai claude \
--cwd ~/projects/customer-portal \
--name alice-laptop-claude \
--labels device=laptop,role=ai,owner=alice \
--e2ee
# Then in the dashboard:
# 1. Open Catalog
# 2. Save managed route host=claude.example.com path=/ target=(default backend)
# 3. Use review shares only for temporary collaborators
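If the laptop should re-publish the session after a reboot or logout, one option on Linux is a user-level systemd unit. This is a sketch, not something NFLTR ships: the unit name and binary path are assumptions, while the flags mirror the command above.

```ini
# ~/.config/systemd/user/nfltr-claude.service (hypothetical unit)
[Unit]
Description=NFLTR Claude session for customer-portal

[Service]
ExecStart=/usr/local/bin/nfltr ai claude --cwd %h/projects/customer-portal --name alice-laptop-claude --labels device=laptop,role=ai,owner=alice --e2ee
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now nfltr-claude`, and run `loginctl enable-linger alice` if the session should survive logouts.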
6. Verified Mode and Cert Rotation
If the workstation is sensitive, run the AI session in verified mode and use the dashboard Proof panel:
- Use --e2ee or --mode verified so the agent terminates TLS locally.
- Share the proof bundle or proof JSON with anyone who needs to pin the session.
- Use the dashboard's Rotate certificate action when you reprovision the laptop or want a fresh trust chain.
That keeps the relay out of the plaintext path while still giving you stable hostnames, route objects, and dashboard visibility.
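Pinning usually comes down to comparing certificate fingerprints. The sketch below runs the openssl step on a throwaway self-signed certificate so you can see the output shape; against a live session you would point the same `openssl x509 -fingerprint` invocation at the cert from the proof bundle, or at the leaf served on your managed route (placeholder host in the comment).

```shell
# Generate a throwaway self-signed cert, then print the SHA-256 fingerprint
# you would compare against the proof JSON (paths and CN are illustrative).
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=laptop' \
  -keyout /tmp/laptop.key -out /tmp/laptop.crt -days 1 2>/dev/null
openssl x509 -in /tmp/laptop.crt -noout -fingerprint -sha256

# Against a live route (placeholder host):
# openssl s_client -connect claude.example.com:443 </dev/null 2>/dev/null \
#   | openssl x509 -noout -fingerprint -sha256
```

If the fingerprint printed by the route matches the one in the proof JSON, the relay is not in the TLS path.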
7. Example End-to-End Setup
# Claude on the laptop
nfltr ai claude \
--cwd ~/projects/customer-portal \
--name alice-claude \
--labels device=laptop,role=ai,owner=alice \
--e2ee
# Manual fallback terminal on the same laptop
nfltr terminal \
--basic-auth review:secret \
--name alice-terminal \
--labels device=laptop,role=ops,owner=alice
# Optional Windows or macOS desktop tunnel
nfltr tcp 3389 --name alice-rdp
Use the dashboard to assign a stable hostname to the Claude session, then keep the terminal and RDP flows as operator-only fallbacks.
Related Reading
- CLI Reference: nfltr ai — AI mode flags, chat vs terminal UI, and examples.
- Team Collaboration — shared AI sessions and multi-user workflows.
- DevOps & Remote Access — terminal, SSH, and workstation fallback patterns.
- RDP Remote Access — full desktop tunnel for Windows hosts.
- End-to-End Encryption — verified mode and proof model.