Privacy & security
hfo is fully local by design. It contacts two public Hugging Face endpoints to list repo files and download the GGUF you pick. Everything else — hardware probing, Modelfile generation, Ollama registration — runs on your machine with your user privileges. No telemetry, no accounts, no subscriptions.
What hfo contacts over the network
Every network call hfo makes falls into one of two patterns, both against huggingface.co, both unauthenticated by default: a metadata request to list the files in the repo you point it at, and a download request for the GGUF you pick.
That's the full list. No analytics pixel, no Sentry, no PostHog, no Plausible. The user-agent string is `hfo/<version>`, so you can audit traffic from your router logs if you want.
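The two request shapes can be sketched in shell. The exact API paths are an assumption based on Hugging Face's public Hub API rather than hfo's source, and the repo and file names are placeholders:

```shell
# Placeholder repo/file; the URL shapes follow Hugging Face's public Hub API.
repo="TheBloke/Llama-2-7B-GGUF"
file="llama-2-7b.Q4_K_M.gguf"

# Pattern 1: list the files in a repo.
list_url="https://huggingface.co/api/models/${repo}/tree/main"
# Pattern 2: download the GGUF you picked.
dl_url="https://huggingface.co/${repo}/resolve/main/${file}"

echo "$list_url"
echo "$dl_url"
# A manual equivalent of the download, with an hfo-style user-agent:
#   curl -A "hfo/1.2.3" -L "$dl_url" -o "$file"
```

The user-agent is what makes hfo's traffic easy to single out in router or proxy logs.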
Local-only operations
Everything else runs in your shell with your privileges:
- Hardware probing: `nvidia-smi` (when a discrete NVIDIA GPU is present) + `systeminformation` for VRAM, RAM, CPU, OS.
- Ollama ops: `ollama ps`, `ollama list`, `ollama create`, `ollama rm`, `ollama launch`, all invoked as subprocesses.
- Environment persistence: `setx` on Windows, `launchctl setenv` + `~/.zprofile` on macOS, `~/.profile` + a systemd override on Linux.
- Filesystem: reads/writes under your chosen install dir (default: current working directory) and the platform config dir for `settings.json` + backups.
Gated or private repos
If you want to download a gated model, provide a Hugging Face token via either:
- The `HF_TOKEN` environment variable (standard HF convention), or
- The `--token` / `-t` CLI flag.
The token is attached as an `Authorization: Bearer <token>` header on the two HF requests above and is never persisted by hfo itself. It lives wherever you sourced it from (shell rc file, secret manager, CI env).

hfo never writes the token to settings: `HF_TOKEN` stays out of `settings.json`. If you accidentally commit a dotfile that exports it, hfo can't leak it for you.
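Both ways of supplying the token look like this in practice (the token value below is a placeholder; real tokens come from your Hugging Face account settings):

```shell
# Placeholder token value, for illustration only.
export HF_TOKEN="hf_example_token"

# Equivalent CLI form:
#   hfo --token "$HF_TOKEN"

# The only place the token travels: the Authorization header on the
# two HF requests. It is never written to settings.json.
auth_header="Authorization: Bearer ${HF_TOKEN}"
echo "$auth_header"
```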
What lives on your disk
- Settings file with preferences (theme, language, refresh cadence, model dir, installations index). No secrets.
- GGUF files in your chosen install directory. Default is `process.cwd()`, wherever you launched hfo.
- Modelfiles generated per install, next to the GGUF.
- Zip backups under the platform config dir if you use `B` or `--backup`.
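For reference, a generated Modelfile is a small plain-text file next to the GGUF. A hypothetical minimal one (the file name and parameter values here are invented for illustration, not hfo's actual output) might look like:

```
FROM ./llama-2-7b.Q4_K_M.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
```

`FROM` and `PARAMETER` are standard Ollama Modelfile directives; you can edit this file before it is handed to `ollama create`.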
Config locations:
- Windows: `%APPDATA%\hfo\`
- macOS: `~/Library/Application Support/hfo/`
- Linux: `$XDG_CONFIG_HOME/hfo/` (defaults to `~/.config/hfo/`)
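On Linux, the XDG fallback can be resolved in one line of shell (a sketch of the lookup, not hfo's actual code):

```shell
# Falls back to ~/.config/hfo when $XDG_CONFIG_HOME is unset or empty.
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/hfo"
echo "$config_dir"
```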
Ollama install helpers
If Ollama isn't on your PATH, hfo offers to install it via the platform-appropriate method:
- Windows: `winget install Ollama.Ollama`
- macOS: `brew install ollama`
- Linux: the official `curl -fsSL https://ollama.com/install.sh | sh`
All three run only when you explicitly confirm from the installer overlay. hfo never installs anything without your go-ahead.
Costs
Zero. MIT license, no API keys (HF public endpoints are free), no subscriptions, no rate-limited tiers. Local inference cost is electricity.
Reporting a security issue
Use GitHub's private advisory form:
github.com/carrilloapps/hfo/security/advisories/new
Or email m@carrillo.app. Machine-readable contact info is published at `/.well-known/security.txt` per RFC 9116.
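A minimal security.txt per RFC 9116 needs a `Contact` field and a required `Expires` field. An illustrative version, reusing the contacts above (the `Expires` date here is made up), would be:

```
Contact: mailto:m@carrillo.app
Contact: https://github.com/carrilloapps/hfo/security/advisories/new
Expires: 2026-12-31T23:59:59Z
```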
Threat model
hfo trusts:
- Hugging Face's TLS chain and the integrity of the GGUF + README files you choose to download.
- The local Ollama daemon.
- Your operating system's subprocess isolation when `ollama launch <integration>` shells out.
hfo does not try to defend against:
- Malicious Modelfile syntax in a Hugging Face README (you see and can customize the generated Modelfile before `ollama create`).
- A compromised local user (a compromised user can simply read your `HF_TOKEN` directly).
- Network-level MITM outside of TLS (HF enforces HTTPS on the endpoints hfo hits).