Local GPU Advanced / Lab Mode

Run Biosphere on your own GPU when your lab is ready

Managed cloud is the default path. Hybrid is the recommended practical option for labs with a workstation GPU. Local-only execution remains available as an advanced mode for teams that already maintain the required scientific tools.

Why use local GPU?

Reduce recurring spend by paying only orchestration fees for the heavy phases that run on your local GPU.

Keep sensitive structures and trajectories on your own machine during validation.

Start with hybrid, then move to local-only if your lab wants full control.

Recommended storage

Use fast local SSD storage for trajectories, checkpoints, and MM-PBSA intermediates.

Keep at least 100 GB free for medium-sized multi-replica studies.

Persist checkpoints so Biosphere can resume production MD after worker interruption.
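
A minimal shell sketch of one possible layout meeting these recommendations, assuming an NVMe SSD mounted at `/mnt/data` (the mount point and directory names are illustrative, not required by Biosphere):

```bash
# Illustrative layout for trajectories, checkpoints, and MM-PBSA intermediates.
# /mnt/data is an assumed mount point for a fast local SSD; use whatever path
# you later point the worker's storage setting at.
mkdir -p /mnt/data/biosphere/{trajectories,checkpoints,mmpbsa}

# Confirm there is comfortably more than ~100 GB free before multi-replica runs.
df -h /mnt/data
```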

Execution modes

`Managed cloud` is the default and easiest path for most users.

`Hybrid` is the recommended lab option when you want local MD with managed fallback.

`Local GPU only` is the advanced mode for teams already comfortable maintaining scientific tooling.

Prerequisites

Windows 11 or Linux host with a CUDA-capable NVIDIA GPU

WSL2 with Ubuntu and a working `nvidia-smi` command inside WSL

GROMACS, AutoDock Vina, OpenBabel, fpocket, ACPYPE, and optional AmberTools installed on the worker

A valid Biosphere account plus a worker config derived from `worker/config.wsl.example.yaml`
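
A quick way to confirm these prerequisites before wiring up the worker, assuming the tools were installed under their standard binary names (adjust if your lab installs them under different names or paths):

```bash
# Check that each required tool is on PATH inside WSL.
for tool in nvidia-smi gmx vina obabel fpocket acpype; do
  command -v "$tool" >/dev/null && echo "OK      $tool" || echo "MISSING $tool"
done

# nvidia-smi should list the host GPU from inside Ubuntu; gmx --version
# reports whether the GROMACS build was compiled with GPU (CUDA) support.
nvidia-smi
gmx --version
```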

Setup Flow

1. Prepare WSL and CUDA

Install NVIDIA drivers on the host, enable WSL2, then verify GPU passthrough inside Ubuntu with `nvidia-smi`.
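
A minimal sketch of this step, assuming Ubuntu under WSL2 and a current NVIDIA driver on the Windows host:

```bash
# On the Windows host (PowerShell, run as Administrator):
#   wsl --install -d Ubuntu     # enables WSL2 and installs Ubuntu

# Inside the Ubuntu shell: the Windows driver is shared with WSL, so do not
# install a separate Linux display driver here. This should print your GPU:
nvidia-smi
```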

2. Install the scientific toolchain

Install GROMACS, Vina, OpenBabel, fpocket, ACPYPE, and optional AmberTools natively inside WSL.
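
One possible install path on Ubuntu under WSL; the apt and PyPI package names below are the common ones, and distribution GROMACS builds are often CPU-only, so labs that need CUDA acceleration usually build GROMACS from source with GPU support enabled:

```bash
sudo apt update
sudo apt install -y gromacs autodock-vina openbabel fpocket   # distro packages
pip install acpype                                            # ACPYPE from PyPI

# Optional: AmberTools is easiest to obtain via conda-forge.
# conda install -c conda-forge ambertools

# If the packaged GROMACS lacks CUDA support, build from source with
# -DGMX_GPU=CUDA so production MD actually runs on the local GPU.
```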

3. Configure the local worker

Create `worker/config.wsl.yaml` and set the API endpoint, worker identity, local storage path, and enabled engines, as in the sketch below.
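
The authoritative schema lives in `worker/config.wsl.example.yaml`, so copy that file and edit it rather than writing the config from scratch. The key names in the comments below are hypothetical placeholders standing in for the four settings listed above:

```bash
cp worker/config.wsl.example.yaml worker/config.wsl.yaml

# Then edit worker/config.wsl.yaml and fill in (illustrative key names):
#   api_endpoint:  https://<your-biosphere-instance>/api
#   worker_id:     lab-gpu-01
#   storage_path:  /mnt/data/biosphere
#   engines:       [gromacs, vina, fpocket, acpype]
```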

4. Launch hybrid or advanced local jobs

Prefer `Hybrid local + cloud` for day-to-day lab use. Use `Local GPU only` when your team already maintains the full scientific stack locally.

Advanced Local GPU Pricing

Consumer 24 GB: $0.18/hr. Best for affordable local validation runs.

Workstation 48 GB: $0.22/hr. Good balance for steady local MD and hybrid workflows.

Datacenter 80 GB: $0.28/hr. Near-cloud throughput while keeping data on your own hardware.

Local pricing covers orchestration, provenance tracking, dashboard updates, and optional hybrid fallback coordination. Cloud charges apply only to the remote portion of hybrid or managed jobs, which is why hybrid is usually the smoothest lab deployment path.
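
As a worked example under these rates, a hybrid study that spends 8 hours of production MD on a 48 GB workstation accrues 8 × $0.22 = $1.76 in local orchestration fees, with cloud rates applied only to whatever portion of the job runs remotely.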

Where to continue

For deployment and worker wiring, follow `docs/wsl-local-gpu-validation.md` and `worker/config.wsl.example.yaml` inside the project.

Inside the main pipeline form, start with `Managed cloud`, move to `Hybrid local + cloud` when your lab GPU is ready, and reserve `Local GPU only` for advanced setups.