IC GPU

CLI Reference

The ic command-line tool manages GPU instances, VMs, models, and more. It supports table and JSON output, shell completion, and built-in SSH.

Prerequisites

  • Go 1.21 or later (for go install), or download a pre-built binary
  • An IC GPU Service account with an API key (sk-ic-...)

Getting Started

1. Install the CLI

Terminal
# Using go install
go install github.com/ic-gpu/cli/cmd/ic@latest

# Or download a pre-built binary from the releases page
# and place it in your PATH
2. Configure authentication

Terminal
ic configure --api-key sk-ic-your-api-key --api-url https://api.gpu.local

Or set the environment variable:

export IC_GPU_API_KEY="sk-ic-your-api-key"
3. Run your first command

Terminal
ic instance list

Global Flags

Flag        Default       Description
--output    auto          Output format: table or json (auto-detects TTY)
--api-url   from config   Override the API base URL
--profile   default       Configuration profile name
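
With --output auto, the format follows whether stdout is a terminal. The sketch below is illustrative only (not the CLI's actual source) and shows the usual way that decision is made in shell terms:

```shell
# Illustrative: "auto" typically means table for interactive terminals,
# JSON when output is piped or redirected.
if [ -t 1 ]; then
  echo "format=table"   # stdout is a TTY
else
  echo "format=json"    # stdout is a pipe or file
fi
```

This is why `ic instance list` shows a table at the prompt but emits JSON inside `$( ... )` or a pipeline.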

ic instance — GPU Instances

Terminal
# List instances
ic instance list
ic instance list --status running --tier dedicated
ic instance list --max-results 10

# Create an instance
ic instance create --name my-workspace --tier timesliced  # no memory isolation
ic instance create --name ml-box --tier dedicated --engine vllm  # full isolation

# Get details
ic instance get inst-abc123

# Lifecycle
ic instance stop inst-abc123
ic instance start inst-abc123

# SSH directly into an instance
ic instance ssh inst-abc123
ic instance ssh inst-abc123 --user gpuuser

# View logs
ic instance logs inst-abc123

# Terminate
ic instance delete inst-abc123

Example output:

ID              NAME           TIER         STATUS    SSH
inst-abc123     my-workspace   timesliced   running   10.0.0.50:32022
inst-def456     ml-box         dedicated    stopped   -

ic vm — Virtual Machines

Terminal
# List VMs
ic vm list

# Create a VM
ic vm create --name dev-server --template tpl-ubuntu-24
ic vm create --name heavy-vm --template tpl-ubuntu-24 --cpu 8 --memory 32

# Get details
ic vm get vm-abc123

# Lifecycle
ic vm start vm-abc123
ic vm stop vm-abc123
ic vm reboot vm-abc123

# Delete (must be stopped first)
ic vm delete vm-abc123

ic cluster — Kubernetes Clusters

Terminal
# List clusters
ic cluster list

# Create a cluster
ic cluster create --name ml-cluster

# Get details
ic cluster get clust-abc123

# Download kubeconfig
ic cluster kubeconfig clust-abc123 > kubeconfig.yaml
export KUBECONFIG=./kubeconfig.yaml
kubectl get nodes

# Delete
ic cluster delete clust-abc123

ic slurm — Slurm HPC Environments

Terminal
# List Slurm environments
ic slurm list

# Create a Slurm environment
ic slurm create --name hpc-cluster

# Get details
ic slurm get slurm-abc123

# Delete
ic slurm delete slurm-abc123

ic model — Model Deployments

Terminal
# List deployed models
ic model list

# Deploy a model
ic model deploy --name my-llama --repo meta-llama/Meta-Llama-3-8B-Instruct
ic model deploy --name my-llama --repo meta-llama/Meta-Llama-3-8B-Instruct --engine vllm --gpus 2

# Get deployment status
ic model get my-llama

# Scale replicas
ic model scale my-llama --min 2 --max 5

# Stop a deployment
ic model stop my-llama

ic token — Token Balance

Terminal
# Check balance
ic token balance

# View usage history
ic token usage
ic token usage --days 7

# Purchase tokens
ic token purchase --package pkg-starter
ic token purchase --amount 50000

Example output:

Token Balance
─────────────
Balance:  45,230 tokens
Currency: USD
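
For scripting, the balance can be pulled out of JSON output with jq. The field names below are assumptions based on the table above, and the `ic token balance --output json` call is simulated with sample JSON:

```shell
# Assumed JSON shape for `ic token balance --output json`; the call is
# simulated here with sample data.
echo '{"balance":45230,"currency":"USD"}' | jq -r '.balance'
# → 45230
```

A cron job could compare this number against a threshold and trigger a top-up with `ic token purchase` when it runs low.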

ic apikey — API Keys

Terminal
# List API keys
ic apikey list

# Create a key
ic apikey create --name ci-pipeline

# Delete a key
ic apikey delete key-abc123

ic sshkey — SSH Keys

Terminal
# List SSH keys
ic sshkey list

# Import a public key
ic sshkey create --name laptop --public-key "ssh-ed25519 AAAA... user@laptop"

# Generate a new key pair (private key shown once)
ic sshkey generate --name auto-key

# Delete
ic sshkey delete ssh-abc123
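
Since the private key from `ic sshkey generate` is shown only once, it is worth capturing it immediately. The JSON field name `.private_key` below is an assumption, and the generate call is simulated with sample JSON:

```shell
# Hypothetical follow-up to `ic sshkey generate --output json`: save the
# one-time private key and restrict its permissions. The ".private_key"
# field name is an assumption; the call is simulated with sample JSON.
echo '{"id":"ssh-abc123","private_key":"-----BEGIN OPENSSH PRIVATE KEY-----\n..."}' |
  jq -r '.private_key' > auto-key.pem
chmod 600 auto-key.pem
```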

ic webhook — Webhooks

Terminal
# List webhooks
ic webhook list

# Create a webhook
ic webhook create --url https://example.com/hooks/gpu --events instance.created --events balance.low

# Send a test delivery
ic webhook test wh-abc123

# Delete
ic webhook delete wh-abc123

ic alert — Spending Alerts

Terminal
# List alerts
ic alert list

# Create an alert
ic alert create --threshold 10000 --type dashboard

# Delete
ic alert delete alert-abc123

ic llm — LLM Inference

Terminal
# Chat with an LLM
ic llm chat -m "What is CUDA?"
ic llm chat --model llama-3-8b -m "Explain transformers"
ic llm chat --model llama-3-8b -m "Hello" --max-tokens 128 --temperature 0.5

# Text completion
ic llm complete --prompt "The benefits of GPU computing are:" --model llama-3-8b

# Generate embeddings
ic llm embed --model bge-large --input "GPU computing"

# List available models
ic llm models

Output Formats

Terminal
# Table output (default for terminal)
ic instance list --output table

# JSON output (default when piped)
ic instance list --output json

# Combine with jq for scripting
ic instance list --output json | jq '.[].name'

# Get a single field
ic instance get inst-abc123 --output json | jq -r '.ssh_host'
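
jq filters compose well beyond single fields. The sketch below mirrors the example table from the instance section (exact JSON field names are assumptions), with the `ic instance list --output json` call simulated by sample JSON:

```shell
# Filter and project over instance JSON for scripting; the list call is
# simulated with sample data matching the example table above.
echo '[{"id":"inst-abc123","tier":"timesliced","status":"running"},
       {"id":"inst-def456","tier":"dedicated","status":"stopped"}]' |
  jq -r '.[] | select(.tier=="dedicated") | .id'
# → inst-def456
```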

Shell Completion

Terminal
# Bash
ic completion bash > /etc/bash_completion.d/ic

# Zsh
ic completion zsh > ~/.zsh/completions/_ic

# Fish
ic completion fish > ~/.config/fish/completions/ic.fish

# PowerShell
ic completion powershell > ic.ps1

SSH Integration

The CLI auto-discovers SSH host and port, so you never need to look them up manually.

Terminal
# SSH into a running instance (auto-discovers host/port)
ic instance ssh inst-abc123

# Specify user
ic instance ssh inst-abc123 --user root

Scripting Examples

deploy.sh
#!/bin/bash
set -euo pipefail

# Create instance and capture ID
INST_ID=$(ic instance create --name batch-worker --tier timesliced --output json | jq -r '.id')
echo "Created: $INST_ID"

# Wait for running status (bail out after ~5 minutes instead of looping forever)
for attempt in $(seq 1 60); do
  STATUS=$(ic instance get "$INST_ID" --output json | jq -r '.status')
  [ "$STATUS" = "running" ] && break
  [ "$attempt" -eq 60 ] && { echo "Timed out waiting for $INST_ID" >&2; exit 1; }
  echo "  Waiting... ($STATUS)"
  sleep 5
done

echo "Instance is running"

# SSH and run a command
ic instance ssh "$INST_ID" -- python3 train.py

# Clean up
ic instance delete "$INST_ID"
echo "Done"
Terminal
# Pipe text to LLM chat
echo "Summarise this log file" | ic llm chat --model llama-3-8b

# Stop all running instances
ic instance list --output json | jq -r '.[] | select(.status=="running") | .id' | \
  xargs -I{} ic instance stop {}

# Export all instance names
ic instance list --output json | jq -r '.[].name' > instances.txt

Configuration

Terminal
# Configure default profile
ic configure --api-key sk-ic-your-key --api-url https://api.gpu.local

# Use a named profile
ic configure --api-key sk-ic-staging-key --api-url https://staging-api.gpu.local --profile staging

# Use the staging profile
ic instance list --profile staging

# Environment variables (take precedence over config file)
export IC_GPU_API_KEY="sk-ic-your-key"
export IC_GPU_API_URL="https://api.gpu.local"
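
The precedence rule stated above can be expressed as a one-line shell idiom. This is an illustration of the resolution order only, not the CLI's actual source:

```shell
# Illustrative: an environment variable, when set, wins over the
# config-file value (variable names here are for demonstration).
config_value="sk-ic-from-config-file"
IC_GPU_API_KEY="sk-ic-from-env"
effective_key="${IC_GPU_API_KEY:-$config_value}"
echo "$effective_key"
# → sk-ic-from-env
```

Unsetting IC_GPU_API_KEY (`unset IC_GPU_API_KEY`) makes the CLI fall back to the configured profile.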