
CLI Configuration

The PaiTIENT Secure Model CLI can be configured to suit your workflow. This document covers the available configuration methods and options.

Configuration Methods

The CLI can be configured in several ways, listed in order of precedence (highest first):

  1. Command-line arguments
  2. Environment variables
  3. Configuration file
  4. Interactive prompts
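
As a sketch of how this precedence works in practice, using the flags shown later in this document, a command-line argument wins over an environment variable that sets the same value:

```shell
# The environment variable sets a default compute type...
export PAITIENT_DEFAULT_COMPUTE_TYPE="cpu"

# ...but the command-line argument takes precedence,
# so this deployment uses a GPU:
secure-model deploy model --name ZimaBlueAI/HuatuoGPT-o1-8B --compute-type gpu
```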

Configuration File

The CLI uses a configuration file stored at ~/.paitient/config.json. You can create or edit this file manually, or use the configure command:

bash
secure-model configure

This will prompt you for various configuration values and save them to the configuration file.

Example configuration file:

json
{
  "client_id": "your-client-id",
  "api_key": "your-api-key",
  "endpoint": "https://api.paitient.ai/v1",
  "default_region": "us-east-1",
  "default_compute_type": "gpu",
  "default_instance_type": "g4dn.xlarge",
  "output_format": "json",
  "auto_update_check": true,
  "telemetry": false
}

Environment Variables

You can override configuration values using environment variables:

bash
export PAITIENT_API_KEY="your-api-key"
export PAITIENT_CLIENT_ID="your-client-id"
export PAITIENT_ENDPOINT="https://api.paitient.ai/v1"
export PAITIENT_DEFAULT_REGION="us-east-1"
export PAITIENT_DEFAULT_COMPUTE_TYPE="gpu"
export PAITIENT_DEFAULT_INSTANCE_TYPE="g4dn.xlarge"
export PAITIENT_OUTPUT_FORMAT="json"
export PAITIENT_AUTO_UPDATE_CHECK="true"
export PAITIENT_TELEMETRY="false"

Command-line Arguments

Most settings can also be specified as command-line arguments, which take precedence over environment variables and the configuration file:

bash
secure-model deploy model --name ZimaBlueAI/HuatuoGPT-o1-8B --compute-type gpu --instance-type g4dn.xlarge

Profiles

The CLI supports multiple profiles for different environments or projects. You can specify a profile when running commands:

bash
secure-model --profile production deploy model --name ZimaBlueAI/HuatuoGPT-o1-8B

To create a new profile:

bash
secure-model configure --profile production

This creates a separate section in your configuration file for the specified profile.
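
The exact on-disk layout of profile sections is not shown here, so the following is a plausible sketch of how ~/.paitient/config.json might look with a production profile alongside the top-level defaults; the real key names may differ:

```json
{
  "client_id": "your-client-id",
  "api_key": "your-api-key",
  "profiles": {
    "production": {
      "endpoint": "https://api.paitient.ai/v1",
      "default_region": "us-east-1",
      "default_compute_type": "gpu"
    }
  }
}
```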

Output Formats

The CLI supports several output formats:

  • text: Human-readable text (default)
  • json: JSON output
  • yaml: YAML output
  • table: Tabular output (when applicable)

Set the output format in the configuration file, or with the --output option:

bash
secure-model list deployments --output json
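
Machine-readable output composes well with tools like jq. Assuming the JSON listing is an array of deployment objects that each carry a name field (the actual schema may differ), you could extract just the names:

```shell
# Print one deployment name per line from the JSON listing
secure-model list deployments --output json | jq -r '.[].name'
```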

Logging

Control the verbosity of CLI output with the --log-level option:

bash
secure-model deploy model --name ZimaBlueAI/HuatuoGPT-o1-8B --log-level debug

Available log levels:

  • error: Only show errors
  • warn: Show warnings and errors
  • info: Show informational messages (default)
  • debug: Show detailed information for debugging

Auto-completion

The CLI supports auto-completion for commands and options. To enable it:

Bash

bash
secure-model completion bash > ~/.secure-model-completion.bash
echo 'source ~/.secure-model-completion.bash' >> ~/.bashrc

Zsh

bash
secure-model completion zsh > ~/.secure-model-completion.zsh
echo 'source ~/.secure-model-completion.zsh' >> ~/.zshrc

Fish

bash
secure-model completion fish > ~/.config/fish/completions/secure-model.fish

Proxy Settings

If you're behind a proxy, configure the CLI to use it:

bash
export HTTP_PROXY="http://proxy.example.com:8080"
export HTTPS_PROXY="http://proxy.example.com:8080"
export NO_PROXY="localhost,127.0.0.1"

Default Settings

You can configure default settings for various commands in the configuration file:

json
{
  "deploy": {
    "compute_type": "gpu",
    "instance_type": "g4dn.xlarge",
    "min_replicas": 1,
    "max_replicas": 3,
    "auto_scaling": true
  },
  "generate": {
    "max_tokens": 500,
    "temperature": 0.7,
    "top_p": 0.95
  }
}

Released under the MIT License.