
PaiTIENT Secure Model CLI

The PaiTIENT Secure Model CLI provides a convenient command-line interface for deploying, managing, and using secure AI models. It is built on top of the Node.js SDK and offers the same HIPAA- and SOC 2-compliant security features.

Installation

You can install the CLI globally using npm:

bash
npm install -g paitient-secure-model

After installation, you'll have access to the secure-model command in your terminal.
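
To confirm the installation, you can display the CLI's built-in help:

bash
secure-model help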

Authentication

Before using the CLI, you need to set up your authentication credentials. There are two ways to do this:

Environment Variables

Set the following environment variables:

bash
export PAITIENT_API_KEY=your-api-key
export PAITIENT_CLIENT_ID=your-client-id
export PAITIENT_ENDPOINT=https://api.paitient.ai  # Optional, defaults to production

Interactive Login

Alternatively, you can use the interactive login:

bash
secure-model login

This will prompt you for your API key and client ID, and save them securely for future use.

Available Commands

The CLI provides the following commands:

deploy

Deploy a new model to your secure environment.

bash
secure-model deploy --model ZimaBlueAI/HuatuoGPT-o1-8B --name my-secure-model [--use-gpu]

Options:

  • --model: The Hugging Face model ID to deploy (required)
  • --name: A friendly name for your deployment (required)
  • --use-gpu: Whether to use GPU for inference (default: false)
  • --region: AWS region for deployment (default: us-west-2)
  • --replicas: Number of replicas to deploy (default: 1)
  • --instance-type: EC2 instance type (default: ml.g4dn.xlarge if --use-gpu is true)
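
For example, the following deploys the same model with GPU inference, two replicas, and an explicit region (the deployment name and region are illustrative):

bash
secure-model deploy --model ZimaBlueAI/HuatuoGPT-o1-8B --name clinical-assistant-gpu --use-gpu --region us-east-1 --replicas 2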

status

Check the status of a model deployment.

bash
secure-model status --deployment-id deployment-123

Options:

  • --deployment-id: The ID of the deployment to check
  • --all: Show all deployments
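
To list every deployment instead of a single one, pass --all:

bash
secure-model status --all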

generate

Generate text using a deployed model.

bash
secure-model generate --deployment-id deployment-123 --prompt "Your prompt here"

Options:

  • --deployment-id: The ID of the deployment to use
  • --prompt: The text prompt to generate from
  • --max-tokens: Maximum number of tokens to generate (default: 100)
  • --temperature: Sampling temperature (default: 0.7)
  • --top-p: Nucleus sampling parameter (default: 0.9)
  • --output-file: Save output to a file
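
The sampling options can be combined. For example, to generate a longer, more deterministic response and save it to a file (the file name is illustrative):

bash
secure-model generate --deployment-id deployment-123 --prompt "Explain the symptoms of type 2 diabetes" --max-tokens 300 --temperature 0.2 --output-file diabetes-symptoms.txt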

subscription

Manage and check your subscription status.

bash
secure-model subscription

This will display information about your current subscription, including:

  • Subscription tier
  • Active features
  • Usage limits
  • Current usage

fine-tune

Fine-tune a model with your own data.

bash
secure-model fine-tune --deployment-id deployment-123 --data-file /path/to/data.jsonl

Options:

  • --deployment-id: The ID of the deployment to fine-tune
  • --data-file: Path to the training data file (JSONL format)
  • --epochs: Number of training epochs (default: 3)
  • --learning-rate: Learning rate for training (default: 3e-5)
  • --output-dir: Directory to save the fine-tuned model (default: ./fine-tuned-model)
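
For example, to run a shorter training job with a custom learning rate and output directory (the values and paths are illustrative):

bash
secure-model fine-tune --deployment-id deployment-123 --data-file /path/to/data.jsonl --epochs 2 --learning-rate 1e-5 --output-dir ./models/clinical-ft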

help

Display help information for any command.

bash
secure-model help [command]

Examples

Deploying and Using a Model

bash
# Deploy a model
secure-model deploy --model ZimaBlueAI/HuatuoGPT-o1-8B --name clinical-assistant

# Check deployment status
secure-model status --all

# Once deployed, generate text
secure-model generate --deployment-id deployment-123 --prompt "Explain the symptoms of type 2 diabetes"

Fine-tuning a Model

bash
# Prepare your data in JSONL format
# Each line should be a JSON object with "prompt" and "completion" fields

# Fine-tune the model
secure-model fine-tune --deployment-id deployment-123 --data-file clinical-data.jsonl --epochs 5

# Use the fine-tuned model
secure-model generate --deployment-id deployment-123 --prompt "What are the treatment options for hypertension?"
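
The training file referenced above is newline-delimited JSON with "prompt" and "completion" fields, as noted in the comments. A minimal sketch of what clinical-data.jsonl might contain (the content is purely illustrative):

jsonl
{"prompt": "What are the common symptoms of type 2 diabetes?", "completion": "Common symptoms include increased thirst, frequent urination, fatigue, and blurred vision."}
{"prompt": "What are the first-line treatment options for hypertension?", "completion": "Lifestyle changes such as reduced sodium intake and regular exercise, often combined with medications such as thiazide diuretics or ACE inhibitors."}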

Configuration

The CLI configuration is stored in ~/.paitient/config.json. You can edit this file directly, but it's recommended to use the CLI commands to manage your configuration.

For advanced configuration options, see our CLI Configuration Guide.

Troubleshooting

If you encounter issues with the CLI, try the following:

  1. Ensure your API key and client ID are set correctly
  2. Check your network connection
  3. Verify that your subscription is active
  4. Use the --verbose flag with any command for additional debugging information (see the example below)
  5. Check our troubleshooting guide for common issues
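
For example, to re-run a status check with verbose output:

bash
secure-model status --all --verbose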
