Building Custom Images

Requirements

To run a workspace with a custom image, the image must satisfy the following requirements.

  • Jupyterlab

    • VESSL runs Jupyterlab and exposes port 8888. Jupyterlab must be pre-installed in the container image.

    • The Jupyterlab daemon must be located at /usr/local/bin/jupyter.

  • sshd

    • VESSL runs sshd and exposes port 22 as a NodePort. The sshd package must be pre-installed in the container image.

  • PVC mountable at /root

    • VESSL mounts a PVC at /root to keep state across Pod restarts.
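For example, a minimal Dockerfile that satisfies all three requirements might look like the following. This is a sketch, not an official image: the base image and package choices are illustrative, and a system-wide pip install is assumed to place the Jupyterlab executable at /usr/local/bin/jupyter.

```docker
FROM ubuntu:20.04

# sshd for the SSH connection requirement
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    python3 python3-pip openssh-server && \
    rm -rf /var/lib/apt/lists/*

# Jupyterlab requirement; a system-wide pip install places
# the executable at /usr/local/bin/jupyter
RUN pip3 install jupyterlab

# Avoid baking state into /root: VESSL mounts a PVC over it at runtime,
# which shadows anything written there at build time
WORKDIR /workspace
```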

Building from VESSL's pre-built images

VESSL offers pre-built images that can run workspaces directly, and you can use them as base images for your own. They come with Jupyterlab and sshd pre-installed. The available images are listed in the following table.

| Python Version | Frameworks                     | Image                                                             |
|----------------|--------------------------------|-------------------------------------------------------------------|
| 3.8.17         | -                              | quay.io/vessl-ai/kernels:py38-202306140446                        |
| 3.8.10         | CUDA 11.8.0, PyTorch 1.14.0a0  | quay.io/vessl-ai/ngc-pytorch-kernel:22.12-py3-202301160809        |
| 3.8.10         | CUDA 11.8.0, TensorFlow 2.10.1 | quay.io/vessl-ai/ngc-tensorflow-kernel:22.12-tf2-py3-202301160808 |
| 3.10.12        | -                              | quay.io/vessl-ai/kernels:py310-202306140445                       |
| 3.10.6         | CUDA 12.1.1, PyTorch 2.0.0     | quay.io/vessl-ai/ngc-pytorch-kernel:23.05-py3-202306150328        |
| 3.10.6         | CUDA 12.1.1, TensorFlow 2.12.0 | quay.io/vessl-ai/ngc-tensorflow-kernel:23.05-tf2-py3-202306150329 |

Example

# Use CUDA 11.8.0, PyTorch 1.14.0a0 base image
FROM quay.io/vessl-ai/ngc-pytorch-kernel:22.12-py3-202301160809

# Install custom Python dependencies
RUN pip install transformers
...
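Beyond individual Python packages, you can layer system packages and a pinned requirements file on a pre-built image in the same way. A sketch, where the package names and the requirements.txt file are illustrative:

```docker
FROM quay.io/vessl-ai/kernels:py310-202306140445

# System packages for common tooling (illustrative choices)
RUN apt-get update && \
    apt-get install -y --no-install-recommends git ffmpeg && \
    rm -rf /var/lib/apt/lists/*

# Pin Python dependencies from a requirements file in the build context
COPY requirements.txt /tmp/requirements.txt
RUN pip install -r /tmp/requirements.txt
```

Pinning versions in a requirements file keeps rebuilt images reproducible, which matters when a workspace is recreated from the same tag later.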

Building from community maintained images

You can build your own images from any community-maintained Docker image. Make sure the resulting image meets the requirements above.

Example

FROM nvidia/cuda:11.2.2-devel-ubuntu20.04

RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y \
    software-properties-common curl openssh-server

# Install Python 3.9
# Note that the base image ships with Python 3.8 (3.8.10)
RUN add-apt-repository ppa:deadsnakes/ppa && \
    apt-get install -y python3.9 python3.9-distutils
# Point python3 and python at the new interpreter
RUN update-alternatives --install /usr/bin/python3 python3 $(which python3.9) 1 && \
    update-alternatives --install /usr/bin/python python /usr/bin/python3 1

# Install pip
RUN curl https://bootstrap.pypa.io/get-pip.py | python && \
    pip install -U pip

# Install Jupyterlab (a system-wide pip install places the
# executable at /usr/local/bin/jupyter)
RUN pip install jupyterlab
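When building from an arbitrary base image, it can help to fail the build early if a requirement is unmet. A sketch of optional build-time sanity checks you could append to the Dockerfile:

```docker
# Optional build-time sanity checks: the build fails if the
# workspace requirements are not satisfied
RUN test -x /usr/local/bin/jupyter
RUN which sshd
```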

FAQ

  • If you install Jupyterlab with conda, the Jupyterlab daemon is typically located at /opt/conda/bin/jupyter. In this case, create a symbolic link at /usr/local/bin/jupyter:

    # In Dockerfile,
    RUN ln -s /opt/conda/bin/jupyter /usr/local/bin/jupyter

Last updated 1 year ago