1 - Installation

This section guides you through the initial setup required to interact with the platform. From a developer or data scientist’s perspective, the CLI serves as the primary bridge between your local code (functions, models) and our platform.

Quick install

The easiest way to install the Intelligence CLI is by running the following command in your terminal. This script automatically detects your operating system and architecture, downloads the latest binary, and adds it to your PATH.

curl -fsSL https://storage.googleapis.com/fathom-cli-artifacts/install.sh | bash

Verifying the Installation

To verify that everything is set up correctly, restart your terminal or source your profile, then run:

fathom --version

Manual installation

If you prefer to manage binaries manually, you can download the latest release directly from our registry.

  • Get the latest version of the CLI from our registry.
  • Download the binary for your OS (darwin-amd64, darwin-arm64, linux-amd64, windows-amd64), using the artifact version from the registry: <version>/<platform>/fathom
  • Move the binary to a directory in your $PATH (e.g., /usr/local/bin).

Manual autocompletion setup

If you need to manually generate the autocompletion script, use the completion command:

fathom completion <shell>

2 - Authentication

Before you can deploy models or manage functions, you must authenticate your local environment. This section describes how to securely link your CLI to the platform organization.

Once the CLI is installed, you need to authenticate with your Fathom platform instance.

Quick login

To perform an interactive login, run the following command in your terminal:

fathom login <url>

Replace <url> with the actual URL of your Fathom environment (e.g., platform.dev.fthm.io).

Preselect organization and projects

You can optionally provide additional parameters during login if you want the CLI to automatically select your organization and associated projects:

fathom login <url>
  --organization-id <ORGANIZATION_ID>
  --data-project-id <DATA_PROJECT_ID>
  --application-project-id <APPLICATION_PROJECT_ID>
  --intelligence-project-id <INTELLIGENCE_PROJECT_ID>

3 - Contexts

This section explains how to manage various environments seamlessly using a single CLI, ensuring clear separation between development, staging, and production ecosystems.

In the Fathom CLI, a Context represents a specific environment or configuration profile. Instead of logging out and back in to switch between development and production environments, you can use contexts to keep these worlds isolated yet easily accessible.

  • The Default Context: Upon installation and initial setup, the CLI creates a context named default. Unless specified otherwise, all commands execute within this space.
  • The Current Context: This is your active environment. Any command you run will apply to this context until you manually switch to another.

Listing Available Contexts

To see which environments you currently have configured and identify which one is active, use the list command:

fathom context list

The output will display all saved contexts, highlighting the one currently selected for use.

Switching Contexts

There are two primary ways to handle multiple environments: temporary overrides and persistent selection.

Temporary Override

If you want to run a single command against a different environment without changing your global settings, use the --context flag:

fathom --context qa intelligence function list

Persistent Selection

To switch your current context across all future terminal sessions, use the select command:

fathom context select qa 

Once selected, qa becomes your active environment for every subsequent command until you change it again.

Adding New Contexts

Adding a new environment is straightforward. When you combine the --context flag with the login command, the CLI initializes a new profile under the name you provide.

To add a new environment named qa:

fathom --context qa login <url>

After completing the authentication flow, the qa context will be saved and available in your context list. You can then configure its specific organization or project settings independently of your default setup.

4 - Functions

This section provides a comprehensive guide on how to develop, build, and manage custom functions within the platform. By leveraging the Fathom CLI, you can transform local code into scalable, platform-hosted logic that integrates seamlessly with your existing data ecosystems.

4.1 - Function Management

This document covers the administrative lifecycle of a function. It explains how to provision new function identities, list your existing inventory, and update metadata such as environment variables or scaling parameters.

Listing Functions

To view all functions available in your current context, use the list command. This provides an overview of function names, IDs, and their current configurations.

fathom intelligence function list

Creating a Function

The create command initializes a new serverless function entry on the platform. This step sets the “blueprint” for your function, defining its runtime environment and hardware requirements before you upload any code.

fathom intelligence function create <NAME> [OPTIONS]

Key Arguments & Options

 Argument/Option   | Requirement | Description
-------------------+-------------+------------------------------------------------------------------
 <NAME>            | Required    | The name of the function.
 --description, -d | Required    | A brief description of the function’s purpose.
 --kind, -k        | Required    | Build environment kind. Values: rust189, rust186, python310, go124.
 --serving-size    | Required    | Computational resources: small, large, or extra-large.
 --auto-create     | Optional    | Automatically creates a sample application structure for you.

Advanced Configuration Options

 Option            | Type      | Description
-------------------+-----------+---------------------------------------------------------------
 --build-env       | KEY=VALUE | Environment variables used strictly during the build process.
 --serving-env     | KEY=VALUE | Environment variables available to the running service.
 --serving-gpu     | Enum      | GPU variant to attach: nvidia-l4 or nvidia-l4-2x.
 --serving-timeout | Seconds   | Maximum execution time before the function times out.
 --schema-input    | JSON      | JSON schema to validate incoming request data.
 --schema-output   | JSON      | JSON schema to validate outgoing response data.
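The schemas passed to --schema-input and --schema-output are standard JSON Schema documents. As an illustrative sketch only (the field names below are hypothetical, not a platform contract), you can define and serialize them with Python's standard json module before handing them to the CLI:

```python
import json

# Hypothetical input schema: requests must carry a string "text" field.
input_schema = {
    "type": "object",
    "properties": {"text": {"type": "string"}},
    "required": ["text"],
}

# Hypothetical output schema: responses return a numeric "score".
output_schema = {
    "type": "object",
    "properties": {"score": {"type": "number"}},
    "required": ["score"],
}

# Compact serialization suitable for passing as a shell argument.
print(json.dumps(input_schema, separators=(",", ":")))
print(json.dumps(output_schema, separators=(",", ":")))
```

The printed strings can then be quoted into --schema-input and --schema-output on the create command.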

Example: Simple Python Function

To create a simple Python function with sample application code, run:

fathom intelligence function create test1 --description 'Just testing' --kind python310 --serving-size small --auto-create

Updating a Function

Use the update command to modify an existing function’s configuration. This is commonly used to adjust scaling, update secrets via environment variables, or rename the resource.

fathom intelligence function update <FUNCTION_ID> [OPTIONS]

 Option             | Requirement | Description
--------------------+-------------+--------------------------------------------------------------
 <FUNCTION_ID>      | Required    | The unique ID of the function you wish to update.
 --name, -n         | Optional    | Update the display name of the function.
 --description, -d  | Optional    | Update the function’s description.
 --serving-size, -s | Optional    | Change the hardware tier (small, large, extra-large).
 --serving-env      | Optional    | Update or add new runtime environment variables (KEY=VALUE).
 --serving-gpu      | Optional    | Change the GPU variant or attach one to the service.
 --serving-timeout  | Optional    | Adjust the execution timeout in seconds.

Example: Scaling an Existing Function

If your function requires more memory or a longer execution time, you can update it as follows:

fathom intelligence function update 1fc6c0ba-2ab6-4c0d-8a32-3a6374956aa3 --serving-size extra-large --serving-timeout 600

4.2 - Source Management

This section focuses on the synchronization between your local development environment and the remote registry. It details the process of pushing your source code to the platform and pulling existing codebases down for collaborative development or version audits.

Initializing Local Source Code

If you created your function using the --auto-create flag, or if the function already contains source code in the registry, you should start by downloading the code to your local machine.

Downloading Code

The download command fetches the function’s source code and places it into a specified directory.

fathom intelligence function download <FUNCTION_ID> <DIRECTORY>

Example

To download a function to a folder named testing:

fathom intelligence function download --id 44b18587-5eb1-4261-a90e-fa6852bc8086 testing

After downloading, your directory (e.g., testing) will typically look like this:

testing
├── func.yaml          # Function configuration
├── function/          # Source code directory
│   ├── __init__.py
│   └── func.py        # Main logic
├── pyproject.toml     # Dependency management
├── README.md
└── tests/             # Local tests directory
    └── test_func.py
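The exact entry-point contract is defined by the generated template in func.py. Purely as an illustration (the handler name and signature below are assumptions, not the platform's documented interface), the main logic might look like this:

```python
# func.py - hypothetical sketch; the real entry-point signature comes
# from the template generated with --auto-create.
def handler(payload: dict) -> dict:
    """Echo the input back with a couple of computed fields."""
    text = payload.get("text", "")
    return {"length": len(text), "upper": text.upper()}


if __name__ == "__main__":
    # Quick local smoke test before uploading.
    print(handler({"text": "hello"}))
```

Keeping a small `__main__` block like this lets you sanity-check the logic locally before running upload.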

Uploading Changes

Once you have made changes to your logic or updated the dependencies locally, you must upload the source code back to the platform before it can be built or deployed.

fathom intelligence function upload <FUNCTION_ID> <DIRECTORY>

Usage Note:

  • The <DIRECTORY> argument should point to the root folder containing your func.yaml and main source files.
  • This command updates the “Source” state of the function on the platform, but it does not automatically trigger a new deployment. You must run a build and deploy command separately to see the changes live.

Versioning and Revisions

Every time you successfully upload code, the CLI generates a new Revision ID. This ID is used to track different versions of your source code before they are built.

fathom intelligence function upload --id 44b18587-5eb1-4261-a90e-fa6852bc8086 testing
Function source code updated - revision: v1

4.3 - Build and Deploy

This section describes the critical transition from raw source code to a running service. It covers the build system, where dependencies are resolved and artifacts are created, and the deployment process that promotes a build into a live, accessible endpoint.

Understanding the Build Process

A Build is an immutable artifact created from a specific source code revision (e.g., v1). During this phase, the platform resolves dependencies, compiles your code, and prepares the environment for execution.

Creating a Build

The build create command triggers the build process on the platform.

fathom intelligence function build create <FUNCTION_ID> [OPTIONS]

Build Options

 Option            | Description
-------------------+------------------------------------------------------------------------------------------
 --revision, -r    | The source revision to build. If omitted, the CLI defaults to the most recent revision.
 --auto-deploy, -a | If the build succeeds, the CLI will automatically trigger a deployment.
 --watch           | Starts an interactive watch mode to track build progress in real-time.
 --time            | Prepends timestamps to the build output logs.

Example: Build with Auto-Deploy

> fathom i function build create 44b18587-5eb1-4261-a90e-fa6852bc8086 \
    --revision v1 \
    --auto-deploy \
    --watch

build created: c4c9e7c6-a93e-43a3-ac14-705691c38f72
(... logs)
build finished: c4c9e7c6-a93e-43a3-ac14-705691c38f72

Monitoring Builds

Since builds happen asynchronously on the platform, you can monitor their status and logs to troubleshoot any compilation or dependency errors.

Listing Builds

To see the history and status of all builds for your current context:

fathom intelligence function build list

Viewing Build Logs

If a build fails or you want to see the detailed compilation output, use the logs command. Note that you need both the Function ID and the Build ID (which you can get from the list command).

fathom intelligence function build logs <FUNCTION_ID> <BUILD_ID>

Deploying a Build

While --auto-deploy is the most common workflow, you can also manually deploy the artifact from a previously successful build.

fathom intelligence function deploy <FUNCTION_ID> <BUILD_ID>

Manual deployment is useful when you want to roll back to an older, stable build without re-building the entire source.

4.4 - Interaction and debugging

This document provides instructions for testing and maintaining your functions post-deployment. It explains how to trigger function logic directly from the CLI with custom payloads and how to stream real-time logs to troubleshoot execution errors or monitor performance.

Once your function is successfully deployed and in a “Running” state, you can interact with it and monitor its execution. This section covers triggering function logic via the CLI and accessing runtime logs.

Invoking Functions

To execute your function and see the result, use the call command group. This allows you to send data to your function and receive the processed output directly in your terminal.

Creating a Call

The call create command triggers the execution of your deployed function.

fathom intelligence function call create <FUNCTION_ID> [OPTIONS]

To read the payload from standard input, pass - to the --data flag:

echo '{}' | fathom intelligence function call create --data - <FUNCTION_ID>

Listing Calls

To see the history of executions, their statuses (e.g., Succeeded, Failed), and execution times:

fathom intelligence function call list <FUNCTION_ID>

Runtime Logs

Debugging a serverless function requires visibility into what happens during execution. The logs command streams the standard output (stdout) and error (stderr) logs from your running instances.

Viewing Function Logs

Unlike build logs, which focus on compilation, these logs show your application’s logic in action (e.g., print statements, caught exceptions, or incoming request traces).

fathom intelligence function logs <FUNCTION_ID>

  • Real-time Debugging: Keep this command running with the --watch option in a separate terminal window while you use call create to see immediate feedback.
  • Troubleshooting: If a call fails with a generic error, the logs will typically contain the specific stack trace or error message from your code.

5 - Machine Learning

This section covers the various ways to integrate, deploy, and manage machine learning models within the Fathom platform, ranging from proprietary custom models to pre-trained industry standards.

5.1 - Custom Models

This document describes different approaches to accessing and preparing data required for training machine learning models.

5.1.1 - Tracking Experiments

Tools to log parameters, metrics, and metadata during the training phase to ensure reproducibility.

Fathom enables seamless experiment tracking by integrating with platform experiment tracking. This allows you to log parameters, metrics, and metadata during your training phase to ensure full reproducibility of your machine learning models.

How It Works

The platform does not require a custom logging library. Instead, it leverages the standard MLflow Python SDK (minimum version 3.5 required).

When you execute your training scripts via the Fathom CLI, the system automatically injects the necessary environment variables and authentication contexts. This ensures that all data logged via the SDK is correctly routed to your organization’s private experiment registry on the platform.
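Because these variables are injected at run time, a training script can defensively check for them before logging anything. MLFLOW_TRACKING_URI is the standard MLflow environment variable; the check itself is an illustrative sketch, not a platform requirement:

```python
import os


def tracking_uri_or_warn():
    """Return the injected MLflow tracking URI, or None when the script
    is run outside the Fathom CLI wrapper."""
    uri = os.environ.get("MLFLOW_TRACKING_URI")
    if uri is None:
        print("MLFLOW_TRACKING_URI is not set; "
              "run this script via `fathom intelligence mlflow run`.")
    return uri


tracking_uri_or_warn()
```

This makes the failure mode obvious when someone runs `python3 train.py` directly instead of going through the CLI wrapper.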

Integrated Workflow: Code & Run

The most efficient way to track an experiment is to write a standard Python script and execute it using the mlflow run wrapper. This ensures that your session is authenticated and linked to the correct project.

Write your Training Script

Create a file (e.g., train.py) using the standard MLflow library. The platform handles the backend connection automatically.

import mlflow
import os

# Create experiment
mlflow.set_experiment("fraud-detection-v1")

# Add or update tags to the created experiment.
mlflow.set_experiment_tags({
    "project_name": "Fraud Prevention",
    "team": "Data Science Core",
    "priority": "High"
})

with mlflow.start_run():
    # Log parameters (hyperparameters)
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 10)
    
    # Log metrics (performance)
    mlflow.log_metric("accuracy", 0.95)
    mlflow.log_metric("loss", 0.05)
    # You can tag each run under an experiment independently.
    mlflow.set_tag("version", "1.0")
    
    # Log the model (Logged Model)
    # This makes the model visible in the Fathom Model Registry
    # mlflow.sklearn.log_model(sk_model, "model")
    
    print("Run completed and logged to Fathom.")

Execute with Fathom Context

To run your script and ensure the MLflow context is correctly injected, use the mlflow run command. This command wraps your execution and handles all backend communication.

fathom intelligence mlflow run <COMMAND>

To execute your local Python script:

fathom i mlflow run python3 train.py

Accessing Results

Metrics and experiment history are accessible via the intelligence platform portal.

Key Benefits

Zero-Config Tracking

The CLI automatically manages MLFLOW_TRACKING_URI and authentication tokens. No need to hardcode credentials or endpoints.

Native SDK Support

Use the tools you already know (Python, Scikit-learn, PyTorch) without custom Fathom-specific logging libraries.

Unified Registry

Models logged during training are immediately visible in the platform and ready for deployment.

5.1.2 - Model Registry

Centralized versioning and storage for your model artifacts.

The Model Registry is a centralized repository where your trained machine learning models are stored, versioned, and prepared for deployment. Models enter the registry primarily through the mlflow tracking integration.

Registering a Model

To ensure maximum interoperability and performance, we recommend exporting models to the ONNX format. Below is a practical example using a small, public dataset (Iris) to train a model and push it to the Model Registry.

The following script trains a simple classifier and logs it as an ONNX artifact.

import mlflow
import mlflow.onnx
import onnx
import numpy as np

from mlflow.models.signature import ModelSignature
from mlflow.types.schema import Schema, TensorSpec
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# 1. Prepare data and train a small model
iris = load_iris()
X, y = iris.data, iris.target
model = RandomForestClassifier(n_estimators=10)
model.fit(X, y)

input_schema = Schema([
    TensorSpec(np.dtype(np.float32), [1, 4], name="float_input")
])
output_schema = Schema([
    TensorSpec(np.dtype(np.int64), [-1], name="label")
])
signature = ModelSignature(inputs=input_schema, outputs=output_schema)

# 2. Convert the Scikit-learn model to ONNX format
initial_type = [('float_input', FloatTensorType([None, 4]))]
options = {type(model): {'zipmap': False}}

onnx_model = convert_sklearn(model, initial_types=initial_type, options=options, target_opset=17)

mlflow.set_experiment("test-experiment")

# Add or update tags to the created experiment.
mlflow.set_experiment_tags({
    "project_name": "Fraud Prevention",
    "team": "Data Science Core",
    "priority": "High"
})

# 3. Log to Fathom via MLflow
with mlflow.start_run():
    # Log hyperparameters for context
    mlflow.log_param("n_estimators", 10)

    # You can tag each run under an experiment independently.
    mlflow.set_tag("version", "1.0")
    
    # Register the model in the registry
    mlflow.onnx.log_model(
        onnx_model=onnx_model,
        artifact_path="iris_classifier",
        signature=signature,
        input_example=X[:1] 
    )
    
    print("Model successfully pushed to the Fathom Registry.")

Execution

Run the script using the Fathom CLI to ensure the registry context is correctly injected:

fathom intelligence mlflow run python3 train_onnx.py

Listing models

Once the script finishes, you can confirm that the model was received and stored correctly by querying the platform’s model list. This ensures your model is now an immutable asset ready for deployment.

fathom intelligence machine-learning model list

5.1.3 - Model Deployment

The process of wrapping your models into scalable, production-ready endpoints.

Model Deployment is the final step in the machine learning lifecycle. It takes a versioned artifact from the Model Registry and wraps it into a high-performance, scalable endpoint ready to serve real-time predictions.

Deploying a Registered Model

To deploy a model, you need the id of the logged model (which you obtained in the previous step). The deployment process allocates the necessary computational resources (CPU, RAM, or GPU) and sets up the inference runtime.

Create a Deployment

Use the deployment create command to launch your model. You must specify the model ID and the desired serving size.

fathom intelligence machine-learning deployment create logged-model --model-id 6174cc98-55fb-4818-9370-f75cafade62e --name "iris-classifier" --description "Production endpoint for Iris flower classification" --serving-size small

 Option         | Requirement | Description
----------------+-------------+----------------------------------------------------------
 --model-id     | Required    | The UUID of the model from the registry.
 --name         | Required    | A unique name for your deployment.
 --serving-size | Optional    | Resource tier: small, large, or extra-large.
 --serving-gpu  | Optional    | Attach a GPU for heavy models (nvidia-l4, nvidia-l4-2x).

Tag deployment

Use the deployment tag command to tag your deployment; you must specify the deployment ID. Tags can be removed with deployment untag.

Monitoring Deployment Status

Deployments happen asynchronously. After creating one, you should monitor its state to ensure it transitions to running:

fathom intelligence machine-learning deployment list

Example output of the command when run with the --watch option:

 id                                   | created_at                     | name            | kind          | description                                        | status  | state | tags
--------------------------------------+--------------------------------+-----------------+---------------+----------------------------------------------------+---------+-------+------------
 379f103f-45cd-4c00-aec3-0fa4af756cae | 2026-03-25 08:06:17.003811 UTC | iris-classifier | logged-models | Production endpoint for Iris flower classification | pending | N/A   | production

 id                                   | created_at                     | name            | kind          | description                                        | status  | state | tags
--------------------------------------+--------------------------------+-----------------+---------------+----------------------------------------------------+---------+-------+------------
 379f103f-45cd-4c00-aec3-0fa4af756cae | 2026-03-25 08:06:17.003811 UTC | iris-classifier | logged-models | Production endpoint for Iris flower classification | running | hot   | production

Updating a Deployment

Once a deployment is running, you can update it to point to a new version of your model (e.g., a newly trained logged-model-id) or change its resource allocation (e.g., upgrading from small to large).

The platform performs a rolling update, ensuring that your endpoint remains available while the new model version is being provisioned.

fathom intelligence machine-learning deployment update <DEPLOYMENT_ID> logged-model [OPTIONS]

 Option         | Description
----------------+-----------------------------------------------------
 --model-id     | The new Logged Model UUID from the registry.
 --name         | Update the display name of the deployment.
 --description  | Update the deployment’s metadata/description.
 --serving-size | Scale resources (small, large, extra-large).
 --serving-gpu  | Change or add a GPU accelerator.

Example: Update Logged Model

To promote a new model version to an existing deployment, use the update logged-model command. You will need the Deployment ID and the new Model ID.

fathom intelligence machine-learning deployment update 3cdec2ec-f51e-420c-937a-6c65af770084 logged-model --model-id 93096f6a-3a8a-4315-bc18-615ef72c7bcc

Model Inference

Once your deployment is in the running and hot state, you can begin making predictions. Fathom Intelligence supports three primary inference modes depending on your model type: General Tensor Inference, Chat Completions, and Embeddings.

General Tensor Inference (V2 Protocol)

This mode is used for classic ML models (Scikit-learn, ONNX, XGBoost) and computer vision. It follows the NVIDIA Triton V2 Predict Protocol.

You can pipe a JSON payload directly into the CLI. This is ideal for integration with tools like jq or automated data pipelines.

echo '{
  "inputs": [
    {
      "name": "float_input",
      "shape": [1, 4],
      "datatype": "FP32",
      "data": [7.0, 3.2, 4.7, 1.4]
    }
  ],
  "outputs": [
    {
      "name": "label"
    }
  ]
}' | fathom intelligence machine-learning deployment infer <DEPLOYMENT_ID> --data -

Inline JSON Payload

For quick manual testing, you can pass the full JSON object directly as a string.

fathom intelligence machine-learning deployment infer <DEPLOYMENT_ID> --data '{
  "inputs": [
    {
      "name": "float_input",
      "shape": [2, 4],
      "datatype": "FP32",
      "data": [5.1, 3.5, 1.4, 0.2]
    }
  ],
  "outputs": [
    {
      "name": "label"
    }
  ]
}'
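When payloads are generated programmatically, building the V2 request body in Python keeps the declared shape and the data length consistent. A minimal sketch using only the standard library (the tensor name float_input and the label output match the ONNX export from the previous section):

```python
import json


def v2_request(name, shape, data):
    """Build a Triton V2 predict payload and sanity-check the shape."""
    expected = 1
    for dim in shape:
        expected *= dim
    assert len(data) == expected, "data length must match the shape product"
    body = {
        "inputs": [
            {"name": name, "shape": shape, "datatype": "FP32", "data": data}
        ],
        "outputs": [{"name": "label"}],
    }
    return json.dumps(body)


print(v2_request("float_input", [1, 4], [7.0, 3.2, 4.7, 1.4]))
```

The resulting string can be piped into `fathom intelligence machine-learning deployment infer <DEPLOYMENT_ID> --data -` exactly like the echo example above.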

5.2 - Hugging Face Models

A specialized guide on how to leverage the vast library of open-source models from Hugging Face.

For Large Language Models (LLMs) and Text Embedding models, Fathom Intelligence provides a direct integration with Hugging Face. This allows you to skip the Model Registry and deploy industry-standard models with a single CLI command.

You can deploy any supported model by providing its Hugging Face repository ID (e.g., mistralai/Mistral-7B-v0.1). The platform automatically handles the weights download, environment setup, and API wrapping.

Chat Model

For conversational AI, Fathom Intelligence supports Instruct models. Unlike base models that simply “complete” text, Instruct models are fine-tuned to follow directions and maintain a dialogue. When deployed, these models expose an OpenAI-compatible API, allowing you to use them as a drop-in replacement for existing AI integrations.

To deploy a chat-optimized model:

fathom intelligence machine-learning deployment create hugging-face --model-id "Qwen/Qwen2.5-0.5B-Instruct" --name "qwen-tiny-chat" --description "Fast 0.5B parameter chat model" -s large-high-mem

Once the deployment status reaches Running / Hot, you can interact with the model using the chat command. This command automatically handles the complex formatting (roles like user and assistant) required by the model’s internal chat template.

fathom intelligence machine-learning deployment chat <DEPLOYMENT_ID> --prompt "Explain the concept of 'Open Source' in one sentence."

Embedding Model

Fathom Intelligence allows you to bypass the manual model registration process for industry-standard architectures. You can deploy models directly from the Hugging Face Hub using their repository ID.

To start, we will deploy a lightweight but high-performance embedding model. This model converts text into 384-dimensional vectors.

fathom intelligence machine-learning deployment create hugging-face --model-id "sentence-transformers/all-MiniLM-L6-v2" --name "tiny-embed" --description "Small embedding model for testing" --serving-size large

The embed command is designed for models that perform Feature Extraction (e.g., BERT, RoBERTa, BGE). It converts raw text into high-dimensional numerical vectors (embeddings), which are essential for semantic search, clustering, and Retrieval-Augmented Generation (RAG).

fathom intelligence machine-learning deployment embed <DEPLOYMENT_ID> --input "The quick brown fox jumps over the lazy dog"

The command returns a JSON object containing the vector (embedding) for your input. For a standard model like all-MiniLM-L6-v2, the output will look like this:

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [
        0.0125, -0.0456, 0.0892, ... 384 dimensions total
      ]
    }
  ],
  "model": "default",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
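A typical next step with embedding output is comparing vectors, for example for semantic search or deduplication. A minimal sketch using only the standard library (it assumes you have already parsed the JSON response above and extracted the embedding lists):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Toy 2-D vectors for illustration; real vectors from this model have
# 384 dimensions.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
```

Values near 1.0 indicate semantically similar inputs; values near 0.0 indicate unrelated ones.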

5.3 - Managed LLMs

This section describes how to access and configure Large Language Models (LLMs) managed directly by the platform. It focuses on rapid integration, prompt engineering via API, and cost-efficient scaling without the overhead of infrastructure management.

The Managed LLMs service provides a unified interface to access Large Language Models provided natively by the platform, as well as external models running as a service (such as OpenAI, Claude, or Gemini).

Access and Permissions

To ensure security and cost management, access to these models is governed by API keys managed at two levels:

  • Platform Level: Global models provided by the infrastructure.
  • Organization Level: Custom integrations where organization administrators can plug in their own provider keys.

This architecture allows teams to use state-of-the-art models without managing individual credentials, while administrators maintain full control over which models are available to specific organizations.

Listing Available Models

Before interacting with an LLM, you can list all models currently available in your active context. This list includes both native and third-party models (e.g., gpt-4o, claude-3-5-sonnet).

fathom intelligence llms model list

Chat Completions

The chat command is the primary way to interact with Managed LLMs via the CLI. It is an excellent tool for testing connectivity, validating model behavior, or quickly generating content.

fathom intelligence llms model chat <MODEL_NAME> --prompt <PROMPT_TEXT> [OPTIONS]

Key Options

 Option            | Default              | Description
-------------------+----------------------+---------------------------------------------------------------
 <MODEL_NAME>      | Required             | The model ID to use (e.g., gpt-4o, gemini-1.5-pro).
 --prompt, -p      | Required             | The message to send to the model.
 --system, -s      | “You are a helpful…” | Sets the behavior/persona of the assistant.
 --temperature, -t | 0.7                  | Controls creativity (0.0 = deterministic, 1.0 = creative).
 --max-tokens, -n  | N/A                  | Limits the length of the generated response.
 --no-stream       | N/A                  | Disables real-time streaming of the response to the terminal.

Example: Basic Interactive Chat

To send a simple query to gemma-3-12b-it:

fathom intelligence llms model chat google/gemma-3-12b-it --prompt 'Explain quantum entanglement in one sentence.'

Example: Advanced System Behavior

You can override the default assistant behavior to act as a specific persona:

fathom intelligence llms model chat Qwen/Qwen2.5-VL-3B-Instruct --system 'You are a senior Rust developer. Provide code examples only.' --prompt 'How do I implement a trait in Rust?' --temperature 0.2

6 - Notebooks

This section explains how to run and test code directly in Notebooks, making it easy to experiment, visualize, and prototype.

6.1 - Accessing Data for Machine Learning Models

This document describes different approaches to accessing and preparing data required for training machine learning models.

Data - sequences

Retrieving Sequences in Parquet Format

Parquet is a columnar storage format optimized for large-scale data processing. It is widely used in machine learning pipelines due to its efficiency and compatibility with distributed systems.

To access a sequence in Parquet format, you only need to construct the correct URL pointing to the resource. The general pattern is:

http://localhost/v1/data/sequences/<sequenceId>.parquet?dataProjectId=<projectId>

  • sequenceId - The unique identifier of the sequence. You can obtain this ID from the Data Workspace.
  • dataProjectId - The identifier of the project in which the sequence resides.

Sequences can be retrieved using standard ML tools like Pandas, simply by constructing the correct URL with the sequenceId and dataProjectId.

%pip install pyarrow

import pandas as pd
df = pd.read_parquet("http://localhost/v1/data/sequences/0add4bdc-cff6-4f26-a904-c38b5956e60b.parquet?dataProjectId=680b61b0aedd6f9e639d8699")
df.head(10)
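If you load many sequences, a small helper that assembles the URL from its two identifiers avoids copy-paste mistakes. A sketch (the default host is the same localhost endpoint used above; adjust it for your environment):

```python
def sequence_parquet_url(sequence_id, project_id, host="http://localhost"):
    """Build the Parquet download URL for a sequence."""
    return (f"{host}/v1/data/sequences/{sequence_id}.parquet"
            f"?dataProjectId={project_id}")


url = sequence_parquet_url("0add4bdc-cff6-4f26-a904-c38b5956e60b",
                           "680b61b0aedd6f9e639d8699")
print(url)
```

The returned string can be passed straight to pd.read_parquet as in the example above.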

6.2 - Model training

The model training section demonstrates how to build, track, and manage machine learning experiments in Python.

Building effective machine learning models requires not only robust algorithms but also a well-structured workflow for experimentation, tracking, and reproducibility. Python has become the de facto language for machine learning due to its rich ecosystem of libraries such as scikit-learn, TensorFlow, PyTorch, and XGBoost, which provide powerful tools for model development across classical and deep learning tasks.

To complement these libraries, MLflow offers an open-source platform SDK to manage the end-to-end machine learning lifecycle. It enables:

  • Experiment tracking: Logging parameters, metrics, and artifacts for each run.
  • Model management: Packaging models in a standardized format for deployment.
  • Reproducibility: Ensuring experiments can be replicated across environments.
  • Collaboration: Sharing results and models across teams.

Dependencies

Before starting model training, ensure that the required Python libraries are installed. These dependencies provide the core functionality for building and tracking machine learning experiments.

Scikit-learn

Run the following command in your notebook:

pip install scikit-learn mlflow==3.5.1

Use the following code snippet as a template for training a machine learning model with Scikit-learn:

# Original source code and more details can be found in:
# https://www.mlflow.org/docs/latest/tutorials-and-examples/tutorial.html

# The data set used in this example is from
# http://archive.ics.uci.edu/ml/datasets/Wine+Quality
# P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis.
# Modeling wine preferences by data mining from physicochemical properties.
# In Decision Support Systems, Elsevier, 47(4):547-553, 2009.

import warnings
import sys

import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
from urllib.parse import urlparse
import mlflow
import mlflow.sklearn
from mlflow.models.signature import infer_signature

import logging

logging.basicConfig(level=logging.WARN)
logger = logging.getLogger(__name__)


def eval_metrics(actual, pred):
    rmse = np.sqrt(mean_squared_error(actual, pred))
    mae = mean_absolute_error(actual, pred)
    r2 = r2_score(actual, pred)
    return rmse, mae, r2


if __name__ == "__main__":
    warnings.filterwarnings("ignore")
    np.random.seed(40)

    # Read the wine-quality csv file from the URL
    csv_url = (
        "http://archive.ics.uci.edu/ml"
        "/machine-learning-databases/wine-quality/winequality-red.csv"
    )
    try:
        data = pd.read_csv(csv_url, sep=";")
    except Exception as e:
        logger.exception(
            "Unable to download training & test CSV, "
            "check your internet connection. Error: %s",
            e,
        )
        sys.exit(1)

    # Split the data into training and test sets. (0.75, 0.25) split.
    train, test = train_test_split(data)

    # The predicted column is "quality" which is a scalar from [3, 9]
    train_x = train.drop(["quality"], axis=1)
    test_x = test.drop(["quality"], axis=1)
    train_y = train[["quality"]]
    test_y = test[["quality"]]

    alpha = 0.5
    l1_ratio = 0.5

    experiment_name = "wine-classification"

    existing_experiment = mlflow.get_experiment_by_name(experiment_name)
    if existing_experiment is None:
        experiment_id = mlflow.create_experiment(
            name=experiment_name
        )
    else:
        experiment_id = existing_experiment.experiment_id

    mlflow.set_experiment(experiment_name)

    # Add or update tags to the created experiment.
    mlflow.set_experiment_tags({
        "project_name": "Fraud Prevention",
        "team": "Data Science Core",
        "priority": "High"
    })

    with mlflow.start_run(experiment_id=experiment_id):
        lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
        lr.fit(train_x, train_y)

        predicted_qualities = lr.predict(test_x)

        (rmse, mae, r2) = eval_metrics(test_y, predicted_qualities)

        print("Elasticnet model (alpha=%f, l1_ratio=%f):" % (alpha, l1_ratio))
        print("  RMSE: %s" % rmse)
        print("  MAE: %s" % mae)
        print("  R2: %s" % r2)

        mlflow.log_param("alpha", alpha)
        mlflow.log_param("l1_ratio", l1_ratio)
        mlflow.log_metric("rmse", rmse)
        mlflow.log_metric("r2", r2)
        mlflow.log_metric("mae", mae)
        # You can tag each run under an experiment independently.
        mlflow.set_tag("version", "1.0")

        tracking_url_type_store = urlparse(mlflow.get_tracking_uri()).scheme
        model_signature = infer_signature(train_x, train_y)

        if tracking_url_type_store != "file":
            mlflow.sklearn.log_model(
                lr,
                "my-new-model",
                registered_model_name="ElasticnetWineModel",
                input_example=train_x.head(1),
                signature=model_signature,
            )
        else:
            mlflow.sklearn.log_model(lr, "model", signature=model_signature)

    print("done.")

Script Breakdown

  • Dataset: Wine Quality dataset from UCI ML repository.
  • Model: ElasticNet regression (combines L1 and L2 regularization).
  • Metrics logged: RMSE, MAE, R².
  • Parameters, metrics, and the trained model are logged automatically.
  • Creates or reuses an experiment (wine-classification) and stores results there.

PyTorch

Run the following command in your notebook:

pip install torch torchvision mlflow==3.5.1

Use the following code snippet as a template for training a machine learning model with PyTorch:

from mlflow.types import Schema, TensorSpec
import mlflow
import mlflow.pytorch
from mlflow.models import ModelSignature
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import numpy as np
import logging

logging.basicConfig(level=logging.WARN)
logger = logging.getLogger(__name__)

# Transformations for MNIST images
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])

# Load MNIST dataset
train_dataset = datasets.MNIST('./data', train=True, download=True, transform=transform)
test_dataset = datasets.MNIST('./data', train=False, download=True, transform=transform)

train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=1000, shuffle=False)

# Simple neural network
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(28*28, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(-1, 28*28)
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Define input schema
inputs_schema = Schema([TensorSpec(type=np.dtype(np.float32), shape=(-1, 1, 28, 28))])

# Define output schema
outputs_schema = Schema([TensorSpec(type=np.dtype(np.float32), shape=(-1, 10))])

# Create the signature
model_signature = ModelSignature(inputs=inputs_schema, outputs=outputs_schema)

model = Net()
optimizer = optim.Adam(model.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()

experiment_name = "Testing"

existing_experiment = mlflow.get_experiment_by_name(experiment_name)
if existing_experiment is None:
    experiment_id = mlflow.create_experiment(
        name=experiment_name,
        artifact_location="mlflow-artifacts:/pytorch-artifacts"
    )
else:
    experiment_id = existing_experiment.experiment_id

mlflow.set_experiment(experiment_name)

# Add or update tags to the created experiment.
mlflow.set_experiment_tags({
    "project_name": "Fraud Prevention",
    "team": "Data Science Core",
    "priority": "High"
})

input_example = None  # will hold a sample batch for logging with the model

with mlflow.start_run():
    mlflow.log_param("lr", 0.001)
    mlflow.log_param("batch_size", 64)

    for epoch in range(5):  # train for 5 epochs
        model.train()
        train_loss = 0
        correct = 0
        total = 0

        for data, target in train_loader:
            input_example = data.numpy()
            optimizer.zero_grad()
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()

            train_loss += loss.item()
            _, predicted = torch.max(output.data, 1)
            total += target.size(0)
            correct += (predicted == target).sum().item()

        epoch_loss = train_loss / len(train_loader)
        epoch_acc = correct / total

        # Log metrics per epoch
        mlflow.log_metric("train_loss", epoch_loss, step=epoch)
        mlflow.log_metric("train_accuracy", epoch_acc, step=epoch)
        # You can tag each run under an experiment independently.
        mlflow.set_tag("version", "1.0")

        print(f"Epoch {epoch+1}, Loss: {epoch_loss:.4f}, Accuracy: {epoch_acc:.4f}")

    # Save trained model
    mlflow.pytorch.log_model(model, "mnist_model", signature=model_signature, input_example=input_example)

Real-Time Feedback During Training

When training a model inside a notebook, you receive real-time feedback on:

  • Training progress: logs and outputs displayed directly in the notebook cells and UI.
  • Evaluation results: metrics such as RMSE, MAE, or accuracy printed immediately after each run.
  • Trained model artifacts: confirmation that the model has been saved and registered.

Model deployment

Once your model is ready to be deployed and used in other parts of the platform, proceed to the Model Deployments section of Getting Started.

6.3 - Working with LLMs in Notebooks

Getting started guide for integrating Large Language Models (LLMs) into your notebook workflows.

This documentation explains how to communicate with Large Language Models (LLMs) directly from a notebook environment.

Requirements

Before you start, make sure you have the necessary dependencies installed in your notebook environment.

Install the OpenAI SDK

The OpenAI SDK (openai) is only required if you want to write code in Python and communicate with LLMs via the SDK.

pip install openai

Listing models

Before starting, you may want to see which models are available in your environment. This helps you choose the right model for your task.

import os

from openai import OpenAI

client = OpenAI(
    base_url = os.environ.get("FATHOM_SDK_BASE_URL") + '/llms/v1',
    api_key = "",
    default_headers = {
        "Authorization": os.environ.get("FATHOM_SDK_AUTHORIZATION")
    }
)

models = client.models.list()

print(models)

Alternatively, the same endpoint can be called directly with requests:

import requests
import os
import json

response = requests.get(
    os.environ.get("FATHOM_SDK_BASE_URL") + "/llms/v1/models",
    headers={"Authorization": os.environ.get("FATHOM_SDK_AUTHORIZATION")},
)

if response.status_code == 200:
    print("Models list:")
    print(json.dumps(response.json(), indent=4))

else:
    print("Error:", response)

This will output a list of model identifiers (e.g., gpt-4.1 or gpt-4o-mini) that you can use in subsequent calls.
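The JSON body returned by the models endpoint follows the OpenAI list format; a minimal sketch of pulling the identifiers out of such a payload (the payload below is illustrative, not a real response):

```python
# Illustrative payload in the OpenAI models-list format; a real
# response carries additional fields (created, owned_by, ...).
payload = {
    "object": "list",
    "data": [
        {"id": "Qwen/Qwen2.5-VL-3B-Instruct", "object": "model"},
        {"id": "google/gemma-3-12b-it", "object": "model"},
    ],
}

# Model identifiers live under data[].id.
model_ids = [m["id"] for m in payload["data"]]
print(model_ids)
```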

Creating chats

Chats allow you to interact with an LLM in a conversational style. You can provide a sequence of messages, and the model will respond accordingly.

import os

from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("FATHOM_SDK_BASE_URL") + "/llms/v1",
    api_key="",
    default_headers={"Authorization": os.environ.get("FATHOM_SDK_AUTHORIZATION")},
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-3B-Instruct",
    messages=[
        {"role": "developer", "content": "Talk like a pirate."},
        {
            "role": "user",
            "content": "How do I check if a Python object is an instance of a class?",
        },
    ],
)

print(response)

The same chat request can also be sent directly with requests:

import requests
import os
import json

data = {
    "model": "google/gemma-3-12b-it",
    "messages": [{"role": "user", "content": "What time is it in Poland"}],
}

response = requests.post(
    os.environ.get("FATHOM_SDK_BASE_URL") + "/llms/v1/chat/completions",
    headers={"Authorization": os.environ.get("FATHOM_SDK_AUTHORIZATION")},
    json=data,
)

if response.status_code == 200:
    print("Success:")
    print(json.dumps(response.json(), indent=4))

else:
    print("Error:", response)
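The completions endpoint answers in the OpenAI chat format; a minimal sketch of extracting the reply text from such a response (the sample payload below is illustrative):

```python
# Illustrative chat-completions payload; real responses also include
# id, created, model, usage, and other fields.
payload = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "It is 14:30 CEST in Poland."},
        }
    ]
}

# The assistant's reply sits at choices[0].message.content.
reply = payload["choices"][0]["message"]["content"]
print(reply)
```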

Direct Communication with a Custom LLM Endpoint

In some cases, you may want to communicate with an LLM that is not OpenAI-compatible. This usually means the model is hosted on a custom server or API endpoint. Instead of using the built-in chat.completions.create or completions.create methods, you can send requests directly to your endpoint using standard HTTP libraries such as requests.

import requests
import os
import json

backend_uri = "/v1/backends/gemini/"  # URI retrieved from the models list

data = {
    "model": "models/gemini-2.5-flash",
    "messages": [
        {"role": "user", "content": "What time is it in Poland"}
    ]
}

response = requests.post(
    os.environ.get("FATHOM_SDK_BASE_URL") + "/llms" + backend_uri + "chat/completions",
    headers={"Authorization": os.environ.get("FATHOM_SDK_AUTHORIZATION")},
    json=data,
)

if response.status_code == 200:
    print("Success:")
    print(json.dumps(response.json(), indent=4))

else:
    print("Error:", response)

6.4 - Working with Databases in Notebooks

This page explains how to integrate databases into your notebook workflows.

Qdrant

Qdrant is a vector database designed for storing and searching embeddings, making it a powerful tool in machine learning workflows. In a notebook context, it allows you to seamlessly manage collections of vectors generated by LLMs, enabling tasks like semantic search or similarity matching. By integrating Qdrant with LLM outputs, you can build intelligent applications that combine natural language understanding with efficient vector-based retrieval.

Requirements

Before you start, make sure you have the necessary dependencies installed in your notebook environment.

Python - Qdrant SDK

pip install qdrant_client

Listing collections

You can list all collections available in your Qdrant instance. This is useful to check which datasets are already stored.

from qdrant_client.async_qdrant_client import AsyncQdrantClient
import os

q = AsyncQdrantClient(
    url = os.environ.get("FATHOM_SDK_BASE_URL"),
    check_compatibility = False,
    prefix = os.environ.get("FATHOM_SDK_SERVICE_PATH_VECTOR_DATABASE").rstrip("/"),
    timeout = 30,
    headers = {
        "authorization": os.environ.get("FATHOM_SDK_AUTHORIZATION")
    }
)

all_collections = await q.get_collections()

print(all_collections)

This will return metadata about all collections currently stored in Qdrant.

Creating a collection

You can create a new collection to store vectors. When creating a collection, you need to specify the vector size and distance metric.

from qdrant_client.async_qdrant_client import AsyncQdrantClient
from qdrant_client.http.models import (
    VectorParams
)
import os

q = AsyncQdrantClient(
    url = os.environ.get("FATHOM_SDK_BASE_URL"),
    check_compatibility = False,
    prefix = os.environ.get("FATHOM_SDK_SERVICE_PATH_VECTOR_DATABASE").rstrip("/"),
    timeout = 30,
    headers = {
        "authorization": os.environ.get("FATHOM_SDK_AUTHORIZATION")
    }
)

result = await q.create_collection(
    collection_name="my_collection",
    vectors_config=VectorParams(
        size=128,
        distance="Cosine"
    )
)

print(result)

This example creates a collection named my_collection with vectors of size 128 and cosine similarity as the distance metric.
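The Cosine distance metric chosen above compares vectors by angle rather than magnitude. As a quick illustration of the underlying similarity measure (plain Python, independent of Qdrant):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = dot(a, b) / (|a| * |b|), ranging over [-1, 1]
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # same direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```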

7 - Environment variables

Pre-injected environment variables providing automated authentication and project context for all platform Functions and Notebooks.

To streamline development and ensure seamless integration with the platform, every Function and Notebook is automatically injected with a set of predefined environment variables. These variables allow the SDK to authenticate and communicate with the necessary services without manual configuration.

System Variables

Variable Name              | Description                                               | Value / Example
-------------------------- | --------------------------------------------------------- | ----------------
FATHOM_SDK_AUTHORIZATION   | Security token used for authenticating API requests.      | Bearer <redacted>
FATHOM_SDK_BASE_URL        | The primary API endpoint for the platform.                | https://internal-address/
FATHOM_SDK_SERVICE_PATH_*  | Dynamic paths assigned to specific platform sub-services. | /v1/service-name
FATHOM_SDK_ORGANIZATION_ID | The unique id of the organization.                        | 2cbfe270-d195-48ad-aed1-24145924635c
FATHOM_SDK_PROJECT_ID      | The unique id of the current project.                     | 20effa8442ea1a309c35e6d9

Accessing Variables

You can access these variables within your code using standard environment lookups:

import os
import requests

base_url = os.environ.get("FATHOM_SDK_BASE_URL")
service_path = os.environ.get("FATHOM_SDK_SERVICE_PATH_ML")

url = f"{base_url}{service_path}/openapi"

# FATHOM_SDK_AUTHORIZATION already includes the "Bearer " prefix
headers = {
    'Authorization': os.environ.get("FATHOM_SDK_AUTHORIZATION")
}

response = requests.get(url, headers=headers)

if response.status_code == 200:
    print('Success:', response.json())
else:
    print(f'Error: {response.status_code}')

The same lookup in Rust, using the reqwest crate:

use std::env;
use reqwest::header::AUTHORIZATION;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let base_url = env::var("FATHOM_SDK_BASE_URL")?;
    let service_path = env::var("FATHOM_SDK_SERVICE_PATH_ML")?;
    let auth_token = env::var("FATHOM_SDK_AUTHORIZATION")?;

    let url = format!("{base_url}{service_path}/openapi");

    let client = reqwest::Client::new();
    let response = client
        .get(url)
        .header(AUTHORIZATION, auth_token)
        .send()
        .await?;

    println!("Status: {}", response.status());
    Ok(())
}

8 - API Reference

Local Development & Proxy

To accelerate your development workflow, Fathom Intelligence provides a local Development Proxy. This tool allows you to interact with the platform’s API from your local machine exactly as your code would after being deployed to our infrastructure.

Running the proxy locally eliminates the need for manual authentication headers and complex URL construction during development. It bridges your local environment with the cloud platform, providing a seamless transition from local script to production function.

Why use the Proxy?

  • Zero-Config Auth: The proxy handles authentication with Fathom Intelligence using your CLI’s active session.
  • Environment Mirroring: Your local code can use the same relative endpoints as it will in the production environment.
  • Rapid Iteration: Test your Rust functions or Python notebooks against live deployments without re-deploying your code.

Starting the Proxy

To launch the internal proxy, simply run the following command in your terminal. By default, the proxy will attempt to bind to an available random port, but you can also specify a fixed one.

fathom proxy --port 59243

Once started, the CLI will display the local address where the proxy is listening:

╭─────────┬─────────────────────────╮
│ status  │ address                 │
├─────────┼─────────────────────────┤
│ RUNNING │ http://127.0.0.1:59243/ │
╰─────────┴─────────────────────────╯

Example: Querying Models

To see all models currently available to your account via the proxy:

curl http://127.0.0.1:59243/api/intelligence/llms/v1/models

API endpoints

POST

/api/intelligence/machine-learning/v1/code-snippets

Render code snippets

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "properties": {
    "deploymentId": {
      "format": "uuid",
      "type": "string"
    },
    "documentationKind": {
      "enum": [
        "loggedModel",
        "huggingFace"
      ],
      "type": "string"
    },
    "model": {
      "type": "string"
    }
  },
  "required": [
    "deploymentId",
    "model",
    "documentationKind"
  ],
  "type": "object"
}
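A request body satisfying the schema above might be assembled like this (the deploymentId and model name are placeholders, not real values):

```python
import json
import uuid

# Placeholder values; substitute a real deployment id and model name.
body = {
    "deploymentId": str(uuid.uuid4()),
    "documentationKind": "loggedModel",  # or "huggingFace"
    "model": "ElasticnetWineModel",
}
print(json.dumps(body))
```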
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "snippets": {
          "type": "object"
        }
      },
      "required": [
        "snippets"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}

This response does not return a message body.

application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

GET

/api/intelligence/machine-learning/v1/deployments

Lists deployments

Parameters:
  • page_number [query] (Type: integer)
  • page_size [query] (Type: integer)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "createdAt": {
            "format": "date-time",
            "type": "string"
          },
          "description": {
            "type": "string"
          },
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "modelData": {
            "type": "object"
          },
          "name": {
            "type": "string"
          },
          "nextRevision": {
            "type": "string"
          },
          "runningRevision": {
            "type": "string"
          },
          "serving": {
            "properties": {
              "gpu": {
                "enum": [
                  "nvidia-l4",
                  "nvidia-l4-2x"
                ],
                "type": "string"
              },
              "size": {
                "enum": [
                  "small",
                  "large",
                  "large-high-mem",
                  "extra-large"
                ],
                "type": "string"
              }
            },
            "required": [
              "size"
            ],
            "type": "object"
          },
          "state": {
            "type": "object"
          },
          "status": {
            "enum": [
              "pending",
              "running"
            ],
            "type": "string"
          },
          "tags": {
            "items": {
              "properties": {
                "key": {
                  "type": "string"
                },
                "value": {
                  "type": "string"
                }
              },
              "required": [
                "key"
              ],
              "type": "object"
            },
            "type": "array"
          },
          "updatedAt": {
            "format": "date-time",
            "type": "string"
          },
          "userId": {
            "format": "uuid",
            "type": "string"
          }
        },
        "required": [
          "id",
          "userId",
          "createdAt",
          "nextRevision",
          "name",
          "description",
          "status",
          "state",
          "serving",
          "tags",
          "modelData"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.
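The pagination object in the response above presumably relates its fields in the usual way; a short sketch under that assumption:

```python
import math

# Assumed relation between the pagination fields:
# totalPages = ceil(totalElements / pageSize)
total_elements = 42
page_size = 10
total_pages = math.ceil(total_elements / page_size)
print(total_pages)
```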

POST

/api/intelligence/machine-learning/v1/deployments

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "properties": {
    "description": {
      "type": "string"
    },
    "model": {
      "type": "object"
    },
    "name": {
      "type": "string"
    },
    "serving": {
      "properties": {
        "gpu": {
          "enum": [
            "nvidia-l4",
            "nvidia-l4-2x"
          ],
          "type": "string"
        },
        "size": {
          "enum": [
            "small",
            "large",
            "large-high-mem",
            "extra-large"
          ],
          "type": "string"
        }
      },
      "required": [
        "size"
      ],
      "type": "object"
    },
    "tags": {
      "type": "object"
    }
  },
  "required": [
    "model",
    "name",
    "description",
    "serving"
  ],
  "type": "object"
}
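For illustration, a body matching this create-deployment schema could look like the following (all values, including the empty model object, are placeholders):

```python
import json

# Placeholder payload; the model object's contents depend on how the
# model was logged and are left empty here.
body = {
    "name": "wine-quality",
    "description": "ElasticNet model trained on the wine-quality dataset",
    "model": {},
    "serving": {"size": "small"},  # gpu ("nvidia-l4", "nvidia-l4-2x") is optional
}
print(json.dumps(body))
```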
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "description": {
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "modelData": {
          "type": "object"
        },
        "name": {
          "type": "string"
        },
        "nextRevision": {
          "type": "string"
        },
        "runningRevision": {
          "type": "string"
        },
        "serving": {
          "properties": {
            "gpu": {
              "enum": [
                "nvidia-l4",
                "nvidia-l4-2x"
              ],
              "type": "string"
            },
            "size": {
              "enum": [
                "small",
                "large",
                "large-high-mem",
                "extra-large"
              ],
              "type": "string"
            }
          },
          "required": [
            "size"
          ],
          "type": "object"
        },
        "state": {
          "type": "object"
        },
        "status": {
          "enum": [
            "pending",
            "running"
          ],
          "type": "string"
        },
        "tags": {
          "items": {
            "properties": {
              "key": {
                "type": "string"
              },
              "value": {
                "type": "string"
              }
            },
            "required": [
              "key"
            ],
            "type": "object"
          },
          "type": "array"
        },
        "updatedAt": {
          "format": "date-time",
          "type": "string"
        },
        "userId": {
          "format": "uuid",
          "type": "string"
        }
      },
      "required": [
        "id",
        "userId",
        "createdAt",
        "nextRevision",
        "name",
        "description",
        "status",
        "state",
        "serving",
        "tags",
        "modelData"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "error": {
      "properties": {
        "code": {
          "enum": [
            "invalid-logged-model-id",
            "unknown"
          ],
          "type": "string"
        },
        "message": {
          "type": "string"
        }
      },
      "required": [
        "code",
        "message"
      ],
      "type": "object"
    }
  },
  "required": [
    "error"
  ],
  "type": "object"
}

This response does not return a message body.

POST

/api/intelligence/machine-learning/v1/deployments/filter

Filter deployments

Parameters:
  • page_number [query] (Type: integer)
  • page_size [query] (Type: integer)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "example": {
    "filter": {
      "@and": [
        {
          "@eq": {
            "path": "description",
            "value": "descriptionNot"
          }
        },
        {
          "@not": {
            "@regex": {
              "path": "name",
              "pattern": "^assetNot[2-9]*$"
            }
          }
        }
      ]
    }
  },
  "type": "object"
}
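The example filter above selects deployments whose description equals "descriptionNot" and whose name does not match the given regular expression. Its semantics can be mimicked locally like this (the sample record is illustrative):

```python
import re

# Illustrative record to evaluate the filter against.
deployment = {"description": "descriptionNot", "name": "assetNot10"}

# @and of an @eq on description and a @not(@regex) on name.
matches = (
    deployment["description"] == "descriptionNot"
    and re.fullmatch(r"^assetNot[2-9]*$", deployment["name"]) is None
)
print(matches)
```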
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "createdAt": {
            "format": "date-time",
            "type": "string"
          },
          "description": {
            "type": "string"
          },
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "modelData": {
            "type": "object"
          },
          "name": {
            "type": "string"
          },
          "nextRevision": {
            "type": "string"
          },
          "runningRevision": {
            "type": "string"
          },
          "serving": {
            "properties": {
              "gpu": {
                "enum": [
                  "nvidia-l4",
                  "nvidia-l4-2x"
                ],
                "type": "string"
              },
              "size": {
                "enum": [
                  "small",
                  "large",
                  "large-high-mem",
                  "extra-large"
                ],
                "type": "string"
              }
            },
            "required": [
              "size"
            ],
            "type": "object"
          },
          "state": {
            "type": "object"
          },
          "status": {
            "enum": [
              "pending",
              "running"
            ],
            "type": "string"
          },
          "tags": {
            "items": {
              "properties": {
                "key": {
                  "type": "string"
                },
                "value": {
                  "type": "string"
                }
              },
              "required": [
                "key"
              ],
              "type": "object"
            },
            "type": "array"
          },
          "updatedAt": {
            "format": "date-time",
            "type": "string"
          },
          "userId": {
            "format": "uuid",
            "type": "string"
          }
        },
        "required": [
          "id",
          "userId",
          "createdAt",
          "nextRevision",
          "name",
          "description",
          "status",
          "state",
          "serving",
          "tags",
          "modelData"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

DELETE

/api/intelligence/machine-learning/v1/deployments/{id}

Delete deployment by id

Parameters:
  • id [path], required (Type: string)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
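
As a sketch, the delete request above can be assembled with Python's standard library. The base URL and deployment id below are placeholders, not real values; the headers are the ones listed for this endpoint.

```python
import urllib.request

# Placeholder values -- substitute your environment URL, access token, and ids.
BASE_URL = "https://platform.example.com"
DEPLOYMENT_ID = "00000000-0000-0000-0000-000000000000"

req = urllib.request.Request(
    url=f"{BASE_URL}/api/intelligence/machine-learning/v1/deployments/{DEPLOYMENT_ID}",
    method="DELETE",
    headers={
        "Authorization": "Bearer <access token>",
        "x-organizationid": "<ORGANIZATION_ID>",
        "x-projectid": "<PROJECT_ID>",
    },
)
# Pass `req` to urllib.request.urlopen(req) to send it; a successful
# delete returns no message body.
```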

GET

/api/intelligence/machine-learning/v1/deployments/{id}

Parameters:
  • id [path] * (Type: string)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "description": {
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "modelData": {
          "type": "object"
        },
        "name": {
          "type": "string"
        },
        "nextRevision": {
          "type": "string"
        },
        "runningRevision": {
          "type": "string"
        },
        "serving": {
          "properties": {
            "gpu": {
              "enum": [
                "nvidia-l4",
                "nvidia-l4-2x"
              ],
              "type": "string"
            },
            "size": {
              "enum": [
                "small",
                "large",
                "large-high-mem",
                "extra-large"
              ],
              "type": "string"
            }
          },
          "required": [
            "size"
          ],
          "type": "object"
        },
        "state": {
          "type": "object"
        },
        "status": {
          "enum": [
            "pending",
            "running"
          ],
          "type": "string"
        },
        "tags": {
          "items": {
            "properties": {
              "key": {
                "type": "string"
              },
              "value": {
                "type": "string"
              }
            },
            "required": [
              "key"
            ],
            "type": "object"
          },
          "type": "array"
        },
        "updatedAt": {
          "format": "date-time",
          "type": "string"
        },
        "userId": {
          "format": "uuid",
          "type": "string"
        }
      },
      "required": [
        "id",
        "userId",
        "createdAt",
        "nextRevision",
        "name",
        "description",
        "status",
        "state",
        "serving",
        "tags",
        "modelData"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
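
A minimal Python sketch of fetching a single deployment; the base URL and id are placeholders, and the commented lines show how the `data` envelope from the schema above would be unwrapped.

```python
import json
import urllib.request

BASE_URL = "https://platform.example.com"               # placeholder
DEPLOYMENT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

req = urllib.request.Request(
    f"{BASE_URL}/api/intelligence/machine-learning/v1/deployments/{DEPLOYMENT_ID}",
    headers={
        "Authorization": "Bearer <access token>",
        "x-organizationid": "<ORGANIZATION_ID>",
        "x-projectid": "<PROJECT_ID>",
    },
)  # method defaults to GET

# Sending the request (not executed here):
# with urllib.request.urlopen(req) as resp:
#     deployment = json.load(resp)["data"]             # unwrap per the schema
#     print(deployment["name"], deployment["status"])  # status: pending | running
```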

PATCH

/api/intelligence/machine-learning/v1/deployments/{id}

Parameters:
  • id [path] * (Type: string)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "properties": {
    "description": {
      "type": "string"
    },
    "model": {
      "type": "object"
    },
    "name": {
      "type": "string"
    },
    "serving": {
      "properties": {
        "gpu": {
          "enum": [
            "nvidia-l4",
            "nvidia-l4-2x"
          ],
          "type": "string"
        },
        "size": {
          "enum": [
            "small",
            "large",
            "large-high-mem",
            "extra-large"
          ],
          "type": "string"
        }
      },
      "required": [
        "size"
      ],
      "type": "object"
    }
  },
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "description": {
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "modelData": {
          "type": "object"
        },
        "name": {
          "type": "string"
        },
        "nextRevision": {
          "type": "string"
        },
        "runningRevision": {
          "type": "string"
        },
        "serving": {
          "properties": {
            "gpu": {
              "enum": [
                "nvidia-l4",
                "nvidia-l4-2x"
              ],
              "type": "string"
            },
            "size": {
              "enum": [
                "small",
                "large",
                "large-high-mem",
                "extra-large"
              ],
              "type": "string"
            }
          },
          "required": [
            "size"
          ],
          "type": "object"
        },
        "state": {
          "type": "object"
        },
        "status": {
          "enum": [
            "pending",
            "running"
          ],
          "type": "string"
        },
        "tags": {
          "items": {
            "properties": {
              "key": {
                "type": "string"
              },
              "value": {
                "type": "string"
              }
            },
            "required": [
              "key"
            ],
            "type": "object"
          },
          "type": "array"
        },
        "updatedAt": {
          "format": "date-time",
          "type": "string"
        },
        "userId": {
          "format": "uuid",
          "type": "string"
        }
      },
      "required": [
        "id",
        "userId",
        "createdAt",
        "nextRevision",
        "name",
        "description",
        "status",
        "state",
        "serving",
        "tags",
        "modelData"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "error": {
      "properties": {
        "code": {
          "enum": [
            "invalid-logged-model-id"
          ],
          "type": "string"
        },
        "message": {
          "type": "string"
        }
      },
      "required": [
        "code",
        "message"
      ],
      "type": "object"
    }
  },
  "required": [
    "error"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
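
For illustration, a PATCH body built from the request schema above. Only schema fields are used, and `serving.size` is included because it is required whenever `serving` is sent; the name, description, and id values are hypothetical.

```python
import json
import urllib.request

BASE_URL = "https://platform.example.com"               # placeholder
DEPLOYMENT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

# Fields taken from the request-body schema; values are examples only.
payload = {
    "name": "sentiment-model",
    "description": "updated description",
    "serving": {"size": "large", "gpu": "nvidia-l4"},
}

req = urllib.request.Request(
    f"{BASE_URL}/api/intelligence/machine-learning/v1/deployments/{DEPLOYMENT_ID}",
    method="PATCH",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer <access token>",
        "x-organizationid": "<ORGANIZATION_ID>",
        "x-projectid": "<PROJECT_ID>",
        "Content-Type": "application/json; charset=utf-8",
    },
)
```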

POST

/api/intelligence/machine-learning/v1/deployments/{id}/openai/v1/chat/completions

Parameters:
  • Inference-Header-Content-Length [header] (Type: integer)
  • id [path] * (Type: string)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json):
{}
Request Body (application/octet-stream):
{
  "format": "binary",
  "type": "string"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "error": {
      "properties": {
        "code": {
          "enum": [
            "unknown",
            "invalid-path"
          ],
          "type": "string"
        },
        "message": {
          "type": "string"
        }
      },
      "required": [
        "code",
        "message"
      ],
      "type": "object"
    }
  },
  "required": [
    "error"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
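
The request schema above accepts any JSON object; since the path mirrors the OpenAI chat-completions route, an OpenAI-style `messages` payload is a reasonable assumption. The sketch below builds such a request with placeholder values.

```python
import json
import urllib.request

BASE_URL = "https://platform.example.com"               # placeholder
DEPLOYMENT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

# OpenAI-style body (an assumption based on the path naming; the schema
# above leaves the body unconstrained).
payload = {"messages": [{"role": "user", "content": "Hello"}]}

req = urllib.request.Request(
    f"{BASE_URL}/api/intelligence/machine-learning/v1/deployments/"
    f"{DEPLOYMENT_ID}/openai/v1/chat/completions",
    method="POST",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer <access token>",
        "x-organizationid": "<ORGANIZATION_ID>",
        "x-projectid": "<PROJECT_ID>",
        "Content-Type": "application/json",
    },
)
```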

POST

/api/intelligence/machine-learning/v1/deployments/{id}/openai/v1/embeddings

Parameters:
  • Inference-Header-Content-Length [header] (Type: integer)
  • id [path] * (Type: string)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json):
{}
Request Body (application/octet-stream):
{
  "format": "binary",
  "type": "string"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "error": {
      "properties": {
        "code": {
          "enum": [
            "unknown",
            "invalid-path"
          ],
          "type": "string"
        },
        "message": {
          "type": "string"
        }
      },
      "required": [
        "code",
        "message"
      ],
      "type": "object"
    }
  },
  "required": [
    "error"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
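
As with chat completions, the body here is unconstrained by the schema; given the OpenAI-style embeddings path, an `input` field is a plausible payload shape. A minimal sketch:

```python
import json

# Assumed OpenAI-style embeddings body; the schema above accepts any
# JSON object, so verify the exact shape against your deployed model.
payload = {"input": "Some text to embed"}
body = json.dumps(payload).encode("utf-8")

# POST `body` to
#   .../deployments/{id}/openai/v1/embeddings
# with the same Authorization, x-organizationid, and x-projectid headers
# used for the other deployment endpoints.
```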

POST

/api/intelligence/machine-learning/v1/deployments/{id}/v2/models/default/infer

Parameters:
  • Inference-Header-Content-Length [header] (Type: integer)
  • id [path] * (Type: string)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json):
{}
Request Body (application/octet-stream):
{
  "format": "binary",
  "type": "string"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "error": {
      "properties": {
        "code": {
          "enum": [
            "unknown",
            "invalid-path"
          ],
          "type": "string"
        },
        "message": {
          "type": "string"
        }
      },
      "required": [
        "code",
        "message"
      ],
      "type": "object"
    }
  },
  "required": [
    "error"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
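
The `/v2/models/default/infer` path follows the open inference protocol (v2) naming, so a v2-style tensor payload is sketched below as an assumption; the schema above leaves the body unconstrained, and the tensor name, shape, and datatype are hypothetical.

```python
import json

# Assumed v2 inference-protocol body; adjust names, shapes, and
# datatypes to match your model's actual input signature.
payload = {
    "inputs": [
        {
            "name": "input-0",       # hypothetical tensor name
            "shape": [1, 3],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3],
        }
    ]
}
body = json.dumps(payload).encode("utf-8")
# POST `body` to .../deployments/{id}/v2/models/default/infer with the
# usual Authorization, x-organizationid, and x-projectid headers.
```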

GET

/api/intelligence/machine-learning/v1/experiments

Lists experiments

Parameters:
  • page_number [query] (Type: integer)
  • page_size [query] (Type: integer)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "createdAt": {
            "format": "date-time",
            "type": "string"
          },
          "functionId": {
            "format": "uuid",
            "type": "string"
          },
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "name": {
            "type": "string"
          },
          "remoteId": {
            "type": "string"
          },
          "tags": {
            "items": {
              "properties": {
                "key": {
                  "type": "string"
                },
                "value": {
                  "type": "string"
                }
              },
              "required": [
                "key"
              ],
              "type": "object"
            },
            "type": "array"
          },
          "updatedAt": {
            "format": "date-time",
            "type": "string"
          }
        },
        "required": [
          "id",
          "remoteId",
          "name",
          "createdAt",
          "updatedAt",
          "tags"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
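
A sketch of the paginated list request, using the `page_number` and `page_size` query parameters documented above (base URL and values are placeholders):

```python
import urllib.parse
import urllib.request

BASE_URL = "https://platform.example.com"  # placeholder

query = urllib.parse.urlencode({"page_number": 1, "page_size": 20})
req = urllib.request.Request(
    f"{BASE_URL}/api/intelligence/machine-learning/v1/experiments?{query}",
    headers={
        "Authorization": "Bearer <access token>",
        "x-organizationid": "<ORGANIZATION_ID>",
        "x-projectid": "<PROJECT_ID>",
    },
)
# The response carries `data` (the experiments) and `pagination`
# (totalPages, totalElements, pageNumber, pageSize), per the schema above.
```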

POST

/api/intelligence/machine-learning/v1/experiments/filter

Lists experiments (filtered)

Parameters:
  • page_number [query] (Type: integer)
  • page_size [query] (Type: integer)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "example": {
    "filter": {
      "@and": [
        {
          "@eq": {
            "path": "description",
            "value": "descriptionNot"
          }
        },
        {
          "@not": {
            "@regex": {
              "path": "name",
              "pattern": "^assetNot[2-9]*$"
            }
          }
        }
      ]
    }
  },
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "createdAt": {
            "format": "date-time",
            "type": "string"
          },
          "functionId": {
            "format": "uuid",
            "type": "string"
          },
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "name": {
            "type": "string"
          },
          "remoteId": {
            "type": "string"
          },
          "tags": {
            "items": {
              "properties": {
                "key": {
                  "type": "string"
                },
                "value": {
                  "type": "string"
                }
              },
              "required": [
                "key"
              ],
              "type": "object"
            },
            "type": "array"
          },
          "updatedAt": {
            "format": "date-time",
            "type": "string"
          }
        },
        "required": [
          "id",
          "remoteId",
          "name",
          "createdAt",
          "updatedAt",
          "tags"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
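
The filter body combines operators such as `@and`, `@eq`, `@not`, and `@regex`. The snippet below reproduces the example from the request schema above and serializes it for the POST:

```python
import json

# Filter reproduced from the example in the request schema: match a
# description exactly AND exclude names matching a regex.
filter_body = {
    "filter": {
        "@and": [
            {"@eq": {"path": "description", "value": "descriptionNot"}},
            {"@not": {"@regex": {"path": "name", "pattern": "^assetNot[2-9]*$"}}},
        ]
    }
}
body = json.dumps(filter_body).encode("utf-8")
# POST `body` to .../experiments/filter with the usual auth headers;
# the response is paginated exactly like the plain list endpoint.
```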

GET

/api/intelligence/machine-learning/v1/experiments/{experiment_id}/runs

Lists runs

Parameters:
  • page_number [query] (Type: integer)
  • page_size [query] (Type: integer)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "createdAt": {
            "format": "date-time",
            "type": "string"
          },
          "functionId": {
            "format": "uuid",
            "type": "string"
          },
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "metrics": {
            "type": "object"
          },
          "name": {
            "type": "string"
          },
          "parameters": {
            "type": "object"
          },
          "remoteId": {
            "type": "string"
          },
          "status": {
            "enum": [
              "running",
              "scheduled",
              "finished",
              "failed",
              "killed"
            ],
            "type": "string"
          },
          "tags": {
            "items": {
              "properties": {
                "key": {
                  "type": "string"
                },
                "value": {
                  "type": "string"
                }
              },
              "required": [
                "key"
              ],
              "type": "object"
            },
            "type": "array"
          },
          "updatedAt": {
            "format": "date-time",
            "type": "string"
          }
        },
        "required": [
          "id",
          "remoteId",
          "createdAt",
          "updatedAt",
          "status",
          "tags",
          "metrics",
          "parameters"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "fields": {
          "type": "object"
        },
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize",
        "fields"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

POST

/api/intelligence/machine-learning/v1/experiments/{experiment_id}/runs/filter

Lists runs (filtered)

Parameters:
  • page_number [query] (Type: integer)
  • page_size [query] (Type: integer)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "properties": {
    "metrics": {
      "items": {
        "type": "string"
      },
      "type": "array"
    },
    "parameters": {
      "items": {
        "type": "string"
      },
      "type": "array"
    }
  },
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "createdAt": {
            "format": "date-time",
            "type": "string"
          },
          "functionId": {
            "format": "uuid",
            "type": "string"
          },
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "metrics": {
            "type": "object"
          },
          "name": {
            "type": "string"
          },
          "parameters": {
            "type": "object"
          },
          "remoteId": {
            "type": "string"
          },
          "status": {
            "enum": [
              "running",
              "scheduled",
              "finished",
              "failed",
              "killed"
            ],
            "type": "string"
          },
          "tags": {
            "items": {
              "properties": {
                "key": {
                  "type": "string"
                },
                "value": {
                  "type": "string"
                }
              },
              "required": [
                "key"
              ],
              "type": "object"
            },
            "type": "array"
          },
          "updatedAt": {
            "format": "date-time",
            "type": "string"
          }
        },
        "required": [
          "id",
          "remoteId",
          "createdAt",
          "updatedAt",
          "status",
          "tags",
          "metrics",
          "parameters"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "fields": {
          "type": "object"
        },
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize",
        "fields"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
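
Per the request schema above, the body holds two string arrays naming which metric and parameter columns to return. The metric and parameter names below are hypothetical examples:

```python
import json

# Select which metric/parameter columns to include in the run listing;
# the names are hypothetical and depend on what your runs logged.
body = {
    "metrics": ["accuracy", "loss"],
    "parameters": ["learning_rate", "batch_size"],
}
encoded = json.dumps(body).encode("utf-8")
# POST `encoded` to .../experiments/{experiment_id}/runs/filter with the
# usual auth headers.
```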

GET

/api/intelligence/machine-learning/v1/experiments/{id}

Get experiment

Parameters:
  • id [path] * (Type: string)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "functionId": {
          "format": "uuid",
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "metrics": {
          "items": {
            "type": "string"
          },
          "type": "array"
        },
        "name": {
          "type": "string"
        },
        "parameters": {
          "items": {
            "type": "string"
          },
          "type": "array"
        },
        "remoteId": {
          "type": "string"
        },
        "tags": {
          "items": {
            "properties": {
              "key": {
                "type": "string"
              },
              "value": {
                "type": "string"
              }
            },
            "required": [
              "key"
            ],
            "type": "object"
          },
          "type": "array"
        },
        "updatedAt": {
          "format": "date-time",
          "type": "string"
        }
      },
      "required": [
        "id",
        "remoteId",
        "name",
        "createdAt",
        "updatedAt",
        "tags",
        "metrics",
        "parameters"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

GET

/api/intelligence/machine-learning/v1/logged-models

Lists logged models

Parameters:
  • page_number [query] (Type: integer)
  • page_size [query] (Type: integer)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "createdAt": {
            "format": "date-time",
            "type": "string"
          },
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "kind": {
            "enum": [
              "sklearn",
              "onnx",
              "pytorch",
              "unknown",
              "hugging-face"
            ],
            "type": "string"
          },
          "modelSize": {
            "format": "uint64",
            "type": "integer"
          },
          "name": {
            "type": "string"
          },
          "runId": {
            "format": "uuid",
            "type": "string"
          },
          "schema": {
            "properties": {
              "input": {
                "items": {},
                "type": "array"
              },
              "output": {
                "items": {},
                "type": "array"
              }
            },
            "type": "object"
          },
          "status": {
            "enum": [
              "pending",
              "ready",
              "upload-failed"
            ],
            "type": "string"
          },
          "tags": {
            "items": {
              "properties": {
                "key": {
                  "type": "string"
                },
                "value": {
                  "type": "string"
                }
              },
              "required": [
                "key"
              ],
              "type": "object"
            },
            "type": "array"
          },
          "updatedAt": {
            "format": "date-time",
            "type": "string"
          }
        },
        "required": [
          "id",
          "createdAt",
          "name",
          "kind",
          "status",
          "schema",
          "tags"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
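
Because the response is paginated, collecting every logged model means walking `pagination.totalPages`. A small sketch, assuming 1-based page numbers (the schema does not state the numbering base), where `fetch_page(n)` is any caller-supplied function that performs the GET for page `n` and returns the decoded response:

```python
def collect_all(fetch_page):
    """Gather logged models across all pages.

    `fetch_page(n)` must return a dict shaped like the response schema
    above ({"data": [...], "pagination": {...}}); 1-based page numbers
    are assumed here.
    """
    page, items = 1, []
    while True:
        resp = fetch_page(page)
        items.extend(resp["data"])
        if page >= resp["pagination"]["totalPages"]:
            return items
        page += 1
```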

POST

/api/intelligence/machine-learning/v1/logged-models/filter

Filter logged models

Parameters:
  • page_number [query] (Type: integer)
  • page_size [query] (Type: integer)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "example": {
    "filter": {
      "@and": [
        {
          "@eq": {
            "path": "description",
            "value": "descriptionNot"
          }
        },
        {
          "@not": {
            "@regex": {
              "path": "name",
              "pattern": "^assetNot[2-9]*$"
            }
          }
        }
      ]
    }
  },
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "createdAt": {
            "format": "date-time",
            "type": "string"
          },
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "kind": {
            "enum": [
              "sklearn",
              "onnx",
              "pytorch",
              "unknown",
              "hugging-face"
            ],
            "type": "string"
          },
          "modelSize": {
            "format": "uint64",
            "type": "integer"
          },
          "name": {
            "type": "string"
          },
          "runId": {
            "format": "uuid",
            "type": "string"
          },
          "schema": {
            "properties": {
              "input": {
                "items": {},
                "type": "array"
              },
              "output": {
                "items": {},
                "type": "array"
              }
            },
            "type": "object"
          },
          "status": {
            "enum": [
              "pending",
              "ready",
              "upload-failed"
            ],
            "type": "string"
          },
          "tags": {
            "items": {
              "properties": {
                "key": {
                  "type": "string"
                },
                "value": {
                  "type": "string"
                }
              },
              "required": [
                "key"
              ],
              "type": "object"
            },
            "type": "array"
          },
          "updatedAt": {
            "format": "date-time",
            "type": "string"
          }
        },
        "required": [
          "id",
          "createdAt",
          "name",
          "kind",
          "status",
          "schema",
          "tags"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

DELETE

/api/intelligence/machine-learning/v1/logged-models/{id}

Delete logged model by id

Parameters:
  • id [path] * (Type: string)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

GET

/api/intelligence/machine-learning/v1/logged-models/{id}

Get logged model

Parameters:
  • id [path] * (Type: string)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "kind": {
          "enum": [
            "sklearn",
            "onnx",
            "pytorch",
            "unknown",
            "hugging-face"
          ],
          "type": "string"
        },
        "modelSize": {
          "format": "uint64",
          "type": "integer"
        },
        "name": {
          "type": "string"
        },
        "runId": {
          "format": "uuid",
          "type": "string"
        },
        "schema": {
          "properties": {
            "input": {
              "items": {},
              "type": "array"
            },
            "output": {
              "items": {},
              "type": "array"
            }
          },
          "type": "object"
        },
        "status": {
          "enum": [
            "pending",
            "ready",
            "upload-failed"
          ],
          "type": "string"
        },
        "tags": {
          "items": {
            "properties": {
              "key": {
                "type": "string"
              },
              "value": {
                "type": "string"
              }
            },
            "required": [
              "key"
            ],
            "type": "object"
          },
          "type": "array"
        },
        "updatedAt": {
          "format": "date-time",
          "type": "string"
        }
      },
      "required": [
        "id",
        "createdAt",
        "name",
        "kind",
        "status",
        "schema",
        "tags"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

GET

/api/intelligence/machine-learning/v1/logged-models/{id}/input-example #

Find example input for logged model

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "id": {
          "type": "string"
        },
        "inputs": {
          "items": {
            "properties": {
              "data": {
                "items": {},
                "type": "array"
              },
              "datatype": {
                "enum": [
                  "BOOL",
                  "UINT8",
                  "UINT16",
                  "UINT32",
                  "UINT64",
                  "INT8",
                  "INT16",
                  "INT32",
                  "INT64",
                  "FP16",
                  "FP32",
                  "FP64",
                  "BYTES"
                ],
                "type": "string"
              },
              "name": {
                "type": "string"
              },
              "parameters": {
                "type": "object"
              },
              "shape": {
                "items": {
                  "format": "int64",
                  "type": "integer"
                },
                "type": "array"
              }
            },
            "required": [
              "name",
              "shape",
              "datatype",
              "data"
            ],
            "type": "object"
          },
          "type": "array"
        },
        "outputs": {
          "items": {
            "properties": {
              "name": {
                "type": "string"
              },
              "parameters": {
                "type": "object"
              }
            },
            "required": [
              "name"
            ],
            "type": "object"
          },
          "type": "array"
        },
        "parameters": {
          "type": "object"
        }
      },
      "required": [
        "inputs"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
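To illustrate the response shape, here is a sketch that parses a hypothetical payload matching the schema above; the tensor name, shape, and values are invented for illustration:

```python
import json

# Hypothetical response body matching the schema above.
raw = """
{"data": {"inputs": [{"name": "input-0",
                      "shape": [1, 4],
                      "datatype": "FP32",
                      "data": [0.1, 0.2, 0.3, 0.4]}]}}
"""
example = json.loads(raw)["data"]
for tensor in example["inputs"]:
    # Each input carries the required name, shape, datatype, and data.
    print(tensor["name"], tensor["datatype"], tensor["shape"])
```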

GET

/api/intelligence/machine-learning/v1/runs/{run_id} #

Get run

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "functionId": {
          "format": "uuid",
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "name": {
          "type": "string"
        },
        "remoteId": {
          "type": "string"
        },
        "status": {
          "enum": [
            "running",
            "scheduled",
            "finished",
            "failed",
            "killed"
          ],
          "type": "string"
        },
        "tags": {
          "items": {
            "properties": {
              "key": {
                "type": "string"
              },
              "value": {
                "type": "string"
              }
            },
            "required": [
              "key"
            ],
            "type": "object"
          },
          "type": "array"
        },
        "updatedAt": {
          "format": "date-time",
          "type": "string"
        }
      },
      "required": [
        "id",
        "remoteId",
        "createdAt",
        "updatedAt",
        "status",
        "tags"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
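A common client-side use of this endpoint is polling until the run reaches a terminal status. The sketch below checks a hypothetical response against the `status` enum from the schema above; all concrete values are invented:

```python
import json

# Statuses that mean the run is over, per the "status" enum above.
TERMINAL = {"finished", "failed", "killed"}

# Hypothetical response body matching the schema above.
raw = """
{"data": {"id": "00000000-0000-0000-0000-000000000000",
          "remoteId": "run-1",
          "createdAt": "2024-01-01T00:00:00Z",
          "updatedAt": "2024-01-01T00:05:00Z",
          "status": "finished",
          "tags": []}}
"""
run = json.loads(raw)["data"]
is_done = run["status"] in TERMINAL
```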

GET

/api/intelligence/machine-learning/v1/runs/{run_id}/metrics #

Get run metrics

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "metrics": {
          "type": "object"
        }
      },
      "required": [
        "metrics"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

GET

/api/intelligence/machine-learning/v1/runs/{run_id}/parameters #

Get run parameters

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "parameters": {
          "type": "object"
        }
      },
      "required": [
        "parameters"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

POST

/api/intelligence/machine-learning/v1/tags #

Tag an experiment, deployment, logged model, or run

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "properties": {
    "tags": {
      "items": {
        "properties": {
          "key": {
            "type": "string"
          },
          "value": {
            "type": "string"
          }
        },
        "required": [
          "key"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "target": {
      "type": "object"
    }
  },
  "required": [
    "tags",
    "target"
  ],
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "createdAt": {
            "format": "date-time",
            "type": "string"
          },
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "key": {
            "type": "string"
          },
          "value": {
            "type": "string"
          }
        },
        "required": [
          "id",
          "key",
          "createdAt"
        ],
        "type": "object"
      },
      "type": "array"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "error": {
      "properties": {
        "code": {
          "enum": [
            "ValidationError"
          ],
          "type": "string"
        },
        "message": {
          "type": "string"
        }
      },
      "required": [
        "code",
        "message"
      ],
      "type": "object"
    }
  },
  "required": [
    "error"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.
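A request body for this endpoint could be built as follows. Note the schema declares `target` only as a plain object, so the key used to name the tagged resource here is an assumption; the tag keys and values are placeholders:

```python
import json

payload = {
    # "target" is only typed as an object in the schema; the exact
    # key naming the tagged resource here is an assumption.
    "target": {"run": "00000000-0000-0000-0000-000000000000"},
    "tags": [
        {"key": "stage", "value": "baseline"},
        {"key": "reviewed"},  # "value" is optional; only "key" is required
    ],
}
body = json.dumps(payload).encode("utf-8")
```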

DELETE

/api/intelligence/machine-learning/v1/tags/{key}/deployment/{deployment_id} #

UnTag deployment

Parameters:
  • key [path] *(Type: [string])
  • deployment_id [path] *(Type: [string])
  • value [query](Type: [string])
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "key": {
          "type": "string"
        },
        "value": {
          "type": "string"
        }
      },
      "required": [
        "id",
        "key",
        "createdAt"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "error": {
      "properties": {
        "code": {
          "enum": [
            "ValidationError"
          ],
          "type": "string"
        },
        "message": {
          "type": "string"
        }
      },
      "required": [
        "code",
        "message"
      ],
      "type": "object"
    }
  },
  "required": [
    "error"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

DELETE

/api/intelligence/machine-learning/v1/tags/{key}/experiment/{experiment_id} #

UnTag experiment

Parameters:
  • key [path] *(Type: [string])
  • experiment_id [path] *(Type: [string])
  • value [query](Type: [string])
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "key": {
          "type": "string"
        },
        "value": {
          "type": "string"
        }
      },
      "required": [
        "id",
        "key",
        "createdAt"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "error": {
      "properties": {
        "code": {
          "enum": [
            "ValidationError"
          ],
          "type": "string"
        },
        "message": {
          "type": "string"
        }
      },
      "required": [
        "code",
        "message"
      ],
      "type": "object"
    }
  },
  "required": [
    "error"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

DELETE

/api/intelligence/machine-learning/v1/tags/{key}/logged-model/{logged_model_id} #

UnTag logged model

Parameters:
  • key [path] *(Type: [string])
  • logged_model_id [path] *(Type: [string])
  • value [query](Type: [string])
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "key": {
          "type": "string"
        },
        "value": {
          "type": "string"
        }
      },
      "required": [
        "id",
        "key",
        "createdAt"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "error": {
      "properties": {
        "code": {
          "enum": [
            "ValidationError"
          ],
          "type": "string"
        },
        "message": {
          "type": "string"
        }
      },
      "required": [
        "code",
        "message"
      ],
      "type": "object"
    }
  },
  "required": [
    "error"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

DELETE

/api/intelligence/machine-learning/v1/tags/{key}/run/{run_id} #

UnTag run

Parameters:
  • key [path] *(Type: [string])
  • run_id [path] *(Type: [string])
  • value [query](Type: [string])
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "key": {
          "type": "string"
        },
        "value": {
          "type": "string"
        }
      },
      "required": [
        "id",
        "key",
        "createdAt"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "error": {
      "properties": {
        "code": {
          "enum": [
            "ValidationError"
          ],
          "type": "string"
        },
        "message": {
          "type": "string"
        }
      },
      "required": [
        "code",
        "message"
      ],
      "type": "object"
    }
  },
  "required": [
    "error"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.
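The four untag routes share the same shape: the tag key and resource id in the path plus an optional `value` query parameter. A URL for the run variant could be assembled as below; the base URL, key, and ids are placeholders, and the interpretation of `value` as narrowing the untag to one key/value pair is an assumption based on the parameter list above:

```python
import urllib.parse
import urllib.request

# Placeholder values -- substitute your own environment details.
BASE_URL = "https://platform.example.com"
key, run_id = "stage", "00000000-0000-0000-0000-000000000000"

# Quote the path segment and encode the optional "value" query
# parameter rather than concatenating raw strings.
url = (
    f"{BASE_URL}/api/intelligence/machine-learning/v1/tags/"
    f"{urllib.parse.quote(key)}/run/{run_id}"
    f"?{urllib.parse.urlencode({'value': 'baseline'})}"
)
req = urllib.request.Request(url, method="DELETE")
```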

GET

/api/intelligence/llms/v1/backends #

List backends

Parameters:
  • page_number [query](Type: [integer])
  • page_size [query](Type: [integer])
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "createdAt": {
            "format": "date-time",
            "type": "string"
          },
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "kind": {
            "enum": [
              "vllm",
              "openai"
            ],
            "type": "string"
          },
          "name": {
            "type": "string"
          },
          "settings": {
            "properties": {
              "authorization": {
                "enum": [
                  "sealed"
                ],
                "type": "string"
              }
            },
            "type": "object"
          },
          "status": {
            "enum": [
              "enabled",
              "disabled"
            ],
            "type": "string"
          },
          "updatedAt": {
            "format": "date-time",
            "type": "string"
          },
          "url": {
            "type": "string"
          }
        },
        "required": [
          "id",
          "name",
          "url",
          "status",
          "kind",
          "createdAt",
          "settings"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

POST

/api/intelligence/llms/v1/backends #

Create new backend

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "properties": {
    "kind": {
      "enum": [
        "vllm",
        "openai"
      ],
      "type": "string"
    },
    "locator": {
      "type": "string"
    },
    "name": {
      "type": "string"
    },
    "settings": {
      "properties": {
        "authorization": {
          "type": "string"
        },
        "filterModels": {
          "type": "object"
        }
      },
      "type": "object"
    },
    "url": {
      "type": "string"
    }
  },
  "required": [
    "name",
    "locator",
    "url",
    "kind",
    "settings"
  ],
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "kind": {
          "enum": [
            "vllm",
            "openai"
          ],
          "type": "string"
        },
        "name": {
          "type": "string"
        },
        "settings": {
          "properties": {
            "authorization": {
              "enum": [
                "sealed"
              ],
              "type": "string"
            }
          },
          "type": "object"
        },
        "status": {
          "enum": [
            "enabled",
            "disabled"
          ],
          "type": "string"
        },
        "updatedAt": {
          "format": "date-time",
          "type": "string"
        },
        "url": {
          "type": "string"
        }
      },
      "required": [
        "id",
        "name",
        "url",
        "status",
        "kind",
        "createdAt",
        "settings"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "error": {
      "properties": {
        "code": {
          "enum": [
            "schemaValidationError",
            "locatorConflictError"
          ],
          "type": "string"
        },
        "message": {
          "type": "string"
        }
      },
      "required": [
        "code",
        "message"
      ],
      "type": "object"
    }
  },
  "required": [
    "error"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.
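A request body covering all five required fields from the schema above could look like this; the concrete name, locator, URL, and key are placeholders, not defaults:

```python
import json

payload = {
    "name": "team-openai",
    "locator": "team-openai",
    "url": "https://api.openai.com/v1",
    "kind": "openai",          # enum: "vllm" or "openai"
    "settings": {"authorization": "YOUR_API_KEY"},
}
body = json.dumps(payload).encode("utf-8")
```

Sending an already-used `locator` yields the `locatorConflictError` code from the error schema above.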

POST

/api/intelligence/llms/v1/backends/filter #

List backends using filters

Parameters:
  • page_number [query](Type: [integer])
  • page_size [query](Type: [integer])
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "example": {
    "filter": {
      "@and": [
        {
          "@eq": {
            "path": "description",
            "value": "descriptionNot"
          }
        },
        {
          "@not": {
            "@regex": {
              "path": "name",
              "pattern": "^assetNot[2-9]*$"
            }
          }
        }
      ]
    }
  },
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "createdAt": {
            "format": "date-time",
            "type": "string"
          },
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "kind": {
            "enum": [
              "vllm",
              "openai"
            ],
            "type": "string"
          },
          "name": {
            "type": "string"
          },
          "settings": {
            "properties": {
              "authorization": {
                "enum": [
                  "sealed"
                ],
                "type": "string"
              }
            },
            "type": "object"
          },
          "status": {
            "enum": [
              "enabled",
              "disabled"
            ],
            "type": "string"
          },
          "updatedAt": {
            "format": "date-time",
            "type": "string"
          },
          "url": {
            "type": "string"
          }
        },
        "required": [
          "id",
          "name",
          "url",
          "status",
          "kind",
          "createdAt",
          "settings"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
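The filter tree below mirrors the request-body example above: an `@and` of an `@eq` match with a negated `@regex` match.

```python
import json

# Same filter as the request-body example above: description equals
# "descriptionNot" AND name does NOT match the regex.
filter_body = {
    "filter": {
        "@and": [
            {"@eq": {"path": "description", "value": "descriptionNot"}},
            {"@not": {"@regex": {"path": "name",
                                 "pattern": "^assetNot[2-9]*$"}}},
        ]
    }
}
body = json.dumps(filter_body).encode("utf-8")
```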

DELETE

/api/intelligence/llms/v1/backends/{id} #

Delete backend by id

Parameters:
  • id [path] *(Type: [string])
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

GET

/api/intelligence/llms/v1/backends/{id} #

Get backend by id

Parameters:
  • id [path] *(Type: [string])
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "kind": {
          "enum": [
            "vllm",
            "openai"
          ],
          "type": "string"
        },
        "name": {
          "type": "string"
        },
        "settings": {
          "properties": {
            "authorization": {
              "enum": [
                "sealed"
              ],
              "type": "string"
            }
          },
          "type": "object"
        },
        "status": {
          "enum": [
            "enabled",
            "disabled"
          ],
          "type": "string"
        },
        "updatedAt": {
          "format": "date-time",
          "type": "string"
        },
        "url": {
          "type": "string"
        }
      },
      "required": [
        "id",
        "name",
        "url",
        "status",
        "kind",
        "createdAt",
        "settings"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

PATCH

/api/intelligence/llms/v1/backends/{id} #

Patch backend

Parameters:
  • id [path] *(Type: [string])
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "properties": {
    "kind": {
      "enum": [
        "vllm",
        "openai"
      ],
      "type": "string"
    },
    "locator": {
      "type": "string"
    },
    "name": {
      "type": "string"
    },
    "settings": {
      "properties": {
        "authorization": {
          "type": "string"
        },
        "filterModels": {
          "type": "object"
        }
      },
      "type": "object"
    },
    "status": {
      "enum": [
        "enabled",
        "disabled"
      ],
      "type": "string"
    },
    "url": {
      "type": "string"
    }
  },
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "kind": {
          "enum": [
            "vllm",
            "openai"
          ],
          "type": "string"
        },
        "name": {
          "type": "string"
        },
        "settings": {
          "properties": {
            "authorization": {
              "enum": [
                "sealed"
              ],
              "type": "string"
            }
          },
          "type": "object"
        },
        "status": {
          "enum": [
            "enabled",
            "disabled"
          ],
          "type": "string"
        },
        "updatedAt": {
          "format": "date-time",
          "type": "string"
        },
        "url": {
          "type": "string"
        }
      },
      "required": [
        "id",
        "name",
        "url",
        "status",
        "kind",
        "createdAt",
        "settings"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.
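The PATCH body above declares no required fields, so a partial update can carry just the field being changed, for example to disable a backend:

```python
import json

# Every field in the PATCH body is optional; send only what changes.
payload = {"status": "disabled"}  # enum: "enabled" or "disabled"
body = json.dumps(payload).encode("utf-8")
```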

POST

/api/intelligence/llms/v1/chat/completions #

Create chat

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "properties": {
    "model": {
      "type": "string"
    }
  },
  "required": [
    "model"
  ],
  "type": "object"
}
Responses

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
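The schema requires only `model` in the request body. The sketch below also includes a `messages` field, which is not in the schema above; that field is an assumption based on the OpenAI-style route name:

```python
import json

payload = {
    "model": "my-model",  # the only field the schema requires
    # "messages" is NOT declared in the schema above; including it is
    # an assumption based on the OpenAI-compatible route.
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload).encode("utf-8")
```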

POST

/api/intelligence/llms/v1/code-snippets #

Render code snippet

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "properties": {
    "model": {
      "type": "string"
    },
    "profileId": {
      "format": "uuid",
      "type": "string"
    },
    "snippetId": {
      "format": "uuid",
      "type": "string"
    }
  },
  "required": [
    "model",
    "snippetId"
  ],
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "content": {
          "type": "string"
        }
      },
      "required": [
        "content"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

GET

/api/intelligence/llms/v1/models #

List all available models

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "created": {
            "format": "int64",
            "type": "integer"
          },
          "displayName": {
            "type": "string"
          },
          "id": {
            "type": "string"
          },
          "internalId": {
            "format": "uuid",
            "type": "string"
          },
          "object": {
            "type": "string"
          },
          "ownedBy": {
            "type": "string"
          },
          "uris": {
            "properties": {
              "base": {
                "type": "string"
              },
              "chatCompletions": {
                "type": "string"
              }
            },
            "required": [
              "base",
              "chatCompletions"
            ],
            "type": "object"
          }
        },
        "required": [
          "id",
          "object",
          "displayName",
          "ownedBy",
          "created",
          "uris",
          "internalId"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "object": {
      "type": "string"
    }
  },
  "required": [
    "data",
    "object"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
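
Every endpoint in this API takes the same three headers (`Authorization`, `x-organizationid`, `x-projectid`), so a small helper can build them once. A sketch; the token and id values are placeholders:

```python
def auth_headers(access_token, organization_id, project_id):
    """Common headers required by every /api/intelligence/llms/v1 endpoint."""
    return {
        "Authorization": f"Bearer {access_token}",
        "x-organizationid": organization_id,
        "x-projectid": project_id,
    }

headers = auth_headers("<access token>", "<ORGANIZATION_ID>", "<PROJECT_ID>")
```

Pass the resulting dict as the header set on each request.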

POST

/api/intelligence/llms/v1/models/filter #

List models using filters

Parameters:
  • page_number [query] (Type: integer)
  • page_size [query] (Type: integer)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "example": {
    "filter": {
      "@and": [
        {
          "@eq": {
            "path": "description",
            "value": "descriptionNot"
          }
        },
        {
          "@not": {
            "@regex": {
              "path": "name",
              "pattern": "^assetNot[2-9]*$"
            }
          }
        }
      ]
    }
  },
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "backendId": {
            "format": "uuid",
            "type": "string"
          },
          "createdAt": {
            "format": "date-time",
            "type": "string"
          },
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "name": {
            "type": "string"
          },
          "remoteId": {
            "type": "string"
          },
          "status": {
            "enum": [
              "enabled",
              "disabled"
            ],
            "type": "string"
          },
          "updatedAt": {
            "format": "date-time",
            "type": "string"
          }
        },
        "required": [
          "id",
          "name",
          "remoteId",
          "backendId",
          "status",
          "createdAt"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
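
The filter body in the example above nests operators such as `@and`, `@eq`, `@not`, and `@regex`. Small constructor functions keep that nesting readable; this is a sketch covering only the four operators that appear in the example, so no other operators are assumed to exist:

```python
def eq(path, value):
    return {"@eq": {"path": path, "value": value}}

def regex(path, pattern):
    return {"@regex": {"path": path, "pattern": pattern}}

def not_(expr):
    return {"@not": expr}

def and_(*exprs):
    return {"@and": list(exprs)}

# Reproduces the request-body example above.
body = {"filter": and_(
    eq("description", "descriptionNot"),
    not_(regex("name", "^assetNot[2-9]*$")),
)}
```
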

GET

/api/intelligence/llms/v1/models/{id} #

Get model by id

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "backendId": {
          "format": "uuid",
          "type": "string"
        },
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "name": {
          "type": "string"
        },
        "remoteId": {
          "type": "string"
        },
        "status": {
          "enum": [
            "enabled",
            "disabled"
          ],
          "type": "string"
        },
        "updatedAt": {
          "format": "date-time",
          "type": "string"
        }
      },
      "required": [
        "id",
        "name",
        "remoteId",
        "backendId",
        "status",
        "createdAt"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

PATCH

/api/intelligence/llms/v1/models/{id} #

Patch model

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "properties": {
    "status": {
      "enum": [
        "enabled",
        "disabled"
      ],
      "type": "string"
    }
  },
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "backendId": {
          "format": "uuid",
          "type": "string"
        },
        "createdAt": {
          "format": "date-time",
          "type": "string"
        },
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "name": {
          "type": "string"
        },
        "remoteId": {
          "type": "string"
        },
        "status": {
          "enum": [
            "enabled",
            "disabled"
          ],
          "type": "string"
        },
        "updatedAt": {
          "format": "date-time",
          "type": "string"
        }
      },
      "required": [
        "id",
        "name",
        "remoteId",
        "backendId",
        "status",
        "createdAt"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.
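
The PATCH body above accepts a single `status` field restricted to the enum `enabled`/`disabled`. A sketch of a guard that rejects invalid values before the request is sent:

```python
ALLOWED_STATUSES = {"disabled", "enabled"}

def model_patch_body(status):
    """Build the PATCH request body; status must match the schema enum."""
    if status not in ALLOWED_STATUSES:
        raise ValueError(f"status must be one of {sorted(ALLOWED_STATUSES)}")
    return {"status": status}
```
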

GET

/api/intelligence/llms/v1/profiles #

List profiles

Parameters:
  • page_number [query] (Type: integer)
  • page_size [query] (Type: integer)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "maxTokens": {
            "format": "int32",
            "type": "integer"
          },
          "modelId": {
            "format": "uuid",
            "type": "string"
          },
          "modelName": {
            "type": "string"
          },
          "name": {
            "type": "string"
          },
          "stopSequence": {
            "type": "string"
          },
          "systemPrompt": {
            "type": "string"
          },
          "temperature": {
            "format": "float",
            "type": "number"
          },
          "topP": {
            "format": "float",
            "type": "number"
          }
        },
        "required": [
          "id",
          "modelId",
          "modelName",
          "name"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.
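
The list endpoints return a `pagination` object (`pageNumber`, `pageSize`, `totalElements`, `totalPages`) and accept `page_number`/`page_size` query parameters. A generic page-walking sketch; the `fetch_page` callable stands in for the actual HTTP call, which is not shown here, and 1-indexed page numbering is an assumption the schema does not state:

```python
def iter_all(fetch_page, page_size=50):
    """Yield every item across pages. fetch_page(page_number, page_size)
    must return a dict shaped like the response schema above
    ("data" list + "pagination" object)."""
    page_number = 1  # assumption: pages are 1-indexed
    while True:
        resp = fetch_page(page_number, page_size)
        yield from resp["data"]
        if page_number >= resp["pagination"]["totalPages"]:
            break
        page_number += 1
```
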

POST

/api/intelligence/llms/v1/profiles #

Create new profile

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "properties": {
    "maxTokens": {
      "format": "int32",
      "type": "integer"
    },
    "modelId": {
      "format": "uuid",
      "type": "string"
    },
    "name": {
      "type": "string"
    },
    "stopSequence": {
      "type": "string"
    },
    "systemPrompt": {
      "type": "string"
    },
    "temperature": {
      "format": "float",
      "type": "number"
    },
    "topP": {
      "format": "float",
      "type": "number"
    }
  },
  "required": [
    "modelId",
    "name"
  ],
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "maxTokens": {
          "format": "int32",
          "type": "integer"
        },
        "modelId": {
          "format": "uuid",
          "type": "string"
        },
        "modelName": {
          "type": "string"
        },
        "name": {
          "type": "string"
        },
        "stopSequence": {
          "type": "string"
        },
        "systemPrompt": {
          "type": "string"
        },
        "temperature": {
          "format": "float",
          "type": "number"
        },
        "topP": {
          "format": "float",
          "type": "number"
        }
      },
      "required": [
        "id",
        "modelId",
        "modelName",
        "name"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.
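
When creating a profile, only `modelId` and `name` are required; the sampling parameters (`maxTokens`, `stopSequence`, `systemPrompt`, `temperature`, `topP`) are optional. A payload-builder sketch that keeps only fields the schema defines and drops unset ones; the id and name values shown are placeholders:

```python
def create_profile_body(model_id, name, **optional):
    """Build the POST /profiles body; only schema-defined fields are kept."""
    allowed = {"maxTokens", "stopSequence", "systemPrompt", "temperature", "topP"}
    body = {"modelId": model_id, "name": name}
    for key, value in optional.items():
        if key not in allowed:
            raise ValueError(f"unknown field: {key}")
        if value is not None:
            body[key] = value
    return body

body = create_profile_body("<MODEL_ID>", "my-profile", temperature=0.2, maxTokens=512)
```
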

POST

/api/intelligence/llms/v1/profiles/filter #

List profiles using filters

Parameters:
  • page_number [query] (Type: integer)
  • page_size [query] (Type: integer)
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "example": {
    "filter": {
      "@and": [
        {
          "@eq": {
            "path": "name",
            "value": "GPT"
          }
        }
      ]
    }
  },
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "items": {
        "properties": {
          "id": {
            "format": "uuid",
            "type": "string"
          },
          "maxTokens": {
            "format": "int32",
            "type": "integer"
          },
          "modelId": {
            "format": "uuid",
            "type": "string"
          },
          "modelName": {
            "type": "string"
          },
          "name": {
            "type": "string"
          },
          "stopSequence": {
            "type": "string"
          },
          "systemPrompt": {
            "type": "string"
          },
          "temperature": {
            "format": "float",
            "type": "number"
          },
          "topP": {
            "format": "float",
            "type": "number"
          }
        },
        "required": [
          "id",
          "modelId",
          "modelName",
          "name"
        ],
        "type": "object"
      },
      "type": "array"
    },
    "pagination": {
      "properties": {
        "pageNumber": {
          "format": "uint32",
          "type": "integer"
        },
        "pageSize": {
          "format": "uint32",
          "type": "integer"
        },
        "totalElements": {
          "format": "uint32",
          "type": "integer"
        },
        "totalPages": {
          "format": "uint32",
          "type": "integer"
        }
      },
      "required": [
        "totalPages",
        "totalElements",
        "pageNumber",
        "pageSize"
      ],
      "type": "object"
    }
  },
  "required": [
    "data",
    "pagination"
  ],
  "type": "object"
}
application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

DELETE

/api/intelligence/llms/v1/profiles/{id} #

Delete profile by id

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses

This response does not return a message body.

application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

This response does not return a message body.

This response does not return a message body.

GET

/api/intelligence/llms/v1/profiles/{id} #

Get profile by id

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "maxTokens": {
          "format": "int32",
          "type": "integer"
        },
        "modelId": {
          "format": "uuid",
          "type": "string"
        },
        "modelName": {
          "type": "string"
        },
        "name": {
          "type": "string"
        },
        "stopSequence": {
          "type": "string"
        },
        "systemPrompt": {
          "type": "string"
        },
        "temperature": {
          "format": "float",
          "type": "number"
        },
        "topP": {
          "format": "float",
          "type": "number"
        }
      },
      "required": [
        "id",
        "modelId",
        "modelName",
        "name"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.

PATCH

/api/intelligence/llms/v1/profiles/{id} #

Patch profile

Parameters:
  • Authorization [header] — Bearer <access token> (Type: string)
  • x-organizationid [header] — Id of organization (Type: string)
  • x-projectid [header] — Id of project (Type: string)
Request Body (application/json; charset=utf-8):
{
  "properties": {
    "maxTokens": {
      "format": "int32",
      "type": "integer"
    },
    "name": {
      "type": "string"
    },
    "stopSequence": {
      "type": "string"
    },
    "systemPrompt": {
      "type": "string"
    },
    "temperature": {
      "format": "float",
      "type": "number"
    },
    "topP": {
      "format": "float",
      "type": "number"
    }
  },
  "type": "object"
}
Responses
application/json; charset=utf-8 Schema Definition
{
  "properties": {
    "data": {
      "properties": {
        "id": {
          "format": "uuid",
          "type": "string"
        },
        "maxTokens": {
          "format": "int32",
          "type": "integer"
        },
        "modelId": {
          "format": "uuid",
          "type": "string"
        },
        "modelName": {
          "type": "string"
        },
        "name": {
          "type": "string"
        },
        "stopSequence": {
          "type": "string"
        },
        "systemPrompt": {
          "type": "string"
        },
        "temperature": {
          "format": "float",
          "type": "number"
        },
        "topP": {
          "format": "float",
          "type": "number"
        }
      },
      "required": [
        "id",
        "modelId",
        "modelName",
        "name"
      ],
      "type": "object"
    }
  },
  "required": [
    "data"
  ],
  "type": "object"
}

This response does not return a message body.

This response does not return a message body.

application/json; charset=utf-8 Schema Definition
{
  "type": "string"
}

This response does not return a message body.
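
The PATCH body for profiles has no required fields, so a partial update sends only the fields being changed. A sketch that rejects fields the schema does not define and strips unset values:

```python
def profile_patch_body(**fields):
    """Keep only schema-defined profile fields that are actually set."""
    allowed = {"maxTokens", "name", "stopSequence", "systemPrompt",
               "temperature", "topP"}
    unknown = set(fields) - allowed
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return {k: v for k, v in fields.items() if v is not None}

body = profile_patch_body(temperature=0.7)  # updates only temperature
```
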