Custom Models
1 - Tracking Experiments
Fathom integrates with the platform's experiment tracking service, allowing you to log parameters, metrics, and metadata during training to ensure full reproducibility of your machine learning models.
How It Works
The platform does not require a custom logging library. Instead, it leverages the standard MLflow Python SDK (version 3.1.4 or higher required).
When you execute your training scripts via the Fathom CLI, the system automatically injects the necessary environment variables and authentication contexts. This ensures that all data logged via the SDK is correctly routed to your organization’s private experiment registry on the platform.
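Because the CLI injects the tracking configuration as environment variables, a script can check whether it is running inside a wrapped session before logging anything. A minimal sketch, assuming the standard MLFLOW_TRACKING_URI variable that the CLI manages:

```python
import os

def tracking_context_present() -> bool:
    """Return True if an MLflow tracking endpoint is configured in the
    environment (as injected by the Fathom CLI wrapper)."""
    return bool(os.environ.get("MLFLOW_TRACKING_URI"))

# Inside a wrapped run this prints the injected endpoint; outside it warns.
uri = os.environ.get("MLFLOW_TRACKING_URI")
print(uri if uri else "No tracking URI set - run this script via the Fathom CLI wrapper.")
```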
Integrated Workflow: Code & Run
The most efficient way to track an experiment is to write a standard Python script and execute it via the fathom intelligence mlflow run wrapper. This ensures that your session is authenticated and linked to the correct project.
Write your Training Script
Create a file (e.g., train.py) using the standard MLflow library. The platform handles the backend connection automatically.
import mlflow

# Create or select the experiment
mlflow.set_experiment("fraud-detection-v1")

# Add or update tags on the created experiment
mlflow.set_experiment_tags({
    "project_name": "Fraud Prevention",
    "team": "Data Science Core",
    "priority": "High"
})

with mlflow.start_run():
    # Log parameters (hyperparameters)
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 10)

    # Log metrics (performance)
    mlflow.log_metric("accuracy", 0.95)
    mlflow.log_metric("loss", 0.05)

    # You can tag each run under an experiment independently
    mlflow.set_tag("version", "1.0")

    # Log the model (Logged Model)
    # This makes the model visible in the Fathom Model Registry
    # mlflow.sklearn.log_model(sk_model, "model")

print("Run completed and logged to Fathom.")
Local Setup
To quickly prepare your local environment, we recommend using a virtual environment:
python3 -m venv .venv
source .venv/bin/activate
pip install "mlflow>=3.1.4"
Execute with Fathom Context
To run your script and ensure the MLflow context is correctly injected, use the fathom intelligence mlflow run command. It wraps your execution and handles all backend communication.
fathom intelligence mlflow run <COMMAND>
To execute your local Python script:
fathom i mlflow run python3 train.py
Accessing Results
Metrics and experiment history are accessible via the intelligence platform portal.
Key Benefits
The CLI automatically manages MLFLOW_TRACKING_URI and authentication tokens. No need to hardcode credentials or endpoints.
Use the tools you already know (Python, Scikit-learn, PyTorch) without custom Fathom-specific logging libraries.
Models logged during training are immediately visible in the platform and ready for deployment.
Requirements
The Fathom integration requires MLflow SDK version 3.1.4 or higher. Check your installed version using pip show mlflow.
2 - Model Registry
The Model Registry is a centralized repository where your trained machine learning models are stored, versioned, and prepared for deployment. Models enter the registry primarily through the mlflow tracking integration.
Registering a Model
To ensure maximum interoperability and performance, we recommend exporting models to the ONNX format. Below is a practical example using a small, public dataset (Iris) to train a model and push it to the Model Registry.
The following script trains a simple classifier and logs it as an ONNX artifact.
import mlflow
import mlflow.onnx
import numpy as np
from mlflow.models.signature import ModelSignature
from mlflow.types.schema import Schema, TensorSpec
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# 1. Prepare data and train a small model
iris = load_iris()
X, y = iris.data, iris.target
model = RandomForestClassifier(n_estimators=10)
model.fit(X, y)

# Declare the exact tensor specifications expected by the serving engine
input_schema = Schema([
    TensorSpec(np.dtype(np.float32), [1, 4], name="float_input")
])
output_schema = Schema([
    TensorSpec(np.dtype(np.int64), [-1], name="label")
])
signature = ModelSignature(inputs=input_schema, outputs=output_schema)

# 2. Convert the Scikit-learn model to ONNX format
initial_type = [('float_input', FloatTensorType([None, 4]))]
options = {type(model): {'zipmap': False}}
onnx_model = convert_sklearn(model, initial_types=initial_type, options=options, target_opset=17)

mlflow.set_experiment("test-experiment")

# Add or update tags on the created experiment
mlflow.set_experiment_tags({
    "project_name": "Fraud Prevention",
    "team": "Data Science Core",
    "priority": "High"
})

# 3. Log to Fathom via MLflow
with mlflow.start_run():
    # Log hyperparameters for context
    mlflow.log_param("n_estimators", 10)

    # You can tag each run under an experiment independently
    mlflow.set_tag("version", "1.0")

    # Register the model in the registry
    mlflow.onnx.log_model(
        onnx_model=onnx_model,
        artifact_path="iris_classifier",
        signature=signature,
        input_example=X[:1]
    )

print("Model successfully pushed to the Fathom Registry.")
Dependencies
To use the example above, ensure you have the conversion libraries installed in your environment:
pip install skl2onnx onnxruntime
Why Signatures Matter
A missing or incorrect signature is the most common cause of deployment failures. The platform requires exact tensor specifications (dtype and shape) to prepare the serving infrastructure.
Version Compatibility
Unsupported model IR version? If your deployment fails with an “Unsupported IR version” error, it means your local onnx library is newer than the platform’s runtime. Fix: always specify target_opset=17 (or lower) when converting models to ONNX to ensure compatibility with the production Inference Engine.
Dimension Mismatch (1 vs 2)
If your deployment fails with a “1 dimension vs 2” error, it means the auto-batching logic is conflicting with your flat ONNX model. Fix: set the first dimension of your input to a fixed number (e.g., 1) in the TensorSpec. This disables implicit batching, allowing the engine to map your 1D model correctly.
Naming Convention
The default output name for Scikit-learn classifiers in ONNX is label. Ensure your output_schema uses this exact name. Using custom names like output_label will result in an “Invalid argument” error during inference.
Execution
Run the script using the Fathom CLI to ensure the registry context is correctly injected:
fathom intelligence mlflow run python3 train_onnx.py
Listing models
Once the script finishes, you can confirm that the model was received and stored correctly by querying the platform’s model list. This ensures your model is now an immutable asset ready for deployment.
fathom intelligence machine-learning model list
3 - Model Deployment
Model Deployment is the final step in the machine learning lifecycle. It takes a versioned artifact from the Model Registry and wraps it into a high-performance, scalable endpoint ready to serve real-time predictions.
Deploying a Registered Model
To deploy a model, you need the id of the logged model (which you obtained in the previous step). The deployment process allocates the necessary computational resources (CPU, RAM, or GPU) and sets up the inference runtime.
Create a Deployment
Use the deployment create command to launch your model. You must specify the model ID and the desired serving size.
fathom intelligence machine-learning deployment create logged-model --model-id 6174cc98-55fb-4818-9370-f75cafade62e --name "iris-classifier" --description "Production endpoint for Iris flower classification" --serving-size small
| Option | Requirement | Description |
|---|---|---|
| --model-id | Required | The UUID of the model from the registry. |
| --name | Required | A unique name for your deployment. |
| --serving-size | Optional | Resource tier: small, large, or extra-large. |
| --serving-gpu | Optional | Attach a GPU for heavy models (nvidia-l4, nvidia-l4-2x). |
Tag deployment
Use the deployment tag command to tag your deployment; you must specify the deployment ID. Tags can be removed with deployment untag.
Monitoring Deployment Status
Deployments happen asynchronously. After creating one, you should monitor its state to ensure it transitions to running:
fathom intelligence machine-learning deployment list
Example output of the command run with the --watch option, showing the state transition:
id                                   | created_at                     | name            | kind          | description                                        | status  | state | tags
-------------------------------------+--------------------------------+-----------------+---------------+----------------------------------------------------+---------+-------+-----------
379f103f-45cd-4c00-aec3-0fa4af756cae | 2026-03-25 08:06:17.003811 UTC | iris-classifier | logged-models | Production endpoint for Iris flower classification | pending | N/A   | production

id                                   | created_at                     | name            | kind          | description                                        | status  | state | tags
-------------------------------------+--------------------------------+-----------------+---------------+----------------------------------------------------+---------+-------+-----------
379f103f-45cd-4c00-aec3-0fa4af756cae | 2026-03-25 08:06:17.003811 UTC | iris-classifier | logged-models | Production endpoint for Iris flower classification | running | hot   | production
Resource Sizing
For the Iris Classifier (ONNX), a small serving size is more than sufficient. Choose large or attach a GPU only for complex models.
Updating a Deployment
Once a deployment is running, you can update it to point to a new version of your model (e.g., a newly trained logged-model-id) or change its resource allocation (e.g., upgrading from small to large).
The platform performs a rolling update, ensuring that your endpoint remains available while the new model version is being provisioned.
fathom intelligence machine-learning deployment update <DEPLOYMENT_ID> logged-model <OPTIONS>
| Option | Description |
|---|---|
| --model-id | The new Logged Model UUID from the registry. |
| --name | Update the display name of the deployment. |
| --description | Update the deployment’s metadata/description. |
| --serving-size | Scale resources (small, large, extra-large). |
| --serving-gpu | Change or add a GPU accelerator. |
Example: Update Logged Model
To promote a new model version to an existing deployment, use the update logged-model command. You will need the Deployment ID and the new Model ID.
fathom intelligence machine-learning deployment update 3cdec2ec-f51e-420c-937a-6c65af770084 logged-model --model-id 93096f6a-3a8a-4315-bc18-615ef72c7bcc
Model Inference
Once your deployment is in the running and hot state, you can begin making predictions. Fathom Intelligence supports three primary inference modes depending on your model type: General Tensor Inference, Chat Completions, and Embeddings.
General Tensor Inference (V2 Protocol)
This mode is used for classic ML models (Scikit-learn, ONNX, XGBoost) and computer vision. It follows the NVIDIA Triton V2 Predict Protocol.
Pipe via Standard Input (Recommended for Scripts)
You can pipe a JSON payload directly into the CLI. This is ideal for integration with tools like jq or automated data pipelines.
echo '{
"inputs": [
{
"name": "float_input",
"shape": [1, 4],
"datatype": "FP32",
"data": [7.0, 3.2, 4.7, 1.4]
}
],
"outputs": [
{
"name": "label"
}
]
}' | fathom intelligence machine-learning deployment infer <DEPLOYMENT_ID> --data -
Inline JSON Payload
For quick manual testing, you can pass the full JSON object directly as a string.
fathom intelligence machine-learning deployment infer <DEPLOYMENT_ID> --data '{
"inputs": [
{
"name": "float_input",
"shape": [1, 4],
"datatype": "FP32",
"data": [5.1, 3.5, 1.4, 0.2]
}
],
"outputs": [
{
"name": "label"
}
]
}'
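When generating these payloads programmatically, it is easy to let shape and data drift out of sync. A small helper (hypothetical, not part of the CLI) that builds a V2 request body from a batch of rows and checks that the flattened data matches the declared shape; it assumes the model's output tensor is named label, as in the Iris example:

```python
import json
from math import prod

def v2_payload(name: str, rows: list, datatype: str = "FP32") -> str:
    """Build a Triton V2 inference request body from a batch of rows."""
    shape = [len(rows), len(rows[0])]
    data = [v for row in rows for v in row]  # row-major flattening
    assert len(data) == prod(shape), "shape and data length must agree"
    return json.dumps({
        "inputs": [{"name": name, "shape": shape, "datatype": datatype, "data": data}],
        "outputs": [{"name": "label"}],
    })

print(v2_payload("float_input", [[5.1, 3.5, 1.4, 0.2]]))
```

The resulting string can be piped straight into the infer command shown above, e.g. via --data -.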
Protocol Compatibility
MLflow ONNX Models (like the Iris Classifier we registered earlier) strictly support the Triton Inference Protocol via the infer command.
Generative commands like chat and embed are reserved for LLMs and specialized transformers (e.g., from Hugging Face), which will be covered in the following sections.