Tracking Experiments
Fathom provides seamless experiment tracking through its integration with the platform's experiment tracking service. This lets you log parameters, metrics, and metadata during training to ensure full reproducibility of your machine learning models.
How It Works
The platform does not require a custom logging library. Instead, it leverages the standard MLflow Python SDK (minimum version 3.1.4).
When you execute your training scripts via the Fathom CLI, the system automatically injects the necessary environment variables and authentication contexts. This ensures that all data logged via the SDK is correctly routed to your organization’s private experiment registry on the platform.
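As a minimal illustration of that mechanism: `MLFLOW_TRACKING_URI` is the standard environment variable the MLflow SDK reads to locate its backend, and the CLI exports it (along with authentication context) before your script starts. The URI below is a hypothetical placeholder, not a real endpoint.

```python
import os

# Illustration only: the Fathom CLI sets this variable before your script
# runs, so the MLflow SDK routes all logged data to the right backend.
# The value shown here is a made-up placeholder.
os.environ.setdefault("MLFLOW_TRACKING_URI", "https://mlflow.example.internal")

# Your training code never needs to read or set this explicitly;
# mlflow.* calls resolve the tracking server from the environment.
tracking_uri = os.environ["MLFLOW_TRACKING_URI"]
print(f"Tracking server: {tracking_uri}")
```

This is why the same script works unchanged both locally and on the platform: the routing lives entirely in the environment, not in your code.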
Integrated Workflow: Code & Run
The most efficient way to track an experiment is to write a standard Python script and execute it through the Fathom CLI's mlflow run wrapper. This ensures that your session is authenticated and linked to the correct project.
Write your Training Script
Create a file (e.g., train.py) using the standard MLflow library. The platform handles the backend connection automatically.
import mlflow

# Create (or reuse) the experiment
mlflow.set_experiment("fraud-detection-v1")

# Add or update tags on the experiment
mlflow.set_experiment_tags({
    "project_name": "Fraud Prevention",
    "team": "Data Science Core",
    "priority": "High",
})

with mlflow.start_run():
    # Log parameters (hyperparameters)
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 10)

    # Log metrics (performance)
    mlflow.log_metric("accuracy", 0.95)
    mlflow.log_metric("loss", 0.05)

    # Each run under an experiment can be tagged independently
    mlflow.set_tag("version", "1.0")

    # Log the model (Logged Model)
    # This makes the model visible in the Fathom Model Registry
    # mlflow.sklearn.log_model(sk_model, "model")

print("Run completed and logged to Fathom.")
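Beyond one-off values, MLflow's `log_metric` accepts a `step` argument, which the portal uses to chart training curves across epochs. The sketch below simulates per-epoch losses with a simple exponential decay (the loss values are synthetic for illustration); the commented calls show where the standard SDK logging would go inside a Fathom-authenticated run.

```python
import math

# Hypothetical training loop: loss decays exponentially with each epoch.
def simulate_losses(epochs: int, lr: float) -> list[float]:
    return [math.exp(-lr * e) for e in range(epochs)]

losses = simulate_losses(10, 0.5)

# Inside an active run, log each epoch's loss with a step index so the
# portal can render the full curve (standard MLflow API):
#
#     with mlflow.start_run():
#         for epoch, loss in enumerate(losses):
#             mlflow.log_metric("loss", loss, step=epoch)

print(f"final loss: {losses[-1]:.4f}")
```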
Local Setup
To quickly prepare your local environment, we recommend using a virtual environment:
python3 -m venv .venv
source .venv/bin/activate
pip install "mlflow>=3.1.4"
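To confirm the environment is ready, you can check the installed MLflow version against the minimum using only package metadata (this snippet uses the standard library and doesn't import mlflow itself; the three-part numeric comparison is a simplification that ignores pre-release suffixes):

```python
from importlib.metadata import version, PackageNotFoundError

def meets_minimum(installed: str, minimum: str) -> bool:
    # Compare the first three numeric components (e.g. "3.1.4" -> (3, 1, 4)).
    to_tuple = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return to_tuple(installed) >= to_tuple(minimum)

try:
    installed = version("mlflow")
    status = "ok" if meets_minimum(installed, "3.1.4") else "too old"
    print(f"mlflow {installed}: {status}")
except PackageNotFoundError:
    print("mlflow is not installed; run the pip command above")
```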
Execute with Fathom Context
To run your script and ensure the MLflow context is correctly injected, use the fathom intelligence mlflow run command. This command wraps your execution and handles all backend communication.
fathom intelligence mlflow run <COMMAND>
To execute your local Python script (i is a shorthand for intelligence):
fathom i mlflow run python3 train.py
Accessing Results
Metrics and experiment history are accessible via the intelligence platform portal.
Key Benefits
The CLI automatically manages MLFLOW_TRACKING_URI and authentication tokens, so there is no need to hardcode credentials or endpoints.
Use the tools you already know (Python, Scikit-learn, PyTorch) without custom Fathom-specific logging libraries.
Models logged during training are immediately visible in the platform and ready for deployment.