Delta is an open-source machine learning framework written in Rust.

Coming soon: Nebula, a package manager for machine learning datasets and models.

Features

Delta

  • Fast

    Built with Rust, Delta is designed for high performance, making it ideal for compute-intensive machine learning tasks.

  • Usability

    APIs are designed for simplicity, making it easy for beginners to get started while providing advanced customization options for experienced users.

  • Extensibility

    The framework is modular, allowing users to plug in custom layers, optimizers, or preprocessing pipelines tailored to their unique needs.

  • Efficient and Scalable Tools

    It provides highly efficient and scalable tools for building and training neural networks, supporting both small-scale experiments and large-scale production systems.

  • Distributed and Parallel Training

    Future

    Native support for distributed and parallel training ensures that Delta scales effortlessly across multi-core systems and cloud environments.

  • Classical ML

    Future

    Includes support for classical ML algorithms such as decision trees, random forests, SVMs and more.

  • Integration with Nebula

    Future

    Direct access to datasets and models managed by the Nebula registry, public or private.
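
The extensibility described above (plugging in custom layers) can be sketched in plain Rust. The `Layer` trait, `Scale` layer, and `Sequential` container below are simplified illustrative stand-ins, not Delta's actual API:

```rust
// Hypothetical sketch of a pluggable layer abstraction; the trait and
// types here are illustrative stand-ins, not Delta's real API.

/// Anything that can transform a tensor (simplified here to a slice of f32).
trait Layer {
    fn forward(&self, input: &[f32]) -> Vec<f32>;
}

/// A user-defined custom layer: scales every element by a constant.
struct Scale {
    factor: f32,
}

impl Layer for Scale {
    fn forward(&self, input: &[f32]) -> Vec<f32> {
        input.iter().map(|x| x * self.factor).collect()
    }
}

/// A minimal sequential container that accepts any Layer implementation.
struct Sequential {
    layers: Vec<Box<dyn Layer>>,
}

impl Sequential {
    fn new() -> Self {
        Sequential { layers: Vec::new() }
    }

    fn add(mut self, layer: impl Layer + 'static) -> Self {
        self.layers.push(Box::new(layer));
        self
    }

    fn forward(&self, input: Vec<f32>) -> Vec<f32> {
        self.layers
            .iter()
            .fold(input, |acc, layer| layer.forward(&acc))
    }
}

fn main() {
    // Plug two custom layers into the container.
    let model = Sequential::new()
        .add(Scale { factor: 2.0 })
        .add(Scale { factor: 0.5 });

    let output = model.forward(vec![1.0, 2.0, 3.0]);
    assert_eq!(output, vec![1.0, 2.0, 3.0]); // 2.0 * 0.5 cancels out
    println!("{:?}", output);
}
```

Because layers are trait objects, any user type implementing the trait slots into the pipeline without modifying the framework.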

Nebula

  • Command-line tool

    Future

    Manage datasets and models directly from a powerful CLI, providing full control over your workflow without leaving the terminal.

  • Virtual environments

    Future

    Run multiple ML projects on the same machine without conflicts, ensuring that dependencies are isolated for seamless development.

  • Dataset management

    Future

    Organize datasets efficiently by metadata, versions, variants, dependencies, and lifecycles, enabling easy tracking and reproducibility.

  • Pretrained models

    Future

    Access and manage pretrained models with versioning and adaptations, enabling easy integration into your projects and reducing time spent on training.

  • Template projects

    Future

    Use prebuilt templates based on the Delta framework for faster setup, allowing you to quickly begin experiments with minimal configuration.

  • Public registry

    Future

    Browse datasets and models shared by the community in the Nebula registry, ensuring access to high-quality resources for your projects.

  • Private registry

    Future

    Host your own Nebula registry for secure and confidential work, keeping sensitive data and models private while maintaining efficient access management.

Roadmap

  • 2025 Q2

    MVP of Delta

Get Started

Adding Delta to Your Project

To add the Delta library to your Rust project, you need to include it in your Cargo.toml file. Follow these steps:

  1. Open your project’s Cargo.toml file.
  2. Add the following line under [dependencies]:
Cargo.toml
[dependencies]
deltaml = "0.1.0"

Delta is currently published on crates.io as deltaml. Note that the project is still experimental and in an alpha stage, so APIs may change or break in upcoming iterations.

1. Create the main Function

We start with a minimal asynchronous main function using the #[tokio::main] attribute.

src/main.rs
#[tokio::main]
async fn main() {
    println!("Starting the Delta example...");
}

2. Define a Neural Network

Next, we create a neural network using Delta’s Sequential model.

src/main.rs
let mut model = Sequential::new()
    .add(Flatten::new(Shape::from(IxDyn(&[32, 32, 3])))) // CIFAR-10: 32x32x3 -> 3072
    .add(Dense::new(128, Some(ReluActivation::new()), true)) // Input: 3072, Output: 128
    .add(Dense::new(10, Some(SoftmaxActivation::new()), false)); // Output: 10 classes

model.summary();
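
As a quick sanity check on the shapes in the comments above (independent of Delta itself, and assuming the boolean argument to Dense toggles the bias term), the flattened input size and the layers' parameter counts work out as follows:

```rust
fn main() {
    // CIFAR-10 images are 32x32 pixels with 3 color channels.
    let flattened = 32 * 32 * 3;
    assert_eq!(flattened, 3072);

    // Dense(3072 -> 128) with a bias term: weights plus biases.
    let dense1_params = flattened * 128 + 128;
    assert_eq!(dense1_params, 393_344);

    // Dense(128 -> 10) without a bias term: weights only.
    let dense2_params = 128 * 10;
    assert_eq!(dense2_params, 1280);

    println!("total trainable parameters: {}", dense1_params + dense2_params);
}
```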

3. Compile the Model

Before training, we need to compile the model by defining the optimizer and loss function.

src/main.rs
let optimizer = Adam::new(0.001);
model.compile(optimizer, MeanSquaredLoss::new());
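
For intuition on what Adam with a learning rate of 0.001 does, the sketch below implements the standard single-parameter Adam update rule (running averages of the gradient and its square, with bias correction). It illustrates the algorithm only; it is not Delta's implementation:

```rust
// A minimal sketch of the Adam update rule for one parameter.
// Illustrative only; not Delta's implementation.
struct Adam {
    lr: f64,    // learning rate (0.001 in the example above)
    beta1: f64, // decay rate for the first-moment estimate
    beta2: f64, // decay rate for the second-moment estimate
    eps: f64,   // small constant for numerical stability
    m: f64,     // running mean of gradients
    v: f64,     // running mean of squared gradients
    t: i32,     // timestep
}

impl Adam {
    fn new(lr: f64) -> Self {
        Adam { lr, beta1: 0.9, beta2: 0.999, eps: 1e-8, m: 0.0, v: 0.0, t: 0 }
    }

    fn step(&mut self, param: f64, grad: f64) -> f64 {
        self.t += 1;
        self.m = self.beta1 * self.m + (1.0 - self.beta1) * grad;
        self.v = self.beta2 * self.v + (1.0 - self.beta2) * grad * grad;
        // Bias-corrected moment estimates.
        let m_hat = self.m / (1.0 - self.beta1.powi(self.t));
        let v_hat = self.v / (1.0 - self.beta2.powi(self.t));
        param - self.lr * m_hat / (v_hat.sqrt() + self.eps)
    }
}

fn main() {
    // Minimize f(x) = x^2 (gradient 2x) starting from x = 1.0.
    let mut opt = Adam::new(0.001);
    let mut x = 1.0_f64;
    for _ in 0..2000 {
        x = opt.step(x, 2.0 * x);
    }
    assert!(x.abs() < 0.1); // moved toward the minimum at 0
    println!("x after 2000 steps: {:.4}", x);
}
```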

4. Load the Dataset

Now, we load the CIFAR-10 dataset for training, validation, and testing.

src/main.rs
let mut train_data = Cifar10Dataset::load_train().await;
let val_data = Cifar10Dataset::load_val().await;
let test_data = Cifar10Dataset::load_test().await;
println!("Train dataset size: {}", train_data.len());

5. Train the Model

We train the model using the loaded training data.

src/main.rs
let epochs = 10;
let batch_size = 32;

match model.fit(&mut train_data, epochs, batch_size) {
    Ok(_) => println!("Model trained successfully"),
    Err(e) => println!("Failed to train model: {}", e),
}
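
Independent of Delta, it is worth knowing how much work this run performs: CIFAR-10's training split has 50,000 images, so assuming fit iterates the dataset with a smaller final batch for the remainder, each epoch runs ceil(50000 / 32) batches:

```rust
fn main() {
    let train_len: usize = 50_000; // CIFAR-10 training split size
    let batch_size: usize = 32;
    let epochs: usize = 10;

    // Ceiling division: a final, smaller batch covers the remainder.
    let batches_per_epoch = (train_len + batch_size - 1) / batch_size;
    assert_eq!(batches_per_epoch, 1563);

    let total_steps = batches_per_epoch * epochs;
    assert_eq!(total_steps, 15630);
    println!("{} update steps in total", total_steps);
}
```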

6. Validate the Model

After training, we validate the model using the validation dataset.

src/main.rs
match model.validate(&val_data, batch_size) {
    Ok(validation_loss) => println!("Validation Loss: {:.6}", validation_loss),
    Err(e) => println!("Failed to validate model: {}", e),
}

7. Evaluate the Model

Finally, we evaluate the model on the test dataset.

src/main.rs
let accuracy = model
    .evaluate(&test_data, batch_size)
    .expect("Failed to evaluate the model");
println!("Test Accuracy: {:.2}%", accuracy * 100.0);
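
The accuracy comes back as a fraction in [0, 1], which is why the example scales it by 100. Conceptually, classification accuracy is just argmax-matches over total examples; the self-contained sketch below shows the idea (it is not Delta's code):

```rust
/// Index of the largest value: the predicted class for one softmax output.
fn argmax(row: &[f32]) -> usize {
    row.iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}

/// Fraction of predictions whose argmax matches the true label.
fn accuracy(predictions: &[Vec<f32>], labels: &[usize]) -> f32 {
    let mut correct = 0;
    for (p, &l) in predictions.iter().zip(labels.iter()) {
        if argmax(p) == l {
            correct += 1;
        }
    }
    correct as f32 / labels.len() as f32
}

fn main() {
    // Three toy softmax outputs over 3 classes, with true labels [0, 2, 1].
    let preds = vec![
        vec![0.7, 0.2, 0.1], // argmax 0 -> correct
        vec![0.1, 0.3, 0.6], // argmax 2 -> correct
        vec![0.5, 0.4, 0.1], // argmax 0 -> wrong (label is 1)
    ];
    let labels = vec![0, 2, 1];
    let acc = accuracy(&preds, &labels);
    assert!((acc - 2.0_f32 / 3.0).abs() < 1e-6);
    println!("Test Accuracy: {:.2}%", acc * 100.0);
}
```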

8. Save the Model

Once satisfied with the model, we save it to a file for later use.

src/main.rs
model.save("model_path").expect("Failed to save the model");