Features
Delta
Fast
Built with Rust, Δ is designed for high performance, making it ideal for compute-intensive machine learning tasks.
Usability
APIs are designed for simplicity, making it easy for beginners to get started while providing advanced customization options for experienced users.
Extensibility
The framework is modular, allowing users to plug in custom layers, optimizers, or preprocessing pipelines tailored to their unique needs.
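As a generic illustration of this pluggable design (plain Rust, not Delta's actual trait names), custom layers can be composed behind a common interface:

```rust
// A generic sketch of a pluggable-layer design (not Delta's actual API).
trait Layer {
    fn forward(&self, input: &[f64]) -> Vec<f64>;
}

// A trivial custom layer that scales every element by a constant factor.
struct Scale(f64);

impl Layer for Scale {
    fn forward(&self, input: &[f64]) -> Vec<f64> {
        input.iter().map(|x| x * self.0).collect()
    }
}

fn main() {
    // Layers compose behind the common trait, regardless of their concrete type.
    let pipeline: Vec<Box<dyn Layer>> = vec![Box::new(Scale(2.0)), Box::new(Scale(0.5))];
    let mut data = vec![1.0, 2.0, 3.0];
    for layer in &pipeline {
        data = layer.forward(&data);
    }
    println!("{:?}", data); // scaling by 2.0 then 0.5 returns the original values
}
```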
Efficient and Scalable Tools
It provides highly efficient and scalable tools for building and training neural networks, supporting both small-scale experiments and large-scale production systems.
Distributed and Parallel Training
(Future) Native support for distributed and parallel training ensures that Delta scales effortlessly across multi-core systems and cloud environments.
Classical ML
(Future) Support for classical ML algorithms such as decision trees, random forests, SVMs, and more.
Integration with Nebula
(Future) Direct access to datasets and models managed by the Nebula registry, public or private.
Nebula
Command-line tool
(Future) Manage datasets and models directly from a powerful CLI, providing full control over your workflow without leaving the terminal.
Virtual environments
(Future) Run multiple ML projects on the same machine without conflicts, ensuring that dependencies are isolated for seamless development.
Dataset management
(Future) Organize datasets efficiently by metadata, versions, variants, dependencies, and lifecycles, enabling easy tracking and reproducibility.
Pretrained models
(Future) Access and manage pretrained models with versioning and adaptations, enabling easy integration into your projects and reducing time spent on training.
Template projects
(Future) Use prebuilt templates based on the Delta framework for faster setup, allowing you to quickly begin experiments with minimal configuration.
Public registry
(Future) Browse datasets and models shared by the community in the Nebula registry, ensuring access to high-quality resources for your projects.
Private registry
(Future) Host your own Nebula registry for secure and confidential work, keeping sensitive data and models private while maintaining efficient access management.
Roadmap
2025 Q2
MVP of Delta
Get Started
Adding Delta to Your Project
To add the Delta library to your Rust project, include it in your Cargo.toml file. Follow these steps:

- Open your project's Cargo.toml file.
- Add the following line under [dependencies]:

```toml
[dependencies]
deltaml = "0.1.0"
```

Delta is currently published as the deltaml crate. Note that it is still experimental and in an alpha stage, so things might break in upcoming iterations.
1. Create the main Function
We start with an empty asynchronous main function using #[tokio::main]:

```rust
#[tokio::main]
async fn main() {
    println!("Starting the Delta example...");
}
```

2. Define a Neural Network
Next, we create a neural network using Delta’s Sequential model.
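As a quick sanity check before building the model (plain Rust, no Delta dependency), each 32x32x3 CIFAR-10 image flattens to a 3072-element vector, which is the input size the first Dense layer receives:

```rust
fn main() {
    // CIFAR-10 images are 32x32 pixels with 3 color channels.
    let (height, width, channels) = (32usize, 32usize, 3usize);
    let flattened = height * width * channels;
    println!("Flattened input size: {}", flattened); // prints 3072
    assert_eq!(flattened, 3072);
}
```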
```rust
let mut model = Sequential::new()
    .add(Flatten::new(Shape::from(IxDyn(&[32, 32, 3])))) // CIFAR-10: 32x32x3 -> 3072
    .add(Dense::new(128, Some(ReluActivation::new()), true)) // Input: 3072, Output: 128
    .add(Dense::new(10, Some(SoftmaxActivation::new()), false)); // Output: 10 classes

model.summary();
```

3. Compile the Model
Before training, we need to compile the model by defining the optimizer and loss function.
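For reference, mean squared error, the loss used in this example, is the average of the squared differences between predictions and targets. A minimal standalone sketch of that computation (plain Rust, independent of Delta):

```rust
fn main() {
    // Toy predictions and targets to illustrate mean squared error.
    let predictions = [0.9_f64, 0.1, 0.8];
    let targets = [1.0_f64, 0.0, 1.0];

    // MSE = mean over samples of (prediction - target)^2.
    let mse: f64 = predictions
        .iter()
        .zip(targets.iter())
        .map(|(p, t)| (p - t).powi(2))
        .sum::<f64>()
        / predictions.len() as f64;

    println!("MSE: {:.4}", mse); // prints 0.0200
}
```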
```rust
let optimizer = Adam::new(0.001);
model.compile(optimizer, MeanSquaredLoss::new());
```

4. Load the Dataset
Now, we load the CIFAR-10 dataset for training, validation, and testing.
```rust
let mut train_data = Cifar10Dataset::load_train().await;
let val_data = Cifar10Dataset::load_val().await;
let test_data = Cifar10Dataset::load_test().await;

println!("Train dataset size: {}", train_data.len());
```

5. Train the Model
We train the model using the loaded training data.
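With a batch size of 32, each epoch processes ceil(N / 32) mini-batches; for CIFAR-10's 50,000 training images that is 1,563 batches, the last one partial. A plain-Rust illustration of that arithmetic:

```rust
fn main() {
    let train_len = 50_000usize; // CIFAR-10 training set size
    let batch_size = 32usize;

    // Integer ceiling division: batches needed to cover the dataset once.
    let batches_per_epoch = (train_len + batch_size - 1) / batch_size;

    println!("Batches per epoch: {}", batches_per_epoch); // prints 1563
    assert_eq!(batches_per_epoch, 1563);
}
```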
```rust
let epoch = 10;
let batch_size = 32;

match model.fit(&mut train_data, epoch, batch_size) {
    Ok(_) => println!("Model trained successfully"),
    Err(e) => println!("Failed to train model: {}", e),
}
```

6. Validate the Model
After training, we validate the model using the validation dataset.
```rust
match model.validate(&val_data, batch_size) {
    Ok(validation_loss) => println!("Validation Loss: {:.6}", validation_loss),
    Err(e) => println!("Failed to validate model: {}", e),
}
```

7. Evaluate the Model
Finally, we evaluate the model on the test dataset.
```rust
let accuracy = model.evaluate(&test_data, batch_size).expect("Failed to evaluate the model");
println!("Test Accuracy: {:.2}%", accuracy * 100.0);
```

8. Save the Model
Once satisfied with the model, we save it to a file for later use.
```rust
model.save("model_path").unwrap();
```