
A journey to scripting Rust: Array Rank Polymorphism

For a while now, Rust has been my preferred programming language for a broad range of tasks - from developing the static site generator that powers this website, to designing various work-related web applications. However, one area where Rust seems to lag behind is in the realm of quick and dirty scripting for numerical analysis, simulations or visualizations. When compared to programming languages such as Python and Julia, Rust appears to fall short in this regard. Of course, it’s true that Rust was not initially designed for scripting purposes. Nevertheless, why not dream big and imagine a future where Rust can excel in all areas of programming? Let’s push the boundaries of what’s possible with Rust.

From my experience, two key factors make scripting with Python and Julia a seamless and enjoyable process: a versatile, user-friendly way to handle N-dimensional arrays and common math and linear algebra operations, and a straightforward, effective visualization tool. For Python, of course, these are Numpy and Matplotlib. And Julia has superb N-dimensional array support out of the box and a lot of visualization and plotting tools. Rust, however, appears to be lacking in both regards, particularly in terms of user-friendliness and ergonomics. Yeah, we have Nalgebra, which is performant and feature-rich. But if you are thinking of scripting with Nalgebra, be prepared for what may well be an arduous task.

Let’s face it - working on programming language semantics and ergonomics in the era of ChatGPT and Copilot is a bit like working on vacuum tube technology in the late 60s. But hey, I’m all about enjoying the ride while it lasts. That is why, for the past month, I have used all my free time to prove that a library with Numpy-level ergonomics is possible in Rust. And let me tell you, as a proof of concept, I’m proud of what I have achieved. Sure, there were plenty of bumps in the road and many questionable decisions made along the way. But I have a feeling that some of you will be pleasantly surprised to see what is possible.

There will be two main sections in this article. First, I present this little thing I call rapl and some of its cool features. Then, in the second section, I will get into some of the internals of the crate and some of my design decisions.

Section 1: rapl features

NOTE: rapl requires Nightly and is not meant to be used in production at all. It uses some unstable features that might cause unexpected behavior, and it is not optimized for performance.

Rank polymorphism

For me, one of the most important (and most difficult to implement) features is automatic rank-polymorphic broadcasting for arithmetic operations, along with scalar extension. This allows us to do operations with arrays of different ranks. For example, let’s say we have two arrays a and b with ranks $R_a = 1$ and $R_b = 2$, and a scalar s, equivalent to a rank-zero array.

$$a_{[3]} = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \qquad b_{[3,3]} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \qquad s_{[\,]} = 1$$

Rank polymorphism allows us to perform $r = a + b - s$ without manually broadcasting each array to be compatible with the others. It works by finding the smallest compatible shape (if it exists) and broadcasting all the arrays to that shape, which in this case is $[3,3]$. That is:

$$a_{[3,3]} = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \\ 1 & 2 & 3 \end{bmatrix} \qquad b_{[3,3]} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \qquad s_{[3,3]} = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}$$

Now it is quite easy to perform the operation:

$$r_{[3,3]} = a + b - s = \begin{bmatrix} (1+1-1) & (2+2-1) & (3+3-1) \\ (4+1-1) & (5+2-1) & (6+3-1) \\ (7+1-1) & (8+2-1) & (9+3-1) \end{bmatrix} = \begin{bmatrix} 1 & 3 & 5 \\ 4 & 6 & 8 \\ 7 & 9 & 11 \end{bmatrix}$$

And yes! This is possible in Rust using rapl. First, open a Rust project using nightly and add rapl as a dependency.

shell
cargo add rapl

Now in main.rs we can start to have fun.

Rust
#![feature(generic_const_exprs)]
use rapl::*;

fn main() {
    let a = Ndarr::from([1, 2, 3]);
    let b = Ndarr::from([[1, 2, 3], [4, 5, 6], [7, 8, 9]]);
    let r = a + b - 1;
    assert_eq!(r, Ndarr::from([[1, 3, 5], [4, 6, 8], [7, 9, 11]]));
}

Cute, isn’t it? And this not only works for arrays of rank 1 and 2; it actually works for arrays of arbitrary dimension:

Rust
    let c = Ndarr::from(1..9).reshape(&[2, 2, 2]).unwrap();
    assert_eq!(c * 2, Ndarr::from([[[2, 4], [6, 8]], [[10, 12], [14, 16]]]));

As you can see, there are multiple handy ways of initializing an Ndarr.

Tensor operations

rapl also supports commonly used tensor operations like:
- Transpose: Transposes an Ndarr.
- Reduce: Given an axis and a dyadic function Fn(T, T) -> T, reduces the rank of the Ndarr along that axis.
- Slice: Given an Ndarr of rank R and an axis, returns a vector of Ndarrs of rank R-1 which are slices of the original array along that dimension.
- Reshape: Given an Ndarr and a shape &[usize; N], tries to reshape it if the original shape and the new shape are compatible.
In Rust:

Rust
let arr = Ndarr::from([[1, 2, 3], [4, 5, 6]]);
assert_eq!(arr.shape(), [2, 3]);
assert_eq!(arr.clone().t().shape(), [3, 2]); // transpose

let sum_axis = arr.clone().reduce(1, |x, y| x + y).unwrap();
assert_eq!(sum_axis, Ndarr::from([6, 15])); // sum reduction

let reshape_arr = arr.reshape(&[6]).unwrap();
assert_eq!(reshape_arr, Ndarr::from(1..7)); // reshaping

And there are other primitive tensor operations, like Broadcasting and Extension, that I won’t get into here.

Element-wise operations

There is also support for the essential element-wise operations: trigonometric functions, absolute value, and so on. And it is easy to implement more functions thanks to Map.

Rust
    let x = Ndarr::from([-1.0, -0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8, 1.0]);
    //let sin_x = &x.sin();
    //let cos_x = &x.cos();
    //let tanh_x = &x.tanh();
    let abs_x = &x.abs();
    let sum_abs = abs_x.reduce(0, |x, y| x + y).unwrap().scalar();
    assert_eq!(sum_abs, 6.0);
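And to show what extending it could look like, here is a minimal sketch, assuming Map exposes a map method that applies a closure to each element (the exact signature is my assumption, not confirmed rapl API):

Rust
    // Hedged sketch: assuming `map` applies a closure element-wise.
    // The method signature here is an assumption for illustration.
    let x = Ndarr::from([1.0, 2.0, 3.0]);
    let squared = x.map(|v| v * v);
    assert_eq!(squared, Ndarr::from([1.0, 4.0, 9.0]));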

Dyadic functions

The next logical step was matrix multiplication. But I’m the kind of person who loves to overcomplicate things, so instead of a nice, fast matrix multiplication operation that would handle most use cases, I decided to go all out and create an overly generalized, inefficient _Inner Product_ operation inspired by the likes of APL (you know, the hieroglyph language). I won’t bore you with all the details of what an Inner Product actually does; just think of it as a generalized version of matrix multiplication. Of course, there is also a mat_mul operation, but internally it works through inner_product.
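To make the idea concrete, here is the rank-2 case written in my own notation (a sketch, not rapl’s documentation): inner_product(f, g, A, B) applies the element function f across the shared axis and then reduces with g, so ordinary matrix multiplication falls out when f is multiplication and g is addition.

$$\mathrm{inner\_product}(f, g, A, B)_{ij} = \underset{k}{g}\; f(a_{ik},\, b_{kj}), \qquad \text{with } f = \times \text{ and } g = + \text{ this is } \sum_{k} a_{ik}\, b_{kj}$$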

Rust
    let a = Ndarr::from(1..7).reshape(&[2, 3]).unwrap();
    let b = Ndarr::from(1..7).reshape(&[3, 2]).unwrap();
    let inner = rapl::ops::inner_product(|x, y| x * y, |x, y| x + y, a.clone(), b.clone());
    assert_eq!(inner, rapl::ops::mat_mul(a, b));

Another APL-inspired dyadic function is the Outer Product, which is also quite useful in many situations. In fact, an Outer Product is just an Inner Product without the reducing step. Look at this example:

Rust
    let suits = Ndarr::from(["♣", "♠", "♥", "♦"]);
    let ranks = Ndarr::from(["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]);
    let add_str = |x: &str, y: &str| x.to_owned() + y;
    let deck = ops::outer_product(add_str, ranks, suits).flatten();
    print!("{:?}", deck);
    //["2♣", "2♠", "2♥", "2♦", "3♣", "3♠", "3♥", "3♦", "4♣", "4♠", "4♥", "4♦", "5♣", "5♠", "5♥", "5♦", "6♣", "6♠", "6♥", "6♦",
    //"7♣", "7♠", "7♥", "7♦", "8♣", "8♠", "8♥", "8♦", "9♣", "9♠", "9♥", "9♦", "10♣", "10♠", "10♥", "10♦", "J♣", "J♠", "J♥", "J♦",
    //"Q♣", "Q♠", "Q♥", "Q♦", "K♣", "K♠", "K♥", "K♦", "A♣", "A♠", "A♥", "A♦"]
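In the same homemade notation as before, the rank-1 Outer Product simply pairs every element of one array with every element of the other, with no reduction step at all:

$$\mathrm{outer\_product}(f, A, B)_{ij} = f(a_i,\, b_j)$$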

Section 2: rapl internals and design decisions

Alright, so let’s get one thing straight right off the bat: if any of you brave souls out there have decided to peek under the hood of rapl, I gotta give you fair warning - there are some parts of the library with concerning levels of cursedness, and other parts that seem absurd. In fact, if you compare rapl to other linear algebra libraries, it will look completely different. It all comes from the fact that I’m not a Rust wizard. Truth be told, I’m not even that great at coding in general. But what I do have is a pretty solid grasp of linear algebra and math. So when it came to designing rapl, my approach was to do as much of the heavy lifting as possible using math in a formulaic way, instead of relying on my subpar low-level programming skills. Sure, I knew it would lead to some performance issues, but sometimes you gotta play to your strengths, right?

When building rapl, one of the most crucial decisions was the choice of data structure for N-dimensional arrays, or Ndarr. If you want performance, there is a trade-off to make between complexity and generality. But performance was never the priority, so I opted to represent N-dimensional arrays in the simplest way possible: a one-dimensional vector of elements plus a shape. That means I delegate all the complexity to the operations instead of the data structure. If you’re curious, the first prototype looked something like this:

Rust
// First prototype: the whole shape as a const generic parameter.
pub struct Ndarr<T, const SHAPE: &'static [usize]> {
    pub data: Vec<T>,
}

Although this would be ideal, because it would allow checking the compatibility of Ndarr shapes at compile time, sadly I encountered roadblock after roadblock in the Rust const generics system when trying to use the shape as a const parameter, even with the latest nightly features. Maybe it will be possible in the future, and that would be awesome because it would give Rust an edge over Numpy and Julia. But for now, I had to settle for the second-best option:

Rust
pub struct Ndarr<T, const R: usize> {
    pub data: Vec<T>,
    pub shape: [usize; R],
}

Here, at least the rank R is a generic parameter, which, in theory, allows for some compile-time inference in certain situations.
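As a small illustration of that, here is a hypothetical helper (mine, not part of rapl) where the rank requirement lives in the signature, so calling it with an array of the wrong rank is a compile-time error rather than a runtime panic:

Rust
// Hypothetical helper, not part of rapl: it only accepts rank-2 arrays.
// Passing an Ndarr<T, 3> here fails to compile.
fn is_square<T>(m: &Ndarr<T, 2>) -> bool {
    m.shape[0] == m.shape[1]
}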

With that out of the way, we need a way to project the one-dimensional position of an element in the flat Vec into N dimensions, and the other way around. Therefore, I derived these two formulas:

  1. Given an index $i$ of an element in a flat vector, and the shape of an R-dimensional array $S_m = \{s_0, s_1, \dots, s_{R-1}\}$, we can find the indexes $I_n = \{i_0, i_1, \dots, i_{R-1}\}$ with the following formula:

$$i_n = \left( \frac{i - \left( i \bmod \prod_{m=n+1}^{R-1} s_m \right)}{\prod_{m=n+1}^{R-1} s_m} \right) \bmod s_n$$

In code it is a bit easier to understand:
Rust
// Given an index n of a flat array, calculate the indexes of that element
// in an N-rank array of shape `shape`.
fn get_indexes<const R: usize>(n: &usize, shape: &[usize; R]) -> [usize; R] {
    let mut ind: [usize; R] = [0; R];
    for i in (0..R).rev() {
        // Number of elements in one "block" of the axes to the right of axis i.
        let n_block = multiply_list(&shape[i + 1..]);
        ind[i] = ((n - (n % n_block)) / n_block) % shape[i];
    }
    ind
}
  2. Now, the reverse operation: given the shape of an R-dimensional array $S_m = \{s_0, s_1, \dots, s_{R-1}\}$ and the indexes $I_n = \{i_0, i_1, \dots, i_{R-1}\}$ of an element in that array, we can find the position of that element if we flattened the array with this formula:

$$i = \sum_{k=0}^{R-1} i_k \prod_{n=k+1}^{R-1} s_n$$

In code:
Rust
fn get_flat_pos<const R: usize>(indexes: &[usize; R], shape: &[usize; R]) -> usize {
    let mut ind: usize = 0;
    for i in 0..R {
        // Each index is weighted by the total size of the axes to its right.
        ind += indexes[R - i - 1] * multiply_list(&shape[R - i..]);
    }
    ind
}
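As a quick sanity check of the two mappings, here is a hypothetical round trip on a small [2, 3] array: flat index 5 is the last element, which sits at row 1, column 2, and the reverse mapping takes it straight back.

Rust
    // Round trip: flat index 5 in a row-major [2, 3] array is [1, 2].
    assert_eq!(get_indexes(&5, &[2, 3]), [1, 2]);
    assert_eq!(get_flat_pos(&[1, 2], &[2, 3]), 5);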

With these two mapping functions in place, all operations became much easier. For instance, the R-dimensional Transpose essentially flips the dimensions, meaning that if arr.shape() -> [2,3,6], then arr.t().shape() -> [6,3,2]. So you can imagine that calculating the transpose of an Ndarr is just a matter of playing with the two mapping operations to find the position of each element in the transposed space. Even the Display implementation uses this projection.
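To make that concrete, here is a minimal sketch of an axis-reversing transpose built only on the two mapping functions above. This is my illustration rather than rapl’s actual implementation, and it assumes the multiply_list helper the earlier snippets call is in scope.

Rust
// Sketch: transpose by round-tripping every element through the mappings.
// Recover each element's indexes, reverse them, and place the element where
// those reversed indexes land in the reversed shape.
fn transpose_data<T: Clone, const R: usize>(data: &[T], shape: &[usize; R]) -> (Vec<T>, [usize; R]) {
    let mut t_shape = *shape;
    t_shape.reverse(); // e.g. [2, 3, 6] becomes [6, 3, 2]
    let mut out = data.to_vec();
    for flat in 0..data.len() {
        let mut idx = get_indexes(&flat, shape);
        idx.reverse();
        out[get_flat_pos(&idx, &t_shape)] = data[flat].clone();
    }
    (out, t_shape)
}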

Another essential piece of the puzzle was the broadcasting rules. Broadcasting, as explained earlier, is how we find a common compatible shape between two arrays of different shapes. Interestingly, while researching this, I discovered that there is no standardized way of doing it. For instance, the MATLAB and Numpy broadcasting rules differ in some minor but meaningful ways, and the same is true for APL. So I decided to stay closest to the Numpy broadcasting rules. The code looks something like this:

Rust
pub fn broadcast_shape<const N: usize, const M: usize>(
    shape1: &[usize; N],
    shape2: &[usize; M],
) -> Result<[usize; max(N, M)], DimError> {
    let mut out_shape: [usize; max(N, M)] = [0; { max(N, M) }];
    // Compare shapes from the trailing axis backwards, treating missing
    // leading axes as size 1.
    let mut sh1 = shape1.to_vec();
    let mut sh2 = shape2.to_vec();
    sh1.reverse();
    sh2.reverse();
    let l = max(N, M);
    for i in 0..l {
        let size1 = index_or(&sh1, i, 1);
        let size2 = index_or(&sh2, i, 1);
        if size1 != 1 && size2 != 1 && size1 != size2 {
            return Err(DimError::new(&format!(
                "Error arrays with shape {:?} and {:?} can not be broadcasted.",
                shape1, shape2
            )));
        }
        out_shape[l - i - 1] = max(size1, size2);
    }
    Ok(out_shape)
}
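Continuing the hypothetical examples, two quick checks of how this behaves: a [3] shape broadcasts against [3, 3] because the missing leading axis counts as size 1, while a [2] shape does not.

Rust
    // [3] vs [3, 3]: the missing leading axis counts as 1, so they broadcast.
    assert_eq!(broadcast_shape(&[3], &[3, 3]).unwrap(), [3, 3]);
    // [2] vs [3, 3]: 2 != 3 and neither is 1, so this returns a DimError.
    assert!(broadcast_shape(&[2], &[3, 3]).is_err());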

Those are pretty much all the puzzle pieces; the rest is just using them to create all the different primitive operations.

The Future of rapl and final thoughts

I had a dream, a dream of what I wanted in Rust, and I started writing this library just to see if it was even remotely possible. In the end, I was pleasantly surprised to find that it is not only possible, it wasn’t even that hard. Of course, I’m aware that I don’t have all the expertise and knowledge to make this library performant, efficient, and reliable. But at the very least, it might inspire some people smarter than me to keep pushing the boundaries of the Rust ecosystem.

Meanwhile, I will keep working on rapl, adding new functionality and merging pull requests, with the same goal of using Rust for my daily scripting tasks. There’s still so much work to be done!

Ultimately, there is still a long journey for Rust to become my scripting language of choice. But every day, that fantasy becomes a little bit more real.

"a human being is never what he is but the self he seeks."
― Octavio Paz