SnarkyNet — MNIST Handwritten Digits in a ZkApp on Mina Protocol
Introduction to zkApps
zkApps (zero-knowledge apps) are Mina Protocol's smart contracts powered by zero-knowledge proofs, specifically zk-SNARKs, written in TypeScript with the SnarkyJS module. Smart contracts are simply programs that are stored and executed on a blockchain so that they are always available and can't be tampered with. zkApps provide the same guarantees, but they work differently: they can run anywhere, and they only need to send a small zero-knowledge proof on-chain in order to prove that they have executed correctly.
zkApps allow arbitrarily complex computations to be performed off-chain, incurring only a flat fee for sending the proof. In contrast, on Ethereum the gas cost of a smart contract grows with the complexity of the computation. A perfect proof-of-concept to investigate is the implementation of a deep neural network in a zkApp.
This proof-of-concept was built for cohort 1 of Mina's zkApps Builders Program; the zkApp was deployed on a local instance of Mina, before on-chain deployment of zkApps was available.
Brief Introduction to Neural Networks
Simply put, a neural network is a network of neurons that takes a set of inputs to produce an output. The inputs could range from the pixels of an image for object classification, to historical stock data for price prediction, to blockchain data for anomaly detection (e.g., on Ethereum).
The smallest element, a neuron, can be broken down as follows: inputs are multiplied by weights, summed together with a bias, and the result is passed through an activation function. The weights of the neurons are obtained during the training phase of the model, and the activation functions are selected based on the purpose of the model (e.g., ReLU for nonlinearity and Softmax for multi-class classification).
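As a minimal sketch (plain TypeScript with ordinary numbers, not SnarkyJS circuit types), a single neuron reduces to a weighted sum plus a bias, passed through an activation:

// A single neuron: output = activation( sum_i( w_i * x_i ) + b )
function neuron( inputs: number[], weights: number[], bias: number,
                 activation: ( x: number ) => number ): number {
  const weightedSum = inputs.reduce( ( acc, x, i ) => acc + weights[i] * x, 0 );
  return activation( weightedSum + bias );
}

// e.g. ReLU as the activation function
const relu = ( x: number ) => Math.max( 0, x );
const output = neuron( [ 0.5, 0.2, 0.9 ], [ 0.4, -0.6, 0.1 ], 0.05, relu );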
A neural network is simply several layers of neurons linked together. The example below illustrates a neural network with an input layer of three neurons, five neurons in the first hidden layer, an arbitrary hidden layer N, and an output layer of two neurons. The number of hidden layers is determined by the application and subsequent training, taking care that overfitting does not occur.
Therefore, the resulting neural network is simply a series of dot products and associated activation functions. Implementing the dot products won't be too much of an issue in SnarkyJS, but the activation functions present their own challenges.
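Continuing the plain-TypeScript sketch, a whole layer is one such weighted sum per neuron, so a forward pass is simply a chain of dot products and activations:

// One layer: one dot product (plus bias and activation) per neuron
function layerForward( inputs: number[], weights: number[][], biases: number[],
                       activation: ( x: number ) => number ): number[] {
  return weights.map( ( row, i ) =>
    activation( row.reduce( ( acc, w, j ) => acc + w * inputs[j], 0 ) + biases[i] ) );
}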
Approach
The approach for implementing the SnarkyNet neural network in a zkApp follows:
- Develop and train an MNIST handwritten digit classification model in TensorFlow in Python.
- Import the weights from the TensorFlow model into SnarkyJS for the zkApp to perform the prediction.
The benefit of splitting the training and the implementation this way is that the most computationally intensive processing, the training of the model, is performed within TensorFlow, taking advantage of its efficiencies such as GPU support and model evaluation.
Design and Training of the Model on MNIST Digits
The MNIST Handwritten Digit dataset consists of 70,000 handwritten digits. The dataset is split into 60,000 images for training and 10,000 images for test / validation.
The MNIST handwritten digit classification model consists of 128 neurons in a hidden layer with the ReLU activation function (for nonlinearity). The output layer of 10 neurons with the Softmax activation function produces a multinomial probability distribution, suited for multi-class classification:
from tensorflow import keras
from tensorflow.keras import layers

# Model
model = keras.Sequential( [ layers.Dense( 128, activation='relu' ),
                            layers.Dense( 10, activation='softmax' ) ] )

# Compiler
model.compile( optimizer='rmsprop',
               loss='sparse_categorical_crossentropy',
               metrics=['accuracy'] )
The images are reshaped from a 28 x 28 grayscale pixel image to a 784 x 1 grayscale array for the input layer. In the output layer of 10 neurons, each neuron represents the classification of one digit, from 0 to 9.
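On the zkApp side the image must arrive already flattened; a trivial TypeScript helper (illustrative only, not part of SnarkyNet) could produce the expected 784-element array:

// Flatten a 28 x 28 image into the 784-element row-major array
// expected by the input layer
function flattenImage( image: number[][] ): number[] {
  return image.flat();
}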
In summary, the model follows:
- Input layer of 784 x 1
- Hidden layer 1: 128 neurons with the ReLU activation function
- Classification layer: 10 neurons with the Softmax activation function
With training over 100 epochs, the accuracy of the model follows:
Epoch 100/100 469/469 [==============================] - 1s 2ms/step - loss: 3.8147e-10 - accuracy: 1.0000
The weights of the trained model are exported in preparation for importing to the zkApp:
# Export the weights of each layer; get_weights() returns
# [ kernel, bias ] for a Dense layer
for layer in model.layers:
    weights = layer.get_weights()
    print( f'Weights: { weights[0].tolist() }' )
Summary of Classes in SnarkyNet
The following classes were developed in support of the SnarkyNet zkApp.
SnarkyTensor
- Provides tensor manipulation, dot products, and activation functions
- Implements the fixed-point decimal representation on top of Int65
SnarkyLayer (Extends SnarkyTensor)
- Emulates a neural network layer, handling the dot products and the activation function
SnarkyNet (Extends SnarkyTensor)
- Utilizes SnarkyLayers and performs the prediction on the input image
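A hedged skeleton of this hierarchy follows (the field names here are illustrative; only the class relationships are taken from the design above):

// Int65 refers to the signed circuit integer described later on.
// SnarkyTensor: base class holding the fixed-point scale factor and
// the shared tensor math (dot products, activation functions)
class SnarkyTensor {
  scale_factor: number;
  scale_factor_int65: Int65;
}

// SnarkyLayer: one layer's weights plus its activation function
class SnarkyLayer extends SnarkyTensor {
  weights: Array<Array<Int65>>;
  activation: string; // 'relu' or 'softmax'
}

// SnarkyNet: chains SnarkyLayers to produce a prediction
class SnarkyNet extends SnarkyTensor {
  layers: Array<SnarkyLayer>;
}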
Creation of the Model in SnarkyJS
As the methodology behind SnarkyNet was to make the creation of the neural network as painless as possible, the creation of the neural network layers follows:
// create layers and zkApp instance
let layers = [ new SnarkyLayer( weights_l1, 'relu' ),
               new SnarkyLayer( weights_l2, 'softmax' ) ];

let zkappInstance = new SmartSnarkyNet( amount, snappPubkey,
                                        new SnarkyNet( layers ) );
where weights_l1 and weights_l2 are the weights exported from the TensorFlow model.
The prediction from the model follows, where image_a_7 is a 784 x 1 array of grayscale pixels:
await Mina.transaction( account1, async () => {
  await zkappInstance.predict( [ image_a_7 ] );
})
  .send()
  .wait()
  .catch( ( e ) => console.log( e ) );
Challenges of SnarkyNet
The zkApp is written in TypeScript with the SnarkyJS module. Several challenges had to be addressed in the implementation of SnarkyNet as a zkApp.
Representation of an Int65 (or Int64-like) in SnarkyJS
Gregor from O(1) Labs implemented an Int65 representation, similar to Int64 but with a positive / negative sign. The Int65 allows for proper handling of circuit arithmetic involving positive and negative values.
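Conceptually (a plain-TypeScript sketch using bigint, not the actual SnarkyJS implementation), the type pairs a 64-bit magnitude with a sign, hence the "65":

// Sign-magnitude sketch: a 64-bit magnitude plus a sign as the "65th bit"
class Int65Sketch {
  constructor( readonly magnitude: bigint,  // |x|, in [0, 2^64 - 1]
               readonly sign: bigint ) {}   // +1n or -1n

  mul( other: Int65Sketch ): Int65Sketch {
    // signs multiply independently of the magnitudes
    return new Int65Sketch( this.magnitude * other.magnitude,
                            this.sign * other.sign );
  }
}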
Representation of Decimals in SnarkyJS
Natively, decimal values in SnarkyJS are not represented in the IEEE float representation, but they may be scaled by multiplying the decimal value by a scaling factor of 10⁸ (e.g., 0.1234 becomes 12340000). The scaling results in potential truncation errors and a range limitation of -2⁶⁴+1 to 2⁶⁴-1. When calculating dot products, the resulting values must be rescaled accordingly to avoid overflow of the variable.
The implementation for reading in the weights of the neural network from the TensorFlow model follows:
num2int65( x: number ): Int65 {
  // scale the decimal up by 10^8 and truncate to an integer
  return Int65.fromNumber( Math.floor( x * this.scale_factor ) );
}
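As a plain-bigint illustration (not SnarkyJS code) of why products must be rescaled: the product of two scaled values carries an extra factor of 10⁸ that must be divided back out:

const SCALE = 100_000_000n;    // 10^8

const a = 12_340_000n;         // 0.1234 scaled
const b = 50_000_000n;         // 0.5 scaled

// the raw product is scaled by 10^16; one division restores the scale
const product = ( a * b ) / SCALE; // 6_170_000n, i.e. 0.0617 scaled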
Dot Products for Circuits
The dot product for rank-1 tensors follows, with the associated scaling applied to each multiplication to prevent overflow of the value:
dot_product_t1( v1: Array<Int65>, v2: Array<Int65> ): Int65 {
  let y = Int65.zero;
  console.assert( v1.length === v2.length );
  // accumulate v1 · v2, rescaling each product by the scale factor
  v1.forEach( ( v1_value, i ) =>
    y = y.add( v1_value.mul( v2[i] ).div( this.scale_factor_int65 ) ) );
  return y;
}
Exponential Methods for Circuits
The Softmax activation function utilizes the exponential for multi-class classification models. The Softmax activation function follows:

softmax( zᵢ ) = e^( zᵢ ) / Σⱼ e^( zⱼ )
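Outside a circuit, this is straightforward to compute with ordinary floating-point numbers, as in this plain TypeScript sketch:

// Softmax over a vector of logits: exponentiate, then normalize
function softmax( z: number[] ): number[] {
  const exps = z.map( ( v ) => Math.exp( v ) );
  const sum = exps.reduce( ( acc, e ) => acc + e, 0 );
  return exps.map( ( e ) => e / sum );
}

Inside a circuit, however, the exponential itself is the hard part, as described next.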
Due to the nature of the circuits, the exponential was represented with a Taylor series, e^x ≈ 1 + x + x²/2! + x³/3! + … + x⁷/7!, which yielded accurate results in the range of -2.5 to 2.5. However, outside that range the results were wildly inaccurate, which was a major barrier. The Taylor series implementation of the exponential follows:
exp( x: Int65 ): Int65 {
  // Taylor series approximation of e^x: 1 + x + x^2/2! + ... + x^7/7!
  return this.num2int65( 1 )
    .add( x )
    .add( this.exp_part( x, 2, 2 ) )     // x^2 / 2!
    .add( this.exp_part( x, 3, 6 ) )     // x^3 / 3!
    .add( this.exp_part( x, 4, 24 ) )    // x^4 / 4!
    .add( this.exp_part( x, 5, 120 ) )   // x^5 / 5!
    .add( this.exp_part( x, 6, 720 ) )   // x^6 / 6!
    .add( this.exp_part( x, 7, 5040 ) ); // x^7 / 7!
}
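The helper exp_part is not shown in the snippet above; a hypothetical reconstruction of what it presumably computes (xⁿ / n! under the same fixed-point scaling; the actual implementation may differ) follows:

// Hypothetical sketch: x^n / n! with rescaling after each multiplication,
// since every mul of two scaled values carries an extra scale factor
exp_part( x: Int65, n: number, factorial: number ): Int65 {
  let y = x;
  for ( let i = 1; i < n; i++ ) {
    y = y.mul( x ).div( this.scale_factor_int65 );
  }
  // factorial is a plain (unscaled) integer divisor
  return y.div( Int65.fromNumber( factorial ) );
}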
Results of SnarkyNet
The initial results of the SnarkyNet implementation were promising, but ran into the barrier of implementing the exponential method in SnarkyJS for circuits.
As the output layer, or classification layer, relied on the Softmax activation function, the results did not accurately predict the class in the image due to this limitation. However, when investigating the values of the inputs to the activation function for an image, the results were promising. For instance, for an image of a 4, the output follows:
[ -67.09920126, -106.4289438, -75.16996982, -84.14149683, -9.6325425,
  -85.53741677, -66.21014034, -53.28420105, -65.38307148, -33.87007553 ]
When passing the values through a Softmax activation function (outside of SnarkyJS), the results follow:
[ 1.10292811e-25, 9.15918956e-43, 3.44712440e-29, 4.37695702e-33,
  1.00000000e+00, 1.08375710e-33, 2.68325198e-25, 1.10236538e-19,
  6.13554154e-25, 2.97696103e-11 ]
Or in other words:
Class 0: 1.10292811e-25
Class 1: 9.15918956e-43
Class 2: 3.44712440e-29
Class 3: 4.37695702e-33
Class 4: 1.00000000e+00
Class 5: 1.08375710e-33
Class 6: 2.68325198e-25
Class 7: 1.10236538e-19
Class 8: 6.13554154e-25
Class 9: 2.97696103e-11
The multinomial probability calculated for class 4 is 1.0! With a proper implementation of an exponential method, the model would have accurately predicted the classification!
Conclusion
A neural network is viable in SnarkyJS for zkApps, with caveats. With a proper implementation of an exponential method, the Softmax activation function would allow for multinomial classification. Neural networks such as binary classification models or models that predict a value are viable today, given proper training of the model with activation functions that avoid the exponential (e.g., ReLU).
Due to the challenges associated with the exponential implementation, a potential future project is to implement a deep neural network that does not rely on exponentials, such as a binary classification model or a model that predicts a value.
Thanks to O(1) Labs and the Mina Foundation for the opportunity to play with zkApps and SnarkyJS! It was a fantastic learning experience, and TypeScript makes it extremely easy for somebody who primarily does C++ and Python!
Check out the GitHub!