Coalescence of Neural Networks and Blockchain

Authored by: Mohit Dayal , Ameya Chawla , Manju Khari

Handbook of Green Computing and Blockchain Technologies

Print publication date:  December  2021
Online publication date:  December  2021

Print ISBN: 9780367620110
eBook ISBN: 9781003107507




This chapter gives a brief overview of neural networks and blockchain, from an introduction to neural networks through building one, and how blockchain links to neural networks. The chapter begins with a brief introduction to neural networks, then goes one step further with the terminology related to neural networks and the architecture of a neural network. Data propagation is the most important step in a neural network: the model is trained on the basis of data propagated forward and backward. The final step in fully understanding neural networks is implementing one using Python libraries. The chapter concludes with a literature review of papers on the combined use of neural networks and blockchain technology, together with open challenges in the field.



3.1  Introduction to an Artificial Neural Network

A neural network, or artificial neural network, is, as the name suggests, a network of nodes called neurons or artificial neurons. The structure comprises many layers of artificial neurons connected together and is inspired by the structure of the human brain. Neural networks are considered better than many other machine learning algorithms because of their capability to find complex relations between the input data and the output data.

Neural networks are part of deep learning, a subset of machine learning in which we try to replicate the structure of the human brain. They are among the most used machine learning models due to their high computational power and wide range of applications, such as the CNN (convolutional neural network), used in deep learning problems related to images, and the RNN (recurrent neural network), used in problems related to speech.

A blockchain is an expanding list of records, termed blocks, linked together using cryptography; the resulting chain-like structure gives the technology its name. Each block added to the chain contains a cryptographic hash code of the previous block, a timestamp, and transaction data. Because a block is added for each transaction and each block references the hash of the block before it, the structure is designed to be effective against modification and cannot easily be altered. The most common use of blockchain technology is cryptocurrency, such as bitcoin.

Blockchain enables data systems, whether in healthcare or in the student education sector, that make it easier to add user data and are more secure. A neural network can then be trained on such data, giving an easier way to analyze the data and fit a model to it than generic data-collection methods [1].

Objective: This chapter gives the reader a brief overview of neural networks, with a full understanding of the related terminology, the architecture of a neural network, and the derivation of the equations behind it. It also gives a brief overview of blockchain technology and real-life applications of blockchain technology and neural networks.

3.2  Architecture of an Artificial Neural Network

Neural network architecture is mainly divided into basic components: layers containing neurons, and neurons having weights, biases, and activation functions.

3.2.1  Layers

Artificial neural networks have basically three types of layers:

Input Layer

The first layer of a neural network, which takes input from the data. Each neuron receives input from one feature of the given data, so the number of input layer neurons equals the number of features.

Output Layer

The last layer of a neural network, which gives the output of the model; the output can be either a regression value or a classification. A neural network with a regression output has only one neuron in the output layer, while in classification the number of output neurons depends on the number of classes. In regression a single value is returned; in the case of n classes, n probabilities are given, each signifying the chance of that class being the predicted class, and the class with maximum probability is taken as the final prediction.

Hidden Layers

All layers between the input layer and the output layer are considered hidden layers. Each is connected to its previous and next layers, as it takes input from the previous layer and gives output to the next. These layers are not visible to the user, who only gives input and expects output, so they are hidden from the user's perspective (Figure 3.1).

Figure 3.1   Neural network structure of a classifier.

Yellow-colored neurons represent the input layer neurons, which take the input; pink-colored neurons are hidden layer neurons, which take input from the input layer; and the blue-colored neuron is the output layer neuron, which gives the predicted class or regression output.

3.2.2  Neurons

The neuron is the smallest unit of the neural network; it takes input from all neurons of the previous layer, processes it, and sends the result to all neurons in the next layer. When a neuron receives input from the previous layer, it first calculates the summation of the products of each input value and the weight assigned to that value in this particular neuron, then adds a bias value, and finally applies an activation function; the output is sent to the next layer's neurons. The activation function transforms the summation of the products of weights and inputs (plus the bias) so that the next layers can easily recognize whether it is a useful input or not (Figure 3.2).

y = w₁x₁ + w₂x₂ + w₃x₃ + … + wₙxₙ + b
z = Activation(y)
x₁, x₂, x₃, …, xₙ are the n inputs received by the neuron, w₁, w₂, w₃, …, wₙ are the weights assigned to those inputs, and b is the bias assigned to the neuron. The neuron transmits z to the next layer of neurons.
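The computation above can be sketched in Python; the sigmoid used here as the activation is just one of several possible choices discussed later, and the inputs, weights, and bias are illustrative values:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of the inputs plus the bias,
    passed through a sigmoid activation."""
    y = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-y))  # sigmoid squashes y into (0, 1)

# hypothetical neuron with three incoming features
z = neuron([1.0, 2.0, 3.0], [0.5, -0.25, 0.1], bias=0.3)
```

The value z is what this neuron would transmit to every neuron in the next layer.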

Single node taking all input and giving output.

Figure 3.2   Detailed look at neurons in layers.

3.2.3  Importance of Bias and Weights in a Neural Network

Weights

Weights are applied to each feature given as input to a neuron, and they indicate which features are more important. For example, if the weight of one feature is larger than that of another, that feature matters more when predicting the output than the feature with the smaller weight. A negative weight indicates that the feature is inversely proportional to the target value being predicted.

Biases

Bias is a constant value added to the summation of the products of weights and features. It is added to let the activation function shift either left or right. For example, suppose we decide that if the summation value is greater than 0.5 we assign class 1, and if it is less than 0.5 we assign class 0. This threshold of 0.5 can be folded into the summation as a constant used in decision making. Consider the case when the summation is greater than 0.5:

Σ wᵢxᵢ > 0.5
Σ wᵢxᵢ − 0.5 > 0
As we can now see, it is easier for the activation function to fit the given data when the boundary is defined by the summation being greater than 0 for class 1 and less than 0 for class 0. The bias value can also be viewed as an intercept: it doesn't change the slope but shifts the position of the graph so that it fits our needs.
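The equivalence above can be checked directly: folding the 0.5 threshold into a bias of −0.5 gives the same decision rule compared against 0. A minimal sketch, with hypothetical helper names:

```python
def classify_with_threshold(s):
    # original decision rule: class 1 when the weighted sum exceeds 0.5
    return 1 if s > 0.5 else 0

def classify_with_bias(s, b=-0.5):
    # same rule with the threshold folded into a bias term, compared against 0
    return 1 if s + b > 0 else 0

# the two formulations agree for any weighted sum s
agree = all(classify_with_threshold(s) == classify_with_bias(s)
            for s in [0.1, 0.4999, 0.5, 0.75, 1.2])
```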

3.2.4  Activation Functions

Activation functions are mathematical functions that determine the output of a neuron. Different activation functions are used for different types of problems, and the choice also depends on whether the problem is a regression or a classification problem. The gradients of the weights and biases play a very important role when choosing an activation function: a gradient is the rate of change of one variable with respect to another, and the gradient of the error with respect to the weights and biases, obtained during training, is used to update them so as to reduce the prediction error [2].

Linear Function

This is the simplest activation function, where the variable is multiplied by a constant.

z = a·y
z′ = a
The gradient of z with respect to y is a constant, which means the weights and biases will always be updated by a constant factor and the error will remain the same.

Sigmoid Function

The sigmoid function is a mathematical function that transforms the value into the range [0, 1]; it is a non-linear activation function.

z = 1 / (1 + e^(−y))
z′ = (1 / (1 + e^(−y))) · (1 − 1 / (1 + e^(−y)))
The gradient of the sigmoid function depends on the variable y; as y tends to either +∞ or −∞ the gradient becomes 0 and, like the linear function's gradient, is constant for large values.

Hyperbolic Tangent Function

This is a non-linear function derived from the sigmoid function by making small changes to it.

z = 2 · (1 / (1 + e^(−2y))) − 1
This function has range [−1, 1] and is symmetric about the origin.
z′ = 1 − (2 · (1 / (1 + e^(−2y))) − 1)² = 1 − z²
As y tends to either +∞ or −∞ the gradient becomes 0 and, as with the linear function, is constant for large values.

Rectified Linear Unit

This is a non-linear function: if the value of y is greater than 0 then y is returned, else 0 is returned. An advantage of the rectified linear unit is that it only sends positive values or 0.

z = maximum(0, y)
z′ = 0 if y < 0, and z′ = 1 if y > 0
The gradient here is piecewise constant, which means the weights and biases will be updated by a constant factor and the error will remain the same.

Exponential Function

The exponential function is a non-linear function: if y is greater than zero it remains the same, otherwise it is transformed using an exponential.

z = y, if y > 0
z = a·(e^y − 1), if y ≤ 0
For positive values it is a linear curve and for negative values an exponential curve. This function allows negative values as output, which helps the model learn faster and better.
z′ = 1, if y > 0
z′ = a·e^y, if y ≤ 0
The gradient is constant for values of y greater than 0 and exponential for values less than or equal to 0, which means that even for negative values of y the model's weights and biases will be updated.
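The activation functions above and their gradients can be sketched together; each function returns the pair (activation value, gradient at y), with `a` the scaling constant of the linear and exponential functions:

```python
import math

def linear(y, a=1.0):
    return a * y, a                        # gradient is always the constant a

def sigmoid(y):
    z = 1 / (1 + math.exp(-y))             # range (0, 1)
    return z, z * (1 - z)                  # gradient tends to 0 as |y| grows

def tanh_act(y):
    z = 2 / (1 + math.exp(-2 * y)) - 1     # range (-1, 1), symmetric about origin
    return z, 1 - z ** 2

def relu(y):
    return max(0.0, y), (1.0 if y > 0 else 0.0)

def elu(y, a=1.0):
    if y > 0:
        return y, 1.0                      # linear part for positive values
    return a * (math.exp(y) - 1), a * math.exp(y)  # exponential part
```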

3.3  Working of an Artificial Neural Network

When we train an artificial neural network, in each epoch the data is transmitted through the model twice: once forward and once backward. The transmission of data through the whole model is called propagation.

3.3.1  Forward Propagation

Data is transmitted from the input layer neurons to the output layer, and the output is predicted from the output layer; this output marks the end of forward propagation. Each neuron performs the same function: it calculates the weighted sum, adds the bias, and passes the result to the activation function. The output is then given to each neuron in the next layer [3] (Figure 3.3).

Figure 3.3   Forward propagation.

Yellow, pink, and blue arrows show how the data is transmitted in the forward propagation.
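The layer-by-layer flow of forward propagation can be sketched as follows; the 2-3-1 architecture, weights, and the uniform sigmoid activation are illustrative assumptions:

```python
import math

def sigmoid(y):
    return 1 / (1 + math.exp(-y))

def forward(x, layers):
    """Propagate an input vector through a list of layers.
    Each layer is a pair (weights, biases): weights[j] holds neuron j's
    incoming weights and biases[j] its bias; every neuron applies a sigmoid."""
    for weights, biases in layers:
        x = [sigmoid(sum(w * xi for w, xi in zip(wj, x)) + bj)
             for wj, bj in zip(weights, biases)]
    return x

# hypothetical 2-3-1 network: 2 input features, 3 hidden neurons, 1 output
hidden = ([[0.2, -0.4], [0.7, 0.1], [-0.5, 0.3]], [0.1, 0.0, -0.2])
output = ([[0.6, -0.3, 0.9]], [0.05])
prediction = forward([1.0, 2.0], [hidden, output])
```

Each pass through the loop body corresponds to one set of colored arrows in Figure 3.3.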

3.3.2  Backward Propagation

This is the reverse of forward propagation: the process goes in the backward direction. Backward propagation is based on the principle of error: we first calculate the error in our prediction, then propagate backward and check how the error is affected by each neuron's weights and bias by calculating the gradient of the error with respect to those weights and biases, and then update them accordingly [4].

Let’s consider an example in Figure 3.4:

Figure 3.4   Backward propagation.

Blue arrows show how data is transmitted from the output layer to the hidden layer with pink-colored neurons. The loss function is defined as the square of the difference between the actual and predicted values.

E = (z_p − z_a)²
z_p is the value predicted by the neural network and z_a is the actual value expected at the output; z_a is a constant, and z_p is calculated using the activation function in the last neuron.
z_p = Activation(y_p)
y_p = Σ wᵢxᵢ + b
Now let's consider the weights in Figure 3.5:

Figure 3.5   Backward propagation.

Consider the output neuron (the blue neuron): it receives two inputs with weights W₄ and W₅ and has bias value b₃, so while back propagating these values will be updated as:

W₄ = W₄ − η · ∂E/∂W₄
W₅ = W₅ − η · ∂E/∂W₅
b₃ = b₃ − η · ∂E/∂b₃
where η is the learning rate, a hyperparameter; the goal of the algorithm is to update the weights and bias of each neuron so as to minimize the error.

The gradient of the loss with respect to W₄ is calculated as:

∂E/∂W₄ = ∂(z_p − z_a)² / ∂W₄
∂(z_p − z_a)² / ∂W₄ = 2(z_p − z_a) · ∂z_p/∂W₄

Let's consider a linear activation function:

z_p = a·(W₄x₄ + W₅x₅ + b₃)
∂z_p/∂W₄ = a·x₄
W₅, x₄, x₅, and b₃ are treated as constants when taking the partial derivative with respect to W₄. Similarly, we can calculate the gradient of the error due to the bias:
∂z_p/∂b₃ = a
Now consider the pink neuron with bias b₂, and suppose it sends z₂ as its output. Then:
∂E/∂b₂ = (∂E/∂z₂) · (∂z₂/∂b₂)
z₂ can be considered as the input x₅ received by the output neuron, which implies
∂z₂/∂b₂ = a
∂E/∂x₅ = ∂(z_p − z_a)² / ∂x₅
∂(z_p − z_a)² / ∂x₅ = 2(z_p − z_a) · ∂z_p/∂x₅
As we have already calculated z_p, substituting it gives the remaining gradient:
z_p = a·(W₄x₄ + W₅x₅ + b₃)
∂z_p/∂x₅ = a·W₅
Using the above results:
∂E/∂b₂ = 2(z_p − z_a) · a·W₅ · a = 2a²W₅(z_p − z_a)
Hence we can calculate the gradients for all weights and biases of every neuron and update them in each epoch.
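The chain-rule gradient for W₄ can be verified numerically; this sketch assumes the linear activation z_p = a·(W₄x₄ + W₅x₅ + b₃) from the derivation and uses purely illustrative values for the weights, inputs, and learning rate:

```python
# illustrative values for the output neuron's parameters and the target
a, W4, W5, x4, x5, b3, z_a = 1.0, 0.4, -0.2, 0.5, 1.5, 0.1, 1.0

def error(w4):
    # squared error E = (z_p - z_a)^2 as a function of W4 alone
    z_p = a * (w4 * x4 + W5 * x5 + b3)
    return (z_p - z_a) ** 2

z_p = a * (W4 * x4 + W5 * x5 + b3)
analytic = 2 * (z_p - z_a) * a * x4     # dE/dW4 from the chain rule above
eps = 1e-6
numeric = (error(W4 + eps) - error(W4 - eps)) / (2 * eps)  # finite difference

eta = 0.1
W4_new = W4 - eta * analytic            # one gradient-descent update step
```

The analytic and finite-difference gradients agree, confirming the derivation for this case.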

3.4  Implementation of Neural Network Using Python

There are many ways to implement a neural network; we could build our own class defining all the weights, biases, and activation functions, but here we will use predefined modules in Python. We will use the Scikit-learn library to create our neural network model. The data used in this implementation is the Titanic dataset. Many fields have practical implementations of ANNs [5–7] (Figures 3.6–3.9).

Figure 3.6   Initial steps of data preprocessing and importing required libraries.

Figure 3.7   Initializing the model, training it, and testing it.

Information about all the parameters given for prediction.

Figure 3.8   Using method info for information about data.

Information about all parameters after removal of all rows having one or more null values.

Figure 3.9   Using method describe for statistics of the data.

The trained model obtained an accuracy of 60.81%. The accuracy depends on many factors, such as the choice of activation functions, the number of neurons in each layer, the number of layers, and the optimizing algorithm.
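The Scikit-learn workflow shown in the figures can be sketched as follows. Since the Titanic CSV is not reproduced in the text, a synthetic binary-classification dataset stands in for the preprocessed features; the layer sizes and activation are illustrative choices of the kind the accuracy depends on:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# synthetic stand-in for the preprocessed Titanic features and labels
X, y = make_classification(n_samples=400, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# multilayer perceptron with two hidden layers of 8 and 4 neurons
model = MLPClassifier(hidden_layer_sizes=(8, 4), activation="relu",
                      max_iter=1000, random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

Changing `hidden_layer_sizes`, `activation`, or the solver is how one experiments with the factors listed above.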

3.5  Literature Review

  1. DeepRing: Protecting Deep Neural Network With Blockchain. Deep neural networks have a vast variety of applications, and blockchain is also accepted as one of the most secure technologies in the cybersecurity field. This paper combines both worlds by first building a deep neural network model and then securing the model using blockchain technology [8].
  2. Machine Learning Adoption in Blockchain-Based Smart Applications: The Challenges, and a Way Forward. Security is one of the most important aspects of any technology; attacks are possible even on blockchain, one of the most secure technologies, so machine learning models are used to analyze attacks on the blockchain and make it more secure [9].
  3. An AI-Based Super Node Selection Algorithm in Blockchain Networks. One of the major problems faced in blockchain technology is the large consumption of electricity. Application of artificial intelligence (AI) can help in reducing the unwanted hash operations performed and hence can help in saving electricity [10].
  4. Machine Learning In/for Blockchain: Future and Challenges. Machine learning and blockchain are both among the most important technologies used in the world, and both are data-driven. The paper explores combining machine learning and blockchain to make more efficient and effective solutions for data-driven technologies [11].
  5. Secure Decentralized Peer-to-Peer Training of Deep Neural Networks Based on Distributed Ledger Technology. Privacy is a key factor in building secure systems, and blockchain is one of the most secure transaction technologies. Using deep learning techniques, a secure peer-to-peer framework can be built on blockchain technology to make a more secure system for transactions [12].

3.6  Open Challenges

There are many open challenges related to blockchain technology, but the most common is convincing the public to adopt it: most users associate blockchain with cryptocurrency, which many people think of as an illegal currency or one used by hackers and fraudsters. Another main challenge is speed, as blockchain-based transaction systems are very slow compared with other modern transaction systems.


References

[1] M. Dayal, N. Singh, Indian health care analysis using big data programming tool, Procedia Computer Science 89 (2016) 521–527.
[2] M. M. Lau, K. H. Lim, Review of adaptive activation function in deep neural network, in: 2018 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES), IEEE, 2018, pp. 686–690.
[3] G. Cerri, M. Cinalli, F. Michetti, P. Russo, Feed forward neural networks for path loss prediction in urban environment, IEEE Transactions on Antennas and Propagation 52 (11) (2004) 3137–3139.
[4] Z. Tang, O. Ishizuka, H. Matsumoto, Backpropagation learning in analog t-model neural network hardware, in: Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan), Vol. 1, IEEE, 1993, pp. 899–902.
[5] M. Khari, A. K. Garg, R. G. Crespo, E. Verdu, Gesture recognition of RGB and RGB-D static images using convolutional neural networks, International Journal of Interactive Multimedia & Artificial Intelligence 5 (7) (2019) 22–27.
[6] M. Dua, R. Gupta, M. Khari, R. G. Crespo, Biometric iris recognition using radial basis function neural network, Soft Computing 23 (22) (2019) 11801–11815.
[7] Y. H. Robinson, S. Vimal, M. Khari, F. C. L. Hernandez, R. G. Crespo, Tree-based convolutional neural networks for object classification in segmented satellite images, The International Journal of High Performance Computing Applications (2020) 1094342020945026.
[8] A. Goel, A. Agarwal, M. Vatsa, R. Singh, N. Ratha, DeepRing: Protecting deep neural network with blockchain, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019, pp. 2821–2828.
[9] S. Tanwar, Q. Bhatia, P. Patel, A. Kumari, P. K. Singh, W.-C. Hong, Machine learning adoption in blockchain-based smart applications: The challenges, and a way forward, IEEE Access 8 (2019) 474–488.
[10] J. Chen, K. Duan, R. Zhang, L. Zeng, W. Wang, An AI based super nodes selection algorithm in blockchain networks, arXiv preprint arXiv:1808.00216.
[11] F. Chen, H. Wan, H. Cai, G. Cheng, Machine learning in/for blockchain: Future and challenges, arXiv preprint arXiv:1909.06189.
[12] A. Fadaeddini, B. Majidi, M. Eshghi, Secure decentralized peer-to-peer training of deep neural networks based on distributed ledger technology, The Journal of Supercomputing 76 (2020) 1–15.