Hilaal Alam
2 min read · Oct 15, 2021


Activation Functions

An activation function in a neural network defines how the weighted sum of the input is transformed into an output from a node or nodes in a layer of the network.

Why do we use an Activation Function (AF)?

A neuron in a neural network performs a linear computation: a weighted sum of its inputs plus a bias. But our data may have non-linear relationships. This is where the activation function comes in, introducing non-linearity into the model and improving its performance.
After the computation is performed, the result is passed to the activation function, which then gives us the node's output.
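As a rough illustration, here is a minimal NumPy sketch of a single node (the names and values are assumed for this post, not taken from any particular library): a weighted sum, followed by an activation function.

```python
import numpy as np

def neuron(inputs, weights, bias, activation):
    z = np.dot(weights, inputs) + bias  # linear computation: weighted sum + bias
    return activation(z)                # non-linearity applied to the sum

# Example with a ReLU activation
relu = lambda z: np.maximum(0.0, z)
out = neuron(np.array([0.5, -1.2, 3.0]),
             np.array([0.4, 0.1, -0.6]),
             bias=0.2,
             activation=relu)
print(out)  # 0.0 here, because the weighted sum is negative
```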

The most popular AFs are ReLU, Sigmoid & the Hyperbolic Tangent (Tanh).

ReLU, the Rectified Linear Unit, is the simplest of the three, yet it took more than a decade to become widely used in neural networks after a period of stagnation in the field.

Image: Analytics Vidhya

ReLU keeps positive values unchanged and nullifies all negatives, i.e. f(x) = max(0, x). It is the most commonly used activation function.
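In code, a minimal NumPy sketch of ReLU looks like this:

```python
import numpy as np

def relu(x):
    # ReLU(x) = max(0, x): positives pass through, negatives become 0
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))
# -> [0.  0.  0.  1.5 3. ]
```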

Sigmoid, like ReLU, outputs only positive values, but it squashes every input smoothly into the range (0, 1), so the transition from small to large outputs is smoother than ReLU's sharp corner at zero.

Image: https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6
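A corresponding NumPy sketch of the sigmoid, f(x) = 1 / (1 + e^(-x)):

```python
import numpy as np

def sigmoid(x):
    # squashes any real input smoothly into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-5.0, 0.0, 5.0])))
# -> approximately [0.0067, 0.5, 0.9933]
```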

The third type of activation function, the Hyperbolic Tangent (Tanh), is symmetric about the origin: it maps inputs into the range (-1, 1), with negative inputs giving negative outputs.

Image: Wolfram MathWorld
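NumPy provides tanh directly; a quick sketch showing its symmetry about the origin:

```python
import numpy as np

def tanh(x):
    # odd function: tanh(-x) = -tanh(x), output range (-1, 1)
    return np.tanh(x)

print(tanh(np.array([-2.0, 0.0, 2.0])))
# -> approximately [-0.964, 0.0, 0.964]
```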

Thus, after a node receives its inputs, the weighted computation is performed and the result is passed to the AF. When that result rises beyond a certain value, the AF activates the neuron, i.e. the node produces a significant output.


Hilaal Alam

| Dreamer, Explorer, Innovator | Startups | Quantum-Information, Computing, Complexity, Error Correction, Gravity, Biomimicry | Design-Flexures, PBDL |