The most complex, non-linear, and parallel computer we know of is the human brain. It consists of roughly 10 billion neurons and about 60 trillion connections between them. As a massively parallel network, the brain performs its operations on the order of milliseconds, while a silicon gate switches in nanoseconds, many orders of magnitude faster. If the brain's capabilities could be embedded into a computer, the result would be a supercomputer of almost unimaginable power.
The basic structural unit of the brain is the neuron. A neuron has receptors that take in synaptic inputs, processes them in its cell body, and produces an output that serves as an input to the next neuron. This transmission from input to cell body to output happens through the connections between neurons; some of these connections are strong, others weak. Intuitively, these connections can be thought of as "free" parameters. Artificial neural networks (ANNs) have become a popular phrase in recent times, but the concept is rather old: research in the field has been going on for a very long time.
ANNs are neurobiologically motivated models used to process data. In the equivalent electronic model, synaptic weights are analogous to the strengths of the connections, and these weights are modified cycle after cycle until the model reaches a decent accuracy. Compared to other data models, ANNs are highly useful and capable because of properties like non-linearity and input-output mapping: the network does not know the correct output the first time a pattern is fed to it.
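The electronic neuron described above can be sketched in a few lines. This is a minimal illustrative model, not a production implementation: the inputs, weights, and the choice of a sigmoid activation here are assumptions for the example, standing in for the synaptic strengths and the cell body's non-linear response.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: the weighted sum of its inputs
    (weights play the role of synaptic connection strengths) plus a
    bias, passed through a non-linear activation (logistic sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # squashes the sum into (0, 1)

# Two inputs arriving over connections of different strengths;
# the output could in turn feed the next neuron in the network.
output = neuron([1.0, 0.5], [0.8, -0.4], 0.1)
```

The stronger a connection's weight, the more that input contributes to the neuron's output, which is exactly the sense in which the weights are the network's "free" parameters.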
Instead, it learns "with a teacher", that is, from labeled examples rather than from explicitly programmed rules, which makes it remarkably different from many other algorithms. ANNs are also highly adaptable, since their "free" parameters are modified in response to the surrounding environment. In addition, ANNs are fault tolerant: a missing connection does not lead to catastrophic failure. ANNs have become popular for two main reasons. The first is that we are in the middle of a data revolution, one considered to have even more influence than the internet revolution.
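Learning "with a teacher" can be made concrete with the classic perceptron rule, used here purely as an illustrative sketch: each labeled example acts as the teacher, and its error signal nudges the free parameters (weights and bias) toward the right answer. The toy AND dataset below is a hypothetical example chosen for the demonstration.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Supervised learning sketch: adjust the free parameters
    (two weights and a bias) from labeled examples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred        # the teacher's correction signal
            w[0] += lr * err * x1      # strengthen or weaken connections
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy labeled dataset: logical AND, learned from examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the learned weights classify all four examples correctly, even though no rule for AND was ever programmed in; the behavior emerged from the teacher's corrections alone.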
The second reason is the increasing computational power of computers: ANNs are data-hungry algorithms that need immense computational resources. They are capable of analyzing the kinds of unstructured data available today, such as photos, videos, audio files, and other multimedia content, and they are very good at it. They have applications in computer vision, language translation, text summarization, and more. But we still cannot simulate billions of neurons with trillions of connections; we are only capable of mimicking the human brain at a small scale in order to incorporate its learning mechanism. That said, the field is improving at a fast rate.
Moore's Law, the observation that the number of transistors on a chip, and with it computational capability, doubles roughly every two years, suggests this trend will continue. While Artificial Intelligence applications do not fail to amaze us, there is still a long way to go.