Human eyes can process roughly 10-15 images per second;
they can handle higher rates too, but with lower precision.
Imagine you are sitting in a car that is moving along a road, looking out the window. The best you can do is process 10-15 images per second, so you inevitably miss a great deal of information and cannot register many of the things happening out there.
However, your brain does not let you feel that information gap; it constructs the best reality it can from the information available and from what it has learned over your lifetime.
That is partly why reality differs from one observer's point of view to another's. One person may say they saw a tree at the corner of the street, while another may claim to have seen a wooden street light.
If we trust evolution, the human brain already does the best it can; we need to (or, at least, can) do the same in Machine Learning (ML) algorithms, because the idea of ML itself comes from the way our brain works. When you see a bus on the road or a traffic sign, your brain does not process the input as a raw bitmap of a picture; it works with edges, curves, colors, and so on. It uses the features it considers best suited to understanding reality better and faster.
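To make the "edges instead of raw pixels" idea concrete, here is a minimal sketch of classic Sobel edge detection using only NumPy. The 3x3 kernel values are the standard Sobel filters; the tiny two-tone image is made up purely for illustration, and the naive convolution loop favors clarity over speed:

```python
import numpy as np

def sobel_edges(image):
    """Approximate edge strength with the 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                 # vertical gradient
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Naive "valid" 2D convolution: slide the 3x3 window over the image.
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    # Gradient magnitude: large where intensity changes sharply (an edge).
    return np.hypot(gx, gy)

# A tiny synthetic "image": dark left half, bright right half.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
edges = sobel_edges(img)
```

Running this, the edge map is zero over the flat regions and peaks only along the vertical boundary between the two halves: the raw 36-pixel bitmap collapses into a far smaller, more meaningful description, much like the features the brain is said to work with.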