Following are the milestone accounts of big data over the last five decades:

- 1978: It was found that demand for information provided by personal telecommunications, which is characteristically two-way, had increased drastically, while demand for information provided by mass media, which is characteristically one-way, had become stagnant.
- 1980: I.A. Tjomsland deduced that obsolete data is stored because the penalties for doing so are far less severe than those for discarding potentially useful data.
- 1990: The need for machines that can recognize or predict patterns in data without understanding the meaning of those patterns was realized.
- 1996: R.J.T. Morris and B.J. Truskowski observed that digital storage had become more cost-effective than paper for storing data.
- 1997: It was recognized that data sets had grown large enough to tax the capacities of main memory, local disk, and even remote disk, a challenge that emerged in the field of visualization.
- 2000: It was estimated that the world generated about 1.5 exabytes of unique information in a year. The same year, Francis X. Diebold stated, "Recently, much good science, whether physical, biological, or social, has been forced to confront—and has often benefited from—the 'Big Data' phenomenon."
- 2002: The world produced about 5 exabytes of new information, 92% of which was stored on magnetic media, mostly hard disks.
- 2012: IDC estimated that the world created 2,837 exabytes of data in this year alone.

Demand for Computer Systems Analysts with big data expertise increased by 89.9% over the preceding twelve months, and global data volume is projected to grow by about 40% per year.