
Anoop M

Anoop M is an Assistant Professor at Alpha Arts and Science College and a researcher in the Department of Computer Applications, Vels Institute of Science, Technology and Advanced Studies, Chennai, India.



THE SEXIEST JOB OF THE 21ST CENTURY

Yes, you read that right: from banking to healthcare, big data analytics is everywhere. Manual paper records, files, and floppy discs have become outdated, and the reason is the exponential growth of data. There has been a surge in demand for experts in this field, and enterprises and institutions have doubled their efforts to boost salaries and attract data science talent. Organizations began storing their data in relational database systems, but with the appetite for new technologies, inventions, and applications with quick response times, and with the arrival of the internet, even that is now insufficient. This continuous generation of massive data is Big Data.

“The world is one big data problem.” – Andrew McAfee, co-director of the MIT Initiative on the Digital Economy

Architecture:
Big data repositories have existed in many forms, often built by corporations with a special need.
In 1984: Teradata Corporation marketed the parallel processing DBC 1012 system
In 1990: Commercial vendors historically offered parallel database management systems for big data. For many years, WinterCorp published the largest database report.
In 1991: Hard disk drives were 2.5 GB; because storage capacity keeps growing in line with Kryder’s Law, the definition of big data continuously evolves as well.
In 1992: Teradata systems were the first to store and analyse 1 terabyte of data.
In 2000: Seisint Inc. (now LexisNexis Risk Solutions) developed a C++-based distributed platform for data processing and querying known as the HPCC Systems platform. This system automatically partitions, distributes, stores and delivers structured, semi-structured, and unstructured data across multiple commodity servers. Users can write data processing pipelines and queries in a declarative dataflow programming language called ECL. Data analysts working in ECL are not required to define data schemas upfront and can rather focus on the particular problem at hand, reshaping data in the best possible manner as they develop the solution.
In 2004: LexisNexis acquired Seisint Inc. and its high-speed parallel processing platform, and successfully used this platform to integrate the data systems of ChoicePoint Inc. when it acquired that company in 2008.
Also in 2004: Google published a paper on a process called MapReduce that uses a similar architecture. The MapReduce concept provides a parallel processing model, and an associated implementation was released to process huge amounts of data. With MapReduce, queries are split and distributed across parallel nodes and processed in parallel (the Map step); the results are then gathered and delivered (the Reduce step). The framework was very successful, so others wanted to replicate the algorithm, and an implementation of the MapReduce framework was adopted by an Apache open-source project named Hadoop. Apache Spark was developed in 2012 in response to limitations in the MapReduce paradigm, as it adds the ability to set up many operations (not just map followed by reduce). A minimal sketch of the map/reduce pattern follows this timeline.
In 2007: Teradata installed the first petabyte-class RDBMS-based system.
In 2011: The HPCC Systems platform was open-sourced under the Apache v2.0 license.
CERN and other physics experiments have collected big data sets for many decades, typically analyzed via high-throughput computing rather than the map-reduce architectures usually meant by the current “big data” movement.
In 2012: studies showed that a multiple-layer architecture is one option to address the issues that big data presents. A distributed parallel architecture distributes data across multiple servers; these parallel execution environments can dramatically improve data processing speeds.
In 2017: There are a few dozen petabyte-class Teradata relational databases installed, the largest of which exceeds 50 PB. Systems up until 2008 were 100% structured relational data; since then, Teradata has added unstructured data types including XML, JSON, and Avro.
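
To make the Map and Reduce steps described above concrete, here is a minimal, single-machine Python sketch of the pattern. It is illustrative only: a real Hadoop or Spark job distributes the map, shuffle, and reduce stages across many nodes, and the input lines and function names used here are hypothetical.

    from collections import defaultdict

    # Map step: turn each input line into (word, 1) pairs.
    def map_step(line):
        return [(word, 1) for word in line.split()]

    # Shuffle: group intermediate pairs by key (the framework does this in Hadoop).
    def shuffle(pairs):
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    # Reduce step: combine all values that share a key.
    def reduce_step(key, values):
        return key, sum(values)

    lines = ["big data is big", "data is everywhere"]   # toy input
    intermediate = [pair for line in lines for pair in map_step(line)]
    counts = dict(reduce_step(k, v) for k, v in shuffle(intermediate).items())
    print(counts)   # {'big': 2, 'data': 2, 'is': 2, 'everywhere': 1}

In a distributed run, the map calls execute in parallel on the nodes that hold the data, and only the grouped intermediate results travel over the network to the reducers, which is what lets the model scale to huge data sets.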

There are multiple tools for processing Big Data, such as Hadoop, Pig, Hive, Cassandra, Spark, and Kafka, depending on the organization's requirements.
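
As an illustration of one of these tools, the same word count can be written with Spark's Python API, PySpark, which chains several operations rather than a single map-then-reduce pass. This is only a sketch: it assumes a local PySpark installation, and the paths "input.txt" and "counts_out" are placeholders.

    from pyspark.sql import SparkSession

    # Start a local Spark session (assumes PySpark is installed).
    spark = SparkSession.builder.appName("WordCount").getOrCreate()

    lines = spark.sparkContext.textFile("input.txt")        # placeholder input path
    counts = (lines.flatMap(lambda line: line.split())      # split lines into words
                   .map(lambda word: (word, 1))             # emit (word, 1) pairs
                   .reduceByKey(lambda a, b: a + b))        # sum counts per word
    counts.saveAsTextFile("counts_out")                     # placeholder output path

    spark.stop()

Because Spark can keep intermediate results in memory and chain many such operations, it addresses some of the limitations of the original map-then-reduce model noted in the timeline above.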

Big Data Applications
These are some of the domains that Big Data applications have revolutionized:
Entertainment
Insurance
Driver-less Cars
Education
Automobile
Government

Conclusion

The availability of Big Data, low-cost commodity hardware, and new information-management and analytics software has created a unique moment in the history of data analysis. The convergence of these trends means that, for the first time in history, we have the capabilities required to analyze massive data sets quickly and cost-effectively. These capabilities are neither theoretical nor trivial. They represent a genuine leap forward and a clear opportunity to realize enormous gains in efficiency, productivity, revenue, and profitability.

As the career paths available in big data continue to grow, so does the shortage of big data professionals needed to fill those positions. Characteristics such as communication, knowledge of big data concepts, and agility are just as important as the technical skills.

Big data professionals are the link between raw data and usable information. They should have the skills to manipulate data at the lowest levels, and they must know how to interpret its trends, patterns, and outliers in many different forms. The languages and methods used in these areas are growing in both strength and number, a pattern unlikely to change in the near future, especially as more languages and tools enter and gain popularity in the big data arena.


