Hadoop Archives - Bitwise
Technology Consulting and Data Management Services
https://www.bitwiseglobal.com/en-us/categories/data-analytics-and-ai/hadoop/

Join us at Apache Kafka Meetup in Columbus
Tue, 15 Oct 2019
https://www.bitwiseglobal.com/en-us/event/bitwise-at-apache-kafka-meetup-columbus/

Join us tonight for the Apache Kafka Meetup featuring speakers from Nationwide and Bitwise. Our own Shahab Kamal, EVP of Solution Engineering and Customer Success, is presenting a retail case study on “Point of Sale Order Processing” focused on a Kafka implementation at a leading Ohio-based retailer that gathers store order information in real time.

Together with our partner Confluent and friends from MODUG (Mid-Ohio Data User Group Meetup Group), we are delighted to invite you to this event. Pizza and drinks start at 5:15 PM, and speakers start at 6:15 PM.
Please RSVP in advance: https://www.meetup.com/Columbus-Kafka/events/264738652/

BIG DATA DAY LA 2017
Thu, 07 Sep 2017
https://www.bitwiseglobal.com/en-us/event/big-data-day-la-2017/

What you need to know

Big Data Day LA is the largest Big Data conference of its kind in Southern California, and it is completely free. Spearheaded by Subash D’Souza and organized and supported by a community of volunteers, sponsors and speakers, including Bitwise, Big Data Day LA features the most vibrant gathering of data and technology enthusiasts in Los Angeles.

2017 session tracks:

  • Big Data
  • Data Science
  • Hadoop/Spark/Kafka
  • NoSQL
  • Use Case Driven IoT (New)
  • Entertainment (New)
  • AI/Machine Learning (New)

See Who Will Be There

  • Data Scientists
  • Software Developers
  • System Architects
  • Head Researchers
  • Business Analysts
  • Data Engineers
  • Technical Leads
  • CEOs, CTOs, CIOs, etc.
  • IT Managers
  • Business Strategists
  • Data Analysts
  • Researchers
  • Head Data Scientists
  • Entrepreneurs
  • Consultants

Registration starts at 7 AM on Saturday, August 5, 2017.

Empower your Data and Ensure Continuity of Operations with Hadoop Administration
Sat, 18 Mar 2017
https://www.bitwiseglobal.com/en-us/blog/empower-your-data-and-ensure-continuity-of-operations-with-hadoop-administration/

Planning

A Hadoop administration team’s responsibilities start when a company kicks off its Hadoop POC. An experienced team like Bitwise comes up with a roadmap right at the beginning to help scale from POC to production with minimal wastage of the initial investment, along with effective guidance on investment decisions, be it in-house infrastructure, a POC environment or PaaS options.

For any organization, an understanding of the estimated investment is essential in the initial phases. Capacity planning and estimation is the next step after successful completion of the POC. Choosing the right combination of storage and computing hardware, interconnect network, operating system, storage configuration and disk performance, network setup, etc. plays an important role in overall cluster performance. Similarly, special considerations are required for the master and slave node hardware configurations. The right balance of needs vs. greed can be achieved only after years of implementation experience.
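
To make the capacity discussion concrete, here is a minimal back-of-the-envelope sketch in Java. All of the input figures (raw data volume, growth rate, replication factor, temporary-space overhead) are illustrative assumptions, not recommendations, and should be replaced with numbers from your own POC.

    // Rough HDFS capacity estimate; every input number below is an assumption for illustration only
    public class CapacityEstimate {
        public static void main(String[] args) {
            double rawDataTb = 100.0;      // assumed raw data landing in year one (TB)
            double yearlyGrowth = 0.30;    // assumed 30% growth per year
            int years = 3;                 // planning horizon
            int replicationFactor = 3;     // HDFS default replication
            double tempOverhead = 0.25;    // scratch space for shuffle/intermediate data

            double projectedRaw = rawDataTb * Math.pow(1 + yearlyGrowth, years);
            double required = projectedRaw * replicationFactor * (1 + tempOverhead);

            System.out.printf("Projected raw data after %d years: %.1f TB%n", years, projectedRaw);
            System.out.printf("Required cluster storage (replicated + temp): %.1f TB%n", required);
        }
    }

Dividing the required figure by the usable disk per data node gives a first-cut node count, which is then refined against compute and network requirements.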

Deployment

Once you have the hardware defined and in place, the next stage is planning and deployment of the Hadoop cluster. This involves configuring the OS with the recommended changes to suit the Hadoop stack, configuring SSH and disks, choosing and installing a Hadoop distribution (Cloudera, Hortonworks, MapR or Apache Hadoop) as per the requirements, and meeting the configuration requirements of the Hadoop daemons for optimized performance. All of these setups vary based on the size of your cluster, so it’s imperative that you configure and deploy only after covering all the aspects and prerequisites.
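
In practice these settings live in core-site.xml, hdfs-site.xml and yarn-site.xml on the cluster nodes. Purely for illustration, the short sketch below expresses a few commonly set properties through the Hadoop Configuration API; the host name, directories and values are assumptions, not recommendations.

    import org.apache.hadoop.conf.Configuration;

    public class ClusterConfigSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Assumed NameNode host; normally defined in core-site.xml
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
            // Typical HDFS settings, normally defined in hdfs-site.xml
            conf.set("dfs.replication", "3");
            conf.set("dfs.blocksize", "134217728");                      // 128 MB block size
            conf.set("dfs.namenode.name.dir", "/data/1/dfs/nn");         // assumed local paths
            conf.set("dfs.datanode.data.dir", "/data/1/dfs/dn,/data/2/dfs/dn");
            System.out.println("Effective fs.defaultFS: " + conf.get("fs.defaultFS"));
        }
    }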

Another important aspect is designing the cluster from a development perspective, covering the various environments (Dev, QA, Prod, etc.), and from a usage perspective, i.e. access security and data security.

Managing a Hadoop Cluster

After implementation of the Hadoop cluster, the Hadoop admin team needs to maintain the health and availability of the cluster round the clock. Some of the common tasks include management of the NameNode, DataNodes, HDFS and MapReduce jobs, which form the core of the Hadoop ecosystem. Impact to any of these components can negatively affect cluster performance. For example, unavailability of a DataNode, say due to a network issue, will cause HDFS to re-replicate the under-replicated blocks, which brings a lot of overhead and can slow the cluster down, or even make it inaccessible in the case of multiple DataNode disconnections.

The NameNode is another important component in a Hadoop cluster and acts as a single point of failure. Consequently, it is important that backups of the fsimage and edit logs are taken periodically using the secondary NameNode so as to recover from a NameNode failure. The other administrative tasks include:

  • Managing HDFS quotas at the application or user level (see the sketch after this list)
  • Configuring the scheduler (FIFO, Fair or Capacity) and resource allocation to different services like YARN, Hive, HBase, HDFS, etc.
  • Upgrading and applying patches
  • Configuring logging for effective debugging in case of failures or performance issues
  • Commissioning and decommissioning nodes
  • User management
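
As one illustration of these routine tasks, the sketch below sets a name quota and a space quota on a user directory through the HdfsAdmin client API (the same operations are more commonly run with the hdfs dfsadmin command line). The cluster URI, directory and limits are assumptions for the example.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.client.HdfsAdmin;

    public class QuotaSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Assumed cluster URI for the example
            HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode.example.com:8020"), conf);
            Path userDir = new Path("/user/analytics");   // hypothetical project directory

            admin.setQuota(userDir, 1_000_000L);           // max number of files and directories
            // Space quotas count replicated bytes, so 10 TB here is raw disk, not logical data
            admin.setSpaceQuota(userDir, 10L * 1024 * 1024 * 1024 * 1024);
        }
    }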

Hardening your Hadoop Cluster

Productionization of a Hadoop cluster mandates implementation of hardening measures. Hardening of Hadoop typically covers:

  1. Configuring Security: This is one of the most crucial and required configurations to make your cluster enterprise-ready and can be classified at the user and data levels.
    1. User Level: User security addresses the authentication (who am I) and authorization (what can I do) parts of the security implementation, along with configuring access control over resources. Kerberos takes care of the authentication protocol between client/server applications and is commonly synced with LDAP for easier management. Different distributions recommend different authorization mechanisms. For example, Cloudera has good integration with Sentry, which provides fine-grained, role-based security for Hive and Impala. Further integration with HDFS ACLs percolates the same access to other services like Pig, HBase, etc.
    2. Data Level: HDFS transparent encryption provides another layer of security for data at rest. This is a mandatory requirement for some organizations in order to comply with various government and financial regulations, and having transparent encryption built into HDFS makes that compliance easier. (A brief sketch follows this list.)
  2. High Availability: The NameNode, as mentioned earlier, is a single point of failure, and its unavailability makes the whole cluster unavailable, which is not acceptable for a production cluster. NameNode HA helps to mitigate this risk by having a standby node that automatically takes over from the primary NameNode in case of failure.
  3. NameNode Scaling: This is mostly applicable in the case of a large cluster. As the NameNode stores the namespace in memory, with a large volume of files the NameNode memory can become a bottleneck. HDFS federation helps resolve this by facilitating multiple NameNodes, with each NameNode managing a part of the HDFS namespace.
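
As a small illustration of the data-level piece referenced above, the sketch below creates an HDFS encryption zone with the HdfsAdmin API. It assumes that Hadoop KMS is configured, that an encryption key named "warehouseKey" has already been created (for example with hadoop key create), and that the target directory exists and is empty; the path and key name are hypothetical, and the exact method signature varies slightly across Hadoop versions.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.client.HdfsAdmin;

    public class EncryptionZoneSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            HdfsAdmin admin = new HdfsAdmin(new URI("hdfs://namenode.example.com:8020"), conf);

            // Directory to protect; must already exist and be empty before it becomes an encryption zone
            Path secureDir = new Path("/data/secure");
            // "warehouseKey" is a hypothetical key that must already exist in the KMS
            admin.createEncryptionZone(secureDir, "warehouseKey");
        }
    }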

Monitoring

Proactive monitoring is essential to maintain the health and availability of the cluster. General monitoring tasks include monitoring cluster nodes and networks for CPU, memory and network bottlenecks, and more. The Hadoop administrator should be competent to track the health of the system, monitor workloads and work with the development team to implement new functionality. Failure to do so can have a severe impact on the health of the system and the quality of data, and will ultimately affect the business users’ ease of access and decision-making capability.
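
Monitoring is normally handled by the distribution's tooling (Cloudera Manager, Ambari) together with alerting systems, but even a small check against the FileSystem API can catch the basics. The sketch below is a minimal example that flags the cluster when HDFS utilization crosses an assumed 80% threshold; the threshold and the idea of running it as a scheduled check are assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FsStatus;

    public class HdfsUsageCheck {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FsStatus status = fs.getStatus();

            double usedPct = 100.0 * status.getUsed() / status.getCapacity();
            System.out.printf("HDFS capacity: %d bytes, used: %.1f%%%n", status.getCapacity(), usedPct);

            if (usedPct > 80.0) {   // assumed alert threshold
                System.err.println("WARNING: HDFS utilization above 80%, plan cleanup or expansion");
            }
        }
    }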

Performance Optimization and Tuning

Performance tuning and identifying bottlenecks is one of the most vital tasks for a Hadoop administrator. Considering the distributed nature of the system and the multitude of configuration files and parameters, it may take hours or even days to identify and resolve a bottleneck if you do not start in the right direction. Often the root cause lies at a different end of the system than what the application points to. This can be counterbalanced with the help of an expert who has a detailed understanding of the Hadoop ecosystem as well as the application. Moreover, optimized resource (CPU, memory) allocation is essential for effective utilization of the cluster and its distribution between different Hadoop components like HDFS, YARN, HBase, etc. To overcome such challenges, it’s important to have statistics in place in the form of benchmarks, to tune the configuration parameters for best performance, and to have strategies and tools in place for rapid resolution.
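
Much of this tuning eventually lands in a handful of well-known parameters. Standard benchmarks such as TestDFSIO and TeraSort are typically used to establish the baseline that these parameters are tuned against. The sketch below sets a few common MapReduce memory and sort knobs through the Configuration API purely to show where such changes go; the values are assumptions and should come out of benchmarking rather than being copied as-is.

    import org.apache.hadoop.conf.Configuration;

    public class JobTuningSketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Container memory for map and reduce tasks (MB); assumed starting points
            conf.set("mapreduce.map.memory.mb", "2048");
            conf.set("mapreduce.reduce.memory.mb", "4096");
            // JVM heap should stay comfortably below the container size
            conf.set("mapreduce.map.java.opts", "-Xmx1638m");
            conf.set("mapreduce.reduce.java.opts", "-Xmx3276m");
            // In-memory sort buffer used during the shuffle phase
            conf.set("mapreduce.task.io.sort.mb", "512");
            System.out.println("map memory = " + conf.get("mapreduce.map.memory.mb") + " MB");
        }
    }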

This blog is part of the Hadoop administration blog series and aims to provide a high-level overview of Hadoop administration, the associated roles and responsibilities, and the challenges a Hadoop admin faces. In future editions, we will delve further into the above-mentioned points and the various aspects of Hadoop infrastructure management responsibilities, and further understand how each phase plays an important role in administering an enterprise Hadoop cluster. For more on how we can help, visit our website.

Unlock the Best Value Out of Your Big Data Hadoop
Sat, 21 May 2016
https://www.bitwiseglobal.com/en-us/blog/unlock-the-best-value-out-of-your-big-data-hadoop/

Reduce Data Latency and Refine Processes with Hadoop Data Ingestion
Wed, 18 May 2016
https://www.bitwiseglobal.com/en-us/blog/reduce-data-latency-and-refine-processes-with-hadoop-data-ingestion/

Hadoop data ingestion comes with several challenges:

  1. There can be many different source types: OLTP systems generating events, batch systems generating files, RDBMS systems, web-based APIs, and more
  2. Data may be available in different formats, such as ASCII text, EBCDIC and COMP fields from mainframes, JSON and Avro
  3. Data often needs to be transformed before persisting on Hadoop. Some of the common transformations are data masking, converting data to a standard format, applying data quality rules, encryption, etc.
  4. As more and more data is ingested into Hadoop, metadata plays an important role. There is no point in having large volumes of data without knowing what is available. Discovery of data and other key aspects like format, schema, owner, refresh rate, source and security policy should be kept simple and easy. Features like custom tagging, a data set registry and a searchable repository can make life much easier. The need of the hour is a data set registry and data governance tool that can communicate with the data ingestion tool to pass and use this metadata.

At present, there are many tools available for ingesting data into Hadoop. Some tools are good for specific use cases: for example, Apache Sqoop is a great tool to export/import data from RDBMS systems, Apache Falcon is a good option for data set registry, and Apache Flume is preferred for ingesting real-time event streams, with many commercial alternatives as well. A few of the tools, like Spring XD (now Spring Cloud Data Flow) and Gobblin, are general purpose. The selection of options can be overwhelming, and you certainly need the right tool for your job.
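
As an example of one of these specialized tools, the sketch below drives a Sqoop import from Java, assuming Sqoop 1.x, which exposes a runTool entry point (the same import is more commonly launched from the sqoop command line). The JDBC URL, credentials location, table and target directory are hypothetical placeholders.

    import org.apache.sqoop.Sqoop;

    public class SqoopImportSketch {
        public static void main(String[] args) {
            // All connection details and paths below are hypothetical placeholders
            String[] importArgs = {
                "import",
                "--connect", "jdbc:mysql://dbhost.example.com:3306/sales",
                "--username", "etl_user",
                "--password-file", "/user/etl/.dbpassword",
                "--table", "orders",
                "--target-dir", "/data/raw/orders",
                "--num-mappers", "4"
            };
            int exitCode = Sqoop.runTool(importArgs);
            System.out.println("Sqoop import finished with exit code " + exitCode);
        }
    }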

But none of these tools is capable of solving all the challenges, so enterprises have to use multiple tools for data ingestion. Over time they also create custom tools or wrappers on top of existing tools to meet their needs. Furthermore, all these tools use text-based configuration files (mostly XML), which are not very convenient or user-friendly to work with. All this results in a lot of complexity and overhead to maintain data ingestion applications.

Looking at these gaps, and to enable our clients to streamline Hadoop adoption, Bitwise has developed a GUI-based tool for data ingestion and transformation on Hadoop. With a convenient drag-and-drop GUI, it enables developers to quickly develop end-to-end data pipelines from a single tool. Apart from multiple source and target options, it also has many pre-built transformations that range from the usual data warehousing operations to machine learning and sentiment analysis. The tool is loaded with the following data ingestion features:

  • Pluggable Sources and Targets – as new source and target systems emerge, it’s convenient to integrate them with the ingestion framework
  • Scalability – it scales to ingest huge amounts of data at high velocity
  • Masking and Transforming On the Fly – transformations like masking and encryption can be applied on the fly as data moves through the pipeline (a generic sketch of the idea follows this list)
  • Data Quality – data quality checkpoints can be enforced before data is published
  • Data Lineage and Provenance – detailed data lineage and provenance can be tracked
  • Searchable Metadata – data sets and their metadata are searchable, with the option to apply custom tags
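
The Bitwise tool itself is GUI-driven, so no code is needed to use it. Purely to illustrate the “masking on the fly” idea referenced in the list above, the generic sketch below reads delimited records, masks an assumed sensitive field and writes the result directly to HDFS, so the unmasked value never lands on the cluster. The file paths and the field position are hypothetical.

    import java.io.*;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MaskOnIngest {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path target = new Path("/data/landing/customers_masked.csv");   // hypothetical HDFS target

            try (BufferedReader in = new BufferedReader(new FileReader("customers.csv"));   // assumed local extract
                 BufferedWriter out = new BufferedWriter(new OutputStreamWriter(fs.create(target)))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] fields = line.split(",", -1);
                    if (fields.length > 2) {
                        // Assume the third column holds a sensitive identifier; keep only the last 4 characters
                        String v = fields[2];
                        fields[2] = v.length() > 4
                                ? v.substring(0, v.length() - 4).replaceAll(".", "*") + v.substring(v.length() - 4)
                                : "****";   // short values are fully masked
                    }
                    out.write(String.join(",", fields));
                    out.newLine();
                }
            }
        }
    }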

Bitwise’s Hadoop data ingestion and transformation tool can save enormous effort in developing and maintaining data pipelines. Stay tuned for subsequent features that explore the other phases of the data value chain.

Understanding the Hadoop Adoption Roadmap
Wed, 18 May 2016
https://www.bitwiseglobal.com/en-us/blog/understanding-the-hadoop-adoption-roadmap/

Stage 1: Understanding and Identifying Business Cases

As with every technology switch, the first stage is understanding the new technology and tool stack, as well as communicating the benefits that the end user and the organization will see. At this stage, looking at your current system with a close eye helps to identify the business cases that are redundant or can be merged together, so that you bring over only the things that matter. This also helps build a priority list of projects that reflect definite business use cases. You need to define the key indicators of success here, and the performance and success criteria of the legacy system you are currently running need to be revamped completely as well. The business stakeholders are key here in refining the business SLAs.

Stage 2: Warming Up to the Technology Stack

Next up, bring in the technology stack for people to familiarize themselves with. Build a playground, or a dirty development environment, where developers and analysts can experiment and innovate without the fear of bringing down the business. This will allow the data modelers and DBAs to build the most optimal warehouse and enable the ETL developers to learn the pitfalls, ensuring they establish best practices before heading into full-fledged project work.

Stage 3: Converting the Old to New

A key element of Hadoop adoption is running an efficient conversion of the old to the new. Identify dark or missing data elements early on, build coding standards and optimization techniques, automate as much as possible to reduce conversion errors, and validate the correctness of the conversion against the legacy system. Bitwise recommends a Proof -> Pilot -> Production path to conversion, where we nibble away at the legacy applications and build a repeatable framework before biting off a big chunk of the business requirements.
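
As a minimal illustration of validating converted output against legacy, the sketch below compares record counts and a simple column checksum between a legacy extract and the corresponding file produced on the new platform. The file names and the numeric column position are assumptions; real conversion validation compares much more (schemas, nulls, edge-case records), but the shape of the check is the same.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.math.BigDecimal;

    public class ConversionCheck {
        // Count records and sum an assumed numeric column (index 3) for one delimited file
        static BigDecimal[] profile(String file) throws Exception {
            long count = 0;
            BigDecimal sum = BigDecimal.ZERO;
            try (BufferedReader r = new BufferedReader(new FileReader(file))) {
                String line;
                while ((line = r.readLine()) != null) {
                    String[] f = line.split(",", -1);
                    count++;
                    sum = sum.add(new BigDecimal(f[3].trim()));
                }
            }
            return new BigDecimal[] { BigDecimal.valueOf(count), sum };
        }

        public static void main(String[] args) throws Exception {
            BigDecimal[] legacy = profile("legacy_output.csv");   // hypothetical legacy extract
            BigDecimal[] hadoop = profile("hadoop_output.csv");   // hypothetical converted output
            System.out.println("Counts match:    " + legacy[0].equals(hadoop[0]));
            System.out.println("Checksums match: " + (legacy[1].compareTo(hadoop[1]) == 0));
        }
    }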

Stage 4: Maintenance and Support

Once in production, Hadoop needs what every production system in the world needs – maintenance and support. Things break down, get upgraded or get deprecated. What is needed is a dedicated team to keep track of the Hadoop ecosystem. Besides regular application production support, a support team structure is required to ensure availability and reliability of the environment.

Backed by extensive experience working with Fortune 500 companies, we at Bitwise have made Hadoop adoption a walk in the park for our clients and have enabled effective usage of Hadoop to meet their ELT and analytics needs. Have a look at our Excellerators and get to know how organizations worldwide are unlocking the real value of Hadoop with our proven methodology.

Crossing Over Big Data’s Trough of Disillusionment
Mon, 24 Aug 2015
https://www.bitwiseglobal.com/en-us/blog/crossing-over-big-datas-trough-of-disillusionment/

Defining this Trough of Disillusionment

Enterprises are feeling pressure to be doing “something” with Big Data. A few organizations have figured it out and are creating breakthrough insights. However, there is a much larger set that has perhaps reached the stage of installing, say, 10 Hadoop nodes and is wondering, “now what?”

Per Gartner, this is the phase where excitement over the latest technology gives way to confusion or ambiguity – referred to as the “Trough of Disillusionment.”

Data Democracy – The Foundation for Big Data

Use cases involving analytics or data mining with an integrated social media component are being thrown at enterprise executives. These use cases appear “cool” and compelling upfront, but a thorough analysis reveals that they miss necessary considerations such as data/information security, privacy regulations and data lineage from an implementation perspective, and they often fail to build a compelling ROI case.

One needs to realize that for any “cool” use case to generate eventual ROI, it is very important to focus on Big Data integration, i.e. access, preparation and availability of the data (see “firms must not overlook the importance of big data integration”). Doing so will essentially empower enterprises to implement ANY use case that makes the most sense for their particular business.

“Data Democracy” should be the focus. This focus also helps address the technology challenge of handling ever-growing enterprise data efficiently while leveraging the scalable and cost-effective nature of these technologies – and delivers an instant ROI!

Concept to Realization – Real Issues

Once this is understood, the next step is to figure out a way to introduce these new technologies to achieve the above goals in the least disruptive and most cost-effective way. In fact, enterprises are looking at ETL as a standard use case for Big Data technologies like Hadoop. Using Hadoop as a data integration or ETL platform requires developing data integration applications with programming frameworks such as MapReduce. This presents a new challenge: combining Java skillsets with ETL design and implementation expertise. Most ETL designers do not have Java skills, as they are used to working in a tool environment, and most Java developers do not have experience handling large volumes of data, resulting in massive overheads of training, maintaining and “firefighting” coding issues. This can cause massive delays and soak up valuable resources while solving only half the problem.

Moreover, even after investments in hardware and skillsets like MapReduce, when the underlying technology platforms inevitably advance, development teams will be forced to rewrite their applications to leverage those advancements.

Concept to Realization – a Possibility?

Yes, it is. One of the key criteria for any data integration development environment on Hadoop is code abstraction: it should allow users to specify the data integration logic as a series of transformations chained together in a directed acyclic graph that models how users think about data movement, making it significantly simpler to comprehend and change than a series of MapReduce scripts.

Another important feature to look out for is technology insulation – provisions in the design to replace the run-time environment, such as Hadoop, with whatever technologies are prevalent in the future.
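
To make the abstraction idea concrete, here is a deliberately tiny sketch: transformations are declared as a chain (the simplest form of a DAG) against a small pipeline interface, and the engine that executes them is pluggable, so swapping Hadoop for a future runtime means writing a new runner rather than rewriting the logic. This illustrates the principle only; it is not any particular product's API.

    import java.util.*;
    import java.util.function.Function;

    // One transformation step: a record in, a (possibly null = filtered) record out
    interface Transform extends Function<String, String> {}

    // The pluggable execution environment; a Hadoop- or Spark-backed runner would implement the same contract
    interface PipelineRunner {
        List<String> run(List<String> input, List<Transform> steps);
    }

    // Local in-memory runner, used here purely for illustration
    class LocalRunner implements PipelineRunner {
        public List<String> run(List<String> input, List<Transform> steps) {
            List<String> out = new ArrayList<>();
            for (String record : input) {
                String current = record;
                for (Transform step : steps) {
                    current = step.apply(current);
                    if (current == null) break;   // record filtered out
                }
                if (current != null) out.add(current);
            }
            return out;
        }
    }

    public class PipelineSketch {
        public static void main(String[] args) {
            List<Transform> steps = Arrays.asList(
                    rec -> rec.trim(),                    // cleanse
                    rec -> rec.isEmpty() ? null : rec,    // filter empty records
                    rec -> rec.toUpperCase()              // standardize
            );
            PipelineRunner runner = new LocalRunner();    // swap in a cluster-backed runner later
            System.out.println(runner.run(Arrays.asList(" a ", "", "b"), steps));
        }
    }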

Conclusion

The “3 V’s” in Big Data implementations are well defined – Volume, Variety, and Velocity – and relatively quantifiable. We should begin to define a 4th ‘V’, for “Value.” The fourth is equally important, or more important in some cases, but less tangible and less quantifiable.

Having said that, jumping off the diving board into a pool of Big Data doesn’t have to be a lonely job. The recommended approach is to seek help from Big Data experts like Bitwise to assess whether you really need Big Data. If yes, what business areas will you target for the first use case, and which DI platform will you use? And lastly, how will you calculate the ROI of the Big Data initiative?
