Monday, 8 February 2016

Big Data Training in Chennai

A massive amount of data is continuously generated in any process, be it weather analysis, engineering systems, marketing trends or even user behaviour. This data arrives at high speed and requires detailed analysis and curation to reveal patterns and trends, so that real value or optimisation can be delivered to a process. Such large volumes of structured and unstructured data are collectively called Big Data.

At Greens Technologys we help you gain hands-on experience with the enterprise-class BizTalk Server and show why it is a highly effective platform for making sense of data. Using Microsoft BizTalk, one can easily integrate different applications and diverse datasets to deliver coherent real-world solutions such as business reporting, intelligence gathering and payroll processing, all demonstrated in real time.

    Introduction to Distributed systems



  • High Availability
  • Scaling
  • Advantages

    Introduction to Big Data



  • Big Data opportunities
  • Big Data Challenges

    Introduction to Hadoop



  • Hadoop Distributed File System
  • Hadoop Architecture
  • Map Reduce & HDFS

    Hadoop Eco Systems



  • Introduction to Pig
  • Introduction to Hive
  • Introduction to HBase
  • Other eco system Map

    Hadoop Administration



  • Hadoop Installation & Configuration
  • Setting up Standalone system
  • Setting up pseudo distributed cluster
  • Setting up distributed cluster

    The Hadoop Distributed File System (HDFS)



  • HDFS Design & Concepts
  • Blocks, Name nodes and Data nodes
  • Hadoop DFS: The Command-Line Interface
  • Basic File System Operations
  • Reading Data from a Hadoop URL
  • Reading Data Using the File System API
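
The basic file-system operations listed above are usually first exercised from the HDFS command line. A few common `hadoop fs` invocations, with placeholder paths, look like this:

```
hadoop fs -mkdir /user/demo                 # create a directory in HDFS
hadoop fs -put localfile.txt /user/demo/    # copy a local file into HDFS
hadoop fs -ls /user/demo                    # list directory contents
hadoop fs -cat /user/demo/localfile.txt     # print a file to stdout
hadoop fs -get /user/demo/localfile.txt ./  # copy a file back to the local filesystem
```

The same operations are exposed programmatically through the FileSystem Java API covered in this module.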

    Map Reduce



  • Map and Reduce Basics.
  • How Map Reduce Works
  • Anatomy of a Map Reduce Job Run
  • Job Submission, Job Initialization, Task Assignment, Task Execution
  • Progress and Status Updates
  • Job Completion, Failures
  • Shuffling and Sorting.
  • Combiner
  • Hadoop Streaming
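
The map → shuffle/sort → reduce pipeline described above can be sketched in plain Python. This is an illustration of the programming model only, not the Hadoop API; the classic word-count example is used:

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # map phase: emit a (word, 1) pair for every word in the input line
    for word in line.split():
        yield (word.lower(), 1)

def reducer(word, counts):
    # reduce phase: sum all counts that arrived for one key
    return (word, sum(counts))

def run_job(lines):
    # map every input record
    pairs = [kv for line in lines for kv in mapper(line)]
    # shuffle & sort: bring identical keys together
    pairs.sort(key=itemgetter(0))
    # reduce each group of identical keys
    return dict(
        reducer(word, (count for _, count in group))
        for word, group in groupby(pairs, key=itemgetter(0))
    )

result = run_job(["the quick brown fox", "the lazy dog"])
```

A combiner would apply the same `reducer` logic on each mapper's local output before the shuffle, cutting the data moved across the network.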

    Map/Reduce Programming – Java



  • Hands on “Word Count” in Map/Reduce in Eclipse
  • Sorting files using Hadoop Configuration API discussion
  • Emulating “grep” for searching inside a file in Hadoop
  • Chain Mapping API discussion
  • Job Dependency API discussion and Hands on
  • Input Format API discussion and hands on
  • Input Split API discussion and hands on
  • Custom Data type creation in Hadoop

    Hadoop Training in Chennai

    About The Course
    Our Hadoop training in Chennai provides a professional course in Hadoop technology. Growing data volumes cannot be handled by conventional technologies and need a really organised and automated technology. Big Data and Hadoop are two promising technologies that can analyse, curate and manage that data. The course on Hadoop and Big Data provides the enhanced knowledge and technical skills needed to become an efficient Hadoop developer. Along with the learning, the core concepts of the subject are implemented virtually on live industry-based applications. With simple programming modules, large clusters of data can be managed in simpler ways for ease of access and management. Greens Technologys has the best expertise to handle Hadoop training in Chennai.
    Course Objectives
    The course aims to convey key concepts about:
    • HDFS and the MapReduce framework
    • Architecture of Hadoop 2.x
    • Writing complex MapReduce programs and setting up a Hadoop cluster
    • Performing data analytics using Pig, Hive and YARN
    • Sqoop and Flume for data-loading techniques
    • Integrating HBase with MapReduce
    • Implementing indexing and advanced usage
    • Scheduling jobs with Oozie
    • Implementing best practices for Hadoop development
    • Working on real-life projects based on Big Data analytics
    Who should go for this course?
    Greens Technologys is the only Hadoop training institute in Chennai with great expertise in Hadoop. As one of the fastest growing technologies in the business world, Hadoop is an essential technology for standing tall amid rapidly growing competition in the market. Many IT industry aspirants are looking for an online course from a Hadoop training institute in Chennai, and many business experts predict that 2015 will be the emerging year for Hadoop. The following industry professionals should be well versed in this course:
    • Analytics Professionals
    • BI/ETL/DW Professionals
    • Project Managers of IT Firms
    • Software Testing Professionals
    • Mainframe Professionals
    • Software Developers
    • Aspirants of Big Data Services
    • System Administrators
    • Graduates
    • Data Warehousing Professionals
    • Business Intelligence Professionals
    Why learn Big Data and Hadoop?
    In all aspects, Hadoop is an essential element for companies handling large amounts of information. Hadoop business experts predict that 2015 will be the year when both companies and professionals start to bank on it for organisational scope and career opportunities. With the data explosion caused by immense digitalisation, Big Data and Hadoop are promising technologies that allow data to be managed in smarter ways.
    What are the pre-requisites for this Course?
    To learn Hadoop at any of the Hadoop training institutes in Chennai, sound knowledge of core Java concepts is needed, as it is a must for understanding the foundations of Hadoop. That said, we cover the essential Java concepts needed to get into the actual concepts of Hadoop, since a Java foundation is very important for learning Hadoop technologies effectively. A good grasp of Pig programming makes running Hadoop easier, and Hive is useful for data warehousing. Basic knowledge of Unix commands is also needed for day-to-day use of the software.
    How will I execute the Practicals?
    The practical experience here at Greens Technologys is worthwhile and different from that of other Hadoop training institutes in Chennai. Hands-on knowledge of Hadoop is gained through our Hadoop virtual machine installed on your computer. Since the software has minimal system requirements, learning is easy with the virtual classroom setup. You can run the practical sessions of Hadoop either on your own system or through our remote training sessions.



    Hadoop Big Data Training in Chennai


    LEARNING OBJECTIVE OF THIS COURSE

    • Introduction to Big Data and Hadoop
    • Understanding Hadoop architecture and Hadoop Eco-system
    • Working with Hadoop Distributed File System (HDFS)
    • Understanding Map Reduce concepts
    • Understanding Hadoop API
    • Writing Map Reduce programs in Java
    • Understanding Hadoop programming best practices
    • Building Hadoop application based on real-world example - architecture, design and coding
    • Learning Pig and Pig Latin
    • Learning Hive, distributed data warehousing system
    • Learning HBase, distributed column based database for Hadoop
    • Building applications based on real-world examples
    WHO CAN DO THE HADOOP TRAINING?
    Anybody with basic knowledge of Java can take up the Hadoop training.
    ELIGIBILITY:
    For Hadoop Development - Basic knowledge of Java is Preferred
    For Hadoop Admin - Knowledge of Linux/Unix is Preferred.
    IS THERE ANY CERTIFICATION AVAILABLE IN HADOOP?
    For Hadoop, Cloudera and Hortonworks certifications are available at various levels.
    WHO WILL BE THE TRAINERS?


    All of the Hadoop trainers are working industry professionals with 5+ years of working knowledge of Hadoop and 10+ years of overall experience.

    Hadoop BigData Training Institutes in Chennai.





    Greens Technologys welcomes you to the best Hadoop Big Data training institute in Chennai

    Greens Technologys is a leading Hadoop Big Data training institute in Chennai offering a diverse range of training packages that cover a wide range of Hadoop Big Data courses, so you can be sure of our 100% commitment to the courses. We have highly experienced Hadoop Big Data trainers who work side by side on some of the biggest IT corporate projects all over the world. Our experienced trainers regularly review the curriculum of all Hadoop Big Data courses and keep it up to date with industry trends. We also give students full flexibility over study timings: students can choose the timings for their course by mutual consent with the trainers.

    Hadoop Big Data Job Prospects in India and Abroad

    Because of the value Hadoop Big Data adds to medium and large businesses by making them ever more efficient, growth in Hadoop Big Data is inevitable, and it supports many industries all over the world. Many organisations continue to implement Hadoop Big Data solutions globally. There is therefore a huge demand for Hadoop Big Data professionals in India and all over the world to help enterprises implement Hadoop Big Data solutions, to work on ongoing support projects, and to work either on a self-employed basis or with a Hadoop Big Data training provider as a trainer of prospective Hadoop Big Data professionals. This gives individuals with technical expertise in Hadoop Big Data solutions an opportunity to fill this gap between demand and supply in the workforce. Hadoop Big Data is also one of the smoothest routes into the highly paid IT industry for people with non-IT backgrounds. Opportunities in Hadoop Big Data are available in various industries, especially with companies in the technology sector such as IBM, Accenture, TCS, HP, Wipro etc.


    Hadoop Big Data Placements in Chennai with Greens Technologys: although there is no guarantee of a job on course completion, we are almost certain that we will be able to place you in a suitable position within a few weeks of successful completion of the course, owing to our position and reputation in the technology consulting industry and, more importantly, the network of organisations we work with who use Hadoop Big Data for their enterprise needs.

    Hadoop Big Data Training curriculum

    1. Understanding Hadoop and Big Data

    Learning Objectives – In this module, you will understand Big Data, the limitations of existing solutions to the Big Data problem, how Hadoop solves it, the common Hadoop ecosystem components, Hadoop architecture, HDFS, the anatomy of a file write and read, and rack awareness.

    Topics –  Big Data, Limitations and Solutions of existing Data Analytics Architecture, Hadoop, Hadoop Features, Hadoop Ecosystem, Hadoop 2.x core components, Hadoop Storage: HDFS, Hadoop Processing: MapReduce Framework, Anatomy of File Write and Read, Rack Awareness.

    2. Hadoop Architecture and HDFS

    Learning Objectives – In this module, you will learn the Hadoop Cluster Architecture, Important Configuration files in a Hadoop Cluster, Data Loading Techniques.

    Topics – Hadoop 2.x Cluster Architecture – Federation and High Availability, A Typical Production Hadoop Cluster, Hadoop Cluster Modes, Common Hadoop Shell Commands, Hadoop 2.x Configuration Files, Password-Less SSH, MapReduce Job Execution, Data Loading Techniques: Hadoop Copy Commands, FLUME, SQOOP.

    3. Hadoop MapReduce Framework – I

    Learning Objectives – In this module, you will understand Hadoop MapReduce framework and the working of MapReduce on data stored in HDFS. You will learn about YARN concepts in MapReduce.

    Topics – MapReduce Use Cases, Traditional way Vs MapReduce way, Why MapReduce, Hadoop 2.x MapReduce Architecture, Hadoop 2.x MapReduce Components, YARN MR Application Execution Flow, YARN Workflow, Anatomy of MapReduce Program, Demo on MapReduce.

    4. Hadoop MapReduce Framework – II

    Learning Objectives – In this module, you will understand concepts like Input Splits in MapReduce, Combiner & Partitioner and Demos on MapReduce using different data sets.

    Topics – Input Splits, Relation between Input Splits and HDFS Blocks, MapReduce Job Submission Flow, Demo of Input Splits, MapReduce: Combiner & Partitioner, Demo on de-identifying Health Care Data set, Demo on Weather Data set.
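How map output is routed to reducers can be sketched in plain Python. This mimics the behaviour of Hadoop's default hash partitioner (hash the key, modulo the number of reducers) rather than calling any Hadoop API:

```python
def partition(key, num_reducers):
    # mimic Hadoop's default HashPartitioner:
    # the same key always routes to the same reducer
    return hash(key) % num_reducers

# group some map-output keys into reducer buckets
buckets = {}
for key in ["apple", "banana", "apple", "cherry", "banana"]:
    buckets.setdefault(partition(key, 3), []).append(key)
```

Because the partition function is deterministic within a run, every occurrence of a key lands in the same bucket, which is what lets a reducer see all values for its keys.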

    5. Advanced MapReduce

    Learning Objectives – In this module, you will learn Advanced MapReduce concepts such as Counters, Distributed Cache, MRunit, Reduce Join, Custom Input Format, Sequence Input Format and how to deal with complex MapReduce programs.

    Topics – Counters, Distributed Cache, MRunit, Reduce Join, Custom Input Format, Sequence Input Format.

    6. Pig

    Learning Objectives – In this module, you will learn Pig, types of use case we can use Pig, tight coupling between Pig and MapReduce, and Pig Latin scripting.

    Topics – About Pig, MapReduce Vs Pig, Pig Use Cases, Programming Structure in Pig, Pig Running Modes, Pig components, Pig Execution, Pig Latin Program, Data Models in Pig, Pig Data Types.
    Pig Latin: Relational Operators, File Loaders, Group Operator, COGROUP Operator, Joins and COGROUP, Union, Diagnostic Operators, Pig UDF, Pig Demo on Healthcare Data set.
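
To give a flavour of the Pig Latin operators listed above, here is a word-count script using the built-in LOAD, FOREACH, GROUP, COUNT and DUMP operators (the input file name is a placeholder):

```
lines  = LOAD 'input.txt' AS (line:chararray);
words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grpd   = GROUP words BY word;
counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS n;
DUMP counts;
```

Each statement builds a relation from the previous one; Pig compiles the whole data flow into MapReduce jobs behind the scenes.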

    7. Hive

    Learning Objectives – This module will help you in understanding Hive concepts, Loading and Querying Data in Hive and Hive UDF. 

    Topics – Hive Background, Hive Use Case, About Hive, Hive Vs Pig, Hive Architecture and Components, Metastore in Hive, Limitations of Hive, Comparison with Traditional Database, Hive Data Types and Data Models, Partitions and Buckets, Hive Tables(Managed Tables and External Tables), Importing Data, Querying Data, Managing Outputs, Hive Script, Hive UDF, Hive Demo on Healthcare Data set.
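
The managed-table, partitioning and querying topics above look roughly like this in HiveQL (the table and column names here are made up for illustration):

```sql
-- managed table, partitioned by admission year
CREATE TABLE patients (id INT, name STRING, diagnosis STRING)
PARTITIONED BY (admit_year INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- load a local file into one partition
LOAD DATA LOCAL INPATH '/tmp/patients_2015.csv'
INTO TABLE patients PARTITION (admit_year = 2015);

-- query with ordinary SQL-style syntax
SELECT diagnosis, COUNT(*)
FROM patients
WHERE admit_year = 2015
GROUP BY diagnosis;
```

Restricting the query to one partition lets Hive scan only that partition's directory in HDFS instead of the whole table.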

    8. Advanced Hive and HBase

    Learning Objectives – In this module, you will understand Advanced Hive concepts such as UDF, Dynamic Partitioning. You will also acquire in-depth knowledge of HBase, HBase Architecture and its components.

    Topics – Hive QL: Joining Tables, Dynamic Partitioning, Custom Map/Reduce Scripts, Hive : Thrift Server, User Defined Functions.
    HBase: Introduction to NoSQL Databases and HBase, HBase v/s RDBMS, HBase Components, HBase Architecture, HBase Cluster Deployment.

    9. Advanced HBase

    Learning Objectives – This module will cover Advanced HBase concepts. We will see demos on Bulk Loading , Filters. You will also learn what Zookeeper is all about, how it helps in monitoring a cluster, why HBase uses Zookeeper.

    Topics – HBase Data Model, HBase Shell, HBase Client API, Data Loading Techniques, ZooKeeper Data Model, Zookeeper Service, Zookeeper, Demos on Bulk Loading, Getting and Inserting Data, Filters in HBase.

    10. Oozie and Hadoop Project

    Learning Objectives – In this module, you will understand working of multiple Hadoop ecosystem components together in a Hadoop implementation to solve Big Data problems. We will discuss multiple data sets and specifications of the project. This module will also cover Flume & Sqoop demo and Apache Oozie Workflow Scheduler for Hadoop Jobs.



    Topics – Flume and Sqoop Demo, Oozie, Oozie Components, Oozie Workflow, Scheduling with Oozie, Demo on Oozie Workflow, Oozie Co-ordinator, Oozie Commands, Oozie Web Console, Hadoop Project Demo.

    Saturday, 6 February 2016

    Hadoop Training in Chennai


    Greens Technologys Academy offers the best Hadoop training in Chennai with highly experienced professionals. Our instructors have worked with Hadoop and related technologies for many years in MNCs. We are aware of industry needs and offer Hadoop training in Chennai in a very practical way. Our team of Hadoop trainers offers classroom training, online training and corporate training services. We framed our syllabus to match real-world requirements, from beginner level to advanced level. Training runs on weekdays or at weekends depending on participants' requirements, and we also offer fast-track and one-to-one Hadoop training in Chennai. The major topics we cover under this Hadoop course syllabus are: Introduction to Hadoop, Hadoop Eco Systems, Hadoop Developer, Installing the Hadoop Eco System and Integrating with Hadoop, and Monitoring the Hadoop Cluster. Every topic is covered in a mostly practical way, with examples.
    We are the best training institute offering certification-oriented Hadoop training in Chennai. Our participants will be able to clear all types of interviews by the end of our sessions. We are building a team of Hadoop trainers and participants for future help and assistance in the subject, and our training also focuses on assisting with placements. We have a separate team of HR professionals who take care of all your interview needs. Our Hadoop training course fees are very moderate compared with others, and we are the only Hadoop training institute that can share video reviews of all our students. The course timings and start dates are mentioned below.

    Hadoop Training Syllabus in Chennai

    Introduction to Hadoop

    • Hadoop Distributed File System
    • Hadoop Architecture
    • MapReduce & HDFS

    Hadoop Eco Systems

    • Introduction to Pig
    • Introduction to Hive
    • Introduction to HBase
    • Other eco system Map

    Hadoop Developer

    • Moving the Data into Hadoop
    • Moving The Data out from Hadoop
    • Reading and Writing the files in HDFS using java program
    • The Hadoop Java API for MapReduce
      • Mapper Class
      • Reducer Class
      • Driver Class
    • Writing Basic MapReduce Program In java
    • Understanding the MapReduce Internal Components
    • Hbase MapReduce Program
    • Hive Overview
    • Working with Hive
    • Pig Overview
    • Working with Pig
    • Sqoop Overview
    • Moving the Data from RDBMS to Hadoop
    • Moving the Data from RDBMS to Hbase
    • Moving the Data from RDBMS to Hive
    • Flume Overview
    • Moving The Data from Web server Into Hadoop
    • Real Time Example in Hadoop
    • Apache Log viewer Analysis
    • Market Basket Algorithms
    • Big Data Overview
    • Introduction In Hadoop and Hadoop Related Eco System.
    • Choosing Hardware For Hadoop Cluster nodes
    • Apache Hadoop Installation
      • Standalone Mode
      • Pseudo Distributed Mode
      • Fully Distributed Mode
    • Installing Hadoop Eco System and Integrate With Hadoop
      • Zookeeper Installation
      • Hbase Installation
      • Hive Installation
      • Pig Installation
      • Sqoop Installation
      • Installing Mahout
    • Horton Works Installation
    • Cloudera Installation
    • Hadoop Commands usage
    • Import the data in HDFS
    • Sample Hadoop Examples (Word count program and Population problem)
    • Monitoring The Hadoop Cluster
      • Monitoring Hadoop Cluster with Ganglia
      • Monitoring Hadoop Cluster with Nagios
      • Monitoring Hadoop Cluster with JMX
    • Hadoop Configuration management Tool
    • Hadoop Benchmarking

    Hadoop trainer Profile & Placement

    Our Hadoop Trainers

    • More than 10 Years of experience in Hadoop® Technologies
    • Has worked on multiple real-time Hadoop projects
    • Working in a top MNC in Chennai
    • Trained 2000+ Students so far
    • Strong Theoretical & Practical Knowledge
    • Hadoop certified Professionals

    Hadoop Placement Training in Chennai

    • 2000+ students trained
    • 93% placement record
    • 1100+ Interviews Organized

    HADOOP Training In Chennai


    Hadoop was created by Doug Cutting, who named Hadoop after his child's stuffed elephant, to support the Lucene and Nutch search-engine products.
    It is an open-source project administered by the Apache Software Foundation.
    Hadoop consists of two key services (HDFS and MapReduce).
    Hadoop is a software framework for data-intensive computing applications.

    1. Software platform that lets one easily write and run applications that process vast amounts of data. It includes:
    – MapReduce – offline computing engine
    – HDFS – Hadoop distributed file system
    – HBase (pre-alpha) – online data access
    2. Yahoo! is the biggest contributor
    3. Hadoop implements Google’s MapReduce, using HDFS
    4. MapReduce divides applications into many small blocks of work.
    5. HDFS creates multiple replicas of data blocks for reliability, placing them on compute nodes around the cluster.
    6. MapReduce can then process the data where it is located.
    7. Hadoop's target is to run on clusters on the order of 10,000 nodes.
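
Points 4–6 above — dividing work into blocks, replicating each block for reliability, and processing data where it lives — can be illustrated with a toy placement scheme in Python (the block size, replication factor and node names are arbitrary stand-ins; HDFS's real default block size is 128 MB with 3 replicas):

```python
BLOCK_SIZE = 4           # toy block size (HDFS default is 128 MB)
REPLICATION = 3          # copies kept of every block (HDFS default)
NODES = ["node1", "node2", "node3", "node4"]

def split_into_blocks(data):
    # divide the input into fixed-size blocks
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def place_blocks(blocks):
    # assign each block to REPLICATION distinct nodes, round-robin;
    # a map task can then run on any node holding a replica
    placement = {}
    for i, _ in enumerate(blocks):
        placement[i] = [NODES[(i + r) % len(NODES)] for r in range(REPLICATION)]
    return placement

blocks = split_into_blocks("abcdefghij")   # -> ['abcd', 'efgh', 'ij']
placement = place_blocks(blocks)
```

Real HDFS placement is rack-aware rather than round-robin, but the principle is the same: several nodes hold each block, so computation can be scheduled next to the data.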

    Example Applications and Organizations using Hadoop

    a.     Amazon
    b.    Yahoo
    c.     AOL
    d.    FaceBook
    e.     FOX interactive media

    Why do We Need Hadoop ?

    a.     Hadoop provides storage for Big Data at reasonable cost
    b.    Hadoop allows you to capture new or more data
    c.     With Hadoop, you can store data longer
    d.    Hadoop provides scalable analytics
    e.     Hadoop provides rich analytics

    What qualities/skills in trainees help

    a.     Good understanding of data warehouse concepts and design patterns
    b.    Strong experience with Core Java
    c.     Good experience on Hadoop ,Experience with HDFS, Map-reduce and other tools in Hadoop ecosystem
    d.    Strong knowledge and hands-on experience with Map-reduce programming model and high level languages like pig or hive
    e.     Experience with NoSQL data-stores like HBase, Cassandra
    f.     Understands various configuration parameters and helps arrive at values for optimal cluster performance
    g.    Knowledge of configuration management / deployment tools like Puppet / Chef
    h.     Setting up cluster monitoring and alerting mechanism tools like Ganglia, Nagios etc
    i.      Experience in setting up cross-data center replication

    j.      Understands how security model using Kerberos and enterprise LDAP product works and helps implement the same

    BIGDATA – HADOOP Training in Chennai



    Hadoop Developer – Best Bigdata Hadoop Training with Projects

    How we are different from others: we cover each topic with real-time examples, including 8 real-time projects and more than 70 assignments divided into basic, intermediate and advanced levels. The trainer comes from real-time industry with 8 years of experience in DWH, and works as a BI and Hadoop consultant with 4+ years in real-time Bigdata & Hadoop implementations and migrations.

    This is completely hands-on training, covering 90% practicals and 10% theory. Here at Greens Technologys, we cover all prerequisites, such as Java and SQL, that are required to learn Hadoop developer and analytical skills. This way we can accommodate technology novices and technical experts in the same session, and by the end of the training they will be confident that they have been upskilled to a different level.
    ·         8 Domain Based Project With Real Time Data
    ·         5 POC 
    ·         72 Assignments 
    ·         25 Real Time Scenarios On 16 Node Clusters
    ·         Smart Class 
    ·         Basic Java 
    ·         DWH Concept 
    ·         Pig|Hive|Mapreduce|Nosql|Hbase|Zookeeper|Sqoop|Flume|Oozie|Yarn|Hue|Spark |Scala 
    Registration process: we never take any registration fee from a candidate without them experiencing our training quality. Once you are satisfied with the demo, you can register with full payment and avail a discount.

    Bigdata Hadoop Syllabus
    For whom Hadoop is?
    IT folks who want to move their profile to one of the most in-demand technologies, sought by almost all clients in all domains, for the reasons below:
    ·         Hadoop is open source (cost saving / cheaper)
    ·         Hadoop solves Big Data problems that are very difficult or impossible to solve using highly priced tools in the market
    ·         It can process distributed data, with no need to store the entire data in centralized storage as other tools require
    ·         Nowadays there are job cuts in the market across many existing tools and technologies, because clients are moving towards a cheaper and more efficient solution named HADOOP
    ·         There will be almost 4.4 million Hadoop jobs in the market by next year

    Can I Learn Hadoop If I Don’t know Java?
    Yes,
    It is a big myth that someone who doesn't know Java can't learn Hadoop. The truth is that only the MapReduce framework needs Java; all the other components are based on different paradigms: Hive is similar to SQL, HBase is similar to an RDBMS, and Pig is script based.
    Only MR requires Java, but many organizations have also started hiring for specific skill sets, such as HBase developers or Pig- and Hive-specific requirements. Knowing MapReduce as well makes you an all-rounder in Hadoop, ready for any requirement.
    Why Hadoop?
    ·         Solution for BigData Problem
    ·         Open Source Technology
    ·         Based on open source platforms
    ·         Contains several tools for an entire ETL data-processing framework
    ·         It can process distributed data, with no need to store the entire data in centralized storage as SQL-based tools require

    Course Content
    Hadoop Introduction
    ·         Why we need Hadoop
    ·         Why Hadoop is in demand in the market nowadays
    ·         Where expensive SQL-based tools are failing
    ·         Key points: why Hadoop is the leading tool in the current IT industry
    ·         Definition of BigData
    ·         Hadoop nodes
    ·         Introduction to Hadoop Release-1
    ·         Hadoop Daemons in Hadoop Release-1
    ·         Introduction to Hadoop Release-2
    ·         Hadoop Daemons in Hadoop Release-2
    ·         Hadoop Cluster and Racks
    ·         Hadoop Cluster Demo
    ·         How Hadoop projects fall into two categories:
    ·         New projects on Hadoop
    ·         Clients wanting POCs and migration of existing tools and technologies onto Hadoop
    ·         How the open-source tool (HADOOP) can run, in less time, jobs that take longer in expensive SQL-based tools
    ·         Hadoop Storage – HDFS (Hadoop Distributed file system)
    ·         Hadoop Processing Framework (Map Reduce / YARN)
    ·         Alternates of Map Reduce
    ·         Why NOSQL is in much demand instead of SQL
    ·         Distributed warehouse for HDFS
    ·         Most demanding tools which can run on the top of Hadoop Ecosystem for specific requirements in specific scenarios
    ·         Data import/Export tools
    Hadoop Installation and Hands-on on Hadoop machine 
    ·         Hadoop installation
    ·         Introduction to Hadoop FS and Processing Environment’s UIs
    ·         How to read and write files
    ·         Basic Unix commands for Hadoop
    ·         Hadoop FS shell
    ·         Hadoop releases practical
    ·         Hadoop daemons practical 
    ETL Tool (Pig) Introduction Level-1 (Basics) 
    ·         Pig Introduction
    ·         Why Pig if Map Reduce is there?
    ·         How Pig is different from Programming languages
    ·         Pig Data flow Introduction
    ·         How Schema is optional in Pig
    ·         Pig Data types
    ·         Pig Commands – Load, Store , Describe , Dump
    ·         Map Reduce job started by Pig Commands
    ·         Execution plan 
    ETL Tool (Pig) Level-2 (Complex) 
    ·         Pig- UDFs
    ·         Pig Use cases
    ·         Pig Assignment
    ·         Complex Use cases on Pig
    ·         XML Data Processing in Pig
    ·         Structured Data processing in Pig
    ·         Semi-structured data processing in Pig
    ·         Pig Advanced Assignment
    ·         Real time scenarios on Pig
    ·         When we should use Pig
    ·         When we shouldn’t use Pig
    ·         Live examples of Pig Use cases 
    Hive Warehouse (Introduction to Hive Warehouse and Differentiation between SQL based Datawarehouse and Hive) Level-1 (Basics)
    ·         Hive Introduction
    ·         Metadata storage and the metastore
    ·         Introduction to Derby Database
    ·         Hive Data types
    ·         HQL
    ·         DDL, DML and sub languages of Hive
    ·         Internal , external and Temp tables in Hive
    ·         Differentiation between SQL based Datawarehouse and Hive 
    Hive Level-2 (Complex)
    ·         Hive releases
    ·         Why Hive is not best solution for OLTP
    ·         OLAP in Hive
    ·         Partitioning
    ·         Bucketing
    ·         Hive Architecture
    ·         Thrift Server
    ·         Hue Interface for Hive
    ·         How to analyze data using Hive script
    ·         Differentiation between Hive and Impala
    ·         UDFs in Hive
    ·         Complex Use cases in Hive
    ·         Hive Advanced Assignment
    ·         Real time scenarios of Hive
    ·         POC on Pig and Hive , With real time data sets and problem statements 
    Map Reduce Level-1 (Basics)
    ·         How Map Reduce works as Processing Framework
    ·         End to End execution flow of Map Reduce job
    ·         Different tasks in Map Reduce job
    ·         Why Reducer is optional while Mapper is mandatory?
    ·         Introduction to Combiner
    ·         Introduction to Partitioner
    ·          Programming languages for Map Reduce
    ·         Why Java is preferred for Map Reduce programming
    ·         POC based on Pig, Hive, HDFS, MR 
    NOSQL Databases and Introduction to HBase Level-1 (Basics)
    ·         Introduction to NOSQL
    ·         Why NOSQL if SQL is in market since several years
    ·         Databases in market based on NOSQL
    ·         CAP Theorem
    ·         ACID Vs. CAP
    ·         OLTP Solutions with different capabilities
    ·         Which Nosql based solution is capable to handle specific requirements
    ·         Examples of companies like Google, Facebook, Amazon, and other clients who are using NOSQL based databases
    ·         HBase Architecture of column families 
    Map Reduce Advanced and HBase Level-2 (Complex)
    ·         How to work on Map Reduce in real time
    ·         Map Reduce complex scenarios
    ·         Introduction to HBase
    ·         Introduction to other NOSQL based data models
    ·         Drawbacks of Hadoop
    ·         Why Hadoop can’t work for real time processing
    ·         How HBase or other NOSQL based tools made real time processing possible on the top of Hadoop
    ·         HBase table and column family structure
    ·         HBase versioning concept
    ·         HBase flexible schema
    ·         HBase Advanced 
    Zookeeper and SQOOP
    ·         Introduction to Zookeeper
    ·          How Zookeeper helps in Hadoop Ecosystem
    ·          How to load data from Relational storage in Hadoop
    ·          Sqoop basics
    ·          Sqoop practical implementation
    ·          Sqoop alternative
    ·         Sqoop connector
    ·          Quick revision of previous classes to fill any gaps in your understanding
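
A typical Sqoop import of the kind covered in this module looks like the following; the JDBC connection string, database credentials, table name and target directory are all placeholders:

```
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username dbuser -P \
  --table orders \
  --target-dir /user/demo/orders \
  --num-mappers 4
```

Sqoop turns this into a MapReduce job: each mapper reads a slice of the table over JDBC and writes it into HDFS in parallel.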
    Flume, Oozie and YARN
    ·         How to load data in Hadoop that is coming from web server or other storage without fixed schema
    ·         How to load unstructured and semi structured data in Hadoop
    ·         Introduction to Flume
    ·         Hands-on on Flume
    ·         How to load Twitter data in HDFS using Hadoop
    ·         Introduction to Oozie
    ·         How to schedule jobs using Oozie
    ·         What kind of jobs can be scheduled using Oozie
    ·         How to schedule jobs which are time based
    ·         Hadoop releases
    ·         From where to get Hadoop and other components to install
    ·         Introduction to YARN
    ·         Significance of YARN 
    Hue, Hadoop Releases comparison, Hadoop Real time scenarios Level-2 (Complex) 
    ·         Introduction to Hue
    ·         How Hue is used in real time
    ·         Hue Use cases
    ·          Real time Hadoop usage
    ·         Real time cluster introduction
    ·         Hadoop Release 1 vs Hadoop Release 2 in real time
    ·          Hadoop real time project
    ·         Major POC based on combination of several tools of Hadoop Ecosystem
    ·         Comparison between Pig and Hive real time scenarios
    ·         Real time problems and frequently faced errors with solution 
    SPARK and Scala  Level-1 (Basics)
    ·         Introduction to Spark
    ·         Introduction to scala
    ·         Basics Features of SPARK and Scala available in Hue
    ·         Why Spark demand is increasing in market
    ·         How can we use Spark with Hadoop Eco System
    ·         Datasets for practice purpose 
    SPARK and Scala  Level-2 (Complex)
    ·         Spark use cases with  real time scenarios
    ·         Spark Practical with advanced concepts
    ·         Scala platform with complex use cases
    ·         Real time project use cases examples based on Spark and Scala
    ·         How we can reduce 
    Additional Key Features
    ·         This training program contains 3 POCs and one real time projects with problem statements and data sets
    ·         This training is based on 3 Hadoop machines
    ·         We provide several datasets which you can use for further practice on Hadoop