Hadoop - Past, Present and Future - v1.1

Published on 21-Apr-2017


Transcript

Slide 1: Hadoop: Past, Present and Future
6/17/14. Presented by: Big Data Joe Rossi (@bigdatajoerossi)
Notes: Hadoop has become synonymous with Big Data. While Hadoop is a big part of the Big Data movement, Hadoop itself is just a platform and the tools.

Slide 2: Roadmap (~45 mins, plus Q&A)
1. What Makes Up Hadoop 1.x?
2. What's New in Hadoop 2.x?
3. The Future of Hadoop

Slide 3: What Makes Up Hadoop 1.x?

Slide 4: Hadoop 1.0: HDFS + MapReduce
Diagram: a Client talks to the NameNode and the JobTracker; four DataNode / TaskTracker machines hold the blocks of a file (1-1, 1-2, 1-3).

Slide 5: Hadoop 1.0: HDFS + MapReduce (continued)
Diagram: the same cluster with Map and Reduce tasks (2-1 through 4-3) running in the TaskTrackers' map and reduce slots.

Slide 6: MapReduce v1 Limitations
- Scalability: maximum cluster size is 4,000 nodes and maximum concurrent tasks is 40,000.
- Availability: a JobTracker failure kills all queued and running jobs.
- Resources partitioned into Map and Reduce: hard partitioning of map and reduce slots led to low resource utilization.
- No support for alternate paradigms / services: only MapReduce batch jobs, nothing else.
Notes: The architecture of MapReduce came with its limitations. Scalability: even as server specs rise to accommodate more load, it still couldn't scale past the maximum number of concurrent tasks. Availability: a JobTracker failure kills all queued and running jobs; after a restart they have to be resubmitted and start from the beginning, unable to pick up where they left off, which can be a huge problem if you have long-running batch jobs. Resource partitioning: resources were broken up into distinct map and reduce slots, which aren't fungible. I love that word. Basically it means they weren't interchangeable. Map slots might be full while reduce slots remain empty, and vice versa. This needed to be addressed to ensure the entire system could be used at max capacity for high utilization. Lack of support: you were stuck using MapReduce. (For reference, a minimal word-count sketch of the MapReduce programming model is included just after the YARN introduction below.)

Slide 7: Apache Hadoop 1.0: Single Use System (Batch Apps)
Stack: HDFS (redundant, reliable storage) and MapReduce (cluster resource management and data processing), with Pig and Hive on top.
Notes: In Hadoop 1.0, all methods of accessing the data within the cluster were constrained to using MapReduce. Open-source Hadoop projects like Pig and Hive are built on top of MapReduce, and even though they make MapReduce more accessible, they still suffer from its limitations. You have seen some distributions move outside of the Hadoop ecosystem, like Cloudera's Impala, to get around the limitations of MapReduce and improve performance. But then, unfortunately, it isn't community supported and lags behind in features because it doesn't have the backing of the innovative open source committers. The crazy thing is that even with these limitations, 90% of the use cases Nick spoke about yesterday are based on this.

Slide 8: What's New in Hadoop 2.x?

Slide 9: YARN Replaces MapReduce
YARN: Yet Another Resource Negotiator.
Notes: YARN will be the de facto distributed operating system for Big Data. So, what has Trace3 found out about YARN on its journey through Big Data? Well, first of all, we discovered that it's not the type of yarn that cats play with. YARN will be the de facto distributed operating system for Big Data, and by the end of this hour you are going to see why we believe it is and why companies like Cloudera, Hortonworks and MapR are banking on this.
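The earlier slides describe the MapReduce programming model (map tasks, reduce tasks, and the JobTracker/TaskTracker machinery that schedules them) without showing any code. For reference, here is the classic word-count job written against the Hadoop `org.apache.hadoop.mapreduce` API. This is a minimal sketch, not part of the original deck; the input and output paths come from the command line.

```java
// Classic word count: the map phase emits (word, 1), the reduce phase sums the counts.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

On MRv1 this job is scheduled by the JobTracker into fixed map and reduce slots; on Hadoop 2.x the same code runs unchanged as a YARN application, which is the backwards compatibility point the talk returns to later.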
Slide 10: YARN: Taking Hadoop Beyond Batch
Store data in one place. Interact with that data in multiple ways, with predictable performance and quality of service. Applications run natively in Hadoop.
Stack: HDFS2 (redundant, reliable storage) and YARN (cluster resource management), with BATCH (MapReduce), INTERACTIVE (Tez), ONLINE (HBase), STREAMING (DataTorrent) and GRAPH (Giraph) on top.
Notes: YARN is taking Hadoop beyond batch. YARN has solved the limitations of MapReduce v1. YARN gives you the ability to store all your data in one place and have mixed workloads working with that data while still getting predictable performance and QoS. YARN is moving Hadoop beyond just MapReduce and batch into interactive, online, streaming, graph, in-memory and more.

Slide 11: YARN: Applications
MapReduce v2, stream processing, master-worker, online, in-memory, Apache Storm: all running on the same Hadoop cluster to give applications access to the same source data.
Notes: Here are some of the apps that are making up that compute time. HBase will be deployed on YARN, which we will talk about more a bit later. Master-worker applications. MapReduce has been moved out into its own application framework. Real-time streaming analytics: this, in my opinion, is the most promising of the application types. I don't want to steal my associate Rikin's thunder, but he will be speaking in much more depth about real-time streaming analytics in a session later today. Graph processing: YARN has enabled iterative applications like Apache Giraph within your cluster, where previously MapReduce v1 just wasn't a viable option.

Slide 12: YARN: Moving Quickly
Timeline, 2010 to today: conceived at Yahoo!; alpha releases (2.0); beta releases (2.1); GA released (2.2); version 2.3; version 2.4. Today: 100,000+ nodes, 400,000+ jobs daily, 10 million+ hours of compute daily.
Notes: YARN is fairly new to the scene, but that shouldn't deter you from being confident in it. It was conceived and architected at Yahoo! and has gone through a very quick maturing process thanks to the open source community putting it through its paces. Currently YARN is running on over 100,000 nodes and is responsible for 400,000+ jobs and 10 million+ hours of compute time daily.

Slide 13: YARN: Dr. Evil Approved
Notes: Yes, I said 10 millllllion.
Slide 14: YARN: What Has Changed?
Legend: RM = ResourceManager, AM = ApplicationMaster, JT = JobTracker, NM = NodeManager, TT = TaskTracker.
Diagram: MRv1 (a JobTracker with its Scheduler and TaskTrackers running MapReduce tasks) side by side with YARN (a ResourceManager with its Scheduler and NodeManagers hosting an ApplicationMaster and Containers).
Notes: So, what has changed with YARN for it to be able to accomplish this? YARN splits the two major functions of the JobTracker into the ResourceManager and the ApplicationMaster. The global ResourceManager handles all of the cluster resources; its Scheduler performs scheduling based on the resource requirements of the applications. The per-node slave, the NodeManager, is responsible for launching application containers, monitoring their resource usage, and reporting the same to the ResourceManager. The per-application ApplicationMaster is responsible for negotiating the appropriate resource containers from the Scheduler, tracking their status, and monitoring progress. Per-application Containers run on the NodeManagers. Let's see how these all work together.

Slide 15: 6 Benefits of YARN
- Scale
- New programming models and services
- Improved cluster utilization
- Agility
- Backwards compatible with MapReduce v1
- Mixed workloads on the same source of data
Notes: Scale: YARN is no longer limited by the 40,000 concurrent tasks that MapReduce v1 had; today YARN is already handling over 10 million hours of compute time on a daily basis. New programming models and services: you aren't limited to just MapReduce; if your app can benefit from a distributed operating system, you can use it. Improved cluster utilization: YARN no longer has a hard partition of resources into map and reduce slots; it uses resource leases, aka Containers, that aren't limited in functionality. Agility: moving MapReduce out and on top of YARN gives customers more agility to make changes, upgrade, and run different versions of their framework without affecting the entire cluster. Backwards compatible: what you are currently doing with Hadoop 1.x and MapReduce v1 will work with YARN. Mixed workloads on the same data source: you can use the data lake architecture and run all your apps while still having predictable performance and quality of service.

Slide 16: The Future of Hadoop: Projects and Roadmap

Slide 17: Stinger: Interactive Query for Hive
- Speed: deliver interactive query through 100x performance increases compared to Hive 0.10.
- SQL: support the broadest array of SQL semantics for analytic applications running against Hadoop.
- Scale: the only SQL interface to Hadoop designed for queries that scale from terabytes to petabytes.
Notes: One of the projects I'm keeping a close eye on is the Stinger project. Speed: a 100x speed increase over Hive 0.10. SQL: improve HiveQL to make it more ANSI SQL-like. Scale: the ability to run queries on terabytes to petabytes of information.
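Stinger improves Hive behind the interfaces applications already use. One common way to issue HiveQL from Java is the HiveServer2 JDBC driver; the sketch below is illustrative only and assumes a reachable HiveServer2 endpoint and a hypothetical `web_logs` table, neither of which comes from the original deck.

```java
// Minimal HiveQL query via the HiveServer2 JDBC driver (hive-jdbc on the classpath).
// Host, port, database, credentials and the web_logs table are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryExample {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://hiveserver2.example.com:10000/default", "hive", "");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page ORDER BY hits DESC LIMIT 10")) {
      // Print the ten most requested pages.
      while (rs.next()) {
        System.out.println(rs.getString("page") + "\t" + rs.getLong("hits"));
      }
    }
  }
}
```

The point of Stinger is that a query like this stays the same while the execution underneath (Tez instead of chained MapReduce jobs, as the notes on slide 29 mention) gets faster.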
Slide 18: HOYA: HBase (NoSQL) on YARN
- Dynamic scaling: on-demand cluster size; increase and decrease the size with load.
- Easier deployment: APIs to create, start, stop and delete HBase clusters.
- Availability: recover from Region Server loss with a new container.
Notes: Another project to watch closely is HBase on YARN. Dynamic scaling: the cluster scales with usage as load increases. Easier deployment: HBase cluster deployment can be somewhat complicated; they are looking to correct that by letting you do it through built-in APIs. Availability: when a RegionServer is lost, recovering it is just a matter of deploying another container within the cluster.

Slide 19: Microsoft REEF (Retainable Evaluator Execution Framework)
- Machine learning: a framework well suited for building machine learning jobs.
- Scalable / fault tolerant: makes it easy to implement scalable, fault-tolerant runtime environments for a range of computational models.
- Maintain state: users can build jobs that use data where it's needed and also maintain state after jobs are done.
Notes: This is a project that lies outside of my wheelhouse, but from what I've learned about it, it's going to do amazing things for machine learning. I've also highlighted this project to add even more credibility to YARN by showing that a company like Microsoft is dedicating internal time and resources to building applications that run on YARN.

Slide 20: Heterogeneous Storages in HDFS
Diagram: a NameNode that previously saw a single pool of storage now distinguishes SATA, SSD and Fusion-io media.
Notes: Previously a NameNode had one classification of storage media available to it. As of 2.3, NameNodes have the ability to split up the storage media available to them. Adding awareness of storage media allows HDFS to make better decisions about the placement of block data with input from applications. An application can choose the distribution of replicas based on its performance and durability requirements.

Slide 21: Hadoop Roadmap
Timeline labels: released / early Q2 2014 / mid Q2 2014.
- Apache Hadoop 2.4: ResourceManager HA / auto failover; HDFS rolling upgrades.
- Apache Hadoop 2.5: NodeManager restart without disruption; dynamic resource configuration.
Notes: NodeManager restart allows the NM to be restarted without losing jobs; they will continue where they left off after the restart. Dynamic resource configuration: currently containers are static; they allocate a fixed amount of processor and memory to each process. Now processes will have the ability to scale up within a container if resources are available on that NodeManager.

Slide 22: I Know You Have Questions
No such thing as a stupid question. Hadoop: Past, Present and Future.

Slide 23: One Last Thing: SD Big Data Meetup
meetup.com/sdbigdata, 2nd Wednesday of the month. Next: July 9th @ 5:45 PM.

Slide 24: Thank You!
Hadoop: Past, Present and Future. Big Data Joe Rossi, http://bigdatajoe.io/, @bigdatajoerossi

Slide 25: Supporting Slides
Slides with information that may be asked about.
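As additional supporting material for the HOYA slide above: however the HBase cluster is deployed, by hand or through HOYA, applications reach it through the standard HBase client API. Below is a minimal sketch using the classic `HTable` client of that era; the table, column family and values are made up for illustration and are not from the deck.

```java
// Illustrative HBase client usage: one put and one get against a hypothetical "webtable".
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseClientExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
    HTable table = new HTable(conf, "webtable");      // classic 0.9x-era client API
    try {
      // Write one cell: row "row1", family "cf", qualifier "page", value "/index.html".
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("page"), Bytes.toBytes("/index.html"));
      table.put(put);

      // Read it back.
      Result result = table.get(new Get(Bytes.toBytes("row1")));
      byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("page"));
      System.out.println("page = " + Bytes.toString(value));
    } finally {
      table.close();
    }
  }
}
```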
Slide 26: YARN: How It Works
Diagram: a Client submits to the ResourceManager (with its Scheduler); NodeManagers host an ApplicationMaster and Containers.
Notes: Jobs are submitted to the ResourceManager via a public submission protocol and go through an admission control phase during which security credentials are validated and various checks are performed. The RM runs as a daemon on a dedicated machine and acts as the central authority arbitrating resources among the competing applications in the cluster. Because it has a central and global view of the cluster resources, it can enforce properties such as fairness, capacity, and locality across nodes. Accepted jobs are passed to the Scheduler to be run. Once the Scheduler has enough resources, the application is moved from the accepted to the running state. This involves allocating a resource lease, aka a container (a bound JVM), for the AM and spawning it on a node in the cluster. A record of accepted applications is written to persistent storage and recovered in case of RM failure. The ApplicationMaster is the head of a job, managing all lifecycle aspects including dynamically increasing and decreasing resource consumption, managing the flow of execution, and handling faults. By delegating all these functions to AMs, YARN's architecture gains a great deal of scalability, programming model flexibility, and improved upgrading and testing, since multiple versions of the same framework can coexist. The RM interacts with a special system daemon running on each node called the NodeManager (NM). Communications between the RM and NMs are heartbeat-based for scalability. NMs are responsible for monitoring resource availability, reporting faults, and container lifecycle management (e.g., starting, killing). The RM assembles its global view from these snapshots of NM state.

Slide 27: YARN: Example App Deployment
Diagram: a HOYA Client submits to the ResourceManager; NodeManagers host the HOYA / HBase Master and Region Server containers.
Notes: same as slide 26; in this deployment the HOYA / HBase Master plays the ApplicationMaster role and the Region Servers run in the allocated containers.
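The submission flow described on the last two slides can be seen from the client side through the YARN client API. The sketch below is illustrative only: it obtains an application id from the ResourceManager, describes the ApplicationMaster container, and submits it. The application name, queue, resource sizes and the placeholder launch command are assumptions, not part of the deck, and a real ApplicationMaster would still have to register with the RM and request containers for its workers.

```java
// Minimal YARN application submission: request an app id, describe the AM container, submit.
import java.util.Collections;

import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.Records;

public class YarnSubmitExample {
  public static void main(String[] args) throws Exception {
    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(new YarnConfiguration());
    yarnClient.start();

    // 1. Ask the ResourceManager for a new application.
    YarnClientApplication app = yarnClient.createApplication();
    ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
    appContext.setApplicationName("demo-app");
    appContext.setQueue("default");

    // 2. Describe the container that will run the ApplicationMaster.
    //    The command is a stand-in, not a real AM launch command.
    ContainerLaunchContext amContainer = Records.newRecord(ContainerLaunchContext.class);
    amContainer.setCommands(Collections.singletonList(
        "echo launching-application-master"));
    appContext.setAMContainerSpec(amContainer);
    appContext.setResource(Resource.newInstance(1024, 1)); // 1 GB, 1 vcore for the AM

    // 3. Submit; the Scheduler allocates a container for the AM and a NodeManager launches it.
    ApplicationId appId = yarnClient.submitApplication(appContext);
    ApplicationReport report = yarnClient.getApplicationReport(appId);
    System.out.println("Submitted " + appId + ", state: " + report.getYarnApplicationState());

    yarnClient.stop();
  }
}
```

This is exactly the pattern the HOYA client on slide 27 follows: it submits an application whose ApplicationMaster then asks the RM for the Region Server containers.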
Slide 28: Storm vs. DataTorrent
Solution Matrix          | DataTorrent | Apache Storm
Atomic Micro-batch       | 1           | 3
Events per Second        | Billions    | Thousands
Automated Parallelism    |             | 3
Dynamic Runtime Changes  |             | 3
Linear Scalability       |             | 3
State Checkpointing      |             | 3
(The single digits are symbol-font glyphs carried over from the original slides; judging by the rows, "1" appears to mark a capability that is present, "2" a partial one, and "3" one that is absent. Cells whose glyph did not survive extraction are left blank.)

Slide 29: Apache Spark + Shark
Stack: HDFS2 (redundant, reliable storage) and YARN (cluster resource management), with Apache Spark, Shark and Hive (SQL) on top.
Notes: The Stinger project is tackling the speed portion by utilizing Apache Tez. Tez sits at the layer between MapReduce, Pig and Hive to optimize the execution of these applications.

Slide 30: Hadoop 2.x: YARN + HDFS
Diagram: a NameNode and a Standby NameNode / ResourceManager, with four DataNode / NodeManager machines running Containers.
Notes: MapReduce version 1 consisted of two daemons/processes. The JobTracker is a master node responsible for managing the cluster resources (map and reduce slots) and job scheduling. The TaskTracker is a per-node agent that manages the map and reduce tasks.

Slide 31: YARN: Key Take-Aways
- Backwards compatible: YARN is backwards compatible with your existing MapReduce applications; you can get value from it right away.
- Resource management: YARN enables fine-grained resource management for better cluster utilization.
- One source of data: YARN allows you to interact with one source of data in multiple ways while maintaining predictable performance and quality of service.
- Enabling smart people: YARN is a flexible framework that is giving smart people and companies the ability to do amazing things with data.
- YARN will be the de facto distributed operating system for Big Data.
Notes: Backwards compatible: whatever you are doing with Hadoop 1.0 and MapReduce today will work with YARN. Even if you don't need all the capabilities of YARN right now, don't hesitate to move to it; as new tools and applications become available on YARN, your company will be able to use them. One source of data: YARN allows you to have that data lake with all of your data applications running against it, while still maintaining predictable performance and quality of service. Resource management: YARN accomplishes this through how it manages resources for better cluster utilization, which translates to more bang for your buck. Enabling smart people: YARN is an extremely flexible framework that is giving smart people and companies the ability to do amazing things with data. All these benefits add up to this: YARN will be the de facto distributed operating system for Big Data. We see the innovation in Big Data happening on YARN, and we want to help you make the right choice now to avoid the headaches and costs that come along with making the wrong choice.
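Before the detailed Storm vs. DataTorrent matrix, here is a minimal sketch of the spout/bolt model that both comparisons, and the Apache Storm Topology slide near the end, refer to. It is illustrative only: it uses the pre-1.0 `backtype.storm` package names that were current at the time of this talk, the spout, bolt and data are made up, and local mode is used purely for demonstration.

```java
// A tiny Storm topology: one spout emitting sentences, one bolt splitting them into words.
import java.util.Map;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;
import backtype.storm.utils.Utils;

public class SimpleTopology {

  // Spout: the data source; here it just emits the same sentence repeatedly.
  public static class SentenceSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;

    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
      this.collector = collector;
    }

    public void nextTuple() {
      Utils.sleep(100);
      collector.emit(new Values("hadoop yarn storm hadoop"));
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
      declarer.declare(new Fields("sentence"));
    }
  }

  // Bolt: one processing step; the Filter / Calculation / writer bolts in the later
  // topology diagram would be further steps shaped like this one.
  public static class SplitBolt extends BaseBasicBolt {
    public void execute(Tuple tuple, BasicOutputCollector collector) {
      for (String word : tuple.getString(0).split(" ")) {
        collector.emit(new Values(word));
      }
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
      declarer.declare(new Fields("word"));
    }
  }

  public static void main(String[] args) {
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("sentences", new SentenceSpout(), 1);
    builder.setBolt("split", new SplitBolt(), 2).shuffleGrouping("sentences");

    // Local mode for illustration; on a real cluster you would use StormSubmitter instead.
    LocalCluster cluster = new LocalCluster();
    cluster.submitTopology("demo", new Config(), builder.createTopology());
    Utils.sleep(5000);
    cluster.shutdown();
  }
}
```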
Slide 32: Storm vs. DataTorrent (Detailed)
Solution Matrix                    | DataTorrent        | Apache Storm
Proprietary / Open Source          | O                  | O
Support for Hadoop 1.x             | 1                  | 1
Support for Hadoop 2.x             | 1                  | 1
Native YARN                        | 1                  | 3
Dashboard                          | 1                  | 3
Extensible via Modules             | 1                  | 1
Technical Support                  | 1                  | 1
Atomic Micro-batch                 | 1                  | 3
Events per Second                  | Billions           | Thousands
Automated Parallelism              | 1                  | 3
Dynamic Runtime Changes            | 1                  | 3
High Availability                  | 1                  | 2
Prog. Languages Supported          | Java, Python, etc. | Java, Python, etc.
Log Analysis                       | 1                  | 3
Site Operations                    | 1                  | 3
MapReduce Diagnostics              | 1                  | 3
Open Source Operators Library      | 1                  | 2
Open Source Application Templates  | 1                  | 3
Complex Computations (DAG)         | 1                  | 3
Linear Scalability                 | 1                  | 3
Security                           | 1                  | 3
CLI and Macros                     | 1                  | 3
Configuration Based Specification  | 1                  | 3
State Checkpointing                | 1                  | 3
(Digit marks are symbol-font glyphs from the original slide, as on slide 28.)

Slide 33: The 1st Generation of Hadoop
Users were forced to create data system silos for managing mixed workloads; developers were forced to abuse the very specific MapReduce model to fit their use cases. Diagram: separate Hadoop and HBase clusters.
Notes: Before we can fully understand what YARN is solving, we need to review what it's replacing. Hadoop 1.0: the initial design of Hadoop was focused on running massive MapReduce jobs to process a web crawl. Although it did end up evolving beyond its initial use case and helped solve the data silo problem, it created a different issue, something called the data system silo problem. Users were forced into creating data system silos due to mixed workloads; HBase is one example. Developers were forced to abuse the very specific MapReduce programming model to try to accommodate their use cases. One of the biggest costs of a Hadoop cluster is copying data between clusters to try to accommodate mixed workloads.

Slide 34: Apache Spark
Stack: HDFS2 (redundant, reliable storage) and YARN (cluster resource management), with Apache Spark, Shark, Hive (SQL), Spark Streaming and MLlib (machine learning) on top.

Slide 35: Project Management Committee Members
Chart values (company labels not preserved in this transcript): 15, 11.
Notes: PMC members are the people who give oversight for the project roadmap and provide guidance to the committers. One thing to highlight may be that Hortonworks is a spin-off of Yahoo!

Slide 36: Project Committers
Chart values (company labels not preserved in this transcript): 24, 24, 11, 11, 5.
Notes: The committers are the ones who actually submit code to the project. One thing to highlight may be that Hortonworks is a spin-off of Yahoo!

Slide 37: YARN: Why the De Facto Distributed OS
- Technology adoption: 100,000+ nodes, 400,000 jobs and 10 million compute hours daily.
- Enables innovation: smart people and companies doing amazing things with data.
- Financial backing: $568M+ invested in Hadoop-contributing companies, nearly $400M in 2013 alone.
Notes: The question may arise how I can state that YARN will be the de facto distributed operating system of Big Data. Here are the arguments for my conclusion / prediction.

Slide 38: Apache Storm Topology
Diagram: two Spout streams (data sources) feed Bolts (Filter, Calculation), with downstream Bolts writing to an RDBMS and to HDFS.

Slide 39: HDFS Write Data Flow
Diagram: a Client, the NameNode and three DataNodes, with numbered steps 1 through 7 ("block bytes", "block write complete", "ack") and block reports A, B, C.
Notes: Steps 1-7: the client connects to the NameNode to establish block placement, then writes to the DataNodes. Once one copy of the data is placed, the client gets an acknowledgement. The first DataNode copies the block to the second DataNode, and the second copies it to the third. The DataNodes then acknowledge to one another that the copies have completed. A, B and C: once the blocks are written to the DataNodes, information about the new block is sent to the NameNode in the block report / heartbeat.
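The write pipeline on the last slide is what HDFS does underneath whenever a client writes a file. From application code the whole flow is driven by the ordinary `FileSystem` API; the sketch below is illustrative, and the path and contents are placeholders, not from the deck.

```java
// Writing a small file to HDFS; create() triggers the NameNode/DataNode pipeline above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/tmp/hello.txt");
    // create() asks the NameNode for block placement; the returned stream writes to the
    // first DataNode, which forwards bytes down the replication pipeline (steps 1-7 above).
    try (FSDataOutputStream out = fs.create(file, true /* overwrite */)) {
      out.writeUTF("hello hdfs");
    }

    FileStatus status = fs.getFileStatus(file);
    System.out.println(file + " length=" + status.getLen()
        + " replication=" + status.getReplication());

    fs.close();
  }
}
```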