Month: May 2018

HDFS: Read & Write Commands using Java API

Hadoop comes with a distributed file system called HDFS (Hadoop Distributed File System), and Hadoop-based applications make use of HDFS. HDFS is designed for storing very large data files and runs on clusters of commodity hardware. It is fault tolerant, scalable, and extremely simple to expand. Note: when data exceeds the capacity of storage on a single … Continue reading HDFS: Read & Write Commands using Java API
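The full post walks through the commands in detail; as a rough, minimal sketch of the idea, the Java snippet below writes a small file to HDFS and reads it back through Hadoop's FileSystem API. The NameNode URI, path, and file contents are placeholders chosen for illustration, not values taken from the post.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadWrite {
    public static void main(String[] args) throws Exception {
        // Point the client at the NameNode; this URI is a placeholder.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/user/hadoop/example.txt");

        // Write: create (or overwrite) the file and write a line of text.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.write("Hello HDFS".getBytes(StandardCharsets.UTF_8));
        }

        // Read: open the same file and print its contents line by line.
        try (FSDataInputStream in = fs.open(file);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(in, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }

        fs.close();
    }
}

Compiled against the Hadoop client libraries and run with a reachable cluster, the same FileSystem handle also exposes operations such as delete and listStatus.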

Spark: Programming with RDDs

An RDD (Resilient Distributed Dataset) in Spark is simply an immutable, distributed collection of objects. Each RDD is split into multiple partitions (smaller units), which may be computed on different nodes of the cluster. RDDs can contain any type of Python, Java, or Scala objects, … Continue reading Spark: Programming with RDDs
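As a minimal sketch of programming with RDDs from Java (the post itself also covers Python and Scala), the example below parallelizes a small collection into an RDD, applies a map transformation, and triggers computation with a reduce action. The local master URL and the sample numbers are assumptions made for illustration.

import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class RddExample {
    public static void main(String[] args) {
        // Run locally, using as many worker threads as logical cores.
        SparkConf conf = new SparkConf().setAppName("RddExample").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Parallelize a small in-memory collection into an RDD;
        // Spark splits it into partitions across the nodes (here, local threads).
        List<Integer> data = Arrays.asList(1, 2, 3, 4, 5);
        JavaRDD<Integer> numbers = sc.parallelize(data);

        // map is a lazy transformation; reduce is an action that triggers execution.
        JavaRDD<Integer> squares = numbers.map(x -> x * x);
        int sum = squares.reduce(Integer::sum);

        System.out.println("Sum of squares: " + sum);
        sc.close();
    }
}

Transformations such as map are evaluated lazily; nothing runs until an action such as reduce or collect is called.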

Hadoop Multi Node Clusters

Installing Java: the syntax of the java version command is $ java -version, and the following output is presented: java version "1.7.0_71" Java(TM) SE Runtime Environment (build 1.7.0_71-b13) Java HotSpot(TM) Client VM (build 25.0-b02, mixed mode). Creating a user account: a system user account should be created on both the master and slave systems to use the Hadoop installation. # useradd hadoop # … Continue reading Hadoop Multi Node Clusters

Hadoop: Features, Components, Cluster & Topology

Apache Hadoop is a framework used to develop data processing applications which are executed in a distributed computing environment. The post covers the components of Hadoop, the features of Hadoop, and network topology in Hadoop. Similar to data residing in a local file system on a personal computer, in Hadoop data resides in a distributed file system, which is called … Continue reading Hadoop: Features, Components, Cluster & Topology