Arth-Task-4.1 (How to contribute limited storage to the Hadoop Cluster?)
Task Description:
🔷 In a Hadoop cluster, how do you contribute a limited/specific amount of storage as a slave node to the cluster?
✴️ Hint: Use Linux partitioning concepts to solve the above use case.
To solve this use case, follow the steps below:
Step 1: Create a partition using Linux partitioning concepts.
Step 2: Format the partition.
Step 3: Create a directory and mount the partition on it.
Step 4: Update the Hadoop configuration files.
Let's apply these steps to the use case, starting with Step 1: creating the partition.
Step 1: In this step we go to VirtualBox and attach a new virtual hard disk to the slave machine.
Step 1(a): Open the virtual machine's Settings in VirtualBox and go to Storage.
Next, select Controller: SATA.
Next, click the plus (Add Hard Disk) icon.
Next, choose Create new disk and follow the wizard's options to finish creating the disk.
Following these steps attaches one new disk to the virtual machine.
Run the following command to check whether the disk was created:
#fdisk -l
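On my setup the output looked roughly like this (other disks omitted; the size and device name may differ on your system):

Disk /dev/sdb: 8 GiB, 8589934592 bytes, 16777216 sectors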
Here the new 8 GiB disk appears as /dev/sdb. Next, open the disk in fdisk's interactive mode:
#fdisk /dev/sdb
This command opens the disk in fdisk's interactive mode, where single-letter commands manage partitions: p prints the partitions already created, n creates a new partition (fdisk then asks how much storage you want, e.g. 1 GiB or 2 GiB), w writes (saves) the changes to disk, and q quits without saving.
#p
The p command shows the partitions already created on the disk.
#n
The n command creates a new partition.
After running n, enter the amount of storage needed; in this use case we create a 1 GiB partition.
Then run p again to confirm the partition was created.
#w
The w command writes the new partition table to the disk and exits fdisk.
#fdisk -l
Run this command again to see the new partition listed in the disk's partition table.
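For reference, a full fdisk session for this use case looks roughly like the sketch below (prompts abbreviated; the exact wording depends on your fdisk version, and the partition number, assumed here to be 2 to match the /dev/sdb2 device used later, depends on the disk's existing partitions):

#fdisk /dev/sdb
Command (m for help): n
Partition type (default p): p
Partition number (default 2): 2
First sector (default): <Enter>
Last sector, +/-sectors or +/-size{K,M,G,T,P}: +1G
Command (m for help): p
Command (m for help): w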
Step 2: Format the partition.
In this step we format the partition with the ext4 filesystem:
#mkfs.ext4 /dev/sdb2
This command formats the newly created partition with ext4.
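To confirm the filesystem was written, you can check it with blkid, a standard Linux utility (the UUID below is a placeholder for whatever mkfs generated):

#blkid /dev/sdb2
/dev/sdb2: UUID="<generated-uuid>" TYPE="ext4"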
Step 3: Create a directory and mount the partition on it.
In this step we create one directory and mount the partition on it.
#mkdir /datanode2
This command creates the directory /datanode2.
#mount /dev/sdb2 /datanode2
This command mounts the created partition on the created directory.
#df -h
This command lists all mounted partitions; /dev/sdb2 should now appear mounted on /datanode2.
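Note that a mount made this way does not survive a reboot. If you want the partition mounted permanently, one option (not required for this task) is to add a line like the sketch below to /etc/fstab, adjusting the device and mount point to your setup:

# contributed datanode storage (sketch)
/dev/sdb2   /datanode2   ext4   defaults   0 0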
Step 4: Update the Hadoop configuration files.
In this step we update the hdfs-site.xml file on the datanode so that its data directory property points to the new mount point, /datanode2.
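A minimal sketch of the relevant property is shown below; I am assuming the Hadoop 1.x property name dfs.data.dir here (on Hadoop 2.x and later it is called dfs.datanode.data.dir), so adjust it to your version:

<configuration>
    <property>
        <!-- assuming Hadoop 1.x; use dfs.datanode.data.dir on 2.x+ -->
        <name>dfs.data.dir</name>
        <value>/datanode2</value>
    </property>
</configuration>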
After updating the file, stop and restart the datanode service:
#hadoop-daemon.sh stop datanode
#hadoop-daemon.sh start datanode
These commands stop and start the Hadoop datanode service so the new configuration takes effect.
#jps
This command lists the running Hadoop Java processes and confirms whether the datanode (or, on the master, the namenode) is up.
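On the datanode it prints something like this (process IDs will differ):

1970 DataNode
2245 Jps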
#hadoop dfsadmin -report
This command shows how much storage the datanode is sharing with the namenode (the target node).
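If everything worked, the report shows the datanode contributing roughly 1 GB of configured capacity, something like the trimmed sketch below (the exact numbers vary with filesystem overhead; "..." marks omitted lines):

Configured Capacity: 1023303680 (975.9 MB)
...
Datanodes available: 1 (1 total, 0 dead)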
Conclusion:
In this task we learned some basic concepts of Linux partitioning and Hadoop, and built a setup in which a datanode shares only a limited amount of storage with the namenode (target node).
Thanks to Vimal Daga sir for sharing the knowledge of how to integrate Linux with Hadoop.
Thanks for reading my article!