Member since: 07-20-2020
Posts: 11
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
| 6968 | 07-23-2020 06:18 AM
07-23-2020 06:18 AM
Since the solution is scattered across many posts, here is a short summary of what I did. I am running the HDP 2.6.5 image on VirtualBox.

1. Increased my virtual hard disk through the Virtual Media Manager.
2. In the guest OS, partitioned the unused space.
3. Formatted the new partition as an ext4 file system.
4. Mounted the file system.
5. Updated /etc/fstab (I couldn't do this, as I did not find that file).
6. In Ambari, under the DataNode directories config, added the newly mounted file system as a comma-separated value.
7. Restarted HDFS.

My cluster did not have any files, so I did not run the balancer:

sudo -u hdfs hdfs balancer

Thanks to @Shelton for his guidance.
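The guest-OS steps above can be sketched roughly as follows. This is a minimal sketch, not the exact commands from the thread: the device name `/dev/sdb1`, the mount point `/mnt/datadisk`, and the existing DataNode directory `/hadoop/hdfs/data` are all assumptions; check your own layout with `lsblk` and in Ambari first. The destructive commands are left commented out.

```shell
#!/bin/sh
# Assumed names -- verify with `lsblk` and your Ambari config before running.
NEW_DEV=/dev/sdb1
MOUNT_POINT=/mnt/datadisk

# Format the new partition as ext4 (destructive; only on the new partition):
# mkfs.ext4 "$NEW_DEV"

# Mount it:
# mkdir -p "$MOUNT_POINT"
# mount "$NEW_DEV" "$MOUNT_POINT"

# Persist the mount across reboots; if /etc/fstab is missing, create it.
FSTAB_LINE="$NEW_DEV  $MOUNT_POINT  ext4  defaults  0 2"
echo "$FSTAB_LINE"
# echo "$FSTAB_LINE" >> /etc/fstab

# New value for Ambari's "DataNode directories" property: the existing
# directory plus one on the new file system, comma separated.
DN_DIRS="/hadoop/hdfs/data,$MOUNT_POINT/hdfs/data"
echo "$DN_DIRS"
```

After saving the DataNode directories change in Ambari and restarting HDFS, the new mount shows up as additional DataNode capacity.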
07-22-2020 09:24 PM
The Ambari Files view (and likewise the Hue File Browser) is not the right tool for uploading (very) big files. It runs in a JVM, and uploading big files consumes memory there: you will hit the maximum available memory very quickly and cause performance issues for other users while your upload runs. It is possible to add extra Ambari view server instances to improve performance (they can be dedicated to particular teams/projects). For very big files, prefer CLI tools: scp the file to an edge node with a big file system and then run hdfs dfs -put, or use distcp, or use an object store accessible from your Hadoop cluster with good network bandwidth.
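The CLI workflow suggested above might look like this. A hedged sketch only: the hostname `edge01.example.com`, the staging path `/tmp`, the HDFS target `/user/alice/data`, and the NameNode addresses in the distcp line are all made-up examples. The cluster commands are commented out since they need a real cluster.

```shell
#!/bin/sh
# Hypothetical names -- substitute your own edge node, user, and paths.
BIG_FILE=big_dataset.csv
EDGE_HOST=edge01.example.com
HDFS_DIR=/user/alice/data

# 1. Stage the file on the edge node (needs enough local disk there):
# scp "$BIG_FILE" "alice@$EDGE_HOST:/tmp/$BIG_FILE"

# 2. On the edge node, stream it into HDFS. Unlike the web views, -put
#    streams the local file, so it does not buffer the whole file in a JVM.
PUT_CMD="hdfs dfs -put /tmp/$BIG_FILE $HDFS_DIR/"
echo "$PUT_CMD"
# $PUT_CMD

# 3. Between clusters, distcp runs the copy as a distributed job:
# hadoop distcp hdfs://src-nn:8020$HDFS_DIR hdfs://dst-nn:8020$HDFS_DIR
```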