Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 902 | 06-04-2025 11:36 PM |
| | 1502 | 03-23-2025 05:23 AM |
| | 741 | 03-17-2025 10:18 AM |
| | 2674 | 03-05-2025 01:34 PM |
| | 1776 | 03-03-2025 01:09 PM |
05-17-2018
03:50 PM
@Matthias Tewordt Can you get it to your local machine and trim it to the last 200 or so lines, or upload it to an external site and share the download link? The attachment formats and sizes allowed here are restricted.
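If it helps, a minimal sketch of trimming the log before attaching it; ambari-server.log is just a placeholder for whichever log you are sharing:
# keep only the last 200 lines so the attachment stays within the size limits
$ tail -n 200 ambari-server.log > ambari-server-tail.log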
05-17-2018
01:56 PM
1 Kudo
@Guru Kamath You will need to explicitly add this privilege, because that is exactly the error being thrown:
grant all privileges on *.* to 'ambari'@'xxxx.com' identified by 'panther';
flush privileges;
Please do that and revert.
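If it helps, a minimal sketch of applying those statements from the shell, assuming a root MySQL login and that xxxx.com stands in for your Ambari server FQDN (the syntax is for MySQL/MariaDB versions that still accept IDENTIFIED BY inside GRANT):
# run the grant and flush in one call; you will be prompted for the root password
$ mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'xxxx.com' IDENTIFIED BY 'panther'; FLUSH PRIVILEGES;"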
05-17-2018
01:43 PM
@Guru Kamath That's a permission issue on the ambari database. Can you try running the following?
grant all privileges on *.* to 'ambari'@'xxxx.com' identified by 'ambari_user_password';
flush privileges;
Please try that and revert.
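As a quick check afterwards, a sketch of confirming the grant took effect (xxxx.com again stands in for the Ambari server host):
# the output should now list GRANT ALL PRIVILEGES ON *.* for the ambari user
$ mysql -u root -p -e "SHOW GRANTS FOR 'ambari'@'xxxx.com';"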
05-17-2018
01:22 PM
Back to basics: did you validate the points below?
Set up password-less SSH
Enable NTP on the cluster and on the browser host
Check DNS and NSCD
Configure (or disable) iptables
Disable SELinux and PackageKit, and check the umask value
Disable Transparent Huge Pages (THP)
Also check the DNS resolution: the manual registration was successful on the Ambari host but failed on the 2 other nodes because it just can't locate them. Can you upload the ambari-server.log? A few of these checks are sketched below.
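A minimal sketch of running those checks on each agent host, assuming a systemd-based OS; node2.example.com is just a placeholder for one of the failing nodes:
# forward DNS resolution of this host and the other cluster hosts
$ hostname -f
$ nslookup node2.example.com
# NTP running, SELinux disabled (or permissive), THP off
$ systemctl status ntpd
$ getenforce
$ cat /sys/kernel/mm/transparent_hugepage/enabled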
05-17-2018
12:30 PM
Definitely a connectivity problem. Check your /etc/hosts entries and the DNS resolution.
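A minimal sketch of what I would check, with node1.example.com standing in for one of your actual hosts:
# every node should have consistent entries mapping IPs to FQDNs
$ cat /etc/hosts
# the FQDN must resolve (and be reachable) from every host in the cluster
$ nslookup node1.example.com
$ ping -c 1 node1.example.com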
05-17-2018
07:48 AM
1 Kudo
@Shailna Patidar The NameNode stores the metadata (data about data) for all the block locations in the whole cluster. The NameNode periodically receives a heartbeat and a block report from each of the DataNodes in the cluster. A continuous DataNode heartbeat implies that the DataNode is functioning properly; a block report contains a list of all blocks on that DataNode. When the NameNode notices that it has not received a heartbeat message from a DataNode after a certain amount of time, that DataNode is marked as dead. Since its blocks are now under-replicated, the system begins replicating the blocks that were stored on the dead DataNode. The NameNode orchestrates the replication of those blocks to other DataNodes, and the replication data transfer happens directly between DataNodes; the data never passes through the NameNode. Once the dead DataNode has been replaced, usually a JBOD on a low-end server (recommissioning), the cluster admin can run the re-balancer to redistribute data blocks to the newly recommissioned DataNode.
$ hdfs balancer --help
Usage: java Balancer
[-policy <policy>] the balancing policy: datanode or blockpool
[-threshold <threshold>] Percentage of disk capacity
[-exclude [-f <hosts-file> | comma-separated list of hosts]]
[-include [-f <hosts-file> | comma-separated list of hosts]]
Setting the proper threshold value for the balancer: you can run the balancer command without any parameters, as shown here:
$ sudo -u hdfs hdfs balancer
The balancer command uses the default threshold of 10 percent. This means that the balancer will move blocks from over-utilized to under-utilized nodes until each DataNode's disk usage differs by no more than plus or minus 10 percent of the average disk usage in the cluster. Sometimes you may wish to set the threshold to a different level, for example when free space in the cluster is getting low and you want to keep the used storage levels on the individual DataNodes within a smaller range:
$ hdfs balancer -threshold 5
This process can take a long time depending on the size of the data to replicate. There is a lot of material out there if you want a hands-on walkthrough of decommissioning and recommissioning a DataNode. Hope that explains the usage and use case.
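If you want to see the effect of a dead DataNode before running the balancer, a minimal sketch of the checks I would run first (as the hdfs user; nothing here is specific to your cluster):
# summary of live/dead DataNodes, capacity and DFS usage per node
$ sudo -u hdfs hdfs dfsadmin -report
# filesystem check; the summary at the end reports under-replicated and missing blocks
$ sudo -u hdfs hdfs fsck /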
05-17-2018
07:21 AM
@Shailna Patidar Are you still encountering problems? Yes, globally it is set in hdfs-site.xml, and the default is 3. Your question was how to change the replication factor of an existing file, and I think that question was answered. If you found this answer addressed your question, please take a moment to log in and click the "Accept" link on the answer. That would be a great help to Community users looking for a quick solution to this kind of question.
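For reference, a minimal sketch of both cases; the file path is just a placeholder:
# cluster-wide default for new files: dfs.replication in hdfs-site.xml (default 3)
# per-file change for an existing file, here to a factor of 2; -w waits until the copies exist
$ hdfs dfs -setrep -w 2 /user/shailna/data.txt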
05-17-2018
05:50 AM
@Vaughn Shideler Great that your issue has been resolved. If you find one of the answers addressed your question, please take a moment to log in and click the "Accept" link on the answer. This will ensure other members who encounter the same issue can use that solution 🙂 Happy Hadooping
05-16-2018
09:36 PM
1 Kudo
@Jorge Luis Hernandez Olmos This is usually due to the AMS data being corrupt.
1. Shut down the Ambari Metrics Monitors and Collector via Ambari.
2. Clear out the /var/lib/ambari-metrics-collector directory for a fresh restart.
3. From Ambari -> Ambari Metrics -> Config -> Advanced ams-hbase-site, get the hbase.rootdir and hbase-tmp directories.
4. Delete or move the hbase-tmp and hbase.rootdir directories to an archive folder.
5. Start AMS. All services should come online and the graphs should start to display after a few minutes.
Hope that helps
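A minimal sketch of steps 2-4 from the command line; the paths below are only placeholders, so use the hbase.rootdir and hbase.tmp.dir values your Advanced ams-hbase-site actually shows:
# with AMS stopped from Ambari, archive the collector state for a fresh start
$ mkdir -p /tmp/ams-backup
$ mv /var/lib/ambari-metrics-collector/* /tmp/ams-backup/
# also archive hbase.rootdir and hbase-tmp if they live outside that directory
# then start AMS from Ambari and give the graphs a few minutes to repopulate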
05-16-2018
09:31 PM
1 Kudo
@Vaughn Shideler I don't see why you are logged in as root; usually it is ambari-qa or admin. You should have set the ownership a directory higher: the offending directory permission is on SourceCluster.
# hdfs dfs -ls -d /apps/falcon/SourceCluster/staging
drwxrwxrwx - root hdfs 0 2018-05-10 03:41 /apps/falcon/SourceCluster/staging
So, as the hdfs user, run the snippet below:
$ hdfs dfs -chown -R falcon /apps/falcon
Then proceed as below. I have just tried this out on my single-node cluster (HDP 2.6.2.0, Falcon 0.10.0). The Falcon UI will accept admin, hive or falcon and NOT prompt for the password, so you can log out and log in as admin or falcon. It's important that the user has a home under /apps/*, so in your case you can use falcon from the above sequence. Below is my output:
$ hdfs dfs -ls /apps
Found 4 items
drwxrwxrwx - falcon hdfs 0 2018-05-04 17:48 /apps/falcon
drwxr-xr-x - hdfs hdfs 0 2017-10-19 13:43 /apps/hbase
drwxr-xr-x - hdfs hdfs 0 2017-10-19 13:53 /apps/hive
drwxr-xr-x - zeppelin hdfs 0 2017-10-19 19:25 /apps/zeppelin
See the attached screenshot. Please revert.
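A minimal sketch of applying and verifying the change on your cluster; the paths are the ones from the listing above:
# run the chown as the hdfs superuser, then confirm staging is now owned by falcon
$ sudo -u hdfs hdfs dfs -chown -R falcon /apps/falcon
$ hdfs dfs -ls -d /apps/falcon/SourceCluster/staging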