Member since
03-06-2017
28
Posts
7
Kudos Received
1
Solution
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1344 | 10-11-2016 03:23 PM |
02-19-2018
05:06 AM
In this case the source host (the ApplicationMaster) was unable to receive any response from the destination: this was a faulty routing issue, and the packets were simply lost.
I checked the source code of the class org.apache.hadoop.ipc.Client and found that it sends a ping to check for a response from the destination host and keeps retrying until it receives one. This is clearly stated in the source, readable at http://grepcode.com/file/repo1.maven.org/maven2/com.ning/metrics.action/0.2.0/org/apache/hadoop/ipc/Client.java
"This class sends a ping to the remote side when timeout on reading. If no failure is detected, it retries until at least a byte is read."
So, due to the routing issue, it kept retrying until the job was killed. Thanks to grepcode.com for providing an easy way to read the source code.
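The quoted behavior can be modeled roughly as follows. This is a simplified Python sketch of the read-then-ping retry loop, not the actual Hadoop Java code; the ping interval and the PING payload are illustrative stand-ins (Hadoop derives the real interval from ipc.ping.interval and sends a framed IPC ping):

```python
import socket

PING_INTERVAL_SECS = 60  # illustrative; Hadoop reads this from ipc.ping.interval


def read_with_ping(sock: socket.socket, nbytes: int) -> bytes:
    """Rough model of org.apache.hadoop.ipc.Client's read loop: on a read
    timeout, send a ping; if the ping itself does not fail, keep retrying
    until at least one byte is read."""
    sock.settimeout(PING_INTERVAL_SECS)
    while True:
        try:
            data = sock.recv(nbytes)
            if data:
                return data
            raise EOFError("remote side closed the connection")
        except socket.timeout:
            # No data yet: ping the remote side. With a broken route the
            # ping never fails fast either, so this loop spins until the
            # job is killed externally -- exactly the symptom described.
            sock.sendall(b"PING")
```

This also shows why the job hung rather than failed: the loop has no retry limit of its own, only the implicit one of the ping eventually erroring out.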
08-02-2017
05:22 AM
1 Kudo
I found the solution. There was an incorrect hostname entry in the /etc/hosts file on the ResourceManager node, and as a result NodeManager registration failed, because the ResourceManager does not accept requests from an unauthorized host. Thanks, Khireswar
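A quick way to spot this class of problem is to check what the hosts file actually maps a NodeManager's hostname to. The sketch below is a hedged Python helper, not part of Hadoop; the hostnames in the usage are hypothetical. Zero or multiple IPs for one hostname is the kind of bad /etc/hosts entry that breaks registration:

```python
def check_hosts_entry(hosts_path: str, hostname: str) -> list:
    """Return every IP that a hosts-format file maps to `hostname`.
    An empty list (no entry) or more than one IP (conflicting entries)
    both point at a misconfigured /etc/hosts."""
    ips = []
    with open(hosts_path) as f:
        for line in f:
            line = line.split("#")[0].strip()  # drop comments and blanks
            if not line:
                continue
            parts = line.split()
            # parts[0] is the IP, the rest are names/aliases for it
            if hostname in parts[1:]:
                ips.append(parts[0])
    return ips
```

Running it against /etc/hosts on the ResourceManager node for each NodeManager hostname would have surfaced the wrong entry immediately.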
03-01-2017
08:53 AM
1 Kudo
Check whether you have given that user UDF permission on all databases, either directly or via his group. I've just discovered that in HDP-2.5.3, if I give UDF permission to u1 on all databases using his group, then u1 can list all databases and can even do "use db1" even though he has no table permission on db1, but "show tables" returns an empty list. When I remove his group from the UDF policy, it works as expected.
10-11-2016
03:23 PM
1 Kudo
I have received an e-credit on my examlocal account. Now I will be able to reschedule my exam.
10-07-2016
08:56 PM
1 Kudo
The RegionServer is failing because of this:
2016-10-07 03:01:03,102 WARN [master/imp1tvhdpmst1.corp.test.com/172.24.125.130:16000] util.Sleeper: We slept 63817ms instead of 3000ms, this is likely due to a long garbage collecting pause and it's usually bad, see http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired Please check your GC settings and tune the GC.
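That warning comes from HBase's Sleeper utility, which compares how long it actually slept against how long it asked to sleep; a stop-the-world GC pause inflates the measured time, and if it outlasts the ZooKeeper session timeout the RegionServer is expired. A rough Python model of the check (the warning threshold here is illustrative, not HBase's exact value):

```python
import time

WARN_THRESHOLD_MS = 10_000  # illustrative; HBase warns on large overshoots


def sleep_and_check(period_ms: int) -> int:
    """Sleep for period_ms and return the overshoot in milliseconds.
    A JVM-wide GC pause shows up as a huge overshoot (e.g. 63817ms
    measured for a requested 3000ms, as in the log above)."""
    start = time.monotonic()
    time.sleep(period_ms / 1000)
    slept_ms = (time.monotonic() - start) * 1000
    overshoot = round(slept_ms - period_ms)
    if overshoot > WARN_THRESHOLD_MS:
        print(f"We slept {round(slept_ms)}ms instead of {period_ms}ms, "
              "likely a long garbage collecting pause")
    return overshoot
```

The fix is on the JVM side (heap sizing, GC tuning) or raising zookeeper.session.timeout, not in this check itself.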
06-16-2016
10:29 AM
1 Kudo
I got it working on Ambari 2.2.1.
1. Create mount points:
# mkdir /hadoop/hdfs/data1 /hadoop/hdfs/data2 /hadoop/hdfs/data3
# chown hdfs:hadoop /hadoop/hdfs/data1 /hadoop/hdfs/data2 /hadoop/hdfs/data3
(We are using this configuration for test purposes only, so no disks are actually mounted.)
2. Log in to Ambari > HDFS > Settings.
3. Add the DataNode directories as shown below:
DataNode > DataNode directories: [DISK]/hadoop/hdfs/data,[SSD]/hadoop/hdfs/data1,[RAMDISK]/hadoop/hdfs/data2,[ARCHIVE]/hadoop/hdfs/data3
4. Restart the HDFS service, then restart all other affected services.
5. Create a directory /cold and set the COLD storage policy on it:
# su hdfs
[hdfs@hdp-qa2-n1 ~]$ hadoop fs -mkdir /cold
[hdfs@hdp-qa2-n1 ~]$ hdfs storagepolicies -setStoragePolicy -path /cold -policy COLD
Set storage policy COLD on /cold
6. Verify the storage policy:
[hdfs@hdp-qa2-n1 ~]$ hdfs storagepolicies -getStoragePolicy -path /cold
The storage policy of /cold:
BlockStoragePolicy{COLD:2, storageTypes=[ARCHIVE], creationFallbacks=[], replicationFallbacks=[]}
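The BlockStoragePolicy output above reads as: policy COLD, id 2, places every replica on ARCHIVE storage, with no fallbacks when that storage is unavailable. A small Python sketch of that structure follows; the field names mirror the printed output, the COLD values are copied from it, and the replica-placement method is a deliberate simplification of HDFS's real block placement, included only as an assumption for illustration:

```python
from dataclasses import dataclass, field


@dataclass
class BlockStoragePolicy:
    """Mirrors the fields printed by `hdfs storagepolicies -getStoragePolicy`."""
    name: str
    policy_id: int
    storage_types: list
    creation_fallbacks: list = field(default_factory=list)
    replication_fallbacks: list = field(default_factory=list)

    def choose_storage_types(self, replication: int) -> list:
        """Simplified placement: replicas cycle through the policy's
        storage types. COLD lists only ARCHIVE, so every replica lands
        on an [ARCHIVE]-tagged directory such as /hadoop/hdfs/data3."""
        return [self.storage_types[i % len(self.storage_types)]
                for i in range(replication)]


# The policy shown in the command output:
COLD = BlockStoragePolicy(name="COLD", policy_id=2, storage_types=["ARCHIVE"])
```

With the default replication factor of 3, all three replicas of a file under /cold target ARCHIVE-tagged directories, which is exactly what the empty fallback lists imply.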
06-16-2016
10:21 AM
The following steps worked for me:
1. Create mount points:
# mkdir /hadoop/hdfs/data1 /hadoop/hdfs/data2 /hadoop/hdfs/data3
# chown hdfs:hadoop /hadoop/hdfs/data1 /hadoop/hdfs/data2 /hadoop/hdfs/data3
(We are using this configuration for test purposes only, so no disks are actually mounted.)
2. Log in to Ambari > HDFS > Settings.
3. Add the DataNode directories as shown below:
DataNode > DataNode directories: [DISK]/hadoop/hdfs/data,[SSD]/hadoop/hdfs/data1,[RAMDISK]/hadoop/hdfs/data2,[ARCHIVE]/hadoop/hdfs/data3
4. Restart the HDFS service, then restart all other affected services.
5. Create a directory /cold and set the COLD storage policy on it:
# su hdfs
[hdfs@hdp-qa2-n1 ~]$ hadoop fs -mkdir /cold
[hdfs@hdp-qa2-n1 ~]$ hdfs storagepolicies -setStoragePolicy -path /cold -policy COLD
Set storage policy COLD on /cold
6. Verify the storage policy:
[hdfs@hdp-qa2-n1 ~]$ hdfs storagepolicies -getStoragePolicy -path /cold
The storage policy of /cold:
BlockStoragePolicy{COLD:2, storageTypes=[ARCHIVE], creationFallbacks=[], replicationFallbacks=[]}