Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 992 | 06-04-2025 11:36 PM |
|  | 1564 | 03-23-2025 05:23 AM |
|  | 780 | 03-17-2025 10:18 AM |
|  | 2811 | 03-05-2025 01:34 PM |
|  | 1853 | 03-03-2025 01:09 PM |
05-06-2019
06:05 AM
1 Kudo
@3nomis Please read this great post from Koji about the Wait/Notify pattern
05-04-2019
07:23 AM
1 Kudo
@duong tuan anh I can see HiveServer2 also has an issue; can you resolve that, or share what the problem is there? It's the Timeline Service v2 (TSv2) that is not starting, so can you share specifically those logs? Can you run the snippets below?

$ hdfs dfs -chown -R yarn:hadoop /ats

Finally:

$ hdfs dfs -chown -R yarn-ats:hdfs /atsv2/hbase

Restart the services and revert. HTH
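To double-check the ownership before the restart, something like this should confirm it (a minimal sketch; it assumes the /ats and /atsv2/hbase paths above and that the commands run as the hdfs superuser):

```bash
# Run as the hdfs superuser, e.g. after: su - hdfs
# -d lists the directories themselves, so the owner/group columns are visible
hdfs dfs -ls -d /ats /atsv2/hbase
# Expected after the chown: yarn:hadoop on /ats and yarn-ats:hdfs on /atsv2/hbase
```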
05-03-2019
05:47 PM
@Mariano Gastaldi That seems strange, but it is probably explained by the fact that you have no HDFS client software on your laptop. If you want to copy from your laptop and it is not part of the cluster (e.g. an edge node), I don't see how you will succeed. What's happening is that you don't have the Hadoop client libraries installed; if you want to interact with the cluster, you usually install the client software such as the HDFS, YARN, Sqoop and HBase clients, etc.

Despite having an Ubuntu or other Linux-based client, there is no way you can copy those files unless you first copy them, using something like WinSCP, to /home/analyst1 or /tmp on a node that is part of the cluster. Then, and only then, can you run the hdfs command from your local directory without encountering "Unable to load native-hadoop library for your platform":

$ hdfs dfs -copyFromLocal file_to_copy.txt hdfs://server1:9000/testing.txt

For the command below, the directory /user/analyst1 should be pre-created with the correct permissions for user analyst1:

$ hdfs dfs -copyFromLocal file_to_copy.txt /user/analyst1

Another solution is to deploy a standalone HDP cluster, remove all the other components, and copy hdfs-site.xml and core-site.xml to your local laptop to initiate the connection. Hope that helps
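Pre-creating that user directory could look like the sketch below (the analyst1 user and /user/analyst1 path come from the commands above; the hdfs group and the 750 permission are assumptions, so adjust to your environment):

```bash
# Run as the HDFS superuser on a node that is part of the cluster, e.g. after: su - hdfs
hdfs dfs -mkdir -p /user/analyst1              # create the user's HDFS home directory
hdfs dfs -chown analyst1:hdfs /user/analyst1   # hand ownership to analyst1 (group is an assumption)
hdfs dfs -chmod 750 /user/analyst1             # restrict access; loosen if your policy differs

# Then, as analyst1 on that same node, the relative copy works:
hdfs dfs -copyFromLocal file_to_copy.txt /user/analyst1
```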
05-03-2019
11:05 AM
@duong tuan anh Any updates?
05-03-2019
08:03 AM
1 Kudo
@duong tuan anh Indeed the files are huge. There is a quick fix for what I saw after reading your logs, namely: Caused by: org.apache.hadoop.security.AccessControlException

As the root user, switch to hdfs:

# su - hdfs

Change ownership of the MapReduce history directory:

$ hdfs dfs -chown -R mapred:hadoop /mr-history

That should resolve the problem. Keep me posted
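A quick way to confirm the change took effect (a sketch; /mr-history/tmp and /mr-history/done are the usual HDP job-history sub-directories, so adjust if yours differ):

```bash
# Still as the hdfs user: the owner column should now read mapred:hadoop
hdfs dfs -ls -d /mr-history
hdfs dfs -ls /mr-history        # check the sub-directories, e.g. /mr-history/tmp and /mr-history/done
```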
05-03-2019
12:46 AM
1 Kudo
@duong tuan anh Can you also attach the recent logs below:
- hadoop-yarn-resourcemanager-xxxx.log
- hadoop-yarn-nodemanager-xxxx.log
- hadoop-yarn-root-registrydns-xxxx.log
- hbase-yarn-ats-master-xxxx.log

Thank you
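If it helps, the recent portion of each of the logs listed above can be grabbed like this (a sketch; /var/log/hadoop-yarn/yarn is the usual HDP default log directory, and the ats-hbase master log may live in a separate embedded-HBase directory depending on the setup):

```bash
# Default HDP log directory for the YARN daemons (adjust if customized)
cd /var/log/hadoop-yarn/yarn || exit 1
shopt -s nullglob   # skip patterns that match nothing
for f in hadoop-yarn-*resourcemanager-*.log \
         hadoop-yarn-*nodemanager-*.log \
         hadoop-yarn-*registrydns-*.log \
         hbase-yarn-ats-master-*.log; do
  tail -n 1000 "$f" > "/tmp/${f}.tail"   # keep only the recent portion for the attachment
done
```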
05-02-2019
05:56 PM
@PK The NameNode failover looks like a normal process and the edit files are being applied correctly. The KDC also appears to be performing fine, without any Kerberos errors, and that leaves one culprit! Your application client at IP 39.7.48.5 is misconfigured to connect to a specific NameNode, so when the failover happens you hit the error [org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby]. Your client should be configured to use the nameservice, which works like a DNS alias for the NameNodes, rather than hard-coding a single NameNode. Can you validate my suspicion? HTH
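To illustrate the difference (a sketch; "mycluster" and the host name are placeholders, not values from your cluster):

```bash
# What the client actually resolves as its default filesystem
hdfs getconf -confKey fs.defaultFS          # should print hdfs://mycluster, not a single host:port

# Hard-coded NameNode host: fails with StandbyException after a failover
hdfs dfs -ls hdfs://namenode1.example.com:8020/user

# HA nameservice: the client transparently retries against the active NameNode
hdfs dfs -ls hdfs://mycluster/user
```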
05-02-2019
08:56 AM
@duong tuan anh Can you share the YARN/MR logs?
05-02-2019
06:42 AM
@Matthew Alcala Can you share some info about your cluster: HDP version, AD/LDAP integration, Kerberos, number of nodes, etc.? Then I can check whether I have a sandbox of the same version.
05-01-2019
06:55 PM
@Matthew Alcala Have you tried setting the policy [all - database, table, column] instead of all - hiveservice, which enables a user who has the Service Admin permission in Ranger to run the kill query API? Reference_Ranger_hive_policy Please try that and revert
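Once that permission is in place, the kill itself can be issued from beeline along these lines (a minimal sketch; the JDBC URL, user, and query ID are placeholders):

```bash
# Kill a running Hive query by its query ID (requires the kill query / Service Admin permission)
beeline -u "jdbc:hive2://hiveserver2.example.com:10000/default" \
        -n admin_user \
        -e "KILL QUERY 'hive_20190501123456_abcd1234-ef56-7890-abcd-1234567890ab'"
```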