Member since 03-06-2020 · 406 Posts · 56 Kudos Received · 37 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 392 | 08-29-2025 12:27 AM |
| | 1021 | 11-21-2024 10:40 PM |
| | 977 | 11-21-2024 10:12 PM |
| | 3047 | 07-23-2024 10:52 PM |
| | 2149 | 05-16-2024 12:27 AM |
10-21-2021
10:29 AM
@Kallem Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. If you are still experiencing the issue, can you provide the information @ChethanYM has requested?
10-18-2021
08:20 AM
Thanks @ChethanYM. It really saved me time looking for a solution.
10-16-2021
12:08 AM
Hi,

Are you using the --password-file option inside your workflow? I have found a bug [1][2] similar to the issue you are facing. If this is the issue, can you turn off uber mode and retry the job?

```xml
<global>
    <configuration>
        <property>
            <name>oozie.launcher.mapreduce.job.ubertask.enable</name>
            <value>false</value>
        </property>
    </configuration>
</global>
```

Also, are you making any FileSystem.close() call in your actions? FileSystem objects are automatically cached in Hadoop, so if you close the FileSystem object from inside your Java action class, which runs in the same JVM as the launcher map task, the single shared FileSystem instance itself gets closed. The launcher then fails when it tries to use that (already closed) FileSystem as it attempts to exit normally.

Regards, Chethan YM

Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.

[1]. https://issues.apache.org/jira/browse/SQOOP-2997
[2]. https://jira.cloudera.com/browse/CDH-43107
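To illustrate why that close() call is dangerous, here is a toy Python sketch of the cached-handle pitfall. It is not the Hadoop API — the class, cache, and URIs below are invented for illustration — but it mirrors how FileSystem.get() hands every caller in the same JVM the same cached instance:

```python
class FileSystemHandle:
    """Toy stand-in for Hadoop's FileSystem; NOT the real API."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

    def read(self, path):
        if self.closed:
            raise IOError("Filesystem closed")
        return "contents of " + path


# Hadoop keys its cache on (scheme, authority, user); a plain dict stands in here.
_CACHE = {}

def get_filesystem(uri):
    """Like FileSystem.get(): the same URI returns the same shared instance."""
    if uri not in _CACHE:
        _CACHE[uri] = FileSystemHandle()
    return _CACHE[uri]


# The Java action and the launcher run in the same JVM, so they share a handle.
action_fs = get_filesystem("hdfs://nn:8020")
launcher_fs = get_filesystem("hdfs://nn:8020")

action_fs.close()          # the action "tidies up" its own handle...
print(launcher_fs.closed)  # ...and the launcher's handle is now closed too: True
```

So the fix is simply to never call close() on a cached FileSystem from action code; let the framework manage its lifetime.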
10-15-2021
08:58 PM
Hi,

I have found another community article that addresses your concern. Please check the quote below:

"That sounds like all is working as designed/implemented, since Ranger does not currently (as of HDP 2.4) have a supported plug-in for Spark, and when Spark reads Hive tables it really isn't going through the 'front door' of Hive to run queries (it is reading the files from HDFS directly). That said, the underlying HDFS authorization policies (either with or without Ranger) will be honored if they are in place."

Article: https://community.cloudera.com/t5/Support-Questions/Does-Spark-job-honor-Ranger-hive-policies/td-p/147760

Do mark it resolved if it really helps you.

Regards, Chethan YM
10-15-2021
08:29 PM
1 Kudo
Hi,

These are just health tests of the Impala daemon processes. Are you facing any query failures due to these tests? All of these should recover within a few minutes, and their thresholds can be tuned accordingly. Please refer to the Cloudera documentation below for more details on these health tests:

https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_ht_impala_daemon.html

Regards, Chethan YM
10-10-2021
10:36 PM
@leonid, Have any of the replies helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
10-07-2021
08:52 AM
Hi @pvishnu, thanks for the response. This is a new cluster, but Ranger had previously been installed on the node, so I modified some of the data in the Ambari Postgres DB to make it think that Ranger is already installed. Is there any documentation on what I should do to make sure that everything is synced up? Thanks,
10-05-2021
10:49 PM
Finally, it's resolved. I made a mistake where I created the Unix user & group in the wrong location; they should be created on the master node instead. Also, the cluster might need to be restarted for the changes to take effect (in my case I had to restart, otherwise I couldn't view the granted table list in the Hue Manager). Thank you! New lesson learned. 🙂
10-03-2021
10:10 PM
Hi,

Is this a new Oozie setup, or was it running fine earlier? Is the Oozie service up and running? Can you provide the output of the commands below to verify the job status and its logs? (Replace the workflow ID.)

```shell
oozie job -oozie http://<oozie-server-host>:11000 -info <workflow-id>
oozie job -oozie http://<oozie-server-host>:11000 -log <workflow-id>
```

Regards, Chethan YM
10-03-2021
09:01 PM
Hi,

Yes, you can use both TLS/SSL and Kerberos in a Hadoop cluster: TLS/SSL provides encrypted network communication, while Kerberos provides user authentication. The documentation links below may be relevant to your post:

https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/admin_cm_ha_tls.html
https://community.cloudera.com/t5/Support-Questions/Is-it-possible-to-do-kerberos-and-ssl/td-p/120470
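As a rough sketch of how the two features are enabled independently (these are standard Hadoop property names, but the values are illustrative examples only, not a complete or verified configuration):

```xml
<!-- core-site.xml: Kerberos decides WHO you are (authentication) -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>

<!-- hdfs-site.xml: TLS encrypts traffic on the wire for HTTP endpoints -->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
```

Because the settings live in different layers, a cluster can run either one alone or both together, which is the usual production setup.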