Member since: 01-08-2018
Posts: 133
Kudos Received: 31
Solutions: 21
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 17285 | 07-18-2018 01:29 AM |
 | 3096 | 06-26-2018 06:21 AM |
 | 5245 | 06-26-2018 04:33 AM |
 | 2706 | 06-21-2018 07:48 AM |
 | 2231 | 05-04-2018 04:04 AM |
12-07-2021
03:57 AM
Yes, you can update.
10-12-2020
10:08 PM
I imported our existing v5.12 workflows via the command-line loaddata. They show up in the Hue 3 Oozie Editor, but not in Hue 4. We are using CDH 5.16. I find the new "everything is a document" paradigm confusing and misleading: Oozie workflows, Hive queries, Spark jobs, etc. are not physical documents in the Unix/HDFS sense that normal users would expect, with absolute paths that can be accessed and manipulated directly. The traditional-style Hue 3 UI lets one focus on working with the technology at hand, instead of imposing The Grand Unifying Design on the user.
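For reference, a minimal sketch of the dumpdata/loaddata round-trip (the install paths are assumptions for a parcel-based CDH deployment; adjust to your environment):

```bash
# On the source Hue host (CDH 5.12): export the Hue database objects
# via the Django dumpdata management command.
/opt/cloudera/parcels/CDH/lib/hue/build/env/bin/hue dumpdata > /tmp/hue-docs.json

# On the target host (CDH 5.16): import them back via loaddata.
/opt/cloudera/parcels/CDH/lib/hue/build/env/bin/hue loaddata /tmp/hue-docs.json
```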
08-12-2019
12:12 AM
The issue is a TLS version mismatch. The commands below fixed it for me; you can try them as well:

echo 'export JAVA_TOOL_OPTIONS="-Dhttps.protocols=TLSv1.2"' >> ~/.bashrc
source ~/.bashrc
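To confirm the setting took effect, start any JVM and look for the picked-up options line (a quick check; the exact wording assumes a standard HotSpot JDK):

```bash
# The JVM echoes JAVA_TOOL_OPTIONS to stderr on startup.
java -version
# Expected on stderr: Picked up JAVA_TOOL_OPTIONS: -Dhttps.protocols=TLSv1.2
```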
11-14-2018
01:57 AM
Hi, I have the same problem. Could you please give me a detailed guide? Thanks a lot.
10-24-2018
10:17 PM
@Huriye: What does that mean? Could you please explain?
10-09-2018
09:04 AM
1 Kudo
Setting up the cron job will make this particular error go away, but eventually you are bound to run into a lot of other issues. Feel free to try it, though, and let me know your experience afterwards 🙂
09-27-2018
06:55 AM
@ramarov Thank you for the suggestion! We'll keep it in mind for our future sprint updates.
... View more
09-06-2018
01:53 AM
This is related to the JobHistoryServer log reported earlier. Please ensure/perform the following for the JHS and job completions to work properly.

First, ensure that 'mapred' and 'yarn' are both part of the 'hadoop' group:

~> hdfs groups mapred
~> hdfs groups yarn

Both commands must include 'hadoop' in their output. If they don't, add the users to that group.

Second, ensure that all files and directories under the HDFS /tmp/logs aggregation directory (or whatever you've reconfigured it to use) and /user/history/* have their group set to 'hadoop' and nothing else:

~> hadoop fs -chgrp -R hadoop /user/history /tmp/logs
~> hadoop fs -chmod -R g+rwx /user/history /tmp/logs

Note: the ACLs suggested earlier are not required to resolve this problem. In the default state, the group on these directories is what matters; the group setup described above is how the YARN and JHS daemon users share information and responsibilities with each other. You may remove any ACLs you set, or leave them be, as they are still permissive.
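For convenience, a minimal sketch that bundles the membership check and the ownership fix above into one script, assuming the default /tmp/logs and /user/history paths:

```bash
#!/usr/bin/env bash
# Sketch: verify group membership and reset HDFS group ownership for the JHS.
# Assumes default /tmp/logs and /user/history paths; adjust if reconfigured.
set -euo pipefail

# 'hdfs groups <user>' prints the groups each user resolves to.
for u in mapred yarn; do
  if ! hdfs groups "$u" | grep -qw hadoop; then
    echo "WARNING: user '$u' is not in the 'hadoop' group" >&2
  fi
done

# Reset group ownership and group permissions on the log/history directories.
hadoop fs -chgrp -R hadoop /user/history /tmp/logs
hadoop fs -chmod -R g+rwx /user/history /tmp/logs
```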
08-30-2018
02:00 AM
From my research: when removing cluster nodes, decommission at most two at a time; otherwise the data is at risk of being lost. Also, if the number of replicas is insufficient, the system will not complete the removal operation, and you will end up having to re-add the assigned roles again.
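As a quick sanity check before (and after) each decommission step, the standard HDFS tools report replication health; a minimal sketch:

```bash
# Flag under-replicated or missing blocks before removing more nodes.
hdfs fsck / | grep -Ei 'under-replicated|missing'

# Summarize live/dead DataNodes and remaining capacity.
hdfs dfsadmin -report
```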
07-27-2018
01:53 AM
I am using Cloudera Manager to manage my cluster, and I found my problem: I wanted to update a parameter that the Cloudera Manager team has already fixed as a constant value. Cloudera Manager doesn't allow updating certain parameters, such as io.storefile.bloom.block.size and the other constant parameters you can find here: https://www.cloudera.com/documentation/other/shared/CDH5-Beta-2-RNs/hbase_jdiff_report-p-cdh4.5-c-cdh5b2/cdh4.5/constant-values.html So my problem is solved. Thank you very much for your help.