Member since: 03-06-2020
Posts: 406
Kudos Received: 56
Solutions: 37
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1089 | 08-29-2025 12:27 AM |
| | 1631 | 11-21-2024 10:40 PM |
| | 1539 | 11-21-2024 10:12 PM |
| | 5282 | 07-23-2024 10:52 PM |
| | 3017 | 05-16-2024 12:27 AM |
11-02-2021
12:19 AM
1 Kudo
Hi @ighack , As with YARN, there is no direct option for an Impala admission control pool. Per the Cloudera documentation, you need to click "Default Settings" under Impala admission control to add users and groups; below is a screenshot of it. After you click Default Settings, choose "Allow these users and groups to submit to this pool" to add the users and groups. Regards, Chethan YM
11-01-2021
04:35 AM
Hi @wert_1311 , It seems the "load_catalog_in_background" option is set to true in your cluster, which can result in lock contention. The recommended value for this parameter is false; please change it to false and see if that helps. If you are also seeing JVM pauses in the catalogd logs, try increasing the catalog heap size and monitor the cluster. https://docs.cloudera.com/documentation/enterprise/5-16-x/topics/impala_config_options.html Regards, Chethan YM Was your question answered? Make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
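As a sketch of what the change amounts to: the flag below is the documented catalogd startup option; in a Cloudera Manager deployment it is set through the Catalog Server configuration rather than by hand-editing the command line.

```
# catalogd startup flag (set via Cloudera Manager, not by hand):
--load_catalog_in_background=false
```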
11-01-2021
04:13 AM
Hi @ighack , When you click the Submission Access Control tab to specify which users and groups can submit queries, anyone can submit queries by default. To restrict this, select the "Allow these users and groups" option and provide comma-delimited lists of users and groups in the Users and Groups fields respectively. I am attaching a screenshot for your reference. Please check the Cloudera documentation below for the same. https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/impala_howto_rm.html#enable_admission_control__d3424603e189 Regards, Chethan YM
10-25-2021
11:11 PM
Hi @pauljoshiva , Sqoop uses the local Hive client, which connects automatically, so you need to modify the connection information in beeline-site.xml as described in the article below: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-Usingbeeline-site.xmltoautomaticallyconnecttoHiveServer2 Go to /usr/hdp/<VERSION>/hive/conf, then open beeline-site.xml and modify it there. Regards, Chethan YM
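A minimal sketch of such a beeline-site.xml, assuming a ZooKeeper-based HiveServer2 connection (the host, port, and namespace values are placeholders, not taken from this thread; the property names follow the Hive wiki page linked above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- A named JDBC URL; "container" is an arbitrary label. -->
  <property>
    <name>beeline.hs2.jdbc.url.container</name>
    <value>jdbc:hive2://zk-host.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2</value>
  </property>
  <!-- Which named URL Beeline uses when invoked with no arguments. -->
  <property>
    <name>beeline.hs2.jdbc.url.default</name>
    <value>container</value>
  </property>
</configuration>
```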
10-15-2021
11:42 PM
Hi @wasimakram , Are you facing any issues/errors while calling the Hive script? Can you share your workflow.xml file so I can have a look? Below is a simple example. Note that each path started by a fork needs its own action, and the forked paths must meet at a join (the orc.hive script name here is a placeholder):

```xml
<workflow-app xmlns="uri:oozie:workflow:0.4" name="simple-Workflow">
    <start to="fork_node"/>
    <fork name="fork_node">
        <path start="Create_External_Table"/>
        <path start="Create_orc_Table"/>
    </fork>
    <action name="Create_External_Table">
        <hive xmlns="uri:oozie:hive-action:0.4">
            <job-tracker>xyz.com:8088</job-tracker>
            <name-node>hdfs://rootname</name-node>
            <script>hdfs_path_of_script/external.hive</script>
        </hive>
        <ok to="join_node"/>
        <error to="kill_job"/>
    </action>
    <action name="Create_orc_Table">
        <hive xmlns="uri:oozie:hive-action:0.4">
            <job-tracker>xyz.com:8088</job-tracker>
            <name-node>hdfs://rootname</name-node>
            <script>hdfs_path_of_script/orc.hive</script>
        </hive>
        <ok to="join_node"/>
        <error to="kill_job"/>
    </action>
    <join name="join_node" to="end"/>
    <kill name="kill_job">
        <message>Job failed</message>
    </kill>
    <end name="end"/>
</workflow-app>
```

Regards, Chethan YM
10-15-2021
08:48 PM
1 Kudo
Hi , I do not think any such operation exists for Impala/Kudu; as far as I know, Kudu has no trash/snapshot functionality like HDFS does. Regards, Chethan YM
10-15-2021
08:29 PM
1 Kudo
Hi, These are just the health tests of the Impala daemon processes. Are you facing any query failures because of them? All of these should recover within minutes, and their thresholds can be tuned accordingly. Please refer to the Cloudera documentation below for more details on these health tests. https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_ht_impala_daemon.html Regards, Chethan YM
10-15-2021
08:22 PM
Hi, Is this the only Spark job failing with an OOM error? What initial executor and driver memory did you try? Can you also try increasing num-executors and executor-cores, then rerun the job and see if it works? Regards, Chethan YM
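As a rough sketch of the kind of resource bump being suggested (all flag values below are illustrative starting points, not tuned recommendations, and your_job.py is a placeholder application name):

```shell
# Illustrative resource settings for a Spark job hitting OOM; tune per workload.
NUM_EXECUTORS=8
EXECUTOR_CORES=4
EXECUTOR_MEM=8g      # per-executor heap
DRIVER_MEM=4g        # raise this instead if the OOM is on the driver side
# Compose the command; your_job.py is a placeholder for the real application.
CMD="spark-submit --master yarn --deploy-mode cluster --num-executors ${NUM_EXECUTORS} --executor-cores ${EXECUTOR_CORES} --executor-memory ${EXECUTOR_MEM} --driver-memory ${DRIVER_MEM} your_job.py"
echo "$CMD"
```

Adding executors spreads the same data across more JVMs, while raising executor memory helps when individual tasks, rather than overall parallelism, are the bottleneck.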
10-04-2021
11:01 PM
Hi, 1. Are you able to see the databases outside of Hue as the testuser? 2. The user and group mapping must be correct for the user to access the databases; compare the groups of a user who has access with those of the user who does not. 3. Run id -Gn <user-id> (it shows the user's allocated groups; compare with other users who have access, and if any groups are missing, add the user to them and try again). 4. If testuser cannot see the databases through impala-shell even though you have granted the proper privileges, something may be wrong at the OS level with the user/group mappings; try restarting SSSD and clearing its cache on all hosts, then try again. Regards, Chethan YM
10-03-2021
09:58 PM
Hi, 1. To isolate the issue, have you tried listing the databases outside of Hue, for example from impala-shell? Are you able to see them there? Please confirm, and provide the error stack trace if you find any. 2. Go to Hue -> Security -> Hive Tables -> Browse and see whether you can see the databases. To grant access: 1. CREATE ROLE test_role; 2. GRANT ALL ON DATABASE <db_name> TO ROLE test_role; 3. GRANT ROLE test_role TO GROUP <group-name>; Note: Make sure the user is part of that group on all the hosts in the cluster. Verify, and provide the output of the command below: a. SHOW GRANT ROLE test_role; Regards, Chethan YM