Member since: 07-26-2017
Posts: 23
Kudos Received: 1
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 3983 | 07-28-2018 02:06 PM |
02-29-2024
07:59 AM
Hi @mohammad_shamim You need to check the DB logs and see if you can find any errors. Otherwise, reach out to your DB team; they are the best point of contact for DB-related problems.
08-31-2022
05:05 AM
@mohammad_shamim Did you have Hive HA configured in the CDH cluster? If so, you need to make sure an equal number of HS2 instances is created in the CDP cluster, because without that, HA cannot be attained. Also, make sure there is no HiveServer2 instance created under the "Hive" service in CDP; it should only be present under the Hive on Tez service.
02-10-2022
02:48 PM
1 Kudo
You need to install the openldap-clients Linux package, which includes the ldapsearch tool: yum install openldap-clients You should also review this documentation while enabling Kerberos: https://docs.cloudera.com/documentation/enterprise/6/6.3/topics/cm_sg_intro_kerb.html#xd_583c10bfdbd326ba--6eed2fb8-14349d04bee--76dd
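Once ldapsearch is installed, it helps to verify LDAP connectivity and bind credentials before running the Kerberos wizard. A minimal sketch follows; the server URL, base DN, and bind DN below are hypothetical placeholders you must replace with your own directory's values:

```shell
# Hypothetical LDAP server and DNs -- substitute your own environment's values.
LDAP_URL="ldaps://ad.example.com:636"
BASE_DN="dc=example,dc=com"
BIND_DN="cn=cm-bind,ou=service-accounts,dc=example,dc=com"

# A typical connectivity/credentials check (-W prompts for the bind password):
cmd="ldapsearch -H $LDAP_URL -D $BIND_DN -W -b $BASE_DN '(objectClass=*)' dn"
echo "$cmd"
```

If the bind fails here, the Kerberos setup will fail for the same reason, so this is a cheap check to run first.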
03-19-2021
10:47 AM
Do you have SPNEGO enabled for browsers? https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.0.1/authentication-with-kerberos/content/authe_spnego_enabling_browser_access_to_a_spnego_enabled_web_ui.html Are you seeing any error in the UI?
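You can also test a SPNEGO-protected UI from the command line instead of a browser. A sketch, assuming you have already run kinit and substituting a hypothetical UI address:

```shell
# Hypothetical web UI address -- replace with your SPNEGO-protected endpoint.
UI_URL="http://namenode.example.com:50070/"

# After kinit, curl can authenticate via SPNEGO using --negotiate with an
# empty user (-u :), following redirects with -L:
cmd="curl --negotiate -u : -L $UI_URL"
echo "$cmd"
```

If curl succeeds but the browser fails, the problem is usually the browser's trusted-URIs configuration rather than the server side.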
09-27-2020
10:55 PM
Thanks for your reply. I have already tried that but am still getting the same error.
07-29-2018
03:15 AM
Good to know @Mohammad Shamim! If the issue is solved, I'd kindly ask you to accept the answer. This will help other HCC users find the best answer faster and encourage other users to keep doing a good job as well 🙂
01-14-2018
04:17 AM
@Mohammad Shamim The troubleshooting depends on which kind of job you are running. However, if a job is running slowly, then broadly any of the following reasons may apply and should be checked first:

1. Check whether there are any slow-responding threads or threads consuming excessive CPU cycles. In such scenarios it is best to collect thread dumps and CPU metrics to understand thread-level issues such as slowness, hangs, or high CPU utilization: https://community.hortonworks.com/articles/72319/how-to-collect-threaddump-using-jcmd-and-analyse-i.html

2. Check the memory utilization of your job to find out whether the memory allocated for its execution is sufficient or needs tuning. Based on the nature of your job, see the following links: https://community.hortonworks.com/articles/22419/hive-on-tez-performance-tuning-determining-reducer.html https://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/ https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_spark-component-guide/content/ch_tuning-spark.html
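For step 1, the usual pattern is to take several thread dumps a few seconds apart alongside per-thread CPU samples. A dry-run sketch that only prints the commands you would run (the PID below is a placeholder for the slow JVM's process id):

```shell
# Dry-run sketch: prints the sampling commands rather than executing them.
# PID is a hypothetical placeholder -- use the process id of the slow JVM,
# found via `ps` or the YARN container logs.
PID=12345
SAMPLES=3
i=1
while [ "$i" -le "$SAMPLES" ]; do
  # jcmd Thread.print captures a full thread dump of the JVM.
  echo "jcmd $PID Thread.print > threaddump_${i}.txt"
  # top -H shows per-thread CPU usage for the same process.
  echo "top -b -n 1 -H -p $PID > cpu_${i}.txt"
  i=$((i + 1))
done
```

Comparing the dumps shows which threads stay busy or blocked across samples; the thread ids from top can be matched (in hex) against the nid fields in the dump.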
11-24-2017
02:22 PM
2 Kudos
@Mohammad Shamim
Run desc formatted on the table; this command will show whether the table is External or Managed, along with the location of the table: hive# desc formatted <db-name>.<db-table-name>; Then check the size of that location on HDFS: bash# hdfs dfs -count -h -v <hdfs-location> Example: running desc formatted on the devrabbit table shows the table type is Managed and the location is /user/hdfs/hive. To find the size of that location, run hdfs dfs -count -h -v /user/hdfs/hive and it will display the size of the directory.
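To show what the count output looks like and how to pull the size out of it, here is a sketch using a made-up sample; the header and column layout match `hdfs dfs -count -h -v`, but the values are hypothetical:

```shell
# Hypothetical sample of `hdfs dfs -count -h -v /user/hdfs/hive` output;
# the header and columns match the real command, the numbers are made up.
sample='   DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME
           3          42              1.2 G /user/hdfs/hive'

# With -h, CONTENT_SIZE spans two fields (value and unit), columns 3 and 4:
size=$(printf '%s\n' "$sample" | awk 'NR==2 {print $3, $4}')
echo "$size"   # -> 1.2 G
```

The same awk extraction works on the real command's output, which is handy when scripting size checks over many databases.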
07-26-2017
07:53 PM
@Mohammad Shamim On every DataNode you will find the relevant logs at the mentioned path. You will need to look at the DataNode host that went down.