Member since: 09-03-2015
Posts: 34
Kudos Received: 3
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1814 | 10-30-2015 08:56 AM
 | 5124 | 10-27-2015 09:47 AM
 | 1766 | 10-22-2015 01:42 PM
 | 1904 | 09-04-2015 08:28 AM
05-11-2020 12:30 AM

@iamfromsky Did you get any resolution for this? I am facing the same scenario, with no luck solving it so far. Jobs running from a certain tool are unable to connect to HMS and fail with the error below:

```
ERROR org.apache.thrift.transport.TSaslTransport: [pool-5-thread-207]: SASL negotiation failure
javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password
[Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or does not exist: HIVE_DELEGATION_TOKEN
```
07-01-2019 07:12 PM

Hi, did you ever fix this problem? I am hitting the same issue.
07-18-2018 02:31 PM

@yassine24, Basic information about how to query and update service config via Python is here: http://cloudera.github.io/cm_api/docs/python-client/#configuring-services-and-roles

I also pulled this from the Community; it shows how to update an HDFS safety valve via the REST API:

```bash
curl -iv -X PUT -H "Content-Type:application/json" -H "Accept:application/json" \
  -d '{"items":[{ "name": "core_site_safety_valve","value": "<property><name>hadoop.proxyuser.ztsps.users</name><value>*</value></property><property><name>hadoop.proxyuser.ztsps.groups</name><value>*</value></property>"}]}' \
  http://admin:admin@10.1.0.1:7180/api/v12/clusters/cluster/services/hdfs/config
```

You can use the same approach to update the safety valve for hbase_service_config_safety_valve. NOTE that when updating a safety valve, what you send replaces what was there; if you want to "add" a property to the safety valve, you need to include all the properties you want as the end result.
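For example, here is a minimal sketch of the same PUT against the HBase safety valve. The host, admin:admin credentials, API version v12, and the CM service name "hbase" are placeholder assumptions carried over from the example above, and the property being set is only an illustration; adjust all of these for your cluster:

```bash
# Minimal sketch, not a drop-in command: host, credentials, API version, and
# the service name "hbase" are assumptions; hbase.client.scanner.timeout.period
# is only an illustrative property.
# Remember: this PUT replaces the whole safety valve value, so include every
# property you want in the end result.
curl -iv -X PUT -H "Content-Type:application/json" -H "Accept:application/json" \
  -d '{"items":[{ "name": "hbase_service_config_safety_valve", "value": "<property><name>hbase.client.scanner.timeout.period</name><value>60000</value></property>"}]}' \
  http://admin:admin@10.1.0.1:7180/api/v12/clusters/cluster/services/hbase/config
```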
03-02-2017 07:23 AM
2 Kudos

A bit late to reply, but if the cluster is secure, try adding the HBase configuration to the Spark driver and executor classpaths explicitly using 'spark.executor.extraClassPath' and 'spark.driver.extraClassPath'. Also make sure that the host from which you are running the Spark command has the gateway role added. Example:

```bash
pyspark --jars /opt/cloudera/parcels/CDH/jars/spark-examples-1.6.0-cdh5.7.3-hadoop2.6.0-cdh5.7.3.jar,/opt/cloudera/parcels/CDH/jars/hbase-examples-1.2.0-cdh5.7.3.jar \
  --conf "spark.executor.extraClassPath=/etc/hbase/conf/" \
  --conf "spark.driver.extraClassPath=/etc/hbase/conf/"
```
07-15-2016 09:12 AM
2 Kudos

Hi! Good question. Today, Impala is not aware of the heterogeneity and will split the work evenly among all available nodes, regardless of how much CPU/memory those nodes have.
10-30-2015 08:56 AM

You can either allocate more space to it or lower the monitoring thresholds through CM. Generally, you need 10+ GB of free space for the logs. You can lower the monitoring thresholds by going to each service in CM and searching for "log space" in the Configuration tab. To allocate more space, you can move the logs to another folder and create a symlink from the existing folder; this way, you don't need to change the log directory location in CM. Note that you would need to stop each service before moving its logs; see the sketch below.
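A minimal sketch of the move-and-symlink approach, assuming a hypothetical HDFS log directory and a second volume with free space (substitute your own paths):

```bash
# Stop the service in Cloudera Manager first. Paths below are hypothetical:
# /var/log/hadoop-hdfs is the current log directory, /data1/logs has space.
mv /var/log/hadoop-hdfs /data1/logs/hadoop-hdfs
ln -s /data1/logs/hadoop-hdfs /var/log/hadoop-hdfs
# Start the service again; it still writes to /var/log/hadoop-hdfs,
# which now points at the larger volume.
```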
10-23-2015 08:10 AM

Thank you, but I don't have permission, as I am not the root user.
10-14-2015 07:31 AM
1 Kudo

Hi, please note that Cloudera Search is included in CDH 5. As the CDH 5.4.3 parcel appears to be activated already, you can simply use it via "Add Services" from the CM home page.
09-04-2015 08:28 AM

Yes, I had to move the directories under /var/lib to the new host as well.