Member since: 04-22-2014
Posts: 1218
Kudos Received: 341
Solutions: 157

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 26271 | 03-03-2020 08:12 AM |
| | 16423 | 02-28-2020 10:43 AM |
| | 4727 | 12-16-2019 12:59 PM |
| | 4477 | 11-12-2019 03:28 PM |
| | 6682 | 11-01-2019 09:01 AM |
08-16-2018
04:26 PM
This definitely was the issue and the fix for me. I saw that both jar files were already in the java/lib/security directory and had failed to replace them with the downloaded UnlimitedJCEPolicyJDK8 jar files. It took me a long time to get back to replacing the files. After doing so, I restarted CMS successfully. Thanks for the post!
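For anyone else who lands here, a minimal sketch of that replacement step, assuming JDK 8 under /usr/java/jdk1.8.0 and the policy zip already unpacked (both paths are assumptions; adjust them for your hosts):

```bash
#!/usr/bin/env bash
# Assumed paths -- adjust JAVA_HOME and the unzip location for your environment.
JAVA_HOME=/usr/java/jdk1.8.0
POLICY_DIR=~/UnlimitedJCEPolicyJDK8   # contents of the downloaded zip

# Back up the existing limited-strength policy jars before overwriting them.
cd "$JAVA_HOME/jre/lib/security"
cp local_policy.jar local_policy.jar.bak
cp US_export_policy.jar US_export_policy.jar.bak

# Drop in the unlimited-strength jars, then restart the affected services.
cp "$POLICY_DIR"/local_policy.jar .
cp "$POLICY_DIR"/US_export_policy.jar .
```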
08-06-2018
09:23 AM
This is what I got from the recent stderr:

```
+ [[ -f /run/cloudera-scm-agent/process/527-hive-HIVESERVER2/ ]]
+ exec /opt/cloudera/parcels/CDH-5.15.0-1.cdh5.15.0.p0.21/lib/hive/bin/hive --config /run/cloudera-scm-agent/process/527-hive-HIVESERVER2 --service hiveserver2
18/08/06 19:19:40 WARN conf.HiveConf: HiveConf of name hive.server2.idle.session.timeout_check_operation does not exist
18/08/06 19:19:40 WARN conf.HiveConf: HiveConf of name hive.entity.capture.input.URI does not exist
```

And this is from the role log:

```
Error starting HiveServer2: could not start ThriftBinaryCLIService
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:10000.
```
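That TTransportException generally means something is already bound to the HiveServer2 Thrift port (10000 by default). A quick check, assuming ss and lsof are installed on the host:

```bash
#!/usr/bin/env bash
# Check whether anything is already listening on the HiveServer2 Thrift port.
ss -lntp | grep ':10000' || echo "nothing is listening on 10000"

# If a process is found, identify it so you can decide whether to stop it
# or move HiveServer2 to a different port in Cloudera Manager.
lsof -i :10000 -sTCP:LISTEN
```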
07-29-2018
11:20 PM
@nicusan This looks promising; I will take a look for sure. @bgooley Is there any way to retrieve the scm password for the embedded Postgres DB? I ran the script that configured the external MySQL DB without first taking a backup of the db.properties file, and now db.properties only shows the external MySQL DB information. I am not sure whether the scm_prepare_database.sh script took a backup of the old db.properties file somewhere on the system.
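One place that may be worth checking, assuming the embedded PostgreSQL data directory is in its usual location (the path below is an assumption; verify it on your host):

```bash
#!/usr/bin/env bash
# Assumed default data directory for Cloudera Manager's embedded PostgreSQL --
# confirm the path on your own host before relying on it.
EMBEDDED_DB_DIR=/var/lib/cloudera-scm-server-db/data

# The embedded DB setup typically writes its generated password here.
sudo cat "$EMBEDDED_DB_DIR/generated_password.txt" 2>/dev/null \
  || echo "generated_password.txt not found under $EMBEDDED_DB_DIR"

# Also check whether any backup copies of db.properties were left behind.
ls -l /etc/cloudera-scm-server/db.properties* 2>/dev/null
```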
07-27-2018
06:05 AM
Hi,

We have Kerberos configured in our Hadoop cluster. We did a wizard installation (https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cm_sg_intro_kerb.html) and it works well.

To get a high level of availability, we configured a secondary KDC server (we followed the Kerberos documentation) and set up replication of the credentials from the first Kerberos server to the second (as in this topic: https://community.hortonworks.com/articles/92333/configure-two-kerberos-kdcs-as-a-masterslave.html). We then updated the Kerberos configuration in Cloudera Manager to add the secondary KDC server. The configuration generated by Cloudera in /etc/krb5.conf contains:

```
[realms]
XXXXXX.COM = {
  kdc = master1.com
  admin_server = master1.com
  kdc = worker1.com
}
```

We have the following layout:
master1: Kerberos server + NameNode (active) HDFS
worker1: Kerberos server + NameNode HDFS
worker2: Kerberos client + DataNode HDFS

We are testing the replication of Kerberos.

Case 1: stop the Kerberos server (kdc + kadmin) on master1 and init a user ticket on worker2 with kinit. It works well.

Case 2: stop the Kerberos server (kdc + kadmin) and the HDFS NameNode on master1 (to simulate a crash of master1). Normally, the NameNode on worker1 should become active, but instead there is an error on worker1: "This role's process exited. This role is supposed to be started." The log contains:

```
PriviledgedActionException as:hdfs/worker1.com@XXXXXX.COM (auth:KERBEROS) cause:java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Connection refused (Connection refused))
```

Conclusion/Question: my conclusion is that the NameNode on worker1 does not use the secondary KDC (there is nothing in kadmin.log on worker1). A manual kinit works, so it is not a problem with Kerberos itself. But if the server hosting the main Kerberos KDC crashes, the Hadoop services crash too, which is a big problem. Do you have a solution, or any suggestion?

Thank you, Martin.
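One way to see which KDC a host actually contacts is MIT Kerberos client tracing; a sketch assuming MIT krb5 1.9 or later on the NameNode host (the keytab path and principal are illustrative):

```bash
#!/usr/bin/env bash
# Trace which KDC the Kerberos client libraries contact during authentication.
# Requires MIT krb5 1.9+; keytab path and principal below are illustrative.
export KRB5_TRACE=/dev/stdout
kinit -kt /path/to/hdfs.keytab hdfs/worker1.com@XXXXXX.COM

# The trace output shows the address each KDC request is sent to; if only
# master1.com ever appears, the client is not failing over to worker1.com.
```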
07-27-2018
04:17 AM
The Resource Manager starts and then shuts down within a couple of minutes. It shows unexpected exits, and there are no errors in the Resource Manager logs. Is an RHEL 7.5 m5.xlarge (4 vCPU, 16 GB) machine capable of running Cloudera with Spark 2, Oozie, YARN, Hue, and Hive, plus Cloudera Manager? Am I missing something here? Please help.
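When a role exits with nothing useful in its own logs on a small host, the kernel OOM killer is worth ruling out; a minimal check using standard RHEL tooling:

```bash
#!/usr/bin/env bash
# Look for evidence that the kernel OOM killer terminated the ResourceManager JVM.
dmesg | grep -iE 'out of memory|killed process'

# On systemd-based hosts the same messages also appear in the kernel journal.
journalctl -k | grep -iE 'out of memory|killed process'

# Check overall memory headroom on the host.
free -h
```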
07-27-2018
01:53 AM
I am using Cloudera Manager to manage my cluster. I found my problem: I was trying to update a parameter that is a constant value already fixed by the Cloudera Manager team. Cloudera Manager does not allow updating some parameters, such as io.storefile.bloom.block.size and the other constant parameters you can find here: https://www.cloudera.com/documentation/other/shared/CDH5-Beta-2-RNs/hbase_jdiff_report-p-cdh4.5-c-cdh5b2/cdh4.5/constant-values.html So my problem is solved. Thank you very much for your help.
07-26-2018
11:44 PM
Hello, I changed the jar file name to mysql-connector-java.jar, and then I got an error that the password did not meet the policy. I then removed my Red Hat machines and created CentOS ones instead, and now I have no errors connecting to the MySQL database. I still don't know what the problem was. Thanks, Huriye
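For reference, the renaming step usually amounts to placing the driver where CDH services look for it; a sketch assuming a 5.1.x connector download (the version number is illustrative):

```bash
#!/usr/bin/env bash
# Place the MySQL JDBC driver where Cloudera services expect it, under the
# generic name mysql-connector-java.jar. The version below is illustrative.
sudo mkdir -p /usr/share/java
sudo cp mysql-connector-java-5.1.46-bin.jar /usr/share/java/mysql-connector-java.jar
ls -l /usr/share/java/mysql-connector-java.jar
```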
07-26-2018
12:13 AM
Thanks @bgooley, it's running now. I installed HBase again and it's working fine now. Thanks for the support.
07-24-2018
11:41 AM
1 Kudo
@Prav, When directories or files show the group as "supergroup", that means no one except superusers has access via the group permission bits. It is merely a convention: there is no group named "supergroup" by default. If there is no OS group named supergroup (or LDAP group, if you are using LDAP group mapping), then only the "hdfs" user has access. If you are a superuser, you have access to the file or directory anyway, since permission checks do not apply to superusers.
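A small sketch of how to verify this on a cluster; the username is illustrative, and dfs.permissions.superusergroup defaults to "supergroup":

```bash
#!/usr/bin/env bash
# Which group HDFS treats as the superuser group (defaults to "supergroup").
hdfs getconf -confKey dfs.permissions.superusergroup

# Which groups HDFS resolves for a given user -- "alice" is an illustrative name.
hdfs groups alice

# Whether an OS group called "supergroup" actually exists on this host.
getent group supergroup || echo "no local group named supergroup"
```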
07-24-2018
11:11 AM
1 Kudo
@martinbo, As mentioned by others, there are some options to ease the management of users and groups. Common ones are:

1 - SSSD, IPA, or Centrify OS-level integration, so that application calls to the OS are handled by those tools and resolved against a central LDAP source. This requires a good deal of configuration, but it is a robust, enterprise-grade solution.
2 - Manage your group and passwd files with automation tools like Puppet, Chef, etc. (modify once, "push out" changes to all hosts).
3 - Configure LdapGroupsMapping in HDFS so that Hadoop services do group lookups directly against LDAP (see the sketch below).

NOTE: If you intend to let users run jobs directly on YARN, you will still need to create local users on each host with a NodeManager, since containers require the OS user to be present.
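Not from the original reply, but whichever option you pick, a minimal way to verify group resolution is to ask HDFS directly (the username is illustrative):

```bash
#!/usr/bin/env bash
# Ask the NameNode which groups it resolves for a user; with LdapGroupsMapping
# configured, these should come from LDAP rather than from /etc/group.
hdfs groups alice

# Compare with what the local OS resolves on this host.
id alice
```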