Member since: 04-04-2018
Posts: 8
Kudos Received: 0
Solutions: 0
03-18-2020 07:42 AM
Hi,
I am facing the same exception after restarting the Cloudera Manager server:

Mar 18 09:27:36 xxx.example.com cm-server[8420]: 09:27:36.744 [ScmActive-0] ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper - Lock wait timeo...nsaction

I have checked the database and it is up and running.
Could you please provide more insight on 'iptables'?
Thanks,
Deiveegan
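On the iptables question, a couple of read-only commands can show whether a firewall rule sits between Cloudera Manager and its database. The host name db.example.com and port 3306 below are placeholders, not values from this thread; substitute your actual database host and port:

```shell
# Show the active firewall rules with packet counters; look for any
# REJECT/DROP rule matching the database port:
sudo iptables -L -n -v

# Verify the database port is reachable from the Cloudera Manager host
# (db.example.com and 3306 are placeholders for your DB host/port):
nc -zv db.example.com 3306
```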
03-06-2020 10:42 AM
I would like to store a sequence file in HDFS.
Hive table A (encounter, notes)
Load table A
Generate the sequence file with encounter as the key and notes as the value
Also, can I set the number of reducers while processing?
Could you please help us with the steps?
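One possible route, offered as a sketch rather than the definitive method: if table A's data is available in HDFS as tab-delimited text (encounter, then notes), Hadoop Streaming can rewrite it as a SequenceFile, and the reducer count is set with mapreduce.job.reduces. The input/output paths and the streaming jar location below are assumptions; adjust them for your cluster.

```shell
# Sketch: convert tab-separated "encounter<TAB>notes" text in HDFS into
# a SequenceFile keyed by encounter. Streaming treats the text before
# the first tab as the key and the rest as the value, so an identity
# mapper/reducer (cat) is enough. All paths are placeholders.
hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-streaming.jar \
  -D mapreduce.job.reduces=4 \
  -input /user/hive/warehouse/a \
  -output /user/deiveegan/a_seq \
  -mapper cat \
  -reducer cat \
  -outputformat org.apache.hadoop.mapred.SequenceFileOutputFormat
```

Here mapreduce.job.reduces=4 answers the reducer-count question; any positive number works, and each reducer produces one output file.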
- Tags:
- bulkloader
- HDFS
- Pig
07-29-2019 03:36 AM
Hi, I am facing the same issue. One node in the cluster is not sending heartbeats:

MainThread agent ERROR Heartbeating to XXXX:7182 failed.
Traceback (most recent call last):
  File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/agent.py", line 1399, in _send_heartbeat
    response = self.requestor.request('heartbeat', heartbeat_data)
  File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/avro/ipc.py", line 141, in request
    return self.issue_request(call_request, message_name, request_datum)
  File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/avro/ipc.py", line 254, in issue_request
    call_response = self.transceiver.transceive(call_request)
  File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/avro/ipc.py", line 482, in transceive
    self.write_framed_message(request)
  File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/avro/ipc.py", line 501, in write_framed_message
    self.conn.request(req_method, self.req_resource, req_body, req_headers)
  File "/usr/lib64/python2.7/httplib.py", line 1041, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib64/python2.7/httplib.py", line 1075, in _send_request
    self.endheaders(body)
  File "/usr/lib64/python2.7/httplib.py", line 1037, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 881, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 857, in send
    self.sock.sendall(data)
  File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/M2Crypto/SSL/Connection.py", line 351, in write
    return self._write_bio(data)
  File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/M2Crypto/SSL/Connection.py", line 330, in _write_bio
    return m2.ssl_write(self.ssl, data, self._timeout)
SSLError: (32, 'Broken pipe')
[29/Jul/2019 04:05:42 +0000] 6869 MainThread agent ERROR Heartbeating to XXXX:7182 failed.
Traceback (most recent call last):
  File "/opt/cloudera/cm-agent/lib/python2.7/site-packages/cmf/agent.py", line 1390, in _send_heartbeat

Could you help us resolve this issue?
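Since the broken pipe surfaces inside M2Crypto during the TLS write, one low-risk check (a suggestion, not from the original thread) is to confirm the TLS handshake from the affected node to the Cloudera Manager agent-listener port; XXXX below stands in for the CM host name from the log:

```shell
# Attempt a TLS handshake against the CM server's agent port (7182).
# A certificate or protocol mismatch would show up here as a handshake
# failure rather than a completed session:
openssl s_client -connect XXXX:7182 </dev/null
```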
01-11-2019 06:42 AM
Here is the log of the cdsw status command:

OK: Sysctl params check
Failed to run CDSW Nodes Check.
Failed to run CDSW system pods check.
Failed to run CDSW application pods check.
Failed to run CDSW services check.
Failed to run CDSW config maps check.
Failed to run CDSW secrets check.
Failed to run CDSW persistent volumes check.
Failed to run CDSW persistent volumes claims check.
Failed to run CDSW Ingresses check.
Checking web at url: http://cdswonprem.rush.edu
Web is not yet up.
Cloudera Data Science Workbench is not ready yet

Log of cdsw logs:

/opt/cloudera/parcels/CDSW-1.2.2.p1.216803/scripts/cdsw-pod-logs.sh: line 40: kubectl: command not found (repeated)
/opt/cloudera/parcels/CDSW-1.2.2.p1.216803/scripts/cdsw-pod-logs.sh: line 58: kubectl: command not found
/opt/cloudera/parcels/CDSW-1.2.2.p1.216803/scripts/cdsw-pod-logs.sh: line 62: kubectl: command not found
/opt/cloudera/parcels/CDSW-1.2.2.p1.216803/scripts/cdsw-pod-logs.sh: line 80: kubectl: command not found
/opt/cloudera/parcels/CDSW-1.2.2.p1.216803/scripts/cdsw-pod-logs.sh: line 85: kubectl: command not found (repeated)
/opt/cloudera/parcels/CDSW-1.2.2.p1.216803/scripts/cdsw-pod-logs.sh: line 106: kubectl: command not found
Exporting user ids...
/opt/cloudera/parcels/CDSW-1.2.2.p1.216803/scripts/cdsw-logs.sh: line 223: kubectl: command not found
/opt/cloudera/parcels/CDSW-1.2.2.p1.216803/scripts/cdsw-logs.sh: line 224: kubectl: command not found
/opt/cloudera/parcels/CDSW-1.2.2.p1.216803/scripts/cdsw-logs.sh: line 225: kubectl: command not found
Checking system logs...
Collecting health logs...
Exporting metrics...
/opt/cloudera/parcels/CDSW-1.2.2.p1.216803/scripts/cdsw-dump-metrics.sh: line 22: kubectl: command not found
ERROR:: Unable to get service account credentials. Provide SERVICE_ACCOUNT_SECRET or run on master node.: 2

Note: On the worker node, cdsw status works fine, whereas on the master I need to navigate to the parcel directory and run ./cdsw status. If I type only cdsw status on the master node, it throws the error "cdsw command not found". Could someone help resolve this so the web UI comes up?
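On the "cdsw command not found" part: on the master the cdsw script is evidently not on PATH, so a session-level workaround is to add its parcel directory. The path below is taken from the log above, but the exact subdirectory containing the cdsw script is an assumption; use whichever directory you already run ./cdsw from:

```shell
# Put the parcel directory containing the cdsw script on PATH for this
# session (substitute the directory you currently cd into):
export PATH="$PATH:/opt/cloudera/parcels/CDSW-1.2.2.p1.216803/scripts"
cdsw status
```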
04-25-2018 06:29 AM
I am facing the same issue. Please note that I have a valid kinit ticket. Here is the output of the klist command:

Ticket cache: FILE:/tmp/krb5cc_10002
Default principal: dpugazhe@RUSH.EDU
Valid starting       Expires              Service principal
04/25/2018 06:27:16  04/25/2018 16:27:16  krbtgt/RUSH.EDU@RUSH.EDU
	renew until 05/02/2018 06:27:16

Please find the connection string and output below:

beeline> !connect jdbc:hive2://localhost:10000/default;principal=hive/_HOST@.RUSH.EDU
scan complete in 1ms
Connecting to jdbc:hive2://localhost:10000/default;principal=hive/_HOST@.RUSH.EDU
18/04/25 08:25:23 [main]: ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))]
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
	at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
	at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
	at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:204)
	at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:169)
	at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
	at java.sql.DriverManager.getConnection(DriverManager.java:664)
	at java.sql.DriverManager.getConnection(DriverManager.java:208)
	at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:146)
	at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:211)
	at org.apache.hive.beeline.Commands.connect(Commands.java:1526)
	at org.apache.hive.beeline.Commands.connect(Commands.java:1421)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52)
	at org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1135)
	at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1174)
	at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1010)
	at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:922)
	at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:518)
	at org.apache.hive.beeline.BeeLine.main(BeeLine.java:501)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:770)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
	... 35 more
Caused by: KrbException: Server not found in Kerberos database (7)
	at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:70)
	at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251)
	at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262)
	at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308)
	at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126)
	at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458)
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693)
	... 38 more
Caused by: KrbException: Identifier doesn't match expected value (906)
	at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
	at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
	at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
	at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
	... 44 more
Unknown HS2 problem when communicating with Thrift server.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000/default;principal=hive/_HOST@.RUSH.EDU: GSS initiate failed (state=08S01,code=0)
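One detail worth a second look (an observation about the post, not advice from the thread): the connect string's principal is hive/_HOST@.RUSH.EDU, with a stray dot before the realm, while klist shows the realm as plain RUSH.EDU. A malformed realm can produce exactly "Server not found in Kerberos database". A sketch of the corrected connect command, assuming the dot is a typo:

```shell
# Same connection, with the stray "." before the realm removed:
beeline -u "jdbc:hive2://localhost:10000/default;principal=hive/_HOST@RUSH.EDU"
```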
04-17-2018 08:29 AM
Thanks Gekas. So I can delete the log files named cloudera-scm-% in the /var/log folder? It won't cause any issue in the UI, right? This is the error message I see in Cloudera Manager:

This role's Log Directory is on a filesystem with less than 5.0 GiB of its space free. /var/log/cloudera-scm-firehose (free: 4.0 GiB (8.02%), capacity: 50.0 GiB)

The "proper" way is to change "Max Log Size" and "Maximum Log File Backups" in Cloudera Manager for each service running on this machine. It is already set to 200 MB. Please advise.
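The numbered rotated backups (the .log.out.1 through .log.out.4 files in the later listing) are safe to target, since the role only writes to the live .log.out file. A minimal sketch, rehearsed on a scratch directory first; point LOG_DIR at /var/log/cloudera-scm-firehose for the real run (the file names mirror the ones from this thread):

```shell
# Rehearse on a scratch directory first; for the real run set
# LOG_DIR=/var/log/cloudera-scm-firehose instead.
LOG_DIR=$(mktemp -d)
touch "$LOG_DIR/mgmt-cmf-mgmt-SERVICEMONITOR.log.out" \
      "$LOG_DIR/mgmt-cmf-mgmt-SERVICEMONITOR.log.out.1" \
      "$LOG_DIR/mgmt-cmf-mgmt-SERVICEMONITOR.log.out.2"

# Preview what would be deleted (only numbered rotated backups match):
find "$LOG_DIR" -name '*.log.out.[0-9]*' -type f -print

# Delete the rotated backups, keeping the live .log.out file:
find "$LOG_DIR" -name '*.log.out.[0-9]*' -type f -delete
```

Lowering "Max Log Size" or "Maximum Log File Backups" in Cloudera Manager remains the durable fix; the deletion above only buys space until the logs grow back.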
04-17-2018 06:51 AM
Disk usage is increasing on the Cloudera management server. Could someone help us delete the correct files to free some space in the root file system? Here are my files greater than 100 MB:

-sh-4.2$ sudo find / -xdev -type f -size +100M -exec ls -ltrh {} \; | sort -nk 5
-rw-r----- 1 cloudera-scm cloudera-scm 1.9G Jan 18 15:23 /opt/cloudera/parcel-repo/CDH-5.13.1-1.cdh5.13.1.p0.2-el7.parcel
-rw-r----- 1 cloudera-scm cloudera-scm 3.7G Jan 30 14:16 /opt/cloudera/parcel-repo/STREAMSETS_DATACOLLECTOR-3.0.3.0-el7.parcel
-rw-rw-r-- 1 root root 3.8G Dec 13 15:21 /opt/cloudera/parcels/CDSW-1.2.2.p1.216803/images/cdsw_1.2.2_2cbfa5b.tar.gz
-rw-r----- 1 cloudera-scm cloudera-scm 3.9G Jan 20 18:35 /opt/cloudera/parcel-repo/CDSW-1.2.2.p1.216803-el7.parcel
-rw-r--r--. 1 root root 102M Jan 18 12:30 /usr/lib/locale/locale-archive
-rw-r--r-- 1 root root 104M Nov 9 12:38 /opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/jars/spark-assembly-1.6.0-cdh5.13.1-hadoop2.6.0-cdh5.13.1.jar
-rw-r--r-- 1 root root 106M Nov 9 12:41 /opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/jars/avro-tools-1.7.6-cdh5.13.1.jar
-rw------- 1 root root 117M Apr 17 08:27 /var/log/sssd/ldap_child.log-20180227
-rw-r--r-- 1 root root 118M Nov 14 16:32 /usr/share/cmf/cloudera-navigator-server/wars/nav-core-webapp-2.12.1.war
-rw-r----- 1 cloudera-scm cloudera-scm 120M Apr 17 08:34 /var/lib/cloudera-scm-navigator/solr/nav_elements/data/tlog/tlog.0000000000000018862
-rwxr-xr-x 1 root root 133M Dec 13 15:20 /opt/cloudera/parcels/CDSW-1.2.2.p1.216803/kubernetes/bin/kubelet
-rw-r----- 1 cloudera-scm cloudera-scm 147M Apr 17 08:41 /var/log/cloudera-scm-firehose/mgmt-cmf-mgmt-SERVICEMONITOR-ruduv-kmmgmt001.rush.edu.log.out
-rw-r--r-- 1 root root 151M Nov 9 12:39 /opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/jars/hbase-indexer-mr-1.5-cdh5.13.1-job.jar
-rw-r----- 1 cloudera-scm cloudera-scm 160M Apr 17 08:17 /var/lib/cloudera-scm-headlamp/hdfs/nameservice1/index/_t.fdt
-rw-r----- 1 cloudera-scm cloudera-scm 172M Apr 17 08:17 /var/lib/cloudera-scm-headlamp/hdfs/nameservice1/index/_u.fdt
-rw-r----- 1 cloudera-scm cloudera-scm 173M Jan 19 11:39 /opt/cloudera/parcel-repo/SPARK2-2.2.0.cloudera2-1.cdh5.12.0.p0.232957-el7.parcel
-rw-r----- 1 cloudera-scm cloudera-scm 201M Feb 25 13:15 /var/log/cloudera-scm-firehose/mgmt-cmf-mgmt-SERVICEMONITOR-ruduv-kmmgmt001.rush.edu.log.out.4
-rw-r----- 1 cloudera-scm cloudera-scm 201M Mar 11 11:58 /var/log/cloudera-scm-firehose/mgmt-cmf-mgmt-SERVICEMONITOR-ruduv-kmmgmt001.rush.edu.log.out.2
-rw-r----- 1 cloudera-scm cloudera-scm 201M Mar 20 06:27 /var/log/cloudera-scm-firehose/mgmt-cmf-mgmt-SERVICEMONITOR-ruduv-kmmgmt001.rush.edu.log.out.1
-rw-r----- 1 cloudera-scm cloudera-scm 201M Mar 9 14:24 /var/log/cloudera-scm-firehose/mgmt-cmf-mgmt-SERVICEMONITOR-ruduv-kmmgmt001.rush.edu.log.out.3
-rw-r--r-- 1 root root 202M Nov 9 12:37 /opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/debug/usr/lib/impala/sbin-debug/impalad.debug
-rw-r--r-- 1 root root 251M Nov 9 12:37 /opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/debug/usr/lib/impala/sbin-retail/impalad.debug
-rw------- 1 root root 299M Apr 16 03:44 /var/log/sssd/sssd_rush.edu.log-20180416
-rw-r----- 1 cloudera-scm cloudera-scm 342M Mar 28 10:11 /var/lib/cloudera-scm-eventserver/v3/_43pe.fdt
-rw-r----- 1 cloudera-scm cloudera-scm 346M Jan 18 15:23 /opt/cloudera/parcel-repo/KUDU-1.4.0-1.cdh5.12.2.p0.8-el7.parcel
-rw-r----- 1 cloudera-scm cloudera-scm 347M Mar 5 13:47 /var/lib/cloudera-scm-eventserver/v3/_2e3g.fdt
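A side note on the command itself: sort -nk 5 compares the human-readable sizes purely numerically, which is why 1.9G sorts before 102M in the listing. GNU sort's human-numeric mode orders suffixed sizes correctly:

```shell
# sort -h understands the K/M/G suffixes that ls -h emits, so sizes
# order correctly; with sample sizes from the listing above:
printf '1.9G\n102M\n347M\n3.8G\n' | sort -h
# prints 102M, 347M, 1.9G, 3.8G (smallest first)

# Applied to the original command:
#   sudo find / -xdev -type f -size +100M -exec ls -lh {} \; | sort -hk 5
```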
04-04-2018 07:17 AM
I am trying a distcp operation to transfer a file from ADLS to the cluster:

hadoop distcp -Dfs.adl.oauth2.access.token.provider.type=ClientCredential -Dhadoop.security.credential.provider.path=jceks://hdfs/adlskeyfilenew.jceks adl://rush.azuredatalakestore.net/src hdfs:///projects/test

I have provided HADOOP_CREDSTORE_PASSWORD on the command line. Here is the error message I am getting:

Error: java.io.IOException: Configuration problem with provider path.
	at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2048)
	at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:1967)
	at org.apache.hadoop.fs.adl.AdlFileSystem.getPasswordString(AdlFileSystem.java:993)
	at org.apache.hadoop.fs.adl.AdlFileSystem.getConfCredentialBasedTokenProvider(AdlFileSystem.java:278)
	at org.apache.hadoop.fs.adl.AdlFileSystem.getAccessTokenProvider(AdlFileSystem.java:257)
	at org.apache.hadoop.fs.adl.AdlFileSystem.initialize(AdlFileSystem.java:162)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2816)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2853)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2835)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
	at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:217)
	at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Keystore was tampered with, or password was incorrect
	at com.sun.crypto.provider.JceKeyStore.engineLoad(JceKeyStore.java:865)
	at java.security.KeyStore.load(KeyStore.java:1445)
	at org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.locateKeystore(AbstractJavaKeyStoreProvider.java:335)
	at org.apache.hadoop.security.alias.AbstractJavaKeyStoreProvider.<init>(AbstractJavaKeyStoreProvider.java:88)
	at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:49)
	at org.apache.hadoop.security.alias.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:41)
	at org.apache.hadoop.security.alias.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:100)
	at org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:63)
	at org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2028)
	... 21 more
Caused by: java.security.UnrecoverableKeyException: Password verification failed

Note: I am able to retrieve the file; it fails only when I run distcp. Is there any additional option I need to provide for distcp?
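The final "Password verification failed" happens inside CopyMapper, i.e. in a YARN container, where a HADOOP_CREDSTORE_PASSWORD exported in the submitting shell is not visible. A sketch of one way to forward it, offered as an assumption to test rather than a confirmed fix (mapreduce.map.env is a standard MapReduce property, but whether it resolves this particular failure is unverified, and note it exposes the password in the job configuration):

```shell
# Forward the credential-store password into the map tasks' environment
# so the jceks keystore can be opened inside the distcp mappers:
hadoop distcp \
  -Dfs.adl.oauth2.access.token.provider.type=ClientCredential \
  -Dhadoop.security.credential.provider.path=jceks://hdfs/adlskeyfilenew.jceks \
  -Dmapreduce.map.env="HADOOP_CREDSTORE_PASSWORD=$HADOOP_CREDSTORE_PASSWORD" \
  adl://rush.azuredatalakestore.net/src hdfs:///projects/test
```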