Member since: 12-11-2015
Posts: 244
Kudos Received: 31
Solutions: 32
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 338 | 07-22-2025 07:58 AM |
| | 948 | 01-02-2025 06:28 AM |
| | 1582 | 08-14-2024 06:24 AM |
| | 3116 | 10-02-2023 06:26 AM |
| | 2385 | 07-28-2023 06:28 AM |
03-16-2020
01:13 AM
Can you share the full stack trace of the exception so we can investigate further?
03-16-2020
01:09 AM
At present, Navigator doesn't support gathering lineage from NiFi. However, NiFi itself tracks lineage at the flowfile level; you can follow the steps in this guide: https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.4.1.1/getting-started-with-apache-nifi/content/lineage-graph.html
03-10-2020
07:57 PM
49213 open("/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-6110205147654050510.8", O_RDWR|O_CREAT|O_EXCL, 0666) = -1 EACCES (Permission denied)

During this step, the script tries to open and obtain a file descriptor for this path, and access was denied: /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-6110205147654050510.8

So far we have inspected its parent directories and haven't seen any issues with them. Can we get details of this path too?

ls -ln /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-6110205147654050510.8
stat /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-6110205147654050510.8
id yarn
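To illustrate the kind of failure the strace output is showing, here is a minimal sketch (using a hypothetical temp path, not the actual NodeManager directory) of how an entry with mode 0 makes any read/write open() fail with EACCES, inspected with the same stat/ls approach suggested above:

```shell
# Minimal sketch, assuming GNU coreutils: reproduce an EACCES by stripping
# all permissions from a file, then inspect it as suggested for the real
# libleveldbjni path. The file name here is purely illustrative.
tmpdir=$(mktemp -d)
touch "$tmpdir/libleveldbjni-demo"
chmod 000 "$tmpdir/libleveldbjni-demo"
# mode=0 means any open() for read or write by a non-root user is denied
stat -c 'mode=%a uid=%u gid=%g' "$tmpdir/libleveldbjni-demo"
rm -rf "$tmpdir"
```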
... View more
03-10-2020
08:57 AM
This is much clearer now.

On the server side, the request was rejected because the client was initiating a non-SSL connection:

Caused by: org.apache.thrift.transport.TTransportException: javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?

On the client side, it was unable to trust the server certificates because it was not configured to use a truststore:

Caused by: com.cloudera.hiveserver2.support.exceptions.GeneralException: [Cloudera][HiveJDBCDriver](500164) Error initialized or created transport for authentication: [Cloudera][HiveJDBCDriver](500169) Unable to connect to server: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target.

You need to add a few more properties to your connection string:

jdbc:hive2://vdbdgw01dsy.dsone.3ds.com:10000/default;AuthMech=1;KrbAuthType=1;KrbHostFQDN=vdbdgw01dsy.dsone.3ds.com;KrbRealm=DSONE.3DS.COM;KrbServiceName=hive;LogLevel=6;LogPath=d:/TestPLPFolder/hivejdbclog;SSL=1;SSLTrustStore=<path_to_truststore>;SSLTrustStorePwd=<password to truststore>

If your truststore has no password, you can omit the SSLTrustStorePwd parameter.
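As a small sketch, the URL can be assembled from shell variables so the SSL parts are easy to swap in; TRUSTSTORE and TRUSTSTORE_PWD below are placeholder assumptions you must substitute with your own values (LogLevel/LogPath are optional debugging parameters and are omitted here):

```shell
# Sketch only: build the SSL-enabled Cloudera Hive JDBC URL from parts.
# TRUSTSTORE and TRUSTSTORE_PWD are placeholders - point them at your files.
TRUSTSTORE=/path/to/truststore.jks
TRUSTSTORE_PWD=changeit
URL="jdbc:hive2://vdbdgw01dsy.dsone.3ds.com:10000/default"
URL="${URL};AuthMech=1;KrbAuthType=1;KrbHostFQDN=vdbdgw01dsy.dsone.3ds.com"
URL="${URL};KrbRealm=DSONE.3DS.COM;KrbServiceName=hive"
URL="${URL};SSL=1;SSLTrustStore=${TRUSTSTORE};SSLTrustStorePwd=${TRUSTSTORE_PWD}"
echo "$URL"
```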
... View more
03-09-2020
10:25 PM
This error usually happens when you try to connect to an SSL-enabled HS2 with a plaintext connection.

a. Which version of CDH/HDP are you using?
b. Can you check the HS2 logs at the exact timestamp when the error "Unable to connect to server: Invalid status 21" was reported on the client? The error you see on the server side will give further clues.
c. Do you have SSL enabled on HS2?
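One way to answer (c) from the client machine is to probe the HS2 port directly. This is a sketch assuming the openssl CLI is available; the host and port are placeholders for your HiveServer2 endpoint:

```shell
# Probe whether a port speaks TLS (hypothetical host/port - substitute your
# HiveServer2 endpoint). If SSL is enabled you will see certificate details
# ("subject=..."); against a plaintext port the TLS handshake fails instead.
echo | openssl s_client -connect hs2.example.com:10000 2>&1 \
  | grep -E 'subject=|error' | head -n 3
```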
03-09-2020
09:45 AM
Per https://serverfault.com/questions/647569/reverse-ssh-connection-time-out-during-banner-exchange, this error may indicate network issues. If netstat shows an established socket on both the server and the client, there may be a firewall or packet-inspection device preventing the SSH connection from completing. Are you able to manually ssh from the Oozie server machine to the remote <host> as <user>?
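A sketch of that manual check (REMOTE_USER and REMOTE_HOST are placeholders for the <user> and <host> in your Oozie action): BatchMode makes the attempt fail fast instead of prompting for a password, and ConnectTimeout bounds the wait, which helps distinguish a firewall drop from an authentication problem.

```shell
# Placeholders - substitute the real SSH action user and host.
REMOTE_USER=someuser
REMOTE_HOST=remote.example.com
# -v shows where the handshake stalls; a hang before the banner exchange
# usually points at a firewall or packet-inspection device in the path.
ssh -v -o BatchMode=yes -o ConnectTimeout=10 \
  "${REMOTE_USER}@${REMOTE_HOST}" 'echo connected'
```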
03-07-2020
11:25 PM
@rda3mon Currently the feature you are looking for is not available, but there is a JIRA in place for a future version: https://issues.apache.org/jira/browse/HDFS-11242
03-06-2020
11:15 PM
Hi san_t_o, thanks for adding more context.

"When cleaning the directories indicated, are the libraries copied automatically or is it necessary to copy them manually? --> /var/log/hadoop-yarn/nodemanager/recovery-state/yarn-nm-state/* and also /var/lib/ambari-agent/tmp/"

I was testing in my local cluster. Apologies, I meant to clear /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/ - sorry for the typo in my previous comment.

Even before clearing these directories or altering the location, it would be best to review with strace once. It traces all system-level calls, and reviewing the last call prior to the failure could give us more clues.

To install strace, run:

yum -y install strace

On the problematic node:

export HADOOP_LIBEXEC_DIR=/usr/hdp/3.0.1.0-187/hadoop/libexec
strace -f -s 2000 -o problematic_node /usr/hdp/3.0.1.0-187/hadoop-yarn/bin/yarn --debug --config /usr/hdp/3.0.1.0-187/hadoop/conf --daemon start nodemanager

On a good node:

export HADOOP_LIBEXEC_DIR=/usr/hdp/3.0.1.0-187/hadoop/libexec
strace -f -s 2000 -o good_node /usr/hdp/3.0.1.0-187/hadoop-yarn/bin/yarn --debug --config /usr/hdp/3.0.1.0-187/hadoop/conf --daemon start nodemanager

The files problematic_node and good_node will contain the traces; can you attach/paste them here?
03-06-2020
01:29 AM
Yes @Mondi. This page https://community.cloudera.com/t5/Product-Announcements/bd-p/RelAnnounce is updated every time a Cloudera product is released, and the latest CDH release is 6.3.3.
03-06-2020
12:59 AM
Hi @Mondi The pre-upgrade, upgrade, and post-upgrade steps are covered in these docs respectively:
https://docs.cloudera.com/documentation/enterprise/upgrade/topics/ug_cdh6_pre_migration.html
https://docs.cloudera.com/documentation/enterprise/upgrade/topics/ug_cdh_upgrade.html
https://docs.cloudera.com/documentation/enterprise/upgrade/topics/ug_cdh6_post_migration.html