Member since: 12-11-2015
Posts: 206
Kudos Received: 30
Solutions: 30
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 481 | 08-14-2024 06:24 AM |
| 1475 | 10-02-2023 06:26 AM |
| 1317 | 07-28-2023 06:28 AM |
| 8575 | 06-02-2023 06:06 AM |
| 654 | 01-09-2023 12:20 PM |
03-22-2020
08:16 PM
Just a correction: the document suggests tuning the property dfs.datanode.balance.max.concurrent.moves, not dfs.datanode.ec.reconstruction.xmits.weight.

Regarding the question of why dfs.datanode.balance.max.concurrent.moves should be added again when it is already present on the DataNode and the Balancer: the doc says "Add the following code to the configuration field, for example, setting the value to 50," i.e., 50 is just an example number and the document doesn't mandate setting this value to 50. You can tune it to any value your workload requires.

Then why add it on both the Balancer and the DataNode? Setting it on the HDFS Balancer (client) gives you the flexibility to change the value on the client side at runtime, i.e., you can set this property to a value less than or equal to what you have configured on the DataNode side. The reason we also set it on the server side is to impose a limit up to which the property can be configured: if you configure a value greater than what you have set on the DataNode (server), the DataNode rejects it.
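For illustration (the value 50 is just the example number from the doc), the DataNode-side ceiling is a plain hdfs-site.xml property, and since the Balancer is run through the standard Hadoop tool runner, the client-side value can typically be overridden per run with a -D option:

<property>
  <name>dfs.datanode.balance.max.concurrent.moves</name>
  <value>50</value>
</property>

hdfs balancer -Ddfs.datanode.balance.max.concurrent.moves=50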
03-22-2020
06:32 AM
The error suggests the DFSClient is unable to read the blocks due to a connection failure: either the ports are blocked or the DataNodes are unreachable from the node. From the node on which you are running the code snippet (or the node on which the executor ran), try reading the file using HDFS commands in debug mode, which can give further clues on what node/service the client was trying to reach prior to the connect timeout: export HADOOP_ROOT_LOGGER=DEBUG,console
hdfs dfs -cat hdfs://ec2-18-234-71-106.compute-1.amazonaws.com:8020/dataset/Tech.csv
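If the debug output shows a timeout against a specific host/port, a quick raw-connectivity check from the same node can confirm whether it is a firewall issue (assuming the default NameNode RPC port 8020; the DataNode data-transfer port is 50010 on Hadoop 2 and 9866 on Hadoop 3, and <datanode-host> below is a placeholder):

nc -vz ec2-18-234-71-106.compute-1.amazonaws.com 8020
nc -vz <datanode-host> 50010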
03-22-2020
06:09 AM
@erkansirin78 Let me make sure I understand the issue correctly. By "Before restart, I saw totally different properties added," did you mean the property dfs.datanode.ec.reconstruction.xmits.weight getting added? If yes, then it is not actually getting added; the preview page is just showing the extra lines surrounding the property that you added, and only the lines with a + sign matter.
03-16-2020
01:44 AM
Yeah, that's right. Unfortunately, there is no feature available to gather NiFi lineage in Navigator.
03-16-2020
01:22 AM
Right, in CDH it's not available, but with CDP you have the option to install Atlas, which already has integration with NiFi: https://docs.cloudera.com/cdpdc/7.0/overview/topics/cdpdc-overview.html

Data Engineering: Ingest, transform, and analyze data. Services: HDFS, YARN, YARN Queue Manager, Ranger, Atlas, Hive Metastore, Hive on Tez, Spark, Oozie, Hue, and Data Analytics Studio

Data Mart: Browse, query, and explore your data in an interactive way. Services: HDFS, YARN, YARN Queue Manager, Ranger, Atlas, Hive Metastore, Impala, and Hue

Operational Database: Low-latency writes, reads, and persistent access to data for Online Transactional Processing (OLTP) use cases. Services: HDFS, Ranger, Atlas, and HBase

Steps for configuring NiFi for Atlas integration: https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.4.1.1/installing-hdf/content/configure_nifi_for_atlas_integration.html
03-16-2020
01:13 AM
Can you share the full stack trace of the exception so we can investigate further?
03-16-2020
01:09 AM
At present, Navigator doesn't support gathering lineage from NiFi; however, within NiFi itself there is lineage for each FlowFile. You can get the steps from this link: https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.4.1.1/getting-started-with-apache-nifi/content/lineage-graph.html
03-10-2020
07:57 PM
49213 open("/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-6110205147654050510.8", O_RDWR|O_CREAT|O_EXCL, 0666) = -1 EACCES (Permission denied)

During this step, the script is trying to open (and get a file descriptor for) this path, and access was denied: /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-6110205147654050510.8. Since O_CREAT|O_EXCL means a new file is being created, EACCES here typically points to missing write permission on the containing directory for the user running the process. So far we have inspected the parent directories and haven't seen any issues with them. Can we get the details of this path and the user too?

ls -ln /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-6110205147654050510.8
stat /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-6110205147654050510.8
id yarn
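If it's easier, namei (part of util-linux) prints the owner, group, and permissions of every component along the path in one shot:

namei -l /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir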
03-10-2020
08:57 AM
This is much clearer now.

On the server side, the request was rejected because the client initiated a non-SSL connection:

Caused by: org.apache.thrift.transport.TTransportException: javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?

On the client side, it was unable to trust the server certificates because it was not configured to use a truststore:

Caused by: com.cloudera.hiveserver2.support.exceptions.GeneralException: [Cloudera][HiveJDBCDriver](500164) Error initialized or created transport for authentication: [Cloudera][HiveJDBCDriver](500169) Unable to connect to server: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target.

You need to add a few more properties to your connection string:

jdbc:hive2://vdbdgw01dsy.dsone.3ds.com:10000/default;AuthMech=1;KrbAuthType=1;KrbHostFQDN=vdbdgw01dsy.dsone.3ds.com;KrbRealm=DSONE.3DS.COM;KrbServiceName=hive;LogLevel=6;LogPath=d:/TestPLPFolder/hivejdbclog;SSL=1;SSLTrustStore=<path_to_truststore>;SSLTrustStorePwd=<password to truststore>

If you don't have a password on your truststore, you can omit the SSLTrustStorePwd parameter.
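If you don't already have a truststore, here is a minimal sketch of building one (hs2.pem and truststore.jks are placeholder file names): fetch the certificate HS2 presents, then import it with keytool:

openssl s_client -connect vdbdgw01dsy.dsone.3ds.com:10000 </dev/null 2>/dev/null | openssl x509 -outform PEM > hs2.pem
keytool -importcert -alias hs2 -file hs2.pem -keystore truststore.jks -storepass changeit -noprompt

In production you would typically import the CA certificate that signed the HS2 certificate rather than the leaf certificate itself.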
03-09-2020
10:25 PM
The error usually happens when you try to connect to an SSL-enabled HS2 with a plaintext connection.

a. Which version of CDH/HDP are you using?
b. Can you check the HS2 logs at exactly the timestamp the error "Unable to connect to server: Invalid status 21" was reported on the client? The error you notice on the server side will give further clues.
c. Do you have SSL enabled on HS2?
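A quick way to confirm whether the HS2 port is actually speaking TLS (replace <hs2-host> with your HiveServer2 host; 10000 is the default binary port):

openssl s_client -connect <hs2-host>:10000 </dev/null

If SSL is enabled you should see a certificate chain in the output; if not, the handshake fails right away.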