Member since: 11-04-2015
Posts: 261
Kudos Received: 44
Solutions: 33
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 9122 | 05-16-2024 03:10 AM |
| | 4205 | 01-17-2024 01:07 AM |
| | 3641 | 12-11-2023 02:10 AM |
| | 7051 | 10-11-2023 08:42 AM |
| | 4086 | 09-07-2023 01:08 AM |
05-27-2022 09:44 AM
I would try to clean up everything from the /appN/yarn/nm directory (at least, with the root user, move the "filecache", "nmPrivate" and "usercache" directories out to an external location); maybe there are some files which the NM cannot clean up for some reason. If that still does not help, then I can imagine that the ResourceManager state store (in ZooKeeper) keeps track of some old job details and the NM tries to clean up after those old containers. Is this cluster a prod cluster? If not, then you could stop all the YARN applications, then stop the YARN service, and then format the RM state store to get a clean state: https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YarnCommands.html In CM there is an action for it under the YARN service.
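A minimal sketch of that cleanup (the /appN/yarn/nm paths and the backup location are illustrative; stop the NodeManager role first):

```bash
# Move the NM cache directories aside instead of deleting them,
# so they can be restored if needed (run as root, with the NM stopped):
for d in /app*/yarn/nm; do
  mkdir -p "/var/tmp/nm-backup$d"
  mv "$d/filecache" "$d/nmPrivate" "$d/usercache" "/var/tmp/nm-backup$d/" 2>/dev/null
done

# Non-prod only: after stopping all YARN applications and the YARN service,
# format the RM state store for a clean state (also available as a CM action):
yarn resourcemanager -format-state-store
```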
05-27-2022 02:17 AM
Hi @marcosrodrigues , the message says:

```
2022-05-26 13:15:58,296 WARN org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread DeletionService #0:
java.lang.NullPointerException: path cannot be null
...
at org.apache.hadoop.fs.FileContext.delete(FileContext.java:768)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.deletion.task.FileDeletionTask.run(FileDeletionTask.java:109)
...
```

which means that the NM on those nodes tried to delete some "empty"/null paths. It is not clear where these null paths come from, and I haven't found any known YARN bug related to this. Are these NodeManagers configured the same way as all the others? Do the YARN NodeManager local directories ("NodeManager Local Directories" - "yarn.nodemanager.local-dirs") exist, and are they readable/writable by the "yarn" user? Are those directories completely empty?
Thanks
Miklos Szurap
Customer Operations Engineer
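P.S. The directory checks above can be scripted, for example (a sketch; the paths are illustrative and should match whatever "yarn.nodemanager.local-dirs" is set to on the affected nodes):

```bash
# For each configured NM local dir: does it exist, who owns it,
# and can the "yarn" user actually read and write it?
for d in /app1/yarn/nm /app2/yarn/nm; do
  ls -ld "$d"
  sudo -u yarn test -r "$d" -a -w "$d" \
    && echo "$d: readable/writable by yarn" \
    || echo "$d: NOT accessible by yarn"
done
```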
05-25-2022 02:59 AM
Hi @tallamohan , I see that the "Load data inpath" statement is failing with an NPE:

```
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.ddl.DDLSemanticAnalyzerFactory.<clinit>(DDLSemanticAnalyzerFactory.java:79)
...
at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:210)
```

Has this worked before in your cluster? Is this a new integration of Teradata with Hive / CDP? The NPE happens at a phase where the DDLSemanticAnalyzerFactory is searching for subclasses of BaseSemanticAnalyzer under the "org.apache.hadoop.hive.ql.ddl" package. Do you have custom analyzer classes under that package? Do they have the "@DDLType" annotation? See https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseAnalyzer.java#L35 as an example. A missing annotation would likely cause such an NPE. If that is not the case, please check the classpath of HS2 for old / custom jars which may still have classes under the "org.apache.hadoop.hive.ql.ddl" package.
Hope this helps,
Best regards
Miklos Szurap
Customer Operations Engineer
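P.S. For the classpath check, something like this can help to spot jars shipping classes under that package (a sketch; the parcel path is illustrative):

```bash
# List jars on the HiveServer2 classpath that contain classes under
# the org.apache.hadoop.hive.ql.ddl package - normally only the Hive
# ql jar itself should show up here:
for j in /opt/cloudera/parcels/CDH/lib/hive/lib/*.jar; do
  unzip -l "$j" 2>/dev/null | grep -q 'org/apache/hadoop/hive/ql/ddl/' && echo "$j"
done
```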
05-19-2022 06:42 AM
Hi! Sorry, but this seems to be an R-specific usage problem with which I cannot help. What you can do is enable DEBUG/TRACE level logging on the ODBC driver side (please check the ODBC driver documentation for how to do it); maybe there you can find further clues.
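With unixODBC, driver-manager level tracing can usually be enabled like this (a sketch; the odbcinst.ini location varies by installation, and the driver's own logging options are described in its documentation):

```bash
# Enable ODBC driver-manager tracing in odbcinst.ini:
cat >> /etc/odbcinst.ini <<'EOF'
[ODBC]
Trace=Yes
TraceFile=/tmp/odbc-trace.log
EOF
```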
05-03-2022 04:08 AM
Thanks for checking. Is the connection successful using other clients, like impala-shell, beeline, and other JDBC clients?
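For example (a sketch; host and port are taken from this thread, the truststore paths are illustrative):

```bash
# impala-shell over TLS with a PEM CA file:
impala-shell -i cdp-tdh-de3-master0.cdp-tdh.u5te-1stu.cloudera.site:21050 \
  --ssl --ca_cert=/path/to/ca.pem

# beeline (Java based, so it accepts the JKS truststore directly):
beeline -u 'jdbc:hive2://cdp-tdh-de3-master0.cdp-tdh.u5te-1stu.cloudera.site:21050/default;ssl=true;sslTrustStore=/path/to/truststore.jks;trustStorePassword=changeit'
```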
05-02-2022 08:36 AM
Hi @gfragkos , thanks for checking. Let's step back then. Is the Impala service TLS/SSL enabled at all? Can you verify that with the openssl tools, like:

```
echo | openssl s_client -connect cdp-tdh-de3-master0.cdp-tdh.u5te-1stu.cloudera.site:21050 -CAfile /var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_cacerts.pem
```
04-28-2022 08:24 AM
Hello Gozde @gfragkos , have you checked whether the connectivity works with the given sslTrustStore file with a Java based client (for example with beeline)? As I see, your application tries to use unixODBC to connect to a CDP / Impala service. However, from the shared connection details I see that the truststore is a Java keystore file (JKS), and since "nanodbc.cpp" is not a Java based application, it probably cannot recognize that as a valid truststore file. Please try to use a "pem" format truststore file instead. Please also review the Impala ODBC Driver documentation: https://downloads.cloudera.com/connectors/impala_odbc_2.6.14.1016/Cloudera-ODBC-Connector-for-Impala-Install-Guide.pdf
Thanks
Miklos
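P.S. If only the JKS file is at hand, the CA certificate can be exported to PEM format with keytool, for example (a sketch; alias, paths and password are illustrative):

```bash
# Find the alias of the CA entry in the JKS truststore:
keytool -list -keystore /path/to/truststore.jks -storepass changeit

# Export that entry in PEM ("-rfc") format:
keytool -exportcert -rfc -alias ca-cert \
  -keystore /path/to/truststore.jks -storepass changeit \
  -file /path/to/ca.pem
```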
04-27-2022 12:54 AM
Hi @jarededrake , that's a good track. The issue currently seems to be that the cluster has Kerberos enabled, and that needs extra configuration. In the workflow editor, in the upper right corner of the Spark action you will find a cogwheel icon for advanced settings. There, on the Credentials tab, enable the "hcat" and "hbase" credentials to let the Spark client obtain delegation tokens for the Hive (Hive Metastore) and HBase services, in case the Spark application wants to use those services (Spark does not know this in advance, so it obtains those DTs). You can disable this behavior too, if you are sure that the Spark application will not connect to Hive (using Spark SQL) or HBase, by adding the following to the Spark action option list:

```
--conf spark.security.credentials.hadoopfs.enabled=false
--conf spark.security.credentials.hbase.enabled=false
--conf spark.security.credentials.hive.enabled=false
```

but it's easier to just enable these credentials on the settings page. For similar Kerberos related issues in other actions, please see the following guide: https://gethue.com/hadoop-tutorial-oozie-workflow-credentials-with-a-hive-action-with-kerberos/
04-26-2022 05:09 AM
Hi @jarededrake , sorry for the delay, I was away for a couple of days. You should use your thin jar (application only, without the dependencies) from the target directory ("SparkTutorial-1.0-SNAPSHOT.jar"). The NoClassDefFoundError for SparkConf suggests that you've tried a Java action. It is strongly recommended to use a Spark action in the Oozie workflow editor when running a Spark application, to make sure that the environment is set up properly for the application.
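One way to confirm that the jar is really the thin one (a sketch; the jar path is from this thread):

```bash
# A thin jar should contain only the application's own classes;
# bundled org.apache.spark classes would indicate a fat jar:
unzip -l target/SparkTutorial-1.0-SNAPSHOT.jar | grep 'org/apache/spark/' \
  || echo "no bundled Spark classes - thin jar"
```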
04-14-2022 09:16 AM
So is it "/tmp/kbr5cc_dffe" or "krb5cc_cldr"? Or where do you see the "KRB5CCNAME=/tmp/kbr5cc_dffe"? The "krb5cc_cldr" cache is used by all services (I'm not sure about all of them, but all which I've quickly verified had it), so we can say it's hardcoded. It is in any case "private" to the process itself: it holds the Kerberos ticket cache which only that process is using (and renewing if needed).
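The per-process cache can be inspected with klist, for example (a sketch; the process directory name is illustrative):

```bash
# Each CM-managed role keeps its own ticket cache in its process directory:
klist -c /var/run/cloudera-scm-agent/process/1234-yarn-NODEMANAGER/krb5cc_cldr
```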