Member since: 11-04-2015
Posts: 260
Kudos Received: 44
Solutions: 33
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2661 | 05-16-2024 03:10 AM |
 | 1565 | 01-17-2024 01:07 AM |
 | 1573 | 12-11-2023 02:10 AM |
 | 2309 | 10-11-2023 08:42 AM |
 | 1609 | 09-07-2023 01:08 AM |
05-31-2022
01:08 AM
Hi, the "hdfs dfs -du" for that path should return the summary of the disk usage (bytes, kbytes, megabytes, etc..) for that given path. Are you sure there are "no lines returned"? Have you checked the "du" output for a smaller subpath (which has less files underneith), does that return results? Can you also clarify where have you checked the block count before and after the deletion? ("the block count among data nodes did not decrease as expected")
05-30-2022
11:02 AM
Be careful with starting processes as the root user, as that may leave some files and directories owned by root - and then the ordinary "yarn" user (the process started by CM) won't be able to write to them. For example, log files under /var/log/hadoop-yarn/... Please verify that.
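A minimal sketch of that check, assuming the default log location and the "yarn:hadoop" owner/group (adjust both to your environment):

```bash
# List anything under the YARN log directory that is owned by root:
find /var/log/hadoop-yarn -user root -ls

# If something shows up, hand it back to the yarn user so the CM-started
# process can write to it ("yarn:hadoop" is an assumption - match your setup):
chown -R yarn:hadoop /var/log/hadoop-yarn
```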
05-30-2022
10:37 AM
Hello @andrea_pretotto ,
This typically happens if you have snapshots on the system. Even though the "current" files are deleted from HDFS, they may still be held by one or more snapshots (which is exactly their purpose: they protect against accidental data deletion, since you can recover data from a snapshot if needed).
Please check which HDFS directories are snapshottable:
hdfs lsSnapshottableDir
and then check how many snapshots you have under those directories:
hdfs dfs -ls /snapshottable_path/.snapshot
You can also verify it by comparing the "du" output that includes the snapshots' sizes:
hdfs dfs -du -h -v -s /snapshottable_path
with the one that excludes snapshots from the calculation:
hdfs dfs -du -x -h -v -s /snapshottable_path
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/FileSystemShell.html#du
If you find snapshots you no longer need, see the short sketch below on removing them.
Best regards
Miklos
Customer Operations Engineer, Cloudera
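Removing an obsolete snapshot is what actually releases the blocks it holds; a minimal sketch, where /snapshottable_path and the snapshot name s20220101 are hypothetical placeholders:

```bash
# List the snapshots under a snapshottable directory:
hdfs dfs -ls /snapshottable_path/.snapshot

# Delete an obsolete snapshot (only if you are sure you no longer need it):
hdfs dfs -deleteSnapshot /snapshottable_path s20220101
```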
05-30-2022
05:44 AM
Have you reviewed the classpath of HS2 and all the jars on it?
$JAVA_HOME/bin/jinfo <hs2_pid> | grep java.class.path
Do any of them have classes under the "org.apache.hadoop.hive.ql.ddl" package? The attached code does not work on my cluster (it is missing some Tez-related configs). What configuration does it require?
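A minimal sketch of that check, assuming <hs2_pid> is the HiveServer2 process id (replace the placeholder) and that "unzip" is available on the host; it pulls the classpath via jinfo and lists every jar on it that carries classes from the org.apache.hadoop.hive.ql.ddl package:

```bash
# Extract the HS2 classpath from the running JVM (<hs2_pid> is a placeholder):
CP=$($JAVA_HOME/bin/jinfo <hs2_pid> | grep 'java.class.path' | sed 's/^.*= *//')

# Check each jar on the classpath for classes under org.apache.hadoop.hive.ql.ddl:
echo "$CP" | tr ':' '\n' | while read -r jar; do
  [ -f "$jar" ] && unzip -l "$jar" 2>/dev/null \
    | grep -q 'org/apache/hadoop/hive/ql/ddl/' && echo "$jar"
done
```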
05-27-2022
09:44 AM
I would try to clean up everything under the /appN/yarn/nm directory (at least, as the root user, move the "filecache", "nmPrivate" and "usercache" directories out to an external location); maybe there are some files there which the NM cannot clean up for some reason. If that still does not help, then I can imagine that the ResourceManager state store (in ZooKeeper) keeps track of some old job details and the NM tries to clean up after those old containers. Is this cluster a prod cluster? If not, you could stop all the YARN applications, then stop the YARN service, and then formatting the RM state store should give you a clean state. https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YarnCommands.html In CM there is an action for it under the YARN service.
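A minimal sketch of both steps, assuming /appN/yarn/nm is the NodeManager local directory from your setup and /root/nm-backup is just a hypothetical place to park the moved directories (do this with YARN stopped, and only format the state store on a non-production cluster):

```bash
# Move the NodeManager-managed directories out of the way instead of deleting them:
mkdir -p /root/nm-backup
mv /appN/yarn/nm/filecache /appN/yarn/nm/nmPrivate /appN/yarn/nm/usercache /root/nm-backup/

# With all YARN applications and the YARN service stopped, format the RM state store
# (the same operation is available as an action on the YARN service in CM):
yarn resourcemanager -format-state-store
```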
05-27-2022
02:17 AM
Hi @marcosrodrigues , the message says:
2022-05-26 13:15:58,296 WARN org.apache.hadoop.util.concurrent.ExecutorHelper: Caught exception in thread DeletionService #0:
java.lang.NullPointerException: path cannot be null
...
at org.apache.hadoop.fs.FileContext.delete(FileContext.java:768)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.deletion.task.FileDeletionTask.run(FileDeletionTask.java:109)
... which means that the NM on those nodes tried to delete some "empty"/null paths. It is not clear where these null paths come from, and I haven't found any known YARN bug related to this. Are these NodeManagers configured the same way as all the others? Do the YARN NodeManager local disks ("NodeManager Local Directories" - "yarn.nodemanager.local-dirs") exist, and are they readable/writable by the "yarn" user? Are those directories completely empty? A short sketch of these checks is below.
Thanks
Miklos Szurap
Customer Operations Engineer
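A minimal sketch of those checks on one of the affected nodes; /appN/yarn/nm is assumed to be the value of yarn.nodemanager.local-dirs, and the config file path is an assumption too (on a CM-managed node the effective yarn-site.xml lives under the NodeManager's process directory):

```bash
# Confirm the configured local dirs on this node (config path is an assumption):
grep -A1 'yarn.nodemanager.local-dirs' /etc/hadoop/conf/yarn-site.xml

# Check that the configured directory exists and is writable by the yarn user:
ls -ld /appN/yarn/nm
sudo -u yarn touch /appN/yarn/nm/.write-test && sudo -u yarn rm /appN/yarn/nm/.write-test

# See whether the directory is completely empty:
ls -la /appN/yarn/nm
```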
05-25-2022
02:59 AM
Hi @tallamohan , I see that the "Load data inpath" statement is failing with an NPE:
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.ddl.DDLSemanticAnalyzerFactory.<clinit>(DDLSemanticAnalyzerFactory.java:79)
...
at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:210)
Has this worked before in your cluster? Is this a new integration of Teradata with Hive / CDP?
The NPE happens at a phase where the DDLSemanticAnalyzerFactory is searching for subclasses of BaseSemanticAnalyzer under the "org.apache.hadoop.hive.ql.ddl" package. Do you have custom analyzer classes under that package? Do they have the "@DDLType" annotation? See https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseAnalyzer.java#L35 as an example. If that annotation is missing, it can cause such an NPE. If not, please check the classpath of HS2 for old / custom jars which may still carry classes under the "org.apache.hadoop.hive.ql.ddl" package - a short sketch of that check follows below.
Hope this helps,
Best regards
Miklos Szurap
Customer Operations Engineer
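Since the failure surfaces through the Teradata connector, one quick check is whether that (or any other old/custom) jar on the HS2 classpath ships classes under the org.apache.hadoop.hive.ql.ddl package; a minimal sketch, where the jar path is a hypothetical placeholder:

```bash
# List any classes the connector jar carries under the Hive DDL analyzer package
# (/path/to/teradata-connector.jar is a placeholder - use the jar from your classpath):
unzip -l /path/to/teradata-connector.jar | grep 'org/apache/hadoop/hive/ql/ddl/'
```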
05-19-2022
06:42 AM
Hi! Sorry, but this seems to be an R-specific usage problem that I cannot help with. What you can do is enable DEBUG/TRACE level logging on the ODBC driver side (please check the ODBC driver documentation for how to do it); maybe you can find further clues there.
05-18-2022
05:34 AM
Hi @roshanbi , The query itself seems incomplete to me; I do not see where the alias "a" is defined in the a.SUB_SERVICE_CODE_V=b.SUB_SERVICE_CODE_V part. It is also not clear which part is the database name, which is the table, and whether any complex types are involved here. Can you run a select on "cbs_cubes.TB_JDV_CBS_NEW"? (assuming that's a "database.table") Can you run a simple update on it? Are you using the latest Cloudera Impala JDBC driver version? Is the affected table a Kudu-backed table?
Thanks,
Miklos
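A minimal sketch of those two checks from impala-shell, assuming impalad-host:21000 is one of your Impala daemons and that SUB_SERVICE_CODE_V (taken from your snippet) is a column of this table - adjust both if needed; the UPDATE only works on a Kudu-backed table:

```bash
# Simple select on the table (impalad-host is a placeholder):
impala-shell -i impalad-host:21000 -q "SELECT * FROM cbs_cubes.TB_JDV_CBS_NEW LIMIT 5;"

# A no-op update to confirm UPDATE works at all on this table (Kudu-backed tables only);
# SUB_SERVICE_CODE_V is assumed to be a column of this table:
impala-shell -i impalad-host:21000 -q "UPDATE cbs_cubes.TB_JDV_CBS_NEW SET SUB_SERVICE_CODE_V = SUB_SERVICE_CODE_V WHERE 1 = 0;"
```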
05-03-2022
04:08 AM
Thanks for checking. Is the connection successful using other clients, like impala-shell, beeline and other JDBC clients?
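A minimal sketch of such a cross-check, where the host name and ports are placeholders (21000 is the default impala-shell port, 21050 the default HiveServer2-protocol port used by JDBC clients); add your Kerberos/SSL options as needed:

```bash
# Try the native shell first (impalad-host is a placeholder):
impala-shell -i impalad-host:21000

# Then a JDBC client such as beeline, pointing at the HiveServer2-protocol port
# (the exact URL options depend on your auth/SSL setup):
beeline -u "jdbc:hive2://impalad-host:21050/default"
```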