Member since
11-17-2021
Posts
253
Kudos Received
28
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 224 | 10-16-2025 02:45 PM |
| | 475 | 10-06-2025 01:01 PM |
| | 442 | 09-24-2025 01:51 PM |
| | 399 | 08-04-2025 04:17 PM |
| | 480 | 06-03-2025 11:02 AM |
03-24-2023
10:35 AM
@Me Yes, that is the solution to your post, thank you for coming back with the fix. This will help more users in the future. Thanks for your contribution!
03-23-2023
03:30 PM
1 Kudo
Hi @dearvenkat , Sorry, I thought you only wanted to change the value. If you are using the RENAME PARTITION SQL command, the behavior depends on which type of table you have: if the table is EXTERNAL, the data in HDFS stays in the same old directory; if the table is MANAGED, the directory is renamed to the new name. Please refer to this article: https://community.cloudera.com/t5/Support-Questions/Partition-rename-in-Hive-HDFS-path/td-p/193283 Apache Hive SQL Reference: https://cwiki.apache.org/confluence/display/hive/languagemanual+ddl#LanguageManualDDL-RenamePartition
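For reference, the rename itself looks like this; the table and partition names below are made up for illustration:

```sql
-- On a MANAGED table this also renames the HDFS directory;
-- on an EXTERNAL table the old directory is left in place.
ALTER TABLE sales PARTITION (dt='2023-01-01')
RENAME TO PARTITION (dt='2023-01-02');
```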
03-21-2023
10:16 AM
1 Kudo
Thank you for your help. After I changed the Java version, the service started normally; the Java version I was using was too new for my installed CDH. Thank you for replying to my post!
03-20-2023
05:23 PM
@Confluent61 Welcome to the Cloudera Community! To help you get the best possible solution, I have tagged our Kafka experts @paras and @rki_ who may be able to assist you further. Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
03-20-2023
09:18 AM
@annakim Welcome to the Cloudera Community! To help you get the best possible solution, I have tagged our Zeppelin experts @Scharan and @iks who may be able to assist you further. Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
03-20-2023
06:21 AM
Visit the Future of Data Meetup group page to join a local group & register for events!
178 new support questions
7 new community articles
517 new members
| Community Article | Author | Components/ Labels |
|---|---|---|
| Open Data Lakehouse powered by Apache Iceberg on Apache Ozone | Saketa Chalamchala @saketa | Apache Hive, Apache Impala, Apache Spark, Cloudera Data Platform (CDP), Cloudera Data Platform Private Cloud (CDP-Private) |
| Installing Django in Cloudera Machine Learning (CML) | Ryan Cicak @RyanCicak | Cloudera Machine Learning (CML) |
| How to Spark Roll Event Log Files in CDP | Ranga Reddy @RangaReddy | Apache Spark, Cloudera Data Platform (CDP) |
| CDSW - CML \|\| How to import a private repository from Github, gitlab | Girish Vaggar @gireeshn | Cloudera Data Science Workbench (CDSW), Cloudera Machine Learning (CML) |
| Navigator Metadata Service is not starting in CDH6.3.4 | Amararam Gehlot @agehlot | Cloudera Manager, Cloudera Navigator |
We would like to recognize the below community members and employees for their efforts over the last month to provide community solutions.
See all our top participants at Top Solution Authors leaderboard and all the other leaderboards on our Leaderboards and Badges page.
@MattWho @steven-matison @Mike @rki_ @paras @SAMSAL @CRISSAEGRIM @ChuckE @iamfromsky @mbigelow
Share your expertise and answer some of the below open questions. Also, be sure to bookmark the unanswered question page to find additional open questions.
| Unanswered Community Post | Components/ Labels |
|---|---|
| Nifi: Kafka Producer with Avro format in both key and value | Apache Kafka, Apache NiFi, Schema Registry |
| How to include wait time between spark streaming application retry attempts in event of job failure? | Apache Spark |
| Python updates | Cloudera Data Platform (CDP) |
| Impala Metadata Sync Issue Disk I/O error datanode.fqdn:22000: Failed to open HDFS file | Apache Impala |
| HDP3.1.4 Inconsistent HBase region status results in the region server service downtime | Apache HBase |
03-17-2023
06:03 AM
2 Kudos
Assuming there were a way to check for flowfiles that went through a failure relationship, how would you extract them? A manual search in the Data Provenance menu in the GUI, or the REST API? What I do know is that you cannot identify those files that easily, because they do not write any specific lines in Data Provenance. If you have a processor right after the failure queue (or if you terminate the failure relationship in the processing processor), you can query using that Component ID and filter on type = DROP, meaning that those files have been "processed" (a DROP event marks the conclusion of an object's life for some reason other than object expiration). More about the event types can be found here: https://docs.cloudera.com/HDPDocuments/HDF3/HDF-3.1.2/bk_user-guide/content/provenance_events.html
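As a sketch of the REST-API route: NiFi exposes an asynchronous provenance query endpoint that can be driven as below. The NiFi URL and processor UUID here are placeholders, and the exact search-term field names can vary by NiFi version, so treat this as a starting point rather than a drop-in script.

```python
import json
import urllib.request

NIFI_URL = "http://localhost:8080/nifi-api"  # placeholder host/port
PROCESSOR_ID = "00000000-0000-0000-0000-000000000000"  # placeholder component UUID


def build_provenance_query(component_id, event_type="DROP", max_results=100):
    """Build the JSON body for NiFi's asynchronous provenance query.

    Searching on the component sitting after the failure relationship,
    combined with EventType = DROP, surfaces flowfiles whose life
    ended at that component.
    """
    return {
        "provenance": {
            "request": {
                "maxResults": max_results,
                "searchTerms": {
                    "ProcessorID": component_id,
                    "EventType": event_type,
                },
            }
        }
    }


def submit_query(query):
    # POST creates the query; NiFi replies with a query id that is then
    # polled via GET /provenance/{id} until the result is finished.
    req = urllib.request.Request(
        f"{NIFI_URL}/provenance",
        data=json.dumps(query).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the payload we would submit, without touching a live server.
    print(json.dumps(build_provenance_query(PROCESSOR_ID), indent=2))
```

On a secured cluster you would also need a bearer token or client certificate on the request; the polling/cleanup steps (GET then DELETE on the query id) are omitted here for brevity.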
03-16-2023
10:22 AM
Hey, it looks like you're running Hadoop on Windows! I'm guessing this is not a Cloudera package and that you're attempting to run some other flavour of Hadoop on a Windows host. The java.lang.UnsatisfiedLinkError suggests the hadoop.dll native library is missing from your PATH.
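The usual fix is to make the native libraries findable before launching Hadoop; the install path below is an assumption for illustration:

```
:: Assumed layout: hadoop.dll (and winutils.exe) under %HADOOP_HOME%\bin
set HADOOP_HOME=C:\hadoop
set PATH=%PATH%;%HADOOP_HOME%\bin
```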
03-16-2023
06:38 AM
2023-03-16 16:56:02,552 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for (auth:KERBEROS)
2023-03-16 16:56:02,552 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2023-03-16 16:56:03,064 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node2.MkTempDir is closed by libhdfs3_client_random_129521979_count_145_pid_7924_tid_139642140587776
2023-03-16 16:56:03,436 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node1.MkTempDir is closed by libhdfs3_client_random_1964951034_count_146_pid_7924_tid_139642325227264
2023-03-16 16:56:05,049 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node3.MkTempDir is closed by libhdfs3_client_random_1359402376_count_147_pid_7924_tid_139642283263744
2023-03-16 16:56:06,863 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node0.MkTempDir is closed by libhdfs3_client_random_1823718439_count_148_pid_7924_tid_139642409154304
2023-03-16 16:56:07,339 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node2.Mkdirs is closed by libhdfs3_client_random_129521979_count_145_pid_7924_tid_139642140587776
2023-03-16 16:56:07,638 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node1.Mkdirs is closed by libhdfs3_client_random_1964951034_count_146_pid_7924_tid_139642325227264
2023-03-16 16:56:07,976 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node0.Mkdirs is closed by libhdfs3_client_random_1823718439_count_148_pid_7924_tid_139642409154304
2023-03-16 16:56:08,044 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node3.Mkdirs is closed by libhdfs3_client_random_1359402376_count_147_pid_7924_tid_139642283263744
2023-03-16 16:56:08,418 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1176805433_103102715, replicas=192.168.8.4:1004, 192.168.8.1:1004, 192.168.8.5:1004 for /data/test/datasync/test.txt
2023-03-16 16:56:08,446 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node0.Copy is closed by libhdfs3_client_random_1823718439_count_148_pid_7924_tid_139642409154304
2023-03-16 16:56:08,516 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node1.Copy is closed by libhdfs3_client_random_1964951034_count_146_pid_7924_tid_139642325227264
2023-03-16 16:56:08,516 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node2.Copy is closed by libhdfs3_client_random_129521979_count_145_pid_7924_tid_139642140587776
2023-03-16 16:56:23,479 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1176805442_103102724, replicas=192.168.8.1:1004, 192.168.8.3:1004, 192.168.8.6:1004 for /data/test/datasync/test.txt
2023-03-16 16:56:38,545 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1176805447_103102729, replicas=192.168.8.3:1004, 192.168.8.6:1004, 192.168.8.2:1004 for /data/test/datasync/test.txt
2023-03-16 16:56:53,608 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1176805456_103102738, replicas=192.168.8.2:1004, 192.168.8.5:1004, 192.168.8.6:1004 for /data/test/datasync/test.txt
2023-03-16 16:57:08,677 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1176805462_103102744, replicas=192.168.8.5:1004, 192.168.8.6:1004 for /data/test/datasync/test.txt
2023-03-16 16:57:23,742 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1176805467_103102749, replicas=192.168.8.6:1004 for /data/test/datasync/test.txt
java.io.IOException: File /data/test/datasync/test.txt could only be written to 0 of the 1 minReplication nodes. There are 6 datanode(s) running and 6 node(s) are excluded in this operation.
2023-03-16 16:57:39,289 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node3.Copy is closed by libhdfs3_client_random_1359402376_count_147_pid_7924_tid_139642283263744
2023-03-16 16:57:39,863 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node3.Dirmeta is closed by libhdfs3_client_random_1359402376_count_147_pid_7924_tid_139642283263744
2023-03-16 16:57:40,593 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node0.Dirmeta is closed by libhdfs3_client_random_1823718439_count_148_pid_7924_tid_139642409154304
2023-03-16 16:57:40,605 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node2.Dirmeta is closed by libhdfs3_client_random_129521979_count_145_pid_7924_tid_139642140587776
2023-03-16 16:57:40,651 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/0/node1.Dirmeta is closed by libhdfs3_client_random_1964951034_count_146_pid_7924_tid_139642325227264
2023-03-16 16:57:41,347 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/node0.complete is closed by libhdfs3_client_random_1823718439_count_148_pid_7924_tid_139642409154304
2023-03-16 16:57:41,349 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/node2.complete is closed by libhdfs3_client_random_129521979_count_145_pid_7924_tid_139642140587776
2023-03-16 16:57:41,392 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/node1.complete is closed by libhdfs3_client_random_1964951034_count_146_pid_7924_tid_139642325227264
2023-03-16 16:57:41,504 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /data/test/datasync/.aws-datasync/task-036cf039adbe9c036/node3.complete is closed by libhdfs3_client_random_1359402376_count_147_pid_7924_tid_139642283263744