Member since: 11-17-2021
Posts: 1117
Kudos Received: 253
Solutions: 28
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 233 | 10-16-2025 02:45 PM |
|  | 499 | 10-06-2025 01:01 PM |
|  | 453 | 09-24-2025 01:51 PM |
|  | 406 | 08-04-2025 04:17 PM |
|  | 488 | 06-03-2025 11:02 AM |
07-25-2023
01:00 PM
@novice_tester Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
07-25-2023
12:50 PM
@kanchanDesai Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
07-21-2023
11:08 AM
Hi, thank you for responding. I am replying on behalf of @idodds. Both nodes report the same or similar errors, shown below:
Jul 21, 8:33:30.310 AM INFO org.apache.hadoop.hdfs.qjournal.server.Journal
Updating lastPromisedEpoch from 172 to 173 for client /x.y.z.30
Jul 21, 8:33:30.312 AM INFO org.apache.hadoop.hdfs.qjournal.server.Journal
Scanning storage FileJournalManager(root=/dfs/journal-edits/nutch-nameservice1)
Jul 21, 8:33:30.329 AM INFO org.apache.hadoop.hdfs.qjournal.server.Journal
Latest log is EditLogFile(file=/dfs/journal-edits/nutch-nameservice1/current/edits_inprogress_0000000000256541217,first=0000000000256541217,last=0000000000256541842,inProgress=true,hasCorruptHeader=false)
Jul 21, 8:33:30.339 AM INFO org.apache.hadoop.hdfs.qjournal.server.Journal
getSegmentInfo(256541217): EditLogFile(file=/dfs/journal-edits/nutch-nameservice1/current/edits_inprogress_0000000000256541217,first=0000000000256541217,last=0000000000256541842,inProgress=true,hasCorruptHeader=false) -> startTxId: 256541217 endTxId: 256541842 isInProgress: true
Jul 21, 8:33:30.340 AM INFO org.apache.hadoop.hdfs.qjournal.server.Journal
Prepared recovery for segment 256541217: segmentState { startTxId: 256541217 endTxId: 256541842 isInProgress: true } lastWriterEpoch: 38 lastCommittedTxId: 256541843
Jul 21, 8:33:30.358 AM INFO org.apache.hadoop.hdfs.qjournal.server.Journal
getSegmentInfo(256541217): EditLogFile(file=/dfs/journal-edits/nutch-nameservice1/current/edits_inprogress_0000000000256541217,first=0000000000256541217,last=0000000000256541842,inProgress=true,hasCorruptHeader=false) -> startTxId: 256541217 endTxId: 256541842 isInProgress: true
Jul 21, 8:33:30.358 AM INFO org.apache.hadoop.hdfs.qjournal.server.Journal
Synchronizing log startTxId: 256541217 endTxId: 256541843 isInProgress: true: old segment startTxId: 256541217 endTxId: 256541842 isInProgress: true is not the right length
Jul 21, 8:33:30.358 AM WARN org.apache.hadoop.ipc.Server
IPC Server handler 1 on 8485, call org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocol.acceptRecovery from x.y.z.30:37022 Call#17 Retry#0
java.lang.IllegalAccessError: tried to access method com.google.common.collect.Range.<init>(Lcom/google/common/collect/Cut;Lcom/google/common/collect/Cut;)V from class com.google.common.collect.Ranges
at com.google.common.collect.Ranges.create(Ranges.java:76)
at com.google.common.collect.Ranges.closed(Ranges.java:98)
at org.apache.hadoop.hdfs.qjournal.server.Journal.txnRange(Journal.java:872)
at org.apache.hadoop.hdfs.qjournal.server.Journal.acceptRecovery(Journal.java:806)
at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.acceptRecovery(JournalNodeRpcServer.java:206)
at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.acceptRecovery(QJournalProtocolServerSideTranslatorPB.java:261)
at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25435)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
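The IllegalAccessError above (com.google.common.collect.Ranges calling into Range) typically indicates that two different Guava versions end up on the JournalNode classpath, with an old Ranges class loaded next to a newer Range. As a rough sketch of how one might check for mixed Guava jars on the affected hosts (the search paths below are only illustrative and depend on the install layout):

```python
# Hedged sketch: list guava-*.jar files under a few candidate directories to see
# whether more than one Guava version is on the JournalNode classpath.
# The paths are assumptions; adjust them to your CDH/CDP layout.
import glob
import os

search_roots = [
    "/opt/cloudera/parcels/CDH/jars",  # typical parcel jar directory (assumed path)
    "/usr/lib/hadoop/lib",             # package-based install (assumed path)
]

found = []
for root in search_roots:
    if os.path.isdir(root):
        found.extend(glob.glob(os.path.join(root, "guava-*.jar")))

for jar in sorted(found):
    print(jar)
```

If more than one Guava version shows up, the stray jar (or whatever classpath entry pulls it in) would be the first thing to investigate.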
07-20-2023
01:29 PM
@saraDdeveloper If you are still experiencing the issue, can you provide the information @SAMSAL has requested? Thanks.
07-20-2023
01:27 PM
@Aimen If you are still experiencing the issue, can you provide the information @steven-matison has requested? Thanks.
07-20-2023
08:54 AM
@SAMSAL My bad! I'm getting old and need better glasses 🙂 You are right, it works fantastically! Thanks!!!
07-20-2023
03:10 AM
We verified the same in the CDP environment, as we are uncertain about the Databricks Spark environment. Since we have a mix of managed and external tables, we extracted the necessary information through HWC (Hive Warehouse Connector).
>>> database=spark.sql("show tables in default").collect()
23/07/20 10:04:45 INFO rule.HWCSwitchRule: Registering Listeners
23/07/20 10:04:47 WARN conf.HiveConf: HiveConf of name hive.masking.algo does not exist
Hive Session ID = e6f70006-0c2e-4237-9a9e-e1d19901af54
>>> desiredColumn="name"
>>> tablenames = []
>>> for row in database:
... cols = spark.table(row.tableName).columns
... listColumns= spark.table(row.tableName).columns
... if desiredColumn in listColumns:
... tablenames.append(row.tableName)
...
>>>
>>> print("\n".join(tablenames))
movies
tv_series_abc
cdp1
tv_series
spark_array_string_example
>>>
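For reference, a cleaned-up standalone sketch of the same lookup (the database name default and the column name name are just the values used in the session above):

```python
# Sketch: list the tables in a database and keep those containing a given column.
# Assumes a SparkSession that can already resolve these tables (via HWC in our case).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("find-tables-with-column").getOrCreate()

database_name = "default"   # database to scan (as in the session above)
desired_column = "name"     # column to look for

tables = spark.sql(f"SHOW TABLES IN {database_name}").collect()

matching = []
for row in tables:
    columns = spark.table(f"{database_name}.{row.tableName}").columns
    if desired_column in columns:
        matching.append(row.tableName)

print("\n".join(matching))
```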
07-19-2023
01:08 PM
Hi @Dataengineer1, this has been asked before; please refer to: https://community.cloudera.com/t5/Support-Questions/NIFI-Is-it-possible-to-make-a-x-www-form-urlencoded-POST/m-p/339398 Thanks
07-19-2023
11:30 AM
@mohanm Welcome to the Cloudera Community! To help you get the best possible solution, I have tagged our NiFi expert @steven-matison who may be able to assist you further. Please keep us updated on your post, and we hope you find a satisfactory solution to your query.
07-19-2023
06:07 AM
Could you share the full stack trace of this exception? If you can upload the logs, I can pull it from there. Generally, this error means the manifest file is not present for the parcel, or Cloudera Manager has trouble accessing the manifest file in the repo. Are you able to manually browse the repo in a browser? Can you attach a screenshot of the content you see in the repo from the browser?
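If it helps, here is a minimal sketch of how the repo could be checked from the CM host itself; the repository URL is a placeholder, and the field names follow the usual parcel manifest.json layout:

```python
# Hedged sketch: fetch manifest.json from a parcel repo to confirm it is
# reachable and parseable. The URL below is a placeholder, not your actual repo.
import json
import urllib.request

repo_url = "http://archive.example.com/parcels"  # placeholder repo URL

with urllib.request.urlopen(f"{repo_url}/manifest.json", timeout=10) as resp:
    manifest = json.loads(resp.read().decode("utf-8"))

# A healthy manifest lists the parcels the repo serves.
for parcel in manifest.get("parcels", []):
    print(parcel.get("parcelName"))
```

If this fails to download or parse, Cloudera Manager will likely hit the same problem when it tries to read the repo.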