Member since: 11-12-2018
Posts: 218
Kudos Received: 179
Solutions: 35
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1259 | 08-08-2025 04:22 PM |
| | 1642 | 07-11-2025 08:48 PM |
| | 2589 | 07-09-2025 09:33 PM |
| | 1543 | 04-26-2024 02:20 AM |
| | 2151 | 04-18-2024 12:35 PM |
07-03-2022
10:22 PM
@dfdf, has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. If you are still experiencing the issue, can you provide the information @jagadeesan has requested?
06-30-2022
07:27 AM
Hi @suri789, these are different values; I don't see any duplicates among them. "s plainfield" and "so plainfield" are distinct strings. From the output, all the values are distinct:

+----------------+
|           value|
+----------------+
|    s plainfield|
|    n plainfield|
|  west home land|
|         newyork|
|   so plainfield|
|north plainfield|
+----------------+

Please note: "n plainfield" vs. "north plainfield" and "s plainfield" vs. "so plainfield" are different values, because we didn't write any custom logic saying 'n' means 'north' or 's' means 'so'.
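If you did want those abbreviated prefixes to collapse into a single value, the "custom logic" mentioned above could be sketched as a normalization step before de-duplicating. This is a minimal sketch; the `normalize` function and the 'n' → 'north', 's' → 'so' mappings are illustrative assumptions, not something the original query does:

```shell
# Hypothetical normalization: expand a leading "n " to "north " and a
# leading "s " to "so ", then keep only distinct values.
normalize() {
  sed -e 's/^n /north /' -e 's/^s /so /'
}

printf '%s\n' 's plainfield' 'n plainfield' 'north plainfield' 'so plainfield' \
  | normalize \
  | sort -u
# After normalization, only two distinct values remain:
#   north plainfield
#   so plainfield
```

The same idea could be expressed in Spark SQL with a CASE expression or a lookup table, but the key point is the same: without an explicit mapping, the engine treats the abbreviated and spelled-out forms as different values.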
06-29-2022
11:21 AM
1 Kudo
Thank you for providing the info, @jagadeesan. I emailed the Cloudera Certification group. Best, Sruthi Kumar
06-28-2022
04:11 PM
Hi @ajaybabum, yes, you can run Spark in local mode against a Kerberized cluster. For a quick test, can you open spark-shell directly and try reading the CSV file from the HDFS location, then show its contents? That will tell us whether the issue is in the cluster/Spark configuration or in your application code.

>> Will it be possible in local mode without running the kinit command before spark-submit?

Yes. By passing --keytab and --principal in your spark-submit, you don't need to run kinit before spark-submit. Thanks
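A minimal sketch of such a spark-submit invocation is below. The principal, keytab path, class name, and jar are placeholders, not values from this thread:

```shell
# Sketch: run Spark in local mode against a Kerberized cluster.
# With --principal/--keytab, Spark obtains Kerberos credentials itself,
# so no prior kinit is needed. All names/paths below are placeholders.
spark-submit \
  --master local[*] \
  --principal user@EXAMPLE.COM \
  --keytab /path/to/user.keytab \
  --class com.example.MyApp \
  my-app.jar
```

This is a command fragment intended to run against a live cluster, so it is not executable standalone.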
06-27-2022
09:33 AM
@haze5736, has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks
06-27-2022
07:46 AM
Hi @ds_explorer, it seems the edit log is too big and cannot be completely read by the NameNode within the default/configured timeout:

2022-06-25 08:32:24,872 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.namenode.EditLogInputException: Error replaying edit log at offset 554705629. Expected transaction ID was 60366342312
Recent opcode offsets: 554704754 554705115 554705361 554705629
.....
Caused by: java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:203)
at org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$LengthPrefixedReader.decodeOpFrame(FSEditLogOp.java:4488)

To fix this, add the parameter and value below (if it is already set, increase the value):

HDFS > Configuration > JournalNode Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml

hadoop.http.idle_timeout.ms=180000

Then restart the required services.
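For reference, a Cloudera Manager safety valve entry like the one above is emitted into hdfs-site.xml as a standard Hadoop property block. This is a sketch of the generated XML, assuming the usual name/value form:

```xml
<property>
  <name>hadoop.http.idle_timeout.ms</name>
  <value>180000</value>
</property>
```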
02-23-2021
01:44 AM
Thanks @adrijand for sharing your updates, it's highly appreciated.
02-09-2021
04:57 AM
Hi @joyabrata, I think you are looking at the Data Lake tab, which is a different one. Go to the Summary tab, scroll down to the FreeIPA section, click Actions, and choose Get FreeIPA Certificate from the drop-down menu. Hope this helps.
06-06-2020
09:15 PM
Glad to hear that you have finally found the root cause of this issue. Thanks for sharing @Heri
05-28-2020
09:07 PM
2 Kudos
Hi @Karan1211, the 'admin' user does not have access to create a directory under /user, because /user is owned by "hdfs" with 755 permissions. As a result, only the hdfs user can write to that directory. If you want to create a home directory for admin so you can store files there, run:

sudo -u hdfs hdfs dfs -mkdir /user/admin
sudo -u hdfs hdfs dfs -chown admin /user/admin

Then, as admin, you can do:

hdfs dfs -put file /user/admin/

NOTE: If you get the authentication error below, your user account does not have enough permission to run the above commands; try them with sudo, or first su to the hdfs user and then execute the chown command as hdfs:

su: authentication failure

I hope this helps.