Member since: 08-18-2016
Posts: 53
Kudos Received: 3
Solutions: 1

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 6554 | 01-17-2017 06:19 AM |
03-22-2024
08:04 AM
1 Kudo
Updating with the appropriate docs links:
For CDP installation: https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/release-guide/topics/cdpdc-os-requirements.html
For CDSW installation: https://docs.cloudera.com/cdsw/1.10.5/installation/topics/cdsw-application-block-device-or-mount-point.html
HTH.
05-02-2023
03:51 AM
Please also note the zkCli steps to log in to ZooKeeper and remove the znode directory (a full session sketch follows below):
zookeeper-client -server <zookeeper-server-host>:2181   (use sudo, or run as the hdfs user, if you hit a permission issue)
ls /   or   ls /hadoop-ha   (if you don't see a /hadoop-ha znode in the listing, skip the step below)
rmr /hadoop-ha/nameservice1
Hope this helps.
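For reference, a minimal end-to-end sketch of the session (zk01.example.com is a placeholder host; nameservice1 is the nameservice from the steps above):

zookeeper-client -server zk01.example.com:2181
# Inside the ZooKeeper shell:
ls /                          # list the root znodes
ls /hadoop-ha                 # skip the delete if this znode is absent
rmr /hadoop-ha/nameservice1   # on ZooKeeper 3.5+, rmr is replaced by: deleteall /hadoop-ha/nameservice1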
01-24-2022
02:38 AM
Hi, when I run a Hive query it shows the error below: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. The error does not appear every time: the same query succeeds for some users and fails for others. Could you please suggest the reason and how to overcome this? It is urgent; could you please help us?
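Return code 2 from MapRedTask is a generic wrapper; one way to surface the underlying failure (a sketch, with a placeholder application ID) is to pull the YARN logs for a failed run:

# The application ID comes from the RM UI or "yarn application -list"
yarn logs -applicationId <application_id> | grep -A5 "Caused by"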
10-03-2019
12:05 AM
There are many reasons why a connection to a server or website can fail with the "ERR_SSL_PROTOCOL_ERROR" error, but you can fix it with this guide: https://www.clickssl.net/blog/fix-err_ssl_protocol_error-for-google-chrome
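If you want to check from the command line first (a sketch; example.com stands in for the affected site), openssl can show whether the TLS handshake itself is failing:

# Inspect the TLS handshake directly, bypassing the browser
openssl s_client -connect example.com:443 -servername example.com
# A handshake error here points at the server's TLS configuration rather than Chrome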
03-06-2019
11:26 PM
While I'm doubtful these three directories are the very best answer to this problem, the old "three directories for the NN metadata" guidance came about long before a solid HA solution was available, and as https://twitter.com/LesterMartinATL/status/527340416002453504 points out, it was (and actually still is) all about disaster recovery. The old adage was to configure the NN to write to three different disks (via the directories): two local and one off the box, such as a remote mount point. Why? Well... as you know, that darn metadata holds the keys to the whole file system, and if it ever gets lost then ALL of your data is non-recoverable!!

I personally think this is still valuable even with HA. The JournalNodes are focused on the edits files and do a great job of keeping that information on multiple machines, but the checkpoint image files only exist on the two NN nodes in an HA configuration and, well... I just like to sleep better at night. Good luck and happy Hadooping!
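A quick way to see where a NameNode is currently writing its metadata (a sketch; the example directory layout in the comment is hypothetical):

# Print the configured NameNode metadata directories
hdfs getconf -confKey dfs.namenode.name.dir
# A three-way layout might look like this (two local disks plus an NFS mount):
# file:///data/1/dfs/nn,file:///data/2/dfs/nn,file:///mnt/nfs/dfs/nn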
09-16-2018
11:11 AM
Did you find a solution? We had the same issue after a Kafka cluster reboot: the Spark Streaming job could not start because it could not read from Kafka, with the same error. Our environment: HDP 2.6.2 / Kafka 0.10.1 / Spark Streaming 2.1, Kafka direct with commitAsync.
01-17-2017
06:19 AM
It is working now with 'PARQUET.COMPRESSION'='SNAPPY'.
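For context, a minimal sketch of where that property goes (the table and columns are hypothetical):

hive -e "CREATE TABLE sales_parquet (id INT, amount DOUBLE)
STORED AS PARQUET
TBLPROPERTIES ('PARQUET.COMPRESSION'='SNAPPY');"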
12-22-2016
06:32 AM
2 Kudos
@Yukti Agrawal
Access the RM UI, click on the applicationId, and then the corresponding logs link. It will take you to the application's log listing. Here you should see a file dag_*.dot; this gives you the query/MR graph that is being executed. Another option: if the execution engine is Tez, you can leverage the Tez UI to view the actual query as well.
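A command-line alternative (a sketch; the application ID and file name are placeholders, and Graphviz must be installed to render the graph):

# Pull the aggregated logs and locate the DAG file
yarn logs -applicationId application_1480000000000_0001 | grep "dag_"
# Render a downloaded .dot file to an image with Graphviz
dot -Tpng dag_1480000000000_0001_1.dot -o query_plan.png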
12-13-2016
03:54 PM
HWX has a Pig tutorial that comes with data and a script. I recommend you try this: http://hortonworks.com/hadoop-tutorial/how-to-process-data-with-apache-pig/
03-31-2017
05:36 PM
Even I have been noticing this error, but the job did not fail in my case. I see that only around 1 or 2 mappers out of 20 or so are failing after waiting for 1800 secs. The ResourceManager attempts the failed mappers again and they run to success. How can I understand why those 1 or 2 mappers are failing? I could only see this message in the log: java.lang.Exception: Container is not yet running. Current state is NEW
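One way to dig into an individual failed attempt (a sketch; the IDs are placeholders to fill in from the RM UI):

# Logs for the whole application
yarn logs -applicationId <application_id>
# Or narrow to the specific container that timed out
yarn logs -applicationId <application_id> -containerId <container_id>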