Member since: 07-16-2015
Posts: 177
Kudos Received: 28
Solutions: 19
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 14252 | 11-14-2017 01:11 AM
 | 60652 | 11-03-2017 06:53 AM
 | 4333 | 11-03-2017 06:18 AM
 | 13578 | 09-12-2017 05:51 AM
 | 1998 | 09-08-2017 02:50 AM
11-08-2023 04:03 AM
1 Kudo
Hello Vidya, would you support me with this as well? Regards, Mahrous Badr
04-24-2022 01:25 AM
No, it is not. You get the exact same error as before. Does anybody know how to actually fix this broken program?
11-05-2020 09:25 AM
We have opened a ticket with Cloudera support. They told us the following: "After reviewing the HS2 logs we see GC pauses which go up to a minute, which is causing Hive ZooKeeper sessions to get timed out. The GC pauses are likely triggering a ZooKeeper bug condition discussed here - https://issues.apache.org/jira/browse/ZOOKEEPER-2323"
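A hedged sketch of how one might confirm those symptoms from the shell; the HiveServer2 log path is a placeholder and the exact pause-monitor message is an assumption that can vary by version:
grep "Detected pause in JVM or host machine" /var/log/hive/hiveserver2.log
grep -i "session.*expired" /var/log/hive/hiveserver2.log
The first grep surfaces long JVM/GC pauses reported by the pause monitor; the second surfaces the ZooKeeper session expirations they trigger.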
10-04-2019 12:02 PM
I am having the same error, but I didn't understand the solution. Can you please explain it? Thank you.
07-26-2019 02:18 PM
> Is there any option to find empty directories using an HDFS command directly?
You can list/find empty directories using the 'org.apache.solr.hadoop.HdfsFindTool'. To check/test whether _a_ single directory is empty, you can use -du or -test; please see the FileSystemShell documentation [0]. A minimal empty-directory check is sketched after the quoted usage below.
test
Usage: hadoop fs -test -[defswrz] URI
Options:
-d: if the path is a directory, return 0.
-e: if the path exists, return 0.
-f: if the path is a file, return 0.
-s: if the path is not empty, return 0.
-r: if the path exists and read permission is granted, return 0.
-w: if the path exists and write permission is granted, return 0.
-z: if the file is zero length, return 0.
Example:
hadoop fs -test -e filename
du
Usage: hadoop fs -du [-s] [-h] [-x] URI [URI ...]
Displays sizes of files and directories contained in the given directory, or the length of a file in case it's just a file.
Options:
The -s option will result in an aggregate summary of file lengths being displayed, rather than the individual files. Without the -s option, calculation is done by going 1-level deep from the given path.
The -h option will format file sizes in a “human-readable” fashion (e.g. 64.0m instead of 67108864).
The -x option will exclude snapshots from the result calculation. Without the -x option (default), the result is always calculated from all INodes, including all snapshots under the given path.
The du returns three columns with the following format:
size disk_space_consumed_with_all_replicas full_path_name
Example:
hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1
Exit Code: Returns 0 on success and -1 on error.
[0] https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/FileSystemShell.html
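As referenced above, a minimal sketch of an empty-directory check built from these two commands (the path /user/hadoop/dir1 is a placeholder):
hadoop fs -test -e /user/hadoop/dir1 || echo "path does not exist"
[ "$(hadoop fs -du -s /user/hadoop/dir1 | awk '{print $1}')" = "0" ] && echo "empty" || echo "not empty"
The second line reads the aggregate size column of -du -s; a size of 0 means the directory holds no file data (though a directory containing only empty files or empty subdirectories would also report 0).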
06-06-2019 11:09 PM
I added the above values, and they were causing HTTPS to shut down. After deleting those values, it started and is working fine now. Thanks @Harsh J for your reply.
05-31-2019 10:29 AM
Not helpful yet, but promising... the PIVOT keyword is reserved for future use! https://www.cloudera.com/documentation/enterprise/6/6.2/topics/impala_reserved_words.html
04-08-2019 11:03 AM
I am also facing the same error. May I know where you increased the memory?
08-13-2018 02:19 PM
This issue was resolved by following the instructions on this site: http://vijayjt.blogspot.com/2016/02/how-to-connect-to-kerberised-chd-hadoop.html We need to copy the Java JCE unlimited-strength policy files and the krb5.conf file into the jdk/jre/lib/security folder where SQL Developer is installed. After this, the Hive connection via Kerberos was successful. A sketch of the copy step is below.
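For reference, a minimal sketch of that copy step, assuming SQL Developer's bundled JDK lives under /path/to/sqldeveloper/jdk (all paths here are placeholders for your installation; the downloaded policy jars are assumed to be in the current directory):
cp local_policy.jar US_export_policy.jar /path/to/sqldeveloper/jdk/jre/lib/security/
cp /etc/krb5.conf /path/to/sqldeveloper/jdk/jre/lib/security/krb5.conf
The jar names are the standard JCE unlimited-strength policy files, and the jre/lib/security folder is one of the locations where Java looks for krb5.conf.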
07-11-2018 02:08 PM
How did you store the logs locally? Can you please share how you did it?