Member since: 07-31-2013
Posts: 1924
Kudos Received: 462
Solutions: 311

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1963 | 07-09-2019 12:53 AM |
| | 11819 | 06-23-2019 08:37 PM |
| | 9103 | 06-18-2019 11:28 PM |
| | 10061 | 05-23-2019 08:46 PM |
| | 4494 | 05-20-2019 01:14 AM |
06-01-2016
04:41 AM
Thanks @Harsh J, I appreciate your help. As feedback, the Cloudera documentation should also include an example covering Sqoop1 and Sqoop2; that would be very helpful for all users who deploy Sqoop.
05-13-2016
12:34 AM
1 Kudo
Hi All, Thanks for your help. The heap memory had not been sized per the recommendations in the link below; we increased it and restarted the Hive Metastore Server, which did not help either. http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hiveserver2_configure.html It looks like some process was holding on to memory, and we had to restart the complete cluster to resolve the problem. Thank you once again for the input. Regards, Ajay Chaudhary
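For completeness, a minimal sketch of the kind of heap sizing the linked page describes, assuming a non-Cloudera Manager install where the Hive services pick up hive-env.sh (under Cloudera Manager the equivalent is the Hive Metastore Server's Java heap setting); the 4096 MB figure is only a placeholder:

```
# hive-env.sh -- illustrative value only; size per the linked Cloudera guidance
# Heap (in MB) for JVMs launched by the hive scripts, including the metastore
# started with `hive --service metastore`.
export HADOOP_HEAPSIZE=4096
```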
05-10-2016
09:34 AM
Kindly let me know what you mean when you say that you too are facing a similar issue. As pointed out by Harsh, and as I have already mentioned, I see this as a non-issue and have moved on.
05-05-2016
02:51 AM
1 Kudo
I'm afraid there's no easy way to recover from this if you haven't taken HDFS snapshots beforehand either. If you stopped the entire cluster immediately to prevent further disk usage, you could try running ext-level disk recovery tools to recover the deleted blocks, and then roll back your NameNode to start from the pre-delete checkpoint; that may give back some fraction of your data.
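For future readers, a minimal sketch of the snapshot workflow mentioned above, assuming an admin shell on the cluster; the /user/etl path, snapshot name and file name are just placeholders:

```
# Allow snapshots on a directory (admin), then take one before risky operations
hdfs dfsadmin -allowSnapshot /user/etl
hdfs dfs -createSnapshot /user/etl before-cleanup

# Deleted files remain reachable under the read-only .snapshot directory
hdfs dfs -ls /user/etl/.snapshot/before-cleanup
hdfs dfs -cp /user/etl/.snapshot/before-cleanup/data.csv /user/etl/
```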
05-02-2016
08:15 AM
Hi Harsh, I have created a new thread with the topic "Exception in doCheckpoint".
04-24-2016
09:57 AM
1 Kudo
The HDFS client reads your input and sends packets of data (64k-128k chunks at a time) along with their checksums over the network, and the DataNodes involved in the write verify these continually as they receive them, before writing them to disk. This way you won't suffer from network corruption, and what is written to HDFS will match precisely what the client intended to send.
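For what it's worth, you can also ask HDFS for the composite checksum it recorded at write time; a minimal sketch (the path below is just a placeholder, and the exact output format depends on the Hadoop version):

```
# Print the checksum HDFS computed and stored while the file was being written
hdfs dfs -checksum /user/ajay/data/part-00000

# Illustrative output: the path, the checksum algorithm, and a hex digest, e.g.
#   /user/ajay/data/part-00000  MD5-of-0MD5-of-512CRC32C  000002000000000000...
```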
04-21-2016
12:01 AM
As Harsh suggested, for a new (empty) table without any splits defined, the number of reducers will always be 1. If you pre-split the table before the import, you get that many reducers. (Of course, pre-splitting requires a good idea of how your row keys are designed, and is a broad topic in itself. See HBase: The Definitive Guide > Chapter 11 > Optimizing Splits and Compactions > Presplitting Regions.) Example, when the table is pre-split into 6 regions:

```
hbase(main):002:0> create 'hly_temp2', {NAME => 't', VERSIONS => 1}, {SPLITS => ['USW000138290206', 'USW000149290623', 'USW000231870807', 'USW000242331116', 'USW000937411119']}
```

```
# hadoop jar /usr/lib/hbase/hbase-server.jar importtsv -Dimporttsv.bulk.output=/user/hac/output/2-4 -Dimporttsv.columns=HBASE_ROW_KEY,t:v01 hly_temp2 /user/hac/input/2-1
...
Job Counters
    Launched map tasks=1
    Launched reduce tasks=6   <<<
```
04-18-2016
09:58 AM
Some more questions based on this thread:
1. Once the storage configuration is defined and the SSDs/disks are identified by HDFS, are all drives (SSDs + disks) used as a single virtual storage pool? If so, does it mean that while running jobs/queries some data blocks would be fetched from disks while others come from SSDs? Or are there two separate virtual storage tiers, hot and cold?
2. If so, while copying/generating data in HDFS, will there be 3 copies of the data spread across disks + SSDs, or 3 copies on disks and 3 copies on SSDs, i.e. 6 copies in total?
3. How do I force data to be served from SSDs only, or from disks only, while submitting jobs/queries from the various tools (Hive, Impala, Spark, etc.)? (A sketch of the relevant commands follows below.)
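On the last question, a minimal sketch of the HDFS storage-policy commands involved, assuming heterogeneous storage (SSD/DISK tags on the DataNode data directories) is already configured; the /data/... paths are placeholders, while ALL_SSD and HOT are built-in policy names:

```
# List the built-in storage policies (HOT, COLD, WARM, ALL_SSD, ONE_SSD, ...)
hdfs storagepolicies -listPolicies

# Keep every replica of /data/ssd-only on SSD; HOT (the default) keeps all
# replicas on DISK, so /data/disk-only stays on spinning disks.
hdfs storagepolicies -setStoragePolicy -path /data/ssd-only  -policy ALL_SSD
hdfs storagepolicies -setStoragePolicy -path /data/disk-only -policy HOT

# Check which policy currently applies to a path
hdfs storagepolicies -getStoragePolicy -path /data/ssd-only
```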
04-04-2016
01:32 AM
Thanks bodivijay! I have used another way to import into Hive; I will try your tools when I have time to revisit this issue!
03-21-2016
04:25 AM
1 Kudo
If you've previously set manual overrides for the MaxPermSize options in your configurations, you can remove them safely once you've switched to JDK8. If you still have parts that use JDK7, leave them be and ignore the warning. The warnings themselves do not pose a problem: JDK8 simply notes that it will no longer use the option and continues to start up normally. However, a future Java version (JDK9 or 10) may choose to treat it as an invalid option and fail. Read more on this JDK-level change at https://dzone.com/articles/java-8-permgen-metaspace
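A quick way to see this behaviour for yourself, assuming a JDK8 `java` on the PATH (the 256m value is arbitrary and the exact warning wording can differ between JVM builds):

```
# JDK8 ignores the flag with a warning and starts up normally:
java -XX:MaxPermSize=256m -version
#   Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
#   java version "1.8.0_..."

# The JDK8-era replacement, if you do want to cap class metadata:
#   java -XX:MaxMetaspaceSize=256m ...
```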