Member since 01-20-2014 · 578 Posts · 102 Kudos Received · 94 Solutions
09-10-2014
11:38 PM
Thanks for your advice. Currently I'm getting familiar by building a cluster from several VMs on a Linux host, and I will get copies of the books you mentioned. I hope my understanding is correct: in a production environment, each node corresponds to a physical server in a rack, so if I want to set up a 4-node cluster, I will probably have four 1U servers in my rack. It seems I'd better go for AWS or Google Cloud first. Is there a good option? I just wonder: when we use AWS, are we actually using VMs? Thanks.
09-08-2014
09:32 PM
Thank you for the feedback.
09-08-2014
08:15 PM
Never mind: the log storage setting for HBase is available in CM. I didn't realize the arrows next to the category labels can be expanded; there is a 'Log' category where I can update the value.
09-08-2014
06:21 AM
Hi Gautam, thanks for the quick response. As I told you before, we have a problem with HDFS blocks: the block count has reached 58 million, with about 500 TB of storage occupied, which is not ideal. NameNode memory capacity is about 64 GB, and we set 32 GB for the NN heap. Last time we changed dfs.namenode.checkpoint.txns (at the same time as dfs.image.compress) from the default 44K to 1000K, because we thought the frequent checkpoints were degrading the NameNode service, as CM reported via email. About your question, "Did the JT pauses only begin when you turned on compression of the fsimage?": we are not sure, because MapReduce had never paused for as long as 10 minutes before, and we did not notice exactly when it started. Is increasing the NN heap the only option, or are there other Hadoop parameters we can tune that will reduce the load on HDFS and bring MapReduce back to normal during checkpoints? Regards, i9um0
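For reference, the two properties mentioned above are HDFS settings (hdfs-site.xml). A sketch below mirrors the values described in this thread; they are shown for orientation only, not as tuning recommendations:

```xml
<!-- hdfs-site.xml (sketch): the checkpoint-related settings discussed above.
     Values echo this thread ("1000K" transactions, compression on);
     they are NOT recommendations. -->
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <value>1000000</value>  <!-- uncheckpointed transactions that trigger a checkpoint -->
</property>
<property>
  <name>dfs.image.compress</name>
  <value>true</value>     <!-- compress the fsimage when it is written -->
</property>
```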
09-08-2014
12:14 AM
The map task's local output is not stored in HDFS; rather, it is written with standard file I/O to temporary directories on that specific node (see the property mapreduce.cluster.local.dir): https://hadoop.apache.org/docs/r2.2.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
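As a hedged illustration of where that property is set: it lives in mapred-site.xml (defaults come from the mapred-default.xml linked above), and the directory list below is an example layout, not a recommendation:

```xml
<!-- mapred-site.xml (sketch): local directories where map tasks spill
     intermediate output. Spreading across disks is a common pattern;
     the paths here are illustrative only. -->
<property>
  <name>mapreduce.cluster.local.dir</name>
  <value>/data/1/mapred/local,/data/2/mapred/local</value>
</property>
```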
09-05-2014
07:47 AM
Glad you solved this! Keep in mind that when using the symlink, you may need to re-create it whenever you upgrade your cloudera-scm-server-db package in the future, since the symlink confuses the packaging code. Thanks, Darren
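As a sketch of what re-creating such a symlink looks like after an upgrade replaces it with a plain directory (the paths below are stand-ins made with mktemp so the sketch runs anywhere; substitute your real data directory and volume, and stop the relevant service first):

```shell
# Sketch: restore a data-directory symlink that a package upgrade
# replaced with a plain directory. Paths are stand-ins, not the real
# cloudera-scm-server-db locations.
big_disk=$(mktemp -d)                        # stand-in for the larger volume
db_dir=$(mktemp -d)/scm-server-db            # stand-in for the package's data dir

mkdir -p "$db_dir"            # simulate the upgrade recreating a real directory
rm -rf "$db_dir"              # remove that directory ...
ln -s "$big_disk" "$db_dir"   # ... and re-create the link to the big volume
readlink "$db_dir"            # shows the target of the re-created link
```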
09-02-2014
08:39 PM
Hello Gautam, Thanks a lot for your clarification. It's clear now.
09-02-2014
10:48 AM
To be clear, you will also only get the column families, not the columns within them; you didn't define those at create time either. Just to be complete 🙂
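For illustration, an HBase shell session makes the point (table and family names here are made up, and the `describe` output is abbreviated): the schema fixes only families at create time, while columns appear per-put and are never part of the table description.

```
hbase> create 'mytable', 'cf1', 'cf2'            # families fixed at create time
hbase> put 'mytable', 'row1', 'cf1:colA', 'v1'   # column colA exists only per-cell
hbase> describe 'mytable'
Table mytable ... COLUMN FAMILIES DESCRIPTION
{NAME => 'cf1', ...}
{NAME => 'cf2', ...}                             # no mention of colA
```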
09-01-2014
11:20 PM
Thx! Yeah, it works; it's now writing to cloudera-scm-server.log.