Member since: 12-18-2014
Posts: 19
Kudos Received: 1
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4141 | 05-21-2015 09:38 PM
 | 2635 | 12-19-2014 10:51 PM
10-06-2016
06:10 AM
I already downgraded my cluster to CDH 5.8.0 and tried what you said in your previous reply. Yes, you are correct: even though the job is completed, if I run a HoS query and don't quit the Hive shell, it still uses YARN resources. Thanks for pointing out the matter.
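For illustration only (not part of the original post): the behaviour described above can be confirmed with the standard YARN CLI, since the Spark application started by the Hive session keeps showing as RUNNING until the Hive shell is closed. A minimal check, assuming the yarn command is available on a cluster gateway host:

```
# While the Hive shell is still open, the Hive-on-Spark application
# (its name typically contains "Hive on Spark") remains in RUNNING
# state and holds on to its YARN containers:
yarn application -list -appStates RUNNING

# After issuing "quit;" in the Hive shell, rerun the same command;
# the application should be gone and its resources released.
yarn application -list -appStates RUNNING
```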
09-25-2016
09:20 AM
Cloudera Manager 5.8.2, CDH 5.8.2-1.cdh5.8.2.p0.3. A completed Hive on Spark job gets stuck on YARN: after the Hive query has finished and returned its result, the Spark process keeps running on YARN (even though the Spark process already shows COMPLETED status in the ApplicationManager). As a temporary workaround we have to kill the job manually (see the sketch below), or else we can't submit another Spark job because YARN thinks the resources are still being used. What I did in the end was downgrade my cluster back to CDH 5.8.0.
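A minimal sketch of the manual kill workaround mentioned above, using the standard YARN CLI; the application ID shown is a placeholder and should be replaced with the one reported for the stuck job:

```
# Find the ID of the Spark application that is still occupying
# containers even though the Hive query has finished.
yarn application -list -appStates RUNNING

# Kill it so YARN frees the resources and new Spark jobs can be
# submitted (replace the placeholder ID with the real one).
yarn application -kill application_1474000000000_0001
```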
01-20-2016
02:57 AM
1 Kudo
Our initial test setup for a Hadoop cluster is: 1 NameNode (64 GB RAM + 24 cores) with 2 HDDs, one for the OS and one for HDFS storage, and 3 DataNodes (each 32 GB RAM + 16 cores) with 2 HDDs, one for the OS and one for DFS storage. The DataNodes are also used for ZooKeeper, Kafka, Spark, YARN/MapReduce, Impala, and the Pig/Hive gateways. As a best practice for running a Hadoop environment, all servers should be bare metal, not VMs. IMHO you could make the NameNode server smaller, say 32 GB of RAM with fewer cores, but on the DataNode side I don't recommend going below those specs, especially the memory.
07-14-2015
11:41 PM
ps aux | grep -i kafka

kafka 3376 12.3 19.4 5444664 1184292 ? Sl Jun17 4843:42 /usr/lib/jvm/java-7-oracle-cloudera/bin/java -Xmx1G -Xms1G -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true -Xloggc:/run/cloudera-scm-agent/process/2769-kafka-KAFKA_BROKER/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9393 -Dkafka.logs.dir=/run/cloudera-scm-agent/process/2769-kafka-KAFKA_BROKER -Dlog4j.configuration=file:/run/cloudera-scm-agent/process/2769-kafka-KAFKA_BROKER/log4j.properties -cp :/opt/cloudera/parcels/KAFKA-0.8.2.0-1.kafka1.3.0.p0.29/lib/kafka/bin/../core/build/dependant-libs-2.10.4*/*.jar:/opt/cloudera/parcels/KAFKA-0.8.2.0-1.kafka1.3.0.p0.29/lib/kafka/bin/../examples/build/libs//kafka-examples*.jar:/opt/cloudera/parcels/KAFKA-0.8.2.0-1.kafka1.3.0.p0.29/lib/kafka/bin/../contrib/hadoop-consumer/build/libs//kafka-hadoop-consumer*.jar:/opt/cloudera/parcels/KAFKA-0.8.2.0-1.kafka1.3.0.p0.29/lib/kafka/bin/../contrib/hadoop-producer/build/libs//kafka-hadoop-producer*.jar:/......etc

How do I change the Java heap memory for Kafka the proper way with CDH? (I didn't see it in the CDH config.) (CDH 5.4.x)
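For illustration only (not from the original post): in upstream Kafka the broker heap is controlled by the KAFKA_HEAP_OPTS environment variable read by kafka-server-start.sh, which is why the broker above runs with the default -Xmx1G -Xms1G. The sketch below shows that generic mechanism; how a Cloudera Manager-managed broker exposes it (a dedicated heap field or an environment safety valve) depends on the Kafka parcel version, so the CM integration point is an assumption here and the paths are illustrative.

```
# Upstream kafka-server-start.sh falls back to "-Xmx1G -Xms1G" when
# KAFKA_HEAP_OPTS is unset, which matches the 1 GB heap visible in the
# ps output above. Exporting it before start overrides that default.
export KAFKA_HEAP_OPTS="-Xmx4G -Xms4G"

# Start the broker with the larger heap (paths are illustrative; a
# CM-managed broker is normally started by the Cloudera agent instead).
/opt/cloudera/parcels/KAFKA/lib/kafka/bin/kafka-server-start.sh \
    /opt/cloudera/parcels/KAFKA/lib/kafka/config/server.properties
```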
05-21-2015
09:38 PM
Now the cluster that previously didn't see the CDH 5.4.1 parcels has the 5.4.2-1.cdh5.4.2.p0.2 parcel update available. I believe this is probably the cause: Cloudera deferred 5.4.1 and waited for 5.4.2 to be pushed instead, because the two release dates were quite close. Look: http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_rn_new_in_542.html
05-20-2015
07:33 PM
@Tgrayson I have upgraded CM to the latest 5.4.1 on both clusters. Cluster A => didn't show the CDH 5.4.1 parcels (can't upgrade). Cluster B => showed CDH 5.4.1 and upgraded successfully. As you can see in my first post, the CM version is 5.4.1 (#197 built by jenkins on 20150509-0042 git: 003e06d761f80834d39c3a42431a266f0aaee736), and both clusters have the same parcel repo URLs.
05-20-2015
06:51 PM
This is strange, because on my other Cloudera cluster I had no problem upgrading CDH from 5.4.0 to 5.4.1 via CM. So this is probably a bug? (Screenshot is from my other Cloudera cluster.)
05-20-2015
06:16 PM
As you can see, on this particular cluster there is no option to download CDH 5.4.1.
1. Yes, I have already updated to the latest Cloudera Manager, as stated in my first post.
2. The parcel repo URLs are on the default config.
3. I have another cluster that upgrades just fine from 5.4.0 to 5.4.1 (the two clusters are quite identical and use the same CM version).
05-20-2015
08:38 AM
OK Sartner, but did you have the same problem as me when trying to upgrade from 5.4 to 5.4.1? I'm hoping Cloudera staff can verify whether the solution provided by Sartner is acceptable for a "production" cluster. I'm also concerned about why this particular cluster can't see the CDH 5.4.1 parcels in the Cloudera Manager GUI (even though both clusters have the exact same CM version).
05-20-2015
12:15 AM
Cloudera Manager Version: Cloudera Express 5.4.1 (#197 built by jenkins on 20150509-0042 git: 003e06d761f80834d39c3a42431a266f0aaee736). I have two Cloudera clusters (A and B). When I try to get the latest 5.4.1 parcels, I can't see the 5.4.1 parcels on one of them (B). Both have the exact same parcel repos and the same Cloudera Manager version. Does anyone have the same experience?