Member since: 07-29-2013
Posts: 162
Kudos Received: 8
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7156 | 05-06-2015 06:52 AM
 | 3102 | 06-09-2014 10:51 PM
 | 5087 | 01-30-2014 10:40 PM
 | 3744 | 08-22-2013 12:28 AM
 | 5161 | 08-18-2013 11:23 PM
05-26-2015 01:54 AM
Hi, thanks for the reply. I found these properties in the HBase service configuration: Enable HBase Canary=true, HBase Region Health Canary=true (in Service Monitor). I didn't find anything there related to HBase
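For what it's worth, the canary that those settings enable can also be run by hand to spot-check region health; a minimal sketch, assuming an HBase 0.98-era CDH install where the tool is invoked by class name (pass a table name to limit the scan):

```shell
# Probe every region of every table once and report per-region read latency.
hbase org.apache.hadoop.hbase.tool.Canary
```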
05-20-2015 01:35 AM
Hi, hbase-master says that I have 1300-1500 requests per second for the whole cluster. Cloudera Manager 5.2.1 says that I have 4500 read requests per second and 500 write requests per second; I see that graph on the HBase service page. Who is lying?
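One way to cross-check both numbers is to read the counters straight off a region server; a sketch, assuming the default 0.98-era region server info port 60030 (regionserver-host is a placeholder):

```shell
# Dump region server metrics as JSON and pull out the request counters
# (readRequestsCount / writeRequestsCount accumulate since server start).
curl -s http://regionserver-host:60030/jmx | grep -i request
```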
Labels:
- Apache HBase
05-06-2015 06:52 AM
Ha, looks like Camus runs the local job runner; that is the problem. Need to tell Camus that we have YARN here.
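A minimal sketch of one way to push the job onto YARN, assuming CamusJob is launched through ToolRunner so the generic -D options apply (the property can equally live in the client's mapred-site.xml):

```shell
# Point the client at the cluster config and ask for YARN instead of the local runner.
export HADOOP_CONF_DIR=/etc/hadoop/conf
hadoop jar camus-tool.jar com.linkedin.camus.etl.kafka.CamusJob \
  -D mapreduce.framework.name=yarn \
  -P camus.properties
```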
05-06-2015 05:59 AM
Thanks for the reply. There are several mystical problems:

1. Here is what the ResourceManager conf says (http://my.resource.manager.ru:8088/conf):

    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx200m</value>
      <source>mapred-default.xml</source>
    </property>

I can't find any mapred-default.xml, only the one inside hadoop-core.jar, which is in the Cloudera parcels.

2. Here is a running app. The job configuration on the NodeManager UI says:

    mapred.child.java.opts = -Xmx200m -Djava.net.preferIPv4Stack=true -Xmx9448718336

but ps -ef | grep java says:

    yarn 54070 53908 99 15:55 ? 00:08:20 /usr/java/jdk1.7.0_55/bin/java -Xmx1000m -Dhadoop.log.dir=/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/cloudera/parcels/CDH-5.3.2-1.cdh5.3.2.p0.10/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar camus-tool.jar com.linkedin.camus.etl.kafka.CamusJob -P camus.properties

Now we get -Xmx as 1000m, which is still not enough, but we don't have such a property...
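To pin down which file each value was loaded from, the Configuration API can report property sources; a minimal sketch, assuming it runs on a node with the client configuration on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobConf;

public class WhereFrom {
    public static void main(String[] args) {
        // JobConf pulls mapred-default.xml and mapred-site.xml into the configuration.
        Configuration conf = new JobConf();
        String key = "mapred.child.java.opts";
        System.out.println(key + " = " + conf.get(key));
        // List the resources that set this property, in load order (the last one wins).
        String[] sources = conf.getPropertySources(key);
        if (sources != null) {
            for (String source : sources) {
                System.out.println("  set by: " + source);
            }
        }
    }
}
```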
05-06-2015 04:05 AM
Hi, I'm running a mapreduce job using the hadoop jar command. The problem is that hadoop-core.jar contains a mapred-default.xml with -Xmx200m for mapreduce. I have a correct client conf in /etc/hadoop/conf/mapred-site.xml; the -Xmx there is big enough. When the job starts, the property is merged into mapred.child.java.opts = -Xmx200m -Djava.net.preferIPv4Stack=true -Xmx9448718336, where -Xmx200m comes from the bundled mapred-default.xml and -Djava.net.preferIPv4Stack=true -Xmx9448718336 comes from my config. The job uses -Xmx200m for mappers and fails. What is the right way to exclude -Xmx200m and leave only the -Xmx9448718336 from mapred-site.xml?
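One way out that I believe works in MR2 (an assumption, not something confirmed in this thread): the per-task keys mapreduce.map.java.opts and mapreduce.reduce.java.opts take precedence over the legacy mapred.child.java.opts, so setting them explicitly leaves a single -Xmx in play. A sketch with a hypothetical driver, assuming it goes through ToolRunner:

```shell
# The MR2 per-task keys win over mapred.child.java.opts, so only one -Xmx survives.
hadoop jar my-job.jar com.example.MyDriver \
  -D mapreduce.map.java.opts="-Xmx9448718336" \
  -D mapreduce.reduce.java.opts="-Xmx9448718336"
```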
04-29-2015 12:50 PM
Thanks for the reply. I read this note: http://archive.cloudera.com/cdh5/cdh/5/spark-1.3.0-cdh5.4.0.releasenotes.html We are using jobserver to submit jobs every 5 minutes, and that issue states: "This is a blocker for 24/7 running applications like Spark Streaming apps." Jobserver holds a job context to share the context among other jobs. Am I right?
04-29-2015 03:44 AM
Hi, does any Cloudera distribution provide a backport of https://issues.apache.org/jira/browse/SPARK-5967? I can't find it, even in CDH 5.4.
Labels:
- Apache Spark
02-15-2015 05:54 AM
Hi, sorry, no luck. Still suffering from MR2/YARN; I have no idea how it works. Right now I'm getting a deadlock several times a day. I have a single user which submits jobs. It has a huge pool (32*8 mem and 4*CPU) and it has a limit of 8 applications at once. Suddenly everything stops. What does it mean? How can I get an idea of what went wrong?
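For reference, pool limits like the ones described would sit in the Fair Scheduler allocation file; a sketch under loose assumptions (the queue name is a placeholder, and the maxResources numbers are only one reading of "32*8 mem and 4*CPU"):

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml: one queue with a resource cap and a running-app limit. -->
<allocations>
  <queue name="etl-user">
    <!-- Assumed reading: 32 GB x 8 = 256 GB memory, 4 vcores x 8 = 32 vcores. -->
    <maxResources>262144 mb, 32 vcores</maxResources>
    <!-- At most 8 applications running at once, as described in the post. -->
    <maxRunningApps>8</maxRunningApps>
  </queue>
</allocations>
```

When all 8 running apps are waiting for containers that the pool cap will not grant, it can look exactly like a deadlock, so these two limits are worth rechecking first.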
11-27-2014 02:09 AM
I took org.apache.flume.sink.solr.morphline.UUIDInterceptor$Builder as an example. My custom interceptor takes the event body and stores it in an event header. Then the SolrSink picks up this header by default and sends it to Solr for indexing. It works. NB: the Solr schema.xml should have a matching field declaration.
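A minimal sketch of such an interceptor, assuming the stock Flume 1.x interceptor API; the class name and the body_copy header name are placeholders, not from the original post:

```java
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

/** Copies each event's body into a header so a downstream sink can index it. */
public class BodyToHeaderInterceptor implements Interceptor {

    private static final String HEADER_NAME = "body_copy"; // placeholder header name

    @Override
    public void initialize() {
        // No state to set up.
    }

    @Override
    public Event intercept(Event event) {
        Map<String, String> headers = event.getHeaders();
        headers.put(HEADER_NAME, new String(event.getBody(), StandardCharsets.UTF_8));
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        for (Event event : events) {
            intercept(event);
        }
        return events;
    }

    @Override
    public void close() {
        // Nothing to release.
    }

    /** Flume instantiates interceptors through a nested Builder, as with UUIDInterceptor$Builder. */
    public static class Builder implements Interceptor.Builder {
        @Override
        public Interceptor build() {
            return new BodyToHeaderInterceptor();
        }

        @Override
        public void configure(Context context) {
            // No options in this sketch.
        }
    }
}
```

It would be wired into the agent config as an interceptor type, e.g. agent.sources.src.interceptors.i1.type = BodyToHeaderInterceptor$Builder (fully qualified with its package in practice).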