Member since: 01-25-2017
Posts: 396
Kudos Received: 28
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 822 | 10-19-2023 04:36 PM |
| | 4344 | 12-08-2018 06:56 PM |
| | 5437 | 10-05-2018 06:28 AM |
| | 19789 | 04-19-2018 02:27 AM |
| | 19811 | 04-18-2018 09:40 AM |
01-31-2017
08:25 PM
ImpalaJDBC4.jar
01-31-2017
11:23 AM
I already did, but newly written blocks are still alerting on under-replication.
01-30-2017
06:26 PM
Can anyone help here?
01-30-2017
06:21 PM
These are my default.url and impala.url in the interpreter settings: jdbc:impala://xxxxxx:21050/;SID=fawze;
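For reference, a minimal connection sketch with this driver, assuming ImpalaJDBC4.jar is on the interpreter's classpath. Note that SID is Oracle URL syntax; as I understand the Cloudera driver, the schema simply follows the slash. The driver class name, host, and schema here are assumptions to verify against your jar and its documentation:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ImpalaJdbcCheck {
    public static void main(String[] args) throws Exception {
        // Driver class as shipped in the Cloudera JDBC4 driver (verify in your jar).
        Class.forName("com.cloudera.impala.jdbc4.Driver");
        // Schema after the slash instead of ";SID=..."; host is a placeholder.
        String url = "jdbc:impala://xxxxxx:21050/fawze";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```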
01-30-2017
02:27 AM
Hi, when I run fsck on my cluster, several blocks are reported as under-replicated with a target replication of 3, even though I changed dfs.replication on the NN and DNs to 2. My cluster status:

Live Nodes: 3 (Decommissioned: 1)
Total size: 1873902607439 B
Total dirs: 122633
Total files: 117412
Total blocks (validated): 119731 (avg. block size 15650939 B)
Minimally replicated blocks: 119731 (100.0 %)
Over-replicated blocks: 68713 (57.38948 %)
Under-replicated blocks: 27 (0.022550551 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 2
Average block replication: 2.5738947
Corrupt blocks: 0
Missing replicas: 27 (0.011274004 %)
Number of data-nodes: 3
Number of racks: 1
FSCK ended at Mon Jan 30 04:59:23 EST 2017 in 2468 milliseconds

NN and DN hdfs-site.xml:

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

The only change I made was decommissioning one of the servers, which is now in the decommissioned state. Even though I set the replication factor for all of HDFS manually to 2, newly written blocks are still alerted on a target replica count of 3. I also made sure mapred.submit.replication is 2 in the JT:

<property>
  <name>mapred.submit.replication</name>
  <value>2</value>
</property>

Any insights?
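One thing worth noting, hedged as my understanding of HDFS: dfs.replication is a client-side default applied at write time, so the factor on new files comes from the configuration of whichever client writes them, not from the NameNode, and already-written files keep the factor they were created with until it is reset per file. A minimal sketch of resetting existing files recursively from Java (the path and factor are placeholders; the shell equivalent is hdfs dfs -setrep -R 2 /path):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class SetReplication {
    public static void main(String[] args) throws Exception {
        // Assumes HADOOP_CONF_DIR is on the classpath so fs.defaultFS resolves.
        FileSystem fs = FileSystem.get(new Configuration());
        // Walk the tree and pin each file's replication factor to 2.
        RemoteIterator<LocatedFileStatus> files = fs.listFiles(new Path("/user"), true);
        while (files.hasNext()) {
            fs.setReplication(files.next().getPath(), (short) 2);
        }
        fs.close();
    }
}
```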
Labels:
- HDFS
01-30-2017
12:25 AM
Yes, you are right; in the Oozie action it's 4 GB: com.hadoop.platform.cleaner.CleanerJob -Xmx4096m
01-30-2017
12:00 AM
The job is a cleaner job running with only 1 mapper, and it's the Oozie launcher. Is the default for the Oozie launcher different from the job's? oozie:launcher:T=java:W=hdfs-cleaner-wf:A=hdfs-cleaner:ID=0568638-160809023957851-oozie-clou-W

More of the log:

Application application_1484466365663_87038 failed 2 times due to AM Container for appattempt_1484466365663_87038_000002 exited with exitCode: -104
For more detailed output, check application tracking page: http://avor-mhc102.lpdomain.com:8088/proxy/application_1484466365663_87038/ Then, click on links to logs of each attempt.
Diagnostics: Container [pid=7448,containerID=container_e29_1484466365663_87038_02_000001] is running beyond physical memory limits. Current usage: 3.0 GB of 3 GB physical memory used; 6.6 GB of 6.3 GB virtual memory used. Killing container.
Dump of the process-tree for container_e29_1484466365663_87038_02_000001:
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 7448 7446 7448 7448 (bash) 2 2 108650496 304 /bin/bash -c /jdk8//bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=//hadoop/log/hadoop-yarn/container/application_1484466365663_87038/container_e29_1484466365663_87038_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Djava.net.preferIPv4Stack=true -Xmx825955249 -Djava.net.preferIPv4Stack=true -Xmx4096m -Xmx4608m -Djava.io.tmpdir=./tmp org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/hadoop/log/hadoop-yarn/container/application_1484466365663_87038/container_e29_1484466365663_87038_02_000001/stdout 2>/hadoop/log/hadoop-yarn/container/application_1484466365663_87038/container_e29_1484466365663_87038_02_000001/stderr
|- 7613 7448 7448 7448 (java) 22034 2726 6976090112 788011 /jdk8//bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/hadoop/log/hadoop-yarn/container/application_1484466365663_87038/container_e29_1484466365663_87038_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Djava.net.preferIPv4Stack=true -Xmx825955249 -Djava.net.preferIPv4Stack=true -Xmx4096m -Xmx4608m -Djava.io.tmpdir=./tmp org.apache.hadoop.mapreduce.v2.app.MRAppMaster
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.

Maps Total: 1
Total Tasks: 1
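Reading the process-tree dump, the action's -Xmx4096m is appended to the MRAppMaster command line, i.e. the java action is running inside the launcher's AM container, whose 3 GB size is configured separately from the action's heap. A hypothetical sketch of the kind of launcher override involved, written as the properties one would place in the action's configuration (Oozie strips the oozie.launcher. prefix and applies the rest to the launcher job; the exact property names and values here are assumptions to verify against your Oozie version):

```java
import java.util.Properties;

public class LauncherOverrides {
    // Size the launcher AM so the 4 GB child heap fits inside its container.
    public static Properties hdfsCleanerLauncher() {
        Properties p = new Properties();
        // Launcher AM container size (MB); must exceed the action's -Xmx.
        p.setProperty("oozie.launcher.yarn.app.mapreduce.am.resource.mb", "6144");
        // Launcher AM JVM options, kept below the container size.
        p.setProperty("oozie.launcher.yarn.app.mapreduce.am.command-opts", "-Xmx4608m");
        return p;
    }
}
```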
01-29-2017
07:27 PM
Hi, I have a MapReduce job that failed with out of memory. Log:

Application application_1484466365663_87038 failed 2 times due to AM Container for appattempt_1484466365663_87038_000002 exited with exitCode: -104
Diagnostics: Container [pid=7448,containerID=container_e29_1484466365663_87038_02_000001] is running beyond physical memory limits. Current usage: 3.0 GB of 3 GB physical memory used; 6.6 GB of 6.3 GB virtual memory used. Killing container.
Dump of the process-tree for container_e29_1484466365663_87038_02_000001:

When I check the memory configured for the map task and for the Application Master in Cloudera Manager, it's 2 GB. I also checked the job configuration in YARN and it's 2 GB: mapreduce.map.memory.mb = 2 GB.

I have two questions:
1. How do I know whether this container is the AM container or the mapper container? Does the above error indicate that the AM memory was exceeded?
2. Why is it alerting on 3 GB while all my configuration is 2 GB?

The solution is clear to me: I need to increase the memory.
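On question 1: the diagnostic itself says "AM Container for appattempt_..." and the killed container ID ends in _000001, the first container of the attempt, so this is the Application Master, not the mapper. On question 2: the AM container is sized by its own properties; mapreduce.map.memory.mb never touches it, which is why a 3 GB limit can appear while the map setting is 2 GB. A minimal driver-side sketch of the two separate sets of knobs (standard Hadoop 2.x property names; the values are illustrative assumptions):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MemoryKnobs {
    public static Job configure() throws Exception {
        Configuration conf = new Configuration();
        // Mapper container and heap -- what "mapreduce.map.memory.mb = 2 GB" controls.
        conf.setInt("mapreduce.map.memory.mb", 2048);
        conf.set("mapreduce.map.java.opts", "-Xmx1638m"); // ~80% of the container
        // AM container and heap -- a separate pair; the 3 GB kill points here.
        conf.setInt("yarn.app.mapreduce.am.resource.mb", 4096);
        conf.set("yarn.app.mapreduce.am.command-opts", "-Xmx3276m");
        // Caller still sets mapper/reducer classes, I/O paths, then waitForCompletion.
        return Job.getInstance(conf, "memory-knobs-example");
    }
}
```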
Labels:
- Apache YARN
01-26-2017
11:46 AM
Can anyone help with this, please?
01-25-2017
07:38 PM
Hi saranvisa, I got your point, but I'm currently not using any authentication. How can I enforce that queries running through a specific Impala JDBC connection run as a specific user? Maybe I'm missing something in the concept.
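One way the Cloudera Impala JDBC driver can pass an explicit user without Kerberos or LDAP, to the best of my knowledge, is its AuthMech=2 setting, which sends a user name and no password; Impala then runs the session as that user. Treat the property names as assumptions to check against the driver documentation; the host and user are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class ImpalaAsUser {
    public static Connection connectAs(String user) throws Exception {
        // Verify the driver class name against your ImpalaJDBC4.jar.
        Class.forName("com.cloudera.impala.jdbc4.Driver");
        // AuthMech=2: user name only; UID is the user the session runs as.
        String url = "jdbc:impala://xxxxxx:21050/default;AuthMech=2;UID=" + user;
        return DriverManager.getConnection(url);
    }
}
```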