Member since: 03-28-2016 · 194 Posts · 18 Kudos Received · 0 Solutions
05-11-2016
12:20 PM
I need to update the parameters below in my cluster. Is updating hdfs-site.xml and yarn-site.xml on the Ambari server enough? Will the changes below propagate to all the NameNodes and DataNodes after a service restart? NOTE: Do I need to restart the entire cluster, or is restarting just HDFS and YARN enough?
dfs.namenode.safemode.threshold-pct = 0.999f
ha.zookeeper.acl = sasl:nn:rwcda
dfs.datanode.max.transfer.threads = 16384
yarn.nodemanager.vmem-pmem-ratio = 2.1
yarn.resourcemanager.recovery.enabled = true
yarn.resourcemanager.work-preserving-recovery.enabled = true
Please advise.
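For reference, in an Ambari-managed cluster these keys are normally changed through the Ambari UI (HDFS/YARN → Configs), which versions the change and pushes it to every node at the next restart; hand-edited xml files get overwritten by the Ambari agent. As a hedged sketch, Ambari 2.x also ships a command-line helper for the same job (the host and cluster names below are placeholders, and the script path assumes a default Ambari server install):

```shell
# Assumption: Ambari 2.x server host; helper usage is
#   configs.sh [-u user] [-p password] set <AMBARI_HOST> <CLUSTER> <CONFIG_TYPE> <KEY> <VALUE>
cd /var/lib/ambari-server/resources/scripts

# HDFS property - picked up by all NameNodes/DataNodes on service restart
./configs.sh set ambari.example.com MyCluster hdfs-site \
    "dfs.datanode.max.transfer.threads" "16384"

# YARN property - picked up by the ResourceManager/NodeManagers on restart
./configs.sh set ambari.example.com MyCluster yarn-site \
    "yarn.resourcemanager.recovery.enabled" "true"
```

In general only the services Ambari marks with a stale-configuration indicator (here HDFS and YARN, plus their clients) need a restart; unrelated services can keep running.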
05-11-2016
08:59 AM
Hi Predrag, I also need to add the dfs.namenode.replication.min=1 property, right? (I don't have it on my cluster.)
05-10-2016
02:50 PM
Shall I update the configuration below in hdfs-site.xml through Ambari? That will update the HA NameNodes and all DataNodes, correct?
dfs.namenode.safemode.threshold-pct=0.999f
dfs.namenode.replication.min=1
Any suggestions? I don't use Chef or Puppet.
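For illustration, this is how the two entries would look if written directly into hdfs-site.xml; in an Ambari-managed cluster the same keys are instead set under HDFS → Configs (Advanced/Custom hdfs-site), so Ambari can distribute them to the HA NameNodes and all DataNodes. The comments paraphrase the standard hdfs-default descriptions:

```xml
<!-- Fraction of blocks that must satisfy minimal replication before the
     NameNode leaves safe mode automatically -->
<property>
  <name>dfs.namenode.safemode.threshold-pct</name>
  <value>0.999f</value>
</property>

<!-- Minimal block replication; blocks replicated at least this many times
     count toward the safe-mode threshold above -->
<property>
  <name>dfs.namenode.replication.min</name>
  <value>1</value>
</property>
```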
05-10-2016
02:23 PM
Correct, but I have to change the value to 0.999f; currently I have dfs.namenode.safemode.threshold-pct=1. And I don't have dfs.namenode.replication.min=1 on my cluster, so I have to add that as well. That is why I raised the question.
05-10-2016
02:11 PM
Thanks a lot. So I need to update the above parameters in hdfs-site.xml. Do I need to stop Hive, HBase, and the other components in the cluster, or will stopping just HDFS and YARN, updating the property, and restarting work? Please advise.
05-10-2016
01:49 PM
2 Kudos
Hi Team, I am implementing SmartSense on my cluster and need your help before I start. I have to change the parameters below before implementing SmartSense. Can someone please explain whether the dependency property is also required along with the parent property?
Property whose value needs to change: dfs.namenode.safemode.threshold-pct
Dependency property, which is not in my cluster: dfs.namenode.replication.min. Is this property a must?
04-26-2016
10:53 AM
I am facing the same issue in my production cluster, which runs Ambari 2.1.2. I have a question: if this is a Python Kerberos issue, why are we editing the Oozie Python file? Can you please explain?
03-29-2016
06:58 AM
Hi team, how do I configure Ranger for HBase column-level security? Can someone provide a document where I can learn about Ranger in detail? Regards, BK
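No sample appears in the thread, but as a hedged sketch: Ranger HBase policies can be created in the Ranger Admin UI (Access Manager → your HBase service → Add New Policy, which exposes table, column-family, and column fields) or through Ranger's public REST API. The host, credentials, service name, table, and user below are placeholders; the resource keys follow Ranger's HBase service definition:

```shell
# Hypothetical example: allow user 'analyst' read access only to
# column 'cf1:salary' of HBase table 'employees'.
# 'ranger-admin.example.com', 'admin:PASSWORD', and 'cluster_hbase'
# are placeholders for your Ranger host, credentials, and service name.
curl -u admin:PASSWORD -H "Content-Type: application/json" \
  -X POST "http://ranger-admin.example.com:6080/service/public/v2/api/policy" \
  -d '{
    "service": "cluster_hbase",
    "name": "employees-salary-read",
    "resources": {
      "table":         {"values": ["employees"]},
      "column-family": {"values": ["cf1"]},
      "column":        {"values": ["salary"]}
    },
    "policyItems": [{
      "users": ["analyst"],
      "accesses": [{"type": "read", "isAllowed": true}]
    }]
  }'
```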
Labels:
- Apache Ranger
03-29-2016
06:55 AM
Is anyone aware of this error?
[Reducer 17] killed/failed due to:OTHER_VERTEX_FAILURE]
Vertex killed, vertexName=Map 16, vertexId=vertex_1457392972594_12162_1_16, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:5, Vertex vertex_1457392972594_12162_1_16 [Map 16] killed/failed due to:OTHER_VERTEX_FAILURE]
DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:11 (state=08S01,code=2)
Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:45, Vertex vertex_1457392972594_12162_1_21 [Reducer 2] killed/failed due to:OWN_TASK_FAILURE]Vertex killed,
Regards, BK
03-28-2016
10:05 AM
4 Kudos
A Hive job failed while using Tez as the execution engine: "Container killed on request. Exit code is 143. Container exited with a non-zero exit code 143."
____________________________________________________
TaskAttempt 3 failed, info=[Container container_1457392972594_9109_01_000505 finished with diagnostics set to [Container failed, exitCode=-104. Container [pid=75464,containerID=container_1457392972594_9109_01_000505] is running beyond physical memory limits. Current usage: 4.0 GB of 4 GB physical memory used; 7.4 GB of 8.4 GB virtual memory used. Killing container. Dump of the process-tree for container_1457392972594_9109_01_000505]
Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:250, Vertex vertex_1457392972594_8881_1_25 [Reducer 12] killed/failed due to:OWN_TASK_FAILURE]
ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:11
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask.
Vertex re-running, vertexName=Map 27, vertexId=vertex_1457392972594_8881_1_00
Vertex re-running, vertexName=Map 25, vertexId=vertex_1457392972594_8881_1_05
Vertex re-running, vertexName=Map 7, vertexId=vertex_1457392972594_8881_1_11
Vertex re-running, vertexName=Map 16, vertexId=vertex_1457392972594_8881_1_16
Vertex re-running, vertexName=Reducer 17, vertexId=vertex_1457392972594_8881_1_17
Vertex re-running, vertexName=Map 5, vertexId=vertex_1457392972594_8881_1_12
Vertex re-running, vertexName=Map 23, vertexId=vertex_1457392972594_8881_1_03
Vertex re-running, vertexName=Map 14, vertexId=vertex_1457392972594_8881_1_23
Vertex re-running, vertexName=Map 27, vertexId=vertex_1457392972594_8881_1_00
Vertex re-running, vertexName=Map 20, vertexId=vertex_1457392972594_8881_1_02
Vertex re-running, vertexName=Map 9, vertexId=vertex_1457392972594_8881_1_09
Vertex re-running, vertexName=Map 18, vertexId=vertex_1457392972594_8881_1_13
Vertex re-running, vertexName=Map 5, vertexId=vertex_1457392972594_8881_1_12
Vertex re-running, vertexName=Map 18, vertexId=vertex_1457392972594_8881_1_13
Vertex re-running, vertexName=Map 27, vertexId=vertex_1457392972594_8881_1_00
Vertex re-running, vertexName=Map 23, vertexId=vertex_1457392972594_8881_1_03
Vertex re-running, vertexName=Map 14, vertexId=vertex_1457392972594_8881_1_23
Vertex re-running, vertexName=Map 24, vertexId=vertex_1457392972594_8881_1_04
Vertex re-running, vertexName=Reducer 8, vertexId=vertex_1457392972594_8881_1_14
Vertex failed, vertexName=Reducer 12, vertexId=vertex_1457392972594_8881_1_25, diagnostics=[Task failed, taskId=task_1457392972594_8881_1_25_000011, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
Caused by: java.lang.OutOfMemoryError: Java heap space
TaskAttempt 1 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:250, Vertex vertex_1457392972594_8881_1_25 [Reducer 12] killed/failed due to:OWN_TASK_FAILURE]
[Task failed, taskId=task_1457392972594_8881_1_25_000011, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
Container killed on request. Exit code is 143. Container exited with a non-zero exit code 143 ]]
TaskAttempt 3 failed, info=[Container container_1457392972594_9109_01_000505 finished with diagnostics set to [Container failed, exitCode=-104. Container [pid=75464,containerID=container_1457392972594_9109_01_000505] is running beyond physical memory limits. Current usage: 4.0 GB of 4 GB physical memory used; 7.4 GB of 8.4 GB virtual memory used. Killing container.
Session settings in use:
set hive.execution.engine=tez;
--set hive.execution.engine=mr;
set role admin;
set hive.support.sql11.reserved.keywords=false;
set hive.vectorized.execution.enabled=false;
Can someone please help?
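The log shows two related symptoms: YARN killed the container for exceeding its physical memory limit (exitCode=-104, 4.0 GB of 4 GB used), and Reducer 12 hit a Java heap OutOfMemoryError. A common first step, sketched below with illustrative values only (the property names are standard Hive/Tez settings, but the numbers are assumptions and must be tuned to the cluster's YARN limits):

```sql
-- Give each Tez container more memory than the current 4 GB limit
-- (value in MB; must not exceed yarn.scheduler.maximum-allocation-mb)
set hive.tez.container.size=6144;

-- Keep the task JVM heap below the container size, roughly 80% of it,
-- so the container has headroom for non-heap memory
set hive.tez.java.opts=-Xmx4915m;

-- Optionally route less data to each reducer so a single reducer
-- (like Reducer 12 here) is less likely to exhaust its heap
set hive.exec.reducers.bytes.per.reducer=268435456;
```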
Labels:
- Apache Hive
- Apache Tez