
Hive - tez , vertex failed during map phase - HDP 3.1.4


Hi there,

 

I'm running Hive on HDP 3.1.4 with Kerberos enabled.

Currently there is a strange issue: some SQL succeeds only when we run it as the system user `hive`; run as any other user, it fails with a map-phase error.

 

A simple reproduction case is as follows:

1. Create a table named `test` with columns (id int, val string) and data [(1, '1'), (2, '2')], e.g. `CREATE TABLE test (id INT, val STRING);` followed by `INSERT INTO test VALUES (1, '1'), (2, '2');`.

2. Run the SQL as user `hive`: `select * from test t join test t1 on t.id=t1.id;`. It succeeds:

[screenshot: result of the join query when run as user hive]

3. Run the same SQL as some other user (e.g. `test`): `select * from test t join test t1 on t.id=t1.id;`. It fails with a vertex error:

```

INFO : Compiling command(queryId=hive_20210119153558_48453ef9-4b06-48ed-b9f6-f64f82992d81): select * from test t join test t1 on t.id=t1.id
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:t.id, type:int, comment:null), FieldSchema(name:t.val, type:string, comment:null), FieldSchema(name:t1.id, type:int, comment:null), FieldSchema(name:t1.val, type:string, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20210119153558_48453ef9-4b06-48ed-b9f6-f64f82992d81); Time taken: 0.211 seconds
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Executing command(queryId=hive_20210119153558_48453ef9-4b06-48ed-b9f6-f64f82992d81): select * from test t join test t1 on t.id=t1.id
INFO : Query ID = hive_20210119153558_48453ef9-4b06-48ed-b9f6-f64f82992d81
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode
INFO : Subscribed to counters: [] for queryId: hive_20210119153558_48453ef9-4b06-48ed-b9f6-f64f82992d81
INFO : Session is already open
INFO : Dag name: select * from test t join test ...t.id=t1.id (Stage-1)
INFO : Setting tez.task.scale.memory.reserve-fraction to 0.30000001192092896
INFO : Tez session was closed. Reopening...
INFO : Session re-established.
INFO : Session re-established.
INFO : Status: Running (Executing on YARN cluster with App id application_1610695610027_0489)

----------------------------------------------------------------------------------------------
        VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
----------------------------------------------------------------------------------------------
Map 2 ..........      container   SUCCEEDED     1        1        0       0       4       0
Map 1                 container     RUNNING     1        0        1       0       0       0
----------------------------------------------------------------------------------------------
VERTICES: 01/02  [=============>>-------------] 50%  ELAPSED TIME: 5.98 s
----------------------------------------------------------------------------------------------
ERROR : Status: Failed
ERROR : Vertex re-running, vertexName=Map 2, vertexId=vertex_1610695610027_0489_1_00
ERROR : Vertex re-running, vertexName=Map 2, vertexId=vertex_1610695610027_0489_1_00
ERROR : Vertex re-running, vertexName=Map 2, vertexId=vertex_1610695610027_0489_1_00
ERROR : Vertex failed, vertexName=Map 2, vertexId=vertex_1610695610027_0489_1_00, diagnostics=[Vertex vertex_1610695610027_0489_1_00 [Map 2] killed/failed due to:OWN_TASK_FAILURE, Vertex vertex_1610695610027_0489_1_00 [Map 2] failed as task task_1610695610027_0489_1_00_000000 failed after vertex succeeded.]
ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0
INFO : org.apache.tez.common.counters.DAGCounter:
INFO : NUM_FAILED_TASKS: 4
INFO : NUM_KILLED_TASKS: 1
INFO : NUM_SUCCEEDED_TASKS: 4
INFO : TOTAL_LAUNCHED_TASKS: 5
INFO : AM_CPU_MILLISECONDS: 3300
INFO : AM_GC_TIME_MILLIS: 0
INFO : org.apache.hadoop.hive.ql.exec.tez.HiveInputCounters:
INFO : GROUPED_INPUT_SPLITS_Map_1: 1
INFO : GROUPED_INPUT_SPLITS_Map_2: 1
INFO : INPUT_DIRECTORIES_Map_1: 1
INFO : INPUT_DIRECTORIES_Map_2: 1
INFO : INPUT_FILES_Map_1: 1
INFO : INPUT_FILES_Map_2: 1
INFO : RAW_INPUT_SPLITS_Map_1: 1
INFO : RAW_INPUT_SPLITS_Map_2: 1
ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex re-running, vertexName=Map 2, vertexId=vertex_1610695610027_0489_1_00Vertex re-running, vertexName=Map 2, vertexId=vertex_1610695610027_0489_1_00Vertex re-running, vertexName=Map 2, vertexId=vertex_1610695610027_0489_1_00Vertex failed, vertexName=Map 2, vertexId=vertex_1610695610027_0489_1_00, diagnostics=[Vertex vertex_1610695610027_0489_1_00 [Map 2] killed/failed due to:OWN_TASK_FAILURE, Vertex vertex_1610695610027_0489_1_00 [Map 2] failed as task task_1610695610027_0489_1_00_000000 failed after vertex succeeded.]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0
INFO : Completed executing command(queryId=hive_20210119153558_48453ef9-4b06-48ed-b9f6-f64f82992d81); Time taken: 12.207 seconds
INFO : Concurrency mode is disabled, not creating a lock manager
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex re-running, vertexName=Map 2, vertexId=vertex_1610695610027_0489_1_00Vertex re-running, vertexName=Map 2, vertexId=vertex_1610695610027_0489_1_00Vertex re-running, vertexName=Map 2, vertexId=vertex_1610695610027_0489_1_00Vertex failed, vertexName=Map 2, vertexId=vertex_1610695610027_0489_1_00, diagnostics=[Vertex vertex_1610695610027_0489_1_00 [Map 2] killed/failed due to:OWN_TASK_FAILURE, Vertex vertex_1610695610027_0489_1_00 [Map 2] failed as task task_1610695610027_0489_1_00_000000 failed after vertex succeeded.]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0 (state=08S01,code=2)

```

 

From the log above, we can see that a Map task failed 4 times, which caused the query to fail. Since I cannot upload files, I didn't attach the full application log.

 

I would really appreciate it if anyone could take a look. Thanks!

 

 

1 ACCEPTED SOLUTION


Finally, I managed to solve this issue. I'd like to add some details for others who might be caught in the same trap.

 

The root cause is that user UIDs in our LDAP clash with the UIDs of HDP internal users (e.g. hive, spark, infra-solr, hbase...).

 

To be more specific, in the case above the user `test` has UID 1002 in our LDAP configuration, while the local HDP user `infra-solr` has the same UID.
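
For illustration, here is a quick way to see how the name resolution goes wrong on a cluster node (a minimal Python sketch; the UID value 1002 is the one from our environment):

```
import pwd

# Ask NSS which user name the numeric UID resolves to. With a local
# 'infra-solr' entry and an LDAP 'test' entry sharing UID 1002, the
# local passwd entry typically wins the lookup.
print(pwd.getpwuid(1002).pw_name)  # prints 'infra-solr', not 'test'
```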

 

From the log, we can see some INPUT_READ_ERROR events; the error is raised by a downstream task complaining that it cannot fetch data from its upstream task.

The reason behind it is that when the downstream task tries to fetch the upstream task's output from the shuffle service (which runs on the YARN NodeManager as an aux-service), a lot of HTTP 500 errors are raised.

 

Why does the shuffle service raise 500? Before serving a file, it checks the ownership of the data file written by the upstream task, finds that the user name does not match the expected one, and raises an error.

 

When we look at the NodeManager logs, we can see a bunch of errors like the one below:

 

Caused by: java.io.IOException: Owner 'infra-solr' for path /xxx/hadoop/yarn/local/usercache/xxx/appcache/application_1610695610027_0576/output/attempt_1610695610027_0576_2_00_000017_0_10002/file.out.index did not match expected owner 'test'
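
For illustration, here is a minimal Python sketch of the kind of owner check that produces this error (the real check lives in Hadoop's SecureIOUtils, in Java):

```
import os
import pwd

def check_owner(path, expected_owner):
    # Mimic the check the shuffle service performs before serving a file:
    # stat the file, resolve its numeric UID back to a name via NSS, and
    # compare against the user the upstream task ran as.
    actual_owner = pwd.getpwuid(os.stat(path).st_uid).pw_name
    if actual_owner != expected_owner:
        # With clashing UIDs, a file written by 'test' resolves to
        # 'infra-solr', which is exactly the error quoted above.
        raise IOError("Owner '%s' for path %s did not match expected owner '%s'"
                      % (actual_owner, path, expected_owner))
```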

 

The related source code:

https://github.com/apache/hadoop/blob/master/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop... 

https://github.com/apache/tez/blob/master/tez-plugins/tez-aux-services/src/main/java/org/apache/tez/... 

 

The lesson here: when we configure UIDs/GIDs in LDAP, we should be careful about potential conflicts with the local system accounts.
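
As a starting point, here is a sketch of how one might scan a node for such collisions, assuming the LDAP users are visible through NSS enumeration (SSSD often disables enumeration, in which case you would compare an exported LDAP user list against /etc/passwd instead):

```
import pwd
from collections import defaultdict

# Group every account visible to NSS (local passwd entries plus LDAP
# users, if enumeration is enabled) by numeric UID.
by_uid = defaultdict(list)
for entry in pwd.getpwall():
    by_uid[entry.pw_uid].append(entry.pw_name)

# Report any UID shared by more than one account name.
for uid, names in sorted(by_uid.items()):
    if len(names) > 1:
        print("UID %d is shared by: %s" % (uid, ", ".join(names)))
```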

 


2 REPLIES


I'd like to add more information after digging into the logs. I found the debug log below, with INPUT_READ_ERROR_EVENT triggered (from log file: syslog_dag_1610695610027_0484_1):

[screenshot: debug log showing INPUT_READ_ERROR_EVENT entries]

It's from a similar SQL statement with some aggregation, but it raised the same error from the Hive shell.

 

It looks like when the Reducer tried to read the Mapper's output, some error occurred and the data could not be passed from the Mapper to the Reducer.

 

The question still remains: why does the SQL (on Tez) succeed for the `hive` user but fail for other users?

 

 
