12-28-2018
07:59 AM
Hi @Jalender, @subhash parise, I was able to solve this issue. Based on the log from 'yarn logs -applicationId application_1545806970486_****', the main issue was the following: "java.lang.Exception: java.util.concurrent.ExecutionException: java.lang.VerifyError: Bad return type". The error was caused by the jar I compiled, which had all the dependency libraries packaged inside it (I am using Maven). Running "clean install package" in Maven generated two jar files: one contained all the dependency libraries (file size ~77 MB), the other only the class definitions (file size ~100 KB). I had been using the jar with all the dependency libraries on the Hive cluster, which likely caused the "java.lang.VerifyError: Bad return type" error, as suggested in this post: https://stackoverflow.com/questions/100107/causes-of-getting-a-java-lang-verifyerror. After switching to the second jar with only the class definitions, the insert query with the custom UDF worked. Thank you all for the suggestions.
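In case it helps others, here is how the two artifacts can be told apart from the shell (a minimal sketch; the artifact names and the "jar-with-dependencies" classifier are hypothetical and depend on your pom.xml):

mvn clean package
# the small jar holds only your UDF classes; the large one bundles every dependency
ls -lh target/*.jar
# the bundled jar will also list classes from Hadoop/Hive libraries, which is what clashes at runtime
unzip -l target/my-udf-1.0.jar | head -n 20
unzip -l target/my-udf-1.0-jar-with-dependencies.jar | grep 'org/apache/hadoop' | head -n 5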
12-27-2018
10:42 AM
Hi @kerra, I am also hitting a vertex failed error when I try to insert into a table with the 'Tez' engine in Hive. Could you please mention which user permissions need to be set? I am running Hive as the hdfs user. Thank you.
12-27-2018
09:35 AM
Hi, the issue occurs when I use a custom UDF to insert into the Hive table (the error I posted is from one of the custom UDFs; the other custom UDFs give the same vertex failed error). The UDF itself is fine, as it works when I switch to the 'MR' engine. Queries such as "select count(*) from table" also work even when the engine is set to 'Tez'. Before running the query, I tried setting the parameters suggested in other posts:

set hive.execution.engine=tez;
set hive.auto.convert.join=true;
set hive.auto.convert.join.noconditionaltask=true;
set hive.auto.convert.join.noconditionaltask.size=405306368;
set hive.vectorized.execution.enabled=true;
set hive.vectorized.execution.reduce.enabled=true;
set hive.cbo.enable=true;
set hive.compute.query.using.stats=true;
set hive.stats.fetch.column.stats=true;
set hive.stats.fetch.partition.stats=true;
set hive.merge.mapfiles=true;
set hive.merge.mapredfiles=true;
set hive.merge.size.per.task=134217728;
set hive.merge.smallfiles.avgsize=44739242;
set mapreduce.job.reduce.slowstart.completedmaps=0.8;

But it did not work. So is there some other Tez-specific parameter that needs to be tuned for the query to work?
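For reference, the engine dependence can be reproduced from the shell like this (a minimal sketch; the query is abbreviated, table and UDF names are placeholders, and it assumes the UDF is registered as a permanent function):

hive -e "
set hive.execution.engine=mr;
INSERT OVERWRITE TABLE table_name PARTITION (dateonly='some_date', place='some_place')
SELECT id, UDF(id1, id2) AS record FROM other_table
WHERE dateonly='some_date' AND place='some_place' GROUP BY id;
"   # succeeds

hive -e "
set hive.execution.engine=tez;
INSERT OVERWRITE TABLE table_name PARTITION (dateonly='some_date', place='some_place')
SELECT id, UDF(id1, id2) AS record FROM other_table
WHERE dateonly='some_date' AND place='some_place' GROUP BY id;
"   # fails with the vertex error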
12-27-2018
05:28 AM
Hi, I checked the log file. The entire log is big, so I copied the error from the log and attached it here. The whole log is full of the same error, as shown in the attached file log-hive.txt.
12-26-2018
11:28 AM
Hi, I am getting an error when I try to run a Hive insert query with a UDF. This is the error I get:

"Vertex failed, vertexName=Map 1, vertexId=vertex_1545806970486_0001_1_00, diagnostics=[Task failed, taskId=task_1545806970486_0001_1_00_000097, diagnostics=[TaskAttempt 0 failed, info=[Container container_e13_1545806970486_0001_01_000107 finished with diagnostics set to [Container completed. ]], TaskAttempt 1 killed, TaskAttempt 2 failed, info=[Container container_e13_1545806970486_0001_01_000136 received a STOP_REQUEST], TaskAttempt 3 failed, info=[Container container_e13_1545806970486_0001_01_000158 finished with diagnostics set to [Container completed. ]], TaskAttempt 4 failed, info=[Container container_e13_1545806970486_0001_01_000254 finished with diagnostics set to [Container completed. ]]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:230, Vertex vertex_1545806970486_0001_1_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]
Vertex killed, vertexName=Reducer 2, vertexId=vertex_1545806970486_0001_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:418, Vertex vertex_1545806970486_0001_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]
DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1545806970486_0001_1_00, diagnostics=[Task failed, taskId=task_1545806970486_0001_1_00_000097, diagnostics=[TaskAttempt 0 failed, info=[Container container_e13_1545806970486_0001_01_000107 finished with diagnostics set to [Container completed. ]], TaskAttempt 1 killed, TaskAttempt 2 failed, info=[Container container_e13_1545806970486_0001_01_000136 received a STOP_REQUEST], TaskAttempt 3 failed, info=[Container container_e13_1545806970486_0001_01_000158 finished with diagnostics set to [Container completed. ]], TaskAttempt 4 failed, info=[Container container_e13_1545806970486_0001_01_000254 finished with diagnostics set to [Container completed. ]]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:230, Vertex vertex_1545806970486_0001_1_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1545806970486_0001_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:418, Vertex vertex_1545806970486_0001_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1"

The error does not say much except that the vertex failed. I have already checked the other posts that mention a similar issue, but they did not help:
https://community.hortonworks.com/questions/48549/hive-vertex-issue.html
https://community.hortonworks.com/questions/24730/hive-job-failed-on-tez.html
https://community.hortonworks.com/questions/90648/hive-error-vertex-failed.html
https://community.hortonworks.com/questions/140266/hive-query-error-with-vertex-failed-on-partitioned.html
https://community.hortonworks.com/questions/141485/tez-vertex-failed-due-to-its-own-failuredag-did-no.html
https://community.hortonworks.com/questions/222722/hive-query-fails-in-tez-runs-in-mr-mode.html

I tried to set the configuration as mentioned in one of the posts:

set hive.execution.engine=tez;
set hive.auto.convert.join=true;
set hive.auto.convert.join.noconditionaltask=true;
set hive.auto.convert.join.noconditionaltask.size=405306368;
set hive.vectorized.execution.enabled=true;
set hive.vectorized.execution.reduce.enabled=true;
set hive.cbo.enable=true;
set hive.compute.query.using.stats=true;
set hive.stats.fetch.column.stats=true;
set hive.stats.fetch.partition.stats=true;
set hive.merge.mapfiles=true;
set hive.merge.mapredfiles=true;
set hive.merge.size.per.task=134217728;
set hive.merge.smallfiles.avgsize=44739242;
set mapreduce.job.reduce.slowstart.completedmaps=0.8;

But this also did not work. My table structure is as follows:

CREATE TABLE table_name(id string, record ARRAY<ARRAY<string>>)
PARTITIONED BY (dateonly string, place string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '#'
COLLECTION ITEMS TERMINATED BY ','
MAP KEYS TERMINATED BY '!'
LINES TERMINATED BY '\n'
STORED AS SEQUENCEFILE;

My query structure is as follows:

INSERT OVERWRITE TABLE table_name PARTITION (dateonly='some_date', place='some_place')
select id, UDF(id1,id2.......) as record from other_table where dateonly = 'some_date' and place = 'some_place' group by id;

The thing is, if I change the execution engine to 'mr' it works, but with the 'tez' execution engine this error keeps coming. Please kindly help if anyone has a solution for this issue. Thanks in advance.
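Since the DAG summary only reports OWN_TASK_FAILURE, the real exception has to be pulled from the YARN container logs; a minimal sketch, assuming the application id is the one embedded in the vertex id above:

yarn logs -applicationId application_1545806970486_0001 > tez_app.log
# the first 'Caused by' usually names the actual exception behind the vertex failure
grep -m 5 -A 10 'Caused by' tez_app.log
# common UDF/classpath culprits
grep -m 5 -E 'VerifyError|ClassNotFoundException|NoSuchMethodError' tez_app.log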
Labels: Apache Hive
12-12-2018
01:34 AM
Thank you very much for the explanation regarding the workaround to delete the rows.
12-10-2018
04:30 AM
Hi, the table is stored as TEXTFILE. It is not in ORC format and bucketing is not enabled. In such a case, is there a workaround to delete rows from the table?
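A common workaround for non-ACID tables is to rewrite the affected partition, keeping only the rows that should survive; a minimal sketch with placeholder table, column, and partition names, not necessarily the exact approach suggested in this thread (note that the SELECT lists only the non-partition columns):

hive -e "
INSERT OVERWRITE TABLE my_table PARTITION (dateonly='2018-12-01', place='some_place')
SELECT id, record FROM my_table
WHERE dateonly='2018-12-01' AND place='some_place'
  AND NOT (id = 'row_to_delete');
"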
12-07-2018
09:33 AM
I am trying to delete some rows from my Hive table, which has partitions. This is what I ran:

delete from <table_name> where <condition>;

However, I am getting the following error:

FAILED: SemanticException [Error 10294]: Attempt to do update or delete using transaction manager that does not support these operations.

Could anyone suggest why the query is not working? Thanks in advance.
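For context, Error 10294 means the session is not using Hive's ACID transaction manager and the table is not ACID-compatible; a rough sketch of the prerequisites for DELETE (table name is a placeholder, and the exact settings can vary by Hive version):

hive -e "
set hive.support.concurrency=true;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
-- DELETE only works on bucketed ORC tables created as transactional
CREATE TABLE my_acid_table (id string, record string)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
DELETE FROM my_acid_table WHERE id = 'some_id';
"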
Labels: Apache Hive
09-11-2017
12:13 PM
Hi, I have Ambari version 2.5.2.0 with 4 nodes: 1 master and 3 slaves. In the HDFS config properties I have set:

Block replication = 3
dfs.replication.max = 3 (the default value was 50)

If I did not set dfs.replication.max to 3, I was getting some under-replicated blocks; with dfs.replication.max = 3 there were no under-replicated blocks. Now the issue: I have another, older Hadoop system (Hadoop 1.2.1) with 10 nodes, and I want to copy the data from the old Hadoop system to HDP using 'distcp'. But when I try to copy, I keep getting the error 'Requested replication 10 exceeds maximum 3'. Why is this error generated when I try to use distcp? Please help solve this issue. Regards, Saurav Ranjit
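The files on the old cluster carry a replication factor of 10, which the destination rejects because dfs.replication.max is 3; one common fix is to override the replication factor for the distcp job itself (a sketch with placeholder host names and paths):

hadoop distcp \
  -D dfs.replication=3 \
  hdfs://old-cluster-nn:8020/source/path \
  hdfs://new-hdp-nn:8020/target/path

Note that when copying between different major Hadoop versions (1.x to 2.x here), the source is usually read over hftp:// or webhdfs:// instead of hdfs://, with the job run from the destination cluster.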
Tags: Distcp, replication
Labels: Apache Hadoop
02-18-2017
11:04 AM
I could solve the Ambari start issue by removing my existing Postgres and using the Postgres bundled with Ambari itself. I started Ambari and tried to configure it from the web browser to deploy HDP. But now I get another issue: in step three, "Confirm Host", the status is "installing", and after running for 2 hours there is no change; the status is still "installing". Also, no ambari-agent log folder is created inside /var/log/. I have attached a picture of my status. This is taking too long with no result. How can I check whether the agents are being installed on the host machines? Thank you.
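A few checks that may show whether agent installation is actually progressing (a sketch; the host name is a placeholder, and the bootstrap directory location is my assumption and can vary by Ambari version):

# on a target host: was the package installed, and is the agent running?
ssh root@slave-node-1 "rpm -qa | grep ambari-agent; ambari-agent status"
# on the Ambari server: bootstrap output for each registration attempt
ls /var/run/ambari-server/bootstrap/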
02-17-2017
07:54 AM
It did not work. However, I uninstalled my existing PostgreSQL and used the Postgres from the Ambari installation itself. Doing this solved the problem and the server could start. But now I get another issue while launching the Ambari install wizard: in the "Confirm Host" step my setup fails. I also get the warning "The following hostnames are not valid FQDNs:", but it allows me to go to the next page. The failure message in Confirm Host is only:

==========================
Creating target directory...
==========================
Command start time 2017-02-17 16:45:13

Is there any log file where I can see why it failed?
02-17-2017
02:48 AM
I checked my PostgreSQL status with "sudo systemctl status postgresql-9.4". PostgreSQL seems to be running. Here is the output of the command:

● postgresql-9.4.service - PostgreSQL 9.4 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-9.4.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2017-02-17 10:48:21 JST; 52min ago
Main PID: 1089 (postgres)
CGroup: /system.slice/postgresql-9.4.service
├─1089 /usr/pgsql-9.4/bin/postgres -D /var/lib/pgsql/9.4/data
├─2284 postgres: logger process
├─2304 postgres: checkpointer process
├─2305 postgres: writer process
├─2306 postgres: wal writer process
├─2307 postgres: autovacuum launcher process
├─2308 postgres: stats collector process
└─6416 postgres: ambari ambaridatabase [local] idle

However, the issue still remains. I tried to set up the server again, but there is still the same issue.
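Since the service is up, the next thing worth ruling out is Ambari's connection to the database; a quick sketch using the database and user visible in the process list above (the password is whatever was set during ambari-server setup):

# connect as the ambari user to the ambari database
psql -h localhost -U ambari -d ambaridatabase -c '\conninfo'
# check that the Ambari schema tables exist
psql -h localhost -U ambari -d ambaridatabase -c '\dt'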
02-16-2017
01:12 PM
Hi, I am trying to use Ambari for HDP. So far I have installed the Ambari server successfully. However, when I start the server I get the following error:

" Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
No errors were found.
ERROR: Exiting with exit code 1.
REASON: Database check failed to complete. Please check /var/log/ambari-server/ambari-server.log and /var/log/ambari-server/ambari-server-check-database.log for more information. "

OS: CentOS 7
Java: Oracle Java 1.7
Database: PostgreSQL 9.4

I have attached the server log files: ambari-server.txt and ambari-server-check-database.txt. Thank you for the help.
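The quickest way to see the underlying failure is to read the two logs the message names:

tail -n 50 /var/log/ambari-server/ambari-server-check-database.log
grep -iE 'error|exception' /var/log/ambari-server/ambari-server.log | tail -n 20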
Labels: Apache Ambari