Member since
05-29-2017
408
Posts
123
Kudos Received
9
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2785 | 09-01-2017 06:26 AM |
| | 1696 | 05-04-2017 07:09 AM |
| | 1459 | 09-12-2016 05:58 PM |
| | 2060 | 07-22-2016 05:22 AM |
| | 1625 | 07-21-2016 07:50 AM |
09-01-2017
06:26 AM
I resolved it by setting the following property: SET mapred.input.dir.recursive=true;

Root cause: we had run an INSERT OVERWRITE, which replaced each part file with a directory of the same name and wrote the data files underneath those directories. The MR execution engine does not scan input directories recursively, which is why the query returned no rows when running on MR.

[s0998dnz@m1.hdp22 ~]$ hadoop fs -ls hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/
Found 50 items
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:08 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000000_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:08 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000001_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:08 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000002_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:08 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000003_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:09 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000004_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:09 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000005_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:09 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000006_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:09 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000007_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:10 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000008_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:10 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000009_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:10 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000010_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:10 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000011_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:11 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000012_0
drwxr-x--- - dmcraig hdfs 0 2017-08-23 12:11 hdfs://m1.hdp22:8020/apps/hive/warehouse/test_db.db/table1/000013_0
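The fix above can be sketched as a session-level HiveQL snippet. The table and database names come from the post; `hive.mapred.supports.subdirectories` is an additional property that is commonly paired with the recursive setting on the MR engine and is my assumption here, not something the original answer sets:

```sql
-- Tell the MR input format to descend into subdirectories:
SET mapred.input.dir.recursive=true;
-- Commonly set together with the above on MR (assumption, verify for your Hive version):
SET hive.mapred.supports.subdirectories=true;

-- Re-run the query that previously returned no rows on the MR engine:
SET hive.execution.engine=mr;
SELECT * FROM test_db.table1 LIMIT 25;
```

On a cluster these would normally go in hive-site.xml rather than being set per session.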
08-31-2017
01:16 PM
I am stuck on a weird error. When I run a select statement with set hive.execution.engine=mr;, then select * from table returns no rows in Beeline, but when I run it with Tez it returns results. Can someone please help me understand and solve this issue? Note: I checked the HS2 logs and do not see any error, only the following entries:

2017-08-31 09:02:02,239 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command: select * from table1 limit 25
2017-08-31 09:02:02,241 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(855)) - 3: get_table : db=test_db tbl=table1
2017-08-31 09:02:02,241 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(406)) - ugi=saurkuma ip=unknown-ip-addr cmd=get_table : db=test_db tbl=table1
2017-08-31 09:02:02,260 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(855)) - 3: get_table : db=test_db tbl=table1
2017-08-31 09:02:02,260 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(406)) - ugi=saurkuma ip=unknown-ip-addr cmd=get_table : db=test_db tbl=table1
2017-08-31 09:02:02,269 INFO [HiveServer2-HttpHandler-Pool: Thread-104]: ql.Driver (Driver.java:getSchema(253)) - Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:table1.cus_id, type:int, comment:null), FieldSchema(name:table1.prx_nme, type:char(15), comment:null), FieldSchema(name:table1.fir_nme, type:char(15), comment:null), FieldSchema(name:table1.mid_1_nme, type:char(15), comment:null), FieldSchema(name:table1.mid_2_nme, type:char(15), comment:null), FieldSchema(name:table1.mid_3_nme, type:char(15), comment:null), FieldSchema(name:table1.lst_nme, type:char(30), comment:null), FieldSchema(name:table1.sfx_nme, type:char(5), comment:null), FieldSchema(name:table1.gen_nme, type:char(10), comment:null), FieldSchema(name:table1.lic_st_abr_id, type:char(2), comment:null), FieldSchema(name:table1.dsd_idc, type:char(1), comment:null)], properties:null)
2017-08-31 09:02:02,271 INFO [HiveServer2-Background-Pool: Thread-161143]: ql.Driver (Driver.java:execute(1411)) - Starting command(queryId=hive_20170831090202_3dbbdf1c-c061-4289-b4dd-a2934cbec04d): select * from table1 limit 25
2017-08-31 09:02:02,278 INFO [Atlas Logger 2]: hook.HiveHook (HiveHook.java:registerProcess(697)) - Skipped query select * from table1 limit 25 for processing since it is a select query
Labels:
- Apache Atlas
- Apache Hive
- Apache Tez
08-22-2017
12:52 PM
I am not able to perform any Hive operation, such as a select count on one table, and I get the following error: Error: Error while compiling statement: FAILED: SemanticException [Error 10265]: This command is not allowed on an ACID table sample_table with a non-ACID transaction manager. Failed command: null (state=42000,code=10265). When I checked the table properties I saw 'transactional'='true', so can someone please help me resolve it?
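The error says the table is ACID but the session is using a non-ACID transaction manager. A likely fix, assuming the table genuinely needs to stay transactional (this is my inference from the error text, not something confirmed in the thread), is to enable Hive's ACID transaction manager:

```sql
-- Session-level sketch; on a cluster these normally belong in hive-site.xml.
-- Required for reading/writing ACID tables:
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;

-- The failing statement should now compile:
SELECT COUNT(*) FROM sample_table;
```

If the table was made transactional by accident, the alternative is to recreate it (or CTAS into a new table) without 'transactional'='true', since that property cannot simply be unset on an existing ACID table.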
Labels:
- Apache Hive
08-05-2017
06:58 AM
@Eyad Garelnabi Thanks for your help. We will wait for it.
08-04-2017
11:59 AM
I am using the Atlas tag sync and data lineage functionality on HDP 2.6.1, and I now have a requirement to implement tag carry-forward. I am wondering whether this functionality exists. Specifically, if we have a table a with a tag linked to one of its columns, and we later create another table b from it, will the linked tags be carried forward or not?
Labels:
- Apache Atlas
05-15-2017
08:51 AM
Hello @Nixon Rodrigues, I checked application.log and found the following error:

Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userService': Injection of autowired dependencies failed;
  nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: private org.apache.atlas.web.dao.UserDao org.apache.atlas.web.service.UserService.userDao;
  nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'userDao': Invocation of init method failed;
  nested exception is java.lang.RuntimeException: org.apache.atlas.AtlasException: /usr/hdp/current/atlas-server/conf/users-credentials.properties not found in file system or as class loader resource and /usr/hdp/current/atlas-server/conf/policy-store.txt not found in file system or as class loader resource.

So these files were indeed not present. I don't know whether it is expected Atlas behavior that we have to create them manually, or whether this was specific to my environment, but once I created and populated these two files, Atlas started working.
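A minimal sketch of creating the two missing files. Assumptions to note: the entry format user=ROLE::sha256(password) and the policy line below approximate the defaults Atlas ships with (the hash shown is sha256 of the string "admin", i.e. the stock admin/admin credential), and a local ./atlas-conf directory stands in for /usr/hdp/current/atlas-server/conf. Verify both file formats against your Atlas version before using this:

```shell
# Stand-in for /usr/hdp/current/atlas-server/conf (adjust on a real cluster).
ATLAS_CONF=./atlas-conf
mkdir -p "$ATLAS_CONF"

# users-credentials.properties: user=ROLE::sha256(password).
# 8c6976e5... is sha256("admin"), the default admin/admin credential.
cat > "$ATLAS_CONF/users-credentials.properties" <<'EOF'
admin=ADMIN::8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
EOF

# policy-store.txt: policyName;;user:perms;;group:perms;;resources
# (approximation of the default shipped policy -- verify for your version).
cat > "$ATLAS_CONF/policy-store.txt" <<'EOF'
adminPolicy;;admin:rwud;;ROLE_ADMIN:rwud;;type:*,entity:*,operation:*
EOF
```

After creating the files, restart the Atlas server so it picks them up.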
05-13-2017
06:58 AM
I see the following in those files.

netstat -tnlpa | grep 21000
(No info could be read for "-p": geteuid()=4569 but you should be root.)
tcp 0 0 0.0.0.0:21000 0.0.0.0:* LISTEN -

cat atlas.20170512-081144.out
{metadata.broker.list=<server>:6667, request.timeout.ms=30000, client.id=atlas, security.protocol=PLAINTEXT}

cat atlas.20170512-081144.err
log4j:WARN Continuable parsing error 37 and column 14
log4j:WARN The content of element type "appender" must match "(errorHandler?,param*,rollingPolicy?,triggeringPolicy?,connectionSource?,layout?,filter*,appender-ref*)".
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxFileSize] in org.apache.log4j.DailyRollingFileAppender.
05-13-2017
06:54 AM
Thanks @Jay SenSharma
05-12-2017
04:49 PM
I do agree, @Jay SenSharma, and that is exactly what I did. But my point is that it should not work this way; there should be a Finish, Finalize, or similar button.
05-12-2017
10:28 AM
When I upgraded my cluster via Ambari 2.5.3, the upgrade completed successfully (100%), but when I wanted to go back to my dashboard I did not see any OK or Finished button. Is this due to some missing config in my Ambari, or is it already covered by an enhancement request?
Labels:
- Apache Ambari