Member since: 04-03-2019
Posts: 962
Kudos Received: 1743
Solutions: 146

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 11417 | 03-08-2019 06:33 PM |
 | 4863 | 02-15-2019 08:47 PM |
 | 4148 | 09-26-2018 06:02 PM |
 | 10540 | 09-07-2018 10:33 PM |
 | 5585 | 04-25-2018 01:55 AM |
07-11-2017
01:01 AM
3 Kudos
@Rahul P This is a known issue. Please take a backup of your metastore DB before making any changes, then follow these steps on the node where the Hive Metastore is installed.

1) As the error mentions "Initialization script hive-schema-2.1.1000.mysql.sql", check the CREATE INDEX statements in /usr/hdp/current/hive-server2-hive2/scripts/metastore/upgrade/mysql/hive-*schema-2.1.0.mysql.sql. For example:

cd /usr/hdp/current/hive-server2-hive2/scripts/metastore/upgrade/mysql/
grep 'CREATE INDEX' hive-*schema-2.1.0.mysql.sql
hive-schema-2.1.1000.mysql.sql:CREATE INDEX PCS_STATS_IDX ON PART_COL_STATS (DB_NAME,TABLE_NAME,COLUMN_NAME,PARTITION_NAME) USING BTREE;
hive-schema-2.1.1000.mysql.sql:CREATE INDEX `CONSTRAINTS_PARENT_TABLE_ID_INDEX` ON KEY_CONSTRAINTS (`PARENT_TBL_ID`) USING BTREE;
hive-txn-schema-2.1.0.mysql.sql:CREATE INDEX HL_TXNID_IDX ON HIVE_LOCKS (HL_TXNID);

2) Drop the above indexes from MySQL. Make sure you are using the database specified in the log above ("Connecting to jdbc:mysql://localhost/hive?createDatabaseIfNotExist=true"):

use <hivedB>;
drop index CONSTRAINTS_PARENT_TABLE_ID_INDEX on KEY_CONSTRAINTS;
drop index PCS_STATS_IDX on PART_COL_STATS;
drop index HL_TXNID_IDX on HIVE_LOCKS;

3) Add "IF NOT EXISTS" to the following CREATE TABLE statements by editing the hive-txn-schema-2.1.0.mysql.sql file:

grep 'CREATE TABLE' hive-*schema-2.1.0.mysql.sql | grep -v 'IF NOT EXISTS'
hive-txn-schema-2.1.0.mysql.sql:CREATE TABLE TXNS (
hive-txn-schema-2.1.0.mysql.sql:CREATE TABLE TXN_COMPONENTS (
hive-txn-schema-2.1.0.mysql.sql:CREATE TABLE COMPLETED_TXN_COMPONENTS (
hive-txn-schema-2.1.0.mysql.sql:CREATE TABLE NEXT_TXN_ID (
hive-txn-schema-2.1.0.mysql.sql:CREATE TABLE HIVE_LOCKS (
hive-txn-schema-2.1.0.mysql.sql:CREATE TABLE NEXT_LOCK_ID (
hive-txn-schema-2.1.0.mysql.sql:CREATE TABLE COMPACTION_QUEUE (
hive-txn-schema-2.1.0.mysql.sql:CREATE TABLE COMPLETED_COMPACTIONS (
hive-txn-schema-2.1.0.mysql.sql:CREATE TABLE NEXT_COMPACTION_QUEUE_ID (
hive-txn-schema-2.1.0.mysql.sql:CREATE TABLE AUX_TABLE (
hive-txn-schema-2.1.0.mysql.sql:CREATE TABLE WRITE_SET (

For example (the \([^I][^F]\) group keeps the substitution away from statements that already contain IF NOT EXISTS):

sed -i.bak 's/^CREATE TABLE \([^I][^F]\)/CREATE TABLE IF NOT EXISTS \1/g' hive-txn-schema-2.1.0.mysql.sql

4) Start the Hive Metastore from Ambari.

5) After starting the metastore, if the following error appears, remove the conflicting row from the VERSION table and start again:

0: jdbc:mysql://sandbox.hortonworks.com/hive> INSERT INTO VERSION (VER_ID, SCHEMA_VERSION, VERSION_COMMENT) VALUES (1, '2.1.0', 'Hive release version 2.1.0')
Error: Duplicate entry '1' for key 'PRIMARY' (state=23000,code=1062)
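A minimal sketch of removing that conflicting row from the shell (the database name "hive" and MySQL root credentials are assumptions; adjust for your setup):

# Remove the stale VERSION row so the initialization script can insert it again
mysql -u root -p hive -e "DELETE FROM VERSION WHERE VER_ID = 1;"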
07-11-2017
12:37 AM
1 Kudo
@MyeongHwan Oh I do not see any issues unless these tools interfere with the same ports HDP uses. You would, however, need to allocate resources to the NodeManagers with the RAM/CPUs these tools need in mind, so that the OOM killer does not kill your important applications.
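As a rough illustration of that sizing, you could subtract the tools' headroom from the node's physical memory before setting yarn.nodemanager.resource.memory-mb (every number below is a made-up example, not a recommendation):

# Hypothetical sizing sketch for a single worker node
TOTAL_MB=65536          # physical RAM on the node
OS_RESERVED_MB=8192     # OS + HDP daemons
TOOLS_RESERVED_MB=8192  # headroom for the co-located tools
echo "yarn.nodemanager.resource.memory-mb=$((TOTAL_MB - OS_RESERVED_MB - TOOLS_RESERVED_MB))"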
07-11-2017
12:21 AM
2 Kudos
To resolve this issue, please remove org.apache.falcon.metadata.MetadataMappingService from *.application.services in Falcon's startup.properties and restart the Falcon server.
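A minimal sketch of that edit, assuming the default HDP config path /etc/falcon/conf/startup.properties (back the file up first):

cp /etc/falcon/conf/startup.properties /etc/falcon/conf/startup.properties.bak
# Drop the service (and a trailing comma, if any) from the *.application.services list
sed -i 's/org\.apache\.falcon\.metadata\.MetadataMappingService,\?//' /etc/falcon/conf/startup.properties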
07-05-2017
11:53 PM
1 Kudo
@Sami Ahmad The port number should be 10000, and I see that your HiveServer2 is already listening on 10000. Can you please double-check? Doc ref - https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients Hope this helps.
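For reference, a Beeline connection against that port looks like this (host and user are placeholders):

# Connect to HiveServer2 on the default port 10000
beeline -u "jdbc:hive2://your-hs2-host:10000/default" -n your_user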
07-05-2017
11:49 PM
2 Kudos
@Zhenwei Liu I agree with @Jay SenSharma. Can you try uninstalling snoopy and then try again? More references:
https://issues.apache.org/jira/browse/YARN-5546 (not a YARN bug, though)
https://github.com/a2o/snoopy/issues/39
https://stackoverflow.com/questions/44922588/hadoop-nodemanager-killed-by-sigsegv
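A quick sketch of checking for and removing it, assuming a RHEL/CentOS node where snoopy was installed as a package:

# snoopy hooks every exec via the preload mechanism; see if it is active
grep -i snoopy /etc/ld.so.preload
# Remove the package (package name and yum are assumptions for your distro)
yum remove -y snoopy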
07-05-2017
11:44 PM
3 Kudos
@JT Ng Please modify these properties as below in core-site.xml via Ambari and restart the required services; that should fix your issue.

hadoop.proxyuser.root.groups=*
hadoop.proxyuser.root.hosts=*

Please do let me know if you have any questions.
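If you want to verify the impersonation afterwards, one option is a WebHDFS call that authenticates as root and proxies another user (the NameNode host, port, and users below are placeholders, and simple auth is assumed):

# List /tmp as ambari-qa while authenticated as root, via the doas parameter
curl "http://your-namenode:50070/webhdfs/v1/tmp?op=LISTSTATUS&user.name=root&doas=ambari-qa"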
07-05-2017
11:38 PM
1 Kudo
@Paramesh malla You need to read the file and then overwrite it in place from Python (similar to what cp -f does) in order to get this working. Please refer to the link below: https://stackoverflow.com/questions/2424000/read-and-overwrite-a-file-in-python Hope this helps.
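A minimal sketch of that read-and-overwrite pattern, run from the shell (the file path and the replacement strings are placeholders):

python - <<'PY'
# Read the whole file, seek back to the start, overwrite, and truncate the rest
with open('/tmp/example.txt', 'r+') as f:
    data = f.read().replace('old', 'new')
    f.seek(0)
    f.write(data)
    f.truncate()
PY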
07-05-2017
11:34 PM
1 Kudo
@sindhu penugonda Can you please try the feed definition below? I have edited and corrected it; I think you should specify custom properties as shown. Ref - https://falcon.apache.org/EntitySpecification.html

<feed xmlns='uri:falcon:feed:0.1' name='hcat-in-web' description='input'>
<groups>hcatinputnew2</groups>
<frequency>minutes(15)</frequency>
<timezone>UTC</timezone>
<clusters>
<cluster name='hcat-local' type='source'>
<validity start='2013-01-01T00:00Z' end='2030-01-01T00:00Z'/>
<retention limit='hours(2)' action='delete'/>
<table uri='catalog:abc:abc_table#cpd_mnth_id=2017*);cpd_dt=${YEAR}-${MONTH}-${DAY}'/>
</cluster>
</clusters>
<table uri='catalog:abc:abc_table#cpd_mnth_id=2017*);cpd_dt=${YEAR}-${MONTH}-${DAY}'/>
<ACL owner='falcon' group='hadoop' permission='0755'/>
<schema location='/schema/log/log.format.csv' provider='csv'/>
<properties>
<property name="queueName" value="default"/>
<property name="jobPriority" value="NORMAL"/>
<property name="parallel" value="3"/>
<property name="maxMaps" value="8"/>
</properties>
</feed>

Hope this helps! Please mark this answer as accepted if it helped.
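In case it is useful, once saved you can submit the feed with the Falcon CLI, for example (the file name is a placeholder):

# Submit the corrected feed entity to Falcon
falcon entity -type feed -submit -file hcat-in-web-feed.xml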
07-05-2017
11:14 PM
1 Kudo
@Bin Ye Can you please check whether a firewall is blocking connections on port 22? You can also register the agents manually: install ambari-agent on each machine, edit /etc/ambari-agent/conf/ambari-agent.ini to point it at the correct Ambari server hostname, and choose manual registration during agent registration in Ambari. Hope this helps.
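A sketch of that manual registration, assuming RHEL/CentOS agents and an Ambari server at ambari.example.com (both assumptions):

# From the Ambari server, confirm port 22 is reachable on an agent host
nc -zv agent-host.example.com 22
# On each agent machine: install, point the agent at the server, and start it
yum install -y ambari-agent
sed -i 's/^hostname=.*/hostname=ambari.example.com/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent start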
06-29-2017
07:09 PM
@Matt Clarke You already answered my next question 🙂 For all of the above-mentioned directories, it stores data in a round-robin manner, just as a DataNode stores blocks across its local disks. Got it.