Member since: 11-17-2015
Posts: 53
Kudos Received: 32
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 4986 | 08-30-2016 03:38 PM
 | 3244 | 08-09-2016 07:13 PM
 | 2605 | 06-14-2016 03:25 PM
 | 6311 | 02-26-2016 03:34 PM
08-11-2016 09:55 PM
For Postgres, I needed slightly different steps. In a psql session, create the database and user:
create database grafana;
create user grafana with password 'grafana';
GRANT ALL PRIVILEGES ON DATABASE grafana to grafana;
Then connect to the grafana database:
\c grafana
Next, create the session table:
CREATE TABLE session (
key CHAR(16) NOT NULL,
data bytea,
expiry INT NOT NULL,
PRIMARY KEY (key)
);
Edited /var/lib/pgsql/data/pg_hba.conf to add the following lines:
host all grafana 0.0.0.0/0 trust
local all grafana trust
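After editing pg_hba.conf, PostgreSQL has to reload its configuration before the new rules apply. A minimal sketch of that step plus a quick connection check, assuming a systemd-managed PostgreSQL and the same host name used in the Grafana config below:
# reload pg_hba.conf without a full restart (service name may differ, e.g. "service postgresql reload" on older init systems)
sudo systemctl reload postgresql
# confirm the grafana user can reach its database
psql -h YOURSERVER.EXAMPLE.COM -U grafana -d grafana -c '\dt'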
In Ambari, under “Advanced ams-grafana-ini”, the content was changed to use Postgres:
#################################### Database ####################################
[database]
# Either "mysql", "postgres" or "sqlite3", it's your choice
type = postgres
host = YOURSERVER.EXAMPLE.COM:5432
name = grafana
user = grafana
password = grafana
# For "postgres" only, either "disable", "require" or "verify-full"
ssl_mode = disable
# For "sqlite3" only, path relative to data_path setting
;path = grafana.db
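Once the Grafana component is restarted from Ambari, one rough way to confirm it really switched to Postgres is to check that rows show up in the session table after someone logs in (same database and credentials as above):
psql -h YOURSERVER.EXAMPLE.COM -U grafana -d grafana -c 'SELECT count(*) FROM session;'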
Hope this helps someone!
08-09-2016 07:13 PM
1 Kudo
Found the issue in the hivemetastore.log:
2016-08-09 13:21:08,123 ERROR [pool-5-thread-199]: server.TThreadPoolServer (TThreadPoolServer.java:run(296)) - Error occurred during processing of message.
java.lang.IllegalArgumentException: Illegal principal name serviceaccount@MY.REALM.EXAMPLE.COM: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to serviceaccount@MY.REALM.EXAMPLE.COM
at org.apache.hadoop.security.User.<init>(User.java:50)
at org.apache.hadoop.security.User.<init>(User.java:43)
at org.apache.hadoop.security.UserGroupInformation.createProxyUser(UserGroupInformation.java:1283)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:672)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No rules applied to serviceaccount@MY.REALM.EXAMPLE.COM
at org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:389)
at org.apache.hadoop.security.User.<init>(User.java:48)
... 7 more
Turns out the Hive Metastore was missed in the list of services to be restarted after updating our realm rule mapping (hadoop.security.auth_to_local). TDCH is working fine now.
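For anyone hitting the same thing: a quick way to check whether your auth_to_local rules actually map a given principal, without restarting anything, is the HadoopKerberosName utility (the principal below is just the example one from the log):
# prints the short name the rules produce, or the same "No rules applied" error seen above
hadoop org.apache.hadoop.security.HadoopKerberosName serviceaccount@MY.REALM.EXAMPLE.COM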
08-09-2016 06:42 PM
@mqureshi I want to leave hive.server2.enable.doAs set to false since we'll have other users accessing hive and need to keep the data in HDFS secure. I feel like my service account should have the ability to read from the hive metastore already.
08-09-2016 06:10 PM
@mqureshi We have hive.server2.enable.doAs set to false. I am expecting that if TDCH runs any Hive queries they would run as the service account, but the data in HDFS would still be accessed as the hive user. I don't see anything show up as denied in the Ranger audit log either.
08-08-2016 08:28 PM
I'm able to run a TDCH export from my HDP 2.3 cluster to Teradata using the following command:
hadoop jar ${USERLIBTDCH} com.teradata.hadoop.tool.TeradataExportTool \
-libjars ${LIB_JARS} \
-url ${FULL_TD_URL} \
-username ${TD_USER} \
-password ${TD_PW} \
-jobtype hive \
-fileformat orcfile \
-method batch.insert \
-nummappers 10 \
-sourcedatabase ${HIVE_DB} \
-sourcetable ${HIVE_TABLE} \
-sourcefieldnames "${TABLE_COLUMN_NAMES}" \
-stagedatabase ${TD_STAGING_DB} \
-errortabledatabase ${TD_STAGING_DB} \
-targettable ${TD_TABLE} \
-targetfieldnames "${TABLE_COLUMN_NAMES}"
Everything works fine when I run my script as the hive user. I'm switching the scripts over to use a service account, but I get the following error when running the same script:
16/08/08 14:48:43 INFO tool.ConnectorExportTool: ConnectorExportTool starts at 1470685723042
16/08/08 14:48:43 INFO common.ConnectorPlugin: load plugins in file:/tmp/hadoop-unjar6402968921427571136/teradata.connector.plugins.xml
16/08/08 14:48:43 INFO hive.metastore: Trying to connect to metastore with URI thrift://our-fqdn:9083
16/08/08 14:48:44 INFO hive.metastore: Connected to metastore.
16/08/08 14:48:44 INFO processor.TeradataOutputProcessor: output postprocessor com.teradata.connector.teradata.processor.TeradataBatchInsertProcessor starts at: 1470685724079
16/08/08 14:48:44 INFO processor.TeradataOutputProcessor: output postprocessor com.teradata.connector.teradata.processor.TeradataBatchInsertProcessor ends at: 1470685724079
16/08/08 14:48:44 INFO processor.TeradataOutputProcessor: the total elapsed time of output postprocessor com.teradata.connector.teradata.processor.TeradataBatchInsertProcessor is: 0s
16/08/08 14:48:44 INFO tool.ConnectorExportTool: com.teradata.connector.common.exception.ConnectorException: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:376)
at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:453)
at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:435)
at org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.hadoop.hive.thrift.TFilterTransport.readAll(TFilterTransport.java:62)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table(ThriftHiveMetastore.java:1218)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table(ThriftHiveMetastore.java:1204)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.tableExists(HiveMetaStoreClient.java:1274)
at com.teradata.connector.hive.processor.HiveInputProcessor.inputPreProcessor(HiveInputProcessor.java:85)
at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:116)
at com.teradata.connector.common.tool.ConnectorExportTool.run(ConnectorExportTool.java:62)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at com.teradata.hadoop.tool.TeradataExportTool.main(TeradataExportTool.java:29)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:140)
at com.teradata.connector.common.tool.ConnectorExportTool.run(ConnectorExportTool.java:62)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at com.teradata.hadoop.tool.TeradataExportTool.main(TeradataExportTool.java:29)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
16/08/08 14:48:44 INFO tool.ConnectorExportTool: job completed with exit code 10000
I figure this has to be some sort of permissions issue because it works as hive but not as my service account. What other permissions should I check? We're on TDCH 1.4.1 with a Kerberized HDP 2.3 cluster.
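A couple of basic checks from the service account side, in case that helps narrow it down (a rough sketch only; the Hive query is just an example):
# run as the service account that launches TDCH
klist                         # confirm a valid Kerberos ticket is present
hive -e 'show databases;'     # exercises a Thrift metastore connection like the one failing above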
Labels:
- Apache Hive
07-13-2016 09:12 PM
Thanks @Terry Padgett! This worked and we were able to start ZooKeeper after adding this entry in Ambari. Looks like we'll need to follow up with networking to see about opening up UDP.
07-11-2016 07:36 PM
We have made it through most of the Kerberos wizard but got stuck on the last step, where it attempts to start services. The ZooKeeper status check fails and we've found that the ZooKeeper server is not starting up. The error in zookeeper.log is:
2016-07-11 14:12:12,565 - INFO [main:FourLetterWordMain@43] - connecting to localhost 2181
2016-07-11 14:13:34,001 - ERROR [main:QuorumPeerMain@89] - Unexpected exception, exiting abnormally
java.io.IOException: Could not configure server because SASL configuration did not allow the ZooKeeper server to authenticate itself properly: javax.security.auth.login.LoginException: Receive timed out
at org.apache.zookeeper.server.ServerCnxnFactory.configureSaslLogin(ServerCnxnFactory.java:207)
at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:87)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:130)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:111)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
I've done some research and found this helpful page about Kerberos errors. Running through the list of possible causes, I am at a loss because we were able to progress through the rest of the wizard OK. All the principals were created by Ambari in Active Directory without issue. I can also become the zookeeper user, kinit using zk.service.keytab, and klist perfectly fine. Network issues seem the most likely culprit... but shouldn't a successful kinit rule out any firewall or hostname issues with Kerberos? Is there a config somewhere I'm missing? We are using Ambari 2.2.2.0 and HDP 2.3.2.0.
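For anyone debugging something similar, here is a sketch of how I'd gather more detail; the keytab path, principal format, and realm below are the usual HDP defaults and may differ in your environment:
# add JVM Kerberos debugging to the ZooKeeper server (e.g. via the Ambari zookeeper-env template)
export SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Dsun.security.krb5.debug=true"
# manual sanity check with the service keytab (principal and realm assumed)
kinit -kt /etc/security/keytabs/zk.service.keytab zookeeper/$(hostname -f)@MY.REALM.EXAMPLE.COM
klist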
06-17-2016 08:42 PM
Thank you @gkesavan, here is the entry which worked for me:
<mirror>
<id>hw_central</id>
<name>Hortonworks Mirror of Central</name>
<url>http://repo.hortonworks.com/content/groups/public/</url>
<mirrorOf>central</mirrorOf>
</mirror>
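For reference, that <mirror> block goes inside the <mirrors> element of your Maven settings.xml (typically ~/.m2/settings.xml, or the installation's conf/settings.xml).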
06-14-2016 03:25 PM
@Sagar Shimpi I think I found the issue. There is an old view named HDFS_BROWSE from Ambari 2.1 and a new view named AUTO_FILES_INSTANCE in Ambari 2.2. I was using the old view which doesn't work anymore. I can delete files fine using the new view.
06-14-2016 02:12 PM
@Sagar Shimpi
1) All the files I've tried through the Ambari HDFS View have had this issue.
2) I can delete the file from the CLI, so I do think it is only a problem with the Ambari view.
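For completeness, the CLI delete that works is just the standard command (the path here is a placeholder):
hdfs dfs -rm /path/to/the/file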