08-18-2019
11:01 PM
Issue was resolved. There is no need to configure cross-realm trust; instead, try logging in to the Ambari URL via a browser, or via the API using curl. This generates an OAuth token for the user, which is then used for authentication against ADLS. So a Kerberos token is needed for Hadoop authentication, and an OAuth token is needed for ADLS authentication. Thus, each time you create an HDInsight cluster, ensure you generate a token for the user to access ADLS via the Ambari API.
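As a minimal sketch of that API login (the cluster URL and credentials below are placeholders, not values from this thread), any authenticated GET against the Ambari REST API is enough to trigger token generation for the user:

curl -u username:password "https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters"

After this call succeeds, the same user should be able to run hadoop fs -ls adl:/// without the token error.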
09-14-2018
10:03 AM
We are working with an HDInsight Spark cluster with ADLS as its primary storage. Now we need to join the HDInsight cluster to an AD domain for user authentication, to make it enterprise-ready.
We read that HDInsight only allows domain joining via Azure ADDS. Our on-prem enterprise AD domain domainA.com is already in sync with Azure AD using Azure AD Connect, and an ADDS instance was created in Azure for HDInsight with a custom domain, domainB.com, with password hash sync for Kerberos enabled.
We were able to join the cluster to the newly created ADDS domain domainB.com successfully, and all Hadoop services are running and in good health. We are able to log in to the cluster using on-prem AD credentials in domainA.com, as they are in sync with Azure AD.
But the issue is that we can access Hadoop services, including HDFS, Hive, etc., only when logged in to the cluster as users created in the Azure ADDS domain domainB.com; the same access is not available for users in the enterprise AD domain domainA.com, even though they are synced to Azure AD.
So the issue is not due to ADLS store connectivity, because ADLS is accessible for users in the Azure AD / ADDS domain and not for enterprise AD users in the other domain.
When we try to access ADLS using any of the following:
hadoop fs -ls /
hdfs dfs -ls adl:///
hadoop fs -ls adl://home
hadoop fs -ls adl://datalakestorename.azuredatalakestore.net/
the error thrown is as follows:
ERROR: secure.AbstractCredentialServiceCaller: Token does not exist in Tokenmanager (Response code 404)
ls: Error fetching access token
Can this happen due to the difference between the two domains, Azure ADDS and on-prem AD? Do we need to configure anything manually in this PaaS, such as cross-realm trust, to make it work? We are totally stuck on this issue.
Please help if anyone has encountered similar issues.
02-09-2017
06:17 AM
@slachterman Please find the responses to your queries:
hadoop.kms.authentication.type = simple
Output after giving the verbose option -vvv with curl:
[root@hdp-dn02 ~]# curl -vvv -u keyadmin:keyadmin1 -X GET http://<KMSip>:9292/kms/v1/keys/names
* About to connect() to <KMSip> port 9292 (#0)
* Trying <KMSip>... connected
* Connected to <KMSip> port 9292 (#0)
* Server auth using Basic with user 'keyadmin'
> GET /kms/v1/keys/names HTTP/1.1
> Authorization: Basic a2V5YWRtaW46a2V5YWRtaW4x
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: <KMSip>:9292
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Server: Apache-Coyote/1.1
< WWW-Authenticate: PseudoAuth
< Set-Cookie: hadoop.auth=; HttpOnly
< Content-Type: text/html;charset=utf-8
< Content-Language: en
< Content-Length: 997
< Date: Thu, 09 Feb 2017 06:11:26 GMT
<
* Connection #0 to host <KMSip> left intact
* Closing connection #0
<html><head><title>Apache Tomcat/7.0.68 - Error report</title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 401 - Authentication required</h1><HR size="1" noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> <u>Authentication required</u></p><p><b>description</b> <u>This request requires HTTP authentication.</u></p><HR size="1" noshade="noshade"><h3>Apache Tomcat/7.0.68</h3></body></html>
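For what it's worth, with hadoop.kms.authentication.type=simple the KMS uses Hadoop's pseudo authentication handler (hence the WWW-Authenticate: PseudoAuth header above), which identifies the caller by a user.name query parameter rather than by HTTP Basic credentials. A minimal sketch of the request form that handler expects, using the same placeholder host and assuming anonymous access is disallowed (the default):

curl -X GET "http://<KMSip>:9292/kms/v1/keys/names?user.name=keyadmin"

The -u keyadmin:keyadmin1 Basic credentials are simply ignored under simple authentication, which is consistent with the 401 above.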
02-07-2017
05:45 AM
What is your HDP version? ==> 2.5.3.0
Is your cluster kerberized? ==> It is not kerberized now.
Can you please paste the complete command used for import-hive.sh? ==> ./usr/hdp/2.5.3.0-37/atlas/hook-bin/import-hive.sh
Also, can you attach the Atlas application log?
02-06-2017
01:56 PM
Yes, Kafka listed the topics, but with a warning:
[root@hdp-dn02 bin]# ./kafka-topics.sh --list --zookeeper <ip>:2181
[2017-02-06 19:18:04,781] WARN SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/usr/hdp/current/kafka-broker/config/kafka_jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
ATLAS_ENTITIES
ATLAS_HOOK
ambari_kafka_service_check
The cluster was kerberized earlier and Kerberos is now disabled; hence I removed the contents from the configured jaas.conf.
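For reference, the warning only concerns the missing 'Client' login section. If ZooKeeper still required SASL, a minimal kafka_jaas.conf Client section would look like the sketch below (the keytab path and principal are placeholder assumptions, not values from this cluster); with Kerberos disabled, an empty file is harmless as long as ZooKeeper allows unauthenticated connections, as the warning itself notes.

Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   keyTab="/etc/security/keytabs/kafka.service.keytab"
   principal="kafka/<host>@<REALM>";
};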
02-06-2017
01:34 PM
Hi, I tried running import-hive.sh, present in atlas-pkg/hook-bin, to import Hive metadata into Atlas, but it failed with the following exception:
Exception in thread "main" org.apache.atlas.AtlasServiceException: Metadata service API UPDATE_ENTITY_PARTIAL failed with status 400 (Bad Request) Response Body ({"error":"java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Batch containing 1 record(s) expired due to timeout while requesting metadata from brokers for ATLAS_ENTITIES-0",...)
Failed to import Hive Data Model!!!
Please advise a solution.
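A quick way to check whether the Atlas hook can reach the broker at all (a hedged diagnostic; the broker host and port are placeholders to be taken from atlas.kafka.bootstrap.servers, not from this post) is to produce a test message to the hook topic:

./kafka-console-producer.sh --broker-list <broker-host>:6667 --topic ATLAS_HOOK

If this also times out, the TimeoutException above points at broker connectivity or listener configuration rather than at Atlas itself.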
- Tags:
- Atlas
- error
- Governance & Lifecycle
- metadata
02-02-2017
09:18 AM
Can you please share any documents on installing Solr in HDP 2.5.3, and also the steps to configure it with Ranger and Atlas?
02-02-2017
07:20 AM
We haven't installed Solr. Is it mandatory to install Solr if we are not using search?
02-02-2017
07:19 AM
Following is the atlas-application.properties content:
atlas.audit.hbase.tablename=ATLAS_ENTITY_AUDIT_EVENTS
atlas.audit.hbase.zookeeper.quorum=hortonworks.example.com,hdp-dn02.example.com,hdp-dn03.example.com
atlas.audit.zookeeper.session.timeout.ms=1000
atlas.auth.policy.file=/etc/atlas/conf/policy-store.txt
atlas.authentication.keytab=/etc/security/keytabs/atlas.service.keytab
atlas.authentication.method.file=true
atlas.authentication.method.file.filename=/etc/atlas/conf/users-credentials.properties
atlas.authentication.method.kerberos=false
atlas.authentication.method.ldap=false
atlas.authentication.method.ldap.ad.base.dn=
atlas.authentication.method.ldap.ad.bind.dn=
atlas.authentication.method.ldap.ad.bind.password=
atlas.authentication.method.ldap.ad.default.role=ROLE_USER
atlas.authentication.method.ldap.ad.domain=
atlas.authentication.method.ldap.ad.referral=ignore
atlas.authentication.method.ldap.ad.url=
atlas.authentication.method.ldap.ad.user.searchfilter=(sAMAccountName={0})
atlas.authentication.method.ldap.base.dn=
atlas.authentication.method.ldap.bind.dn=
atlas.authentication.method.ldap.bind.password=
atlas.authentication.method.ldap.default.role=ROLE_USER
atlas.authentication.method.ldap.groupRoleAttribute=cn
atlas.authentication.method.ldap.groupSearchBase=
atlas.authentication.method.ldap.groupSearchFilter=
atlas.authentication.method.ldap.referral=ignore
atlas.authentication.method.ldap.type=none
atlas.authentication.method.ldap.url=
atlas.authentication.method.ldap.user.searchfilter=
atlas.authentication.method.ldap.userDNpattern=uid=
atlas.authentication.principal=atlas
atlas.authorizer.impl=simple
atlas.cluster.name=HSBC
atlas.enableTLS=false
atlas.graph.index.search.backend=solr5
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=hdp-dn02.example.com:2181,hdp-dn03.example.com:2181,hortonworks.example.com:2181
atlas.graph.storage.backend=hbase
atlas.graph.storage.hbase.table=atlas_titan
atlas.graph.storage.hostname=hortonworks.example.com,hdp-dn02.example.com,hdp-dn03.example.com
atlas.kafka.auto.commit.enable=false
atlas.kafka.bootstrap.servers=hortonworks.example.com:6667
atlas.kafka.hook.group.id=atlas
atlas.kafka.zookeeper.connect=hdp-dn02.example.com:2181,hdp-dn03.example.com:2181,hortonworks.example.com:2181
atlas.kafka.zookeeper.connection.timeout.ms=200
atlas.kafka.zookeeper.session.timeout.ms=400
atlas.kafka.zookeeper.sync.time.ms=20
atlas.lineage.schema.query.hive_table=hive_table where __guid='%s'\, columns
atlas.lineage.schema.query.Table=Table where __guid='%s'\, columns
atlas.notification.create.topics=true
atlas.notification.embedded=false
atlas.notification.replicas=1
atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
atlas.rest.address=http://hdp-dn02.example.com:21000
atlas.server.address.id1=hdp-dn02.example.com:21000
atlas.server.bind.address=hdp-dn02.example.com
atlas.server.ha.enabled=false
atlas.server.http.port=21000
atlas.server.https.port=21443
atlas.server.ids=id1
atlas.solr.kerberos.enable=false
02-02-2017
07:15 AM
Hi, the cluster is not kerberized, but Ranger and Atlas are installed. As we have no plans to use search in Atlas, we haven't installed Solr. The Atlas UI is not accessible in our cluster. We found the following errors in the application log:
2017-02-02 12:07:06,497 WARN - [main:] ~ FAILED o.e.j.w.WebAppContext@63d75942{/,file:/usr/hdp/2.5.3.0-37/atlas/server/webapp/atlas/,STARTING}{/usr/hdp/current/atlas-server/server/webapp/atlas}: java.lang.ExceptionInInitializerError (AbstractLifeCycle:212)
java.lang.ExceptionInInitializerError
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:190)
at org.apache.atlas.ApplicationProperties.getClass(ApplicationProperties.java:115)
Caused by: java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.solr.Solr5Index
at com.thinkaurelius.titan.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:55)
at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:421)
Caused by: org.apache.solr.common.SolrException: Cannot connect to cluster at hdp-dn02.example.com:2181: cluster not found/not ready
at org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:290)
at org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:467)
Please help.
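One hedged check (assuming the ZooKeeper client shipped with HDP): the SolrException above means Titan's Solr5 index backend is looking for SolrCloud cluster state in ZooKeeper, so listing the znodes shows whether any Solr cluster has ever registered there:

/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server hdp-dn02.example.com:2181 ls /

If no Solr znode exists, the atlas.graph.index.search.backend=solr5 setting has nothing to connect to, which matches a cluster where Solr was never installed.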
02-02-2017
05:30 AM
Hi, we don't have Kerberos in our cluster, but Ranger and Ranger KMS are installed. While trying the REST API command, it throws the following exception:
command used: curl -u keyadmin:keyadmin1 -X GET http://<ranger-KMS-server>:9292/kms/v1/keys/names
exception: HTTP Status 401 - Authentication required
Please advise a solution. Also, we would like to know whether it is mandatory to enable Kerberos in order to configure Ranger KMS.
02-02-2017
05:21 AM
Thank you, my issue has been resolved with the negotiate option.
02-01-2017
10:54 AM
@slachterman Thank you, it worked with negotiate. Now, when I disabled Kerberos and tried the same REST API command, the same exception was recreated.
command: curl -u keyadmin:keyadmin1 -X GET http://<ranger-KMS-server>:9292/kms/v1/keys/names
Exception: Authentication required - This request requires HTTP authentication.
Please advise.
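For anyone landing here, the "negotiate" fix referred to above is curl's SPNEGO support; a sketch of the working kerberized call (after a kinit as the key admin principal; the host is the same placeholder as above):

kinit keyadmin
curl --negotiate -u : -X GET http://<ranger-KMS-server>:9292/kms/v1/keys/names

Once Kerberos is disabled, SPNEGO no longer applies, and the 401 comes back because the KMS falls back to its configured simple (pseudo) authentication handler, which does not read Basic credentials.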
01-30-2017
06:17 AM
@vperiasamy I tried kinit-ing the keyadmin principal, but I am still facing the same authentication error.
01-27-2017
02:32 PM
Hi, while executing the following Ranger KMS REST API command, we encountered this exception:
command: curl -u admin:admin -X GET http://<ranger-KMS-server>:9292/kms/v1/keys/names
Exception: Authentication required - This request requires HTTP authentication.
We have created the keyadmin principal with the password keyadmin1, as configured in kms-properties. We can create and list keys via the Ranger KMS UI. Please advise a solution.
01-12-2017
10:01 AM
@Michael Young Yes, you were correct. The folder did not have the proper permissions. Now the Atlas UI is up, but it is not rendering properly. Please find the attached png file atlasui.png. I can see the following logs in the Atlas application.log:
INFO - Couldn't find JAX-B element for class javax.ws.rs.core.Response (WadlGeneratorJAXBGrammarGenerator:508)
INFO - Audit: UNKNOWN/<atlasip>-<atlasip> performed request http://<atlasip>:21000/api/atlas/types/hive_process?doAs=ambari-qa (<atlasip>) at time 2017-01-12T09:34Z (AUDIT:100)
INFO - ~ Audit: UNKNOWN/<atlasip>-<atlasip> performed request http://<atlasip>:21000/api/atlas/admin/status (<atlasip>) at time 2017-01-12T09:34Z (AUDIT:100)
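For completeness, the permission fix alluded to here, as a hedged sketch: the data directory comes from the atlas.graph.storage.directory value posted earlier in this thread, while the atlas user and hadoop group are assumptions about a default HDP install. It amounts to giving the Atlas service user ownership of its BerkeleyJE data directory:

chown -R atlas:hadoop /var/lib/atlas/data
chmod -R 750 /var/lib/atlas/data

After restarting Atlas, the "Couldn't get lock for /var/lib/atlas/data/berkeley/je.info" error from the earlier post should no longer appear.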
01-12-2017
09:29 AM
@anaik Please find the content of /etc/atlas/conf/application.properties:
atlas.audit.hbase.tablename=ATLAS_ENTITY_AUDIT_EVENTS
atlas.audit.zookeeper.session.timeout.ms=1000
atlas.authentication.keytab=/etc/security/keytabs/atlas.service.keytab
atlas.authentication.method=simple
atlas.authentication.principal=atlas
atlas.cluster.name=HDP24
atlas.enableTLS=false
atlas.graph.index.search.backend=elasticsearch
atlas.graph.index.search.directory=/var/lib/atlas/data/es
atlas.graph.index.search.elasticsearch.client-only=false
atlas.graph.index.search.elasticsearch.local-mode=true
atlas.graph.storage.backend=berkeleyje
atlas.graph.storage.directory=/var/lib/atlas/data/berkeley
atlas.lineage.hive.process.inputs.name=inputs
atlas.lineage.hive.process.outputs.name=outputs
atlas.lineage.hive.process.type.name=Process
atlas.lineage.hive.table.schema.query.hive_table=hive_table where name='%s'\, columns
atlas.lineage.hive.table.schema.query.Table=Table where name='%s'\, columns
atlas.lineage.hive.table.type.name=DataSet
atlas.notification.embedded=false
atlas.rest.address=http://localhost:21000
atlas.server.address.id1=localhost:21000
atlas.server.bind.address=localhost
atlas.server.ha.enabled=false
atlas.server.http.port=21000
atlas.server.https.port=21443
atlas.server.ids=id1
01-11-2017
01:50 PM
HDP version 2.4.3.0, Ambari version 2.4.1.0, non-secure cluster. The Atlas UI is not accessible and throws an HTTP 503 Service Unavailable exception. I found the following exceptions in /var/log/atlas/application.log:
=========================================================================================
Caused by: com.thinkaurelius.titan.diskstorage.PermanentBackendException: Error during BerkeleyJE initialization:
at com.thinkaurelius.titan.diskstorage.berkeleyje.BerkeleyJEStoreManager.initialize(BerkeleyJEStoreManager.java:108)
at com.thinkaurelius.titan.diskstorage.berkeleyje.BerkeleyJEStoreManager.<init>(BerkeleyJEStoreManager.java:68)
... 94 more
Caused by: com.sleepycat.je.EnvironmentFailureException: (JE 5.0.73) Problem creating output files in: /var/lib/atlas/data/berkeley/je.info UNEXPECTED_EXCEPTION: Unexpected internal Exception, may have side effects.
at com.sleepycat.je.EnvironmentFailureException.unexpectedException(EnvironmentFailureException.java:316)
at com.sleepycat.je.dbi.EnvironmentImpl.initFileHandler(EnvironmentImpl.java:1389)
at com.sleepycat.je.dbi.EnvironmentImpl.<init>(EnvironmentImpl.java:442)
at com.sleepycat.je.dbi.EnvironmentImpl.<init>(EnvironmentImpl.java:382)
at com.sleepycat.je.dbi.DbEnvPool.getEnvironment(DbEnvPool.java:178)
at com.sleepycat.je.Environment.makeEnvironmentImpl(Environment.java:246)
at com.sleepycat.je.Environment.<init>(Environment.java:227)
at com.sleepycat.je.Environment.<init>(Environment.java:170)
at com.thinkaurelius.titan.diskstorage.berkeleyje.BerkeleyJEStoreManager.initialize(BerkeleyJEStoreManager.java:104)
... 95 more
Caused by: java.io.IOException: Couldn't get lock for /var/lib/atlas/data/berkeley/je.info
at java.util.logging.FileHandler.openFiles(FileHandler.java:389)
at java.util.logging.FileHandler.<init>(FileHandler.java:363)
at com.sleepycat.je.util.FileHandler.<init>(FileHandler.java:85)
at com.sleepycat.je.dbi.EnvironmentImpl.initFileHandler(EnvironmentImpl.java:1383)
... 102 more
===========================================================================================
05-24-2016
06:42 AM
Yes, it works when I connect using a single HiveServer2; it throws the error only when HiveServer2 HA is enabled.
05-20-2016
11:43 AM
HDP 2.3.4, Ambari 2.2.0. I have enabled HiveServer2 HA. All my Hive services (HiveServer2, Hive Metastore, WebHCat server) are on one node; the Hive metastore database is MySQL and is on another node, and I have properly configured the connection using mysql-connector-java.jar. I get an error when executing Hive commands and jobs through beeline. I connect to the HA-enabled HiveServer2 via beeline using:
!connect jdbc:hive2://<zookeeper ips>:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@<realm>
Please find the error logs:
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: SemanticException Unable to fetch table machines. For direct MetaStore DB connections, we don't support retries at the client level.
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315)
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:112)
at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:181)
at org.apache.hive.service.cli.operation.Operation.run(Operation.java:257)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:410)
at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:397)
at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:274)
at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:486)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:692)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hive.ql.parse.SemanticException: Unable to fetch table machines. For direct MetaStore DB connections, we don't support retries at the client level.
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1850)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1531)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10064)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10115)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:211)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:454)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:314)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1164)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1158)
at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:110)
... 15 more
Please provide a fix for this.
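Since the direct (non-HA) connection works, per the follow-up above, a useful comparison is the plain connect string; this is a hedged sketch with a placeholder host and the standard HiveServer2 port, not values from this post:

!connect jdbc:hive2://<hiveserver2-host>:10000/default;principal=hive/_HOST@<realm>

If this succeeds while the ZooKeeper discovery URL fails, the problem is isolated to the HA/service-discovery path (the hiveserver2 znode contents and the published connection parameters) rather than to the MySQL metastore connection itself.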
04-05-2016
05:00 AM
HDP version 2.3.4.0, Ambari version 2.2.0, kerberized cluster. I am not able to access the Oozie web UI (http://<oozieserverip>:11000) in a browser; it throws an HTTP Status 401 Authentication Required error. The Oozie UI stopped being accessible once I enabled Kerberos on the cluster. Please advise a solution.
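One hedged sanity check, assuming the UI is now protected by SPNEGO, as is usual on a kerberized cluster: verify that the endpoint answers a negotiated request from a host with a valid ticket (the user principal is a placeholder):

kinit <your-user>
curl --negotiate -u : http://<oozieserverip>:11000/oozie/

If that returns the UI HTML, the server side is fine, and the 401 in the browser just means the browser is not configured for Kerberos/SPNEGO negotiation against that host.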
03-14-2016
09:02 AM
HDP version 2.1.2, Hive 0.13, Tez 0.4, not kerberized. I have configured Tez as the Hive execution engine by setting hive.execution.engine=tez. Hive jobs run fine and are submitted (visible in the Resource Manager UI), but I am not able to view the history (mapper and reducer details) of these jobs in the Job History Server UI, while normal MapReduce jobs do show up there in detail. Please help fix this issue. Also, please let me know if there is a way to show the number of mappers and reducers used in the console during execution of Hive-on-Tez jobs. NB: Hive jobs are submitted via beeline.
03-01-2016
11:14 AM
I installed the Hive ODBC Driver for HDP 2.3 on my Windows machine. I am trying to connect to Hive through ODBC (Hadoop is installed on CentOS and the cluster is kerberized), and I encountered the following error. I followed the Hortonworks document and gave the server IP and port 10000; HiveServer2 is running and working fine. Please find the attached screenshot odbc_error.png. Please suggest a solution.
02-22-2016
11:19 AM
Azure provides different versions of OpenLogic CentOS only. That is why I need confirmation whether the HDP support forum will provide support for this OS; otherwise, please suggest the best-suited OS for an HDP 2.3 implementation in the Azure cloud.
02-22-2016
10:57 AM
We are planning to install HDP in the Microsoft Azure cloud. We would like to know whether HDP supports OpenLogic CentOS.