Member since: 08-10-2017
Posts: 108
Kudos Received: 2
Solutions: 7
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1808 | 01-28-2019 08:41 AM |
| | 2526 | 01-28-2019 08:35 AM |
| | 1565 | 12-18-2018 05:42 AM |
| | 4247 | 08-16-2018 12:12 PM |
| | 1689 | 07-24-2018 06:55 AM |
03-30-2019
08:03 AM
Thanks @Nitin Shelke. This resolved the LLAP startup issue. But while inserting data into a Hive table using LLAP, we got the error below:
TaskAttempt 3 failed, info=[org.apache.hadoop.ipc.RemoteException(java.lang.NoClassDefFoundError): org/apache/tez/runtime/internals/api/TaskReporterInterface
at org.apache.hadoop.hive.llap.daemon.impl.ContainerRunnerImpl.submitWork(ContainerRunnerImpl.java:263)
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.submitWork(LlapDaemon.java:554)
at org.apache.hadoop.hive.llap.daemon.impl.LlapProtocolServerImpl.submitWork(LlapProtocolServerImpl.java:101)
at org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$LlapDaemonProtocol$2.callBlockingMethod(LlapDaemonProtocolProtos.java:16818)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
Caused by: java.lang.ClassNotFoundException: org.apache.tez.runtime.internals.api.TaskReporterInterface
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 12 more
]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:0, Vertex vertex_1553784387057_0006_2_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:0 (state=08S01,code=2)
To resolve this issue, we added all the jars under /usr/hdp/2.6.5.0-292/tez_hive2 to the Auxiliary Jar list, as follows:
/usr/hdp/2.6.5.0-292/tez_hive2/hadoop-shim-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/hadoop-shim-hdp-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-api-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-common-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-dag-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-examples-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-ext-service-tests-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-history-parser-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-javadoc-tools-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-job-analyzer-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-mapreduce-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-runtime-internals-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-runtime-library-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-tests-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-yarn-timeline-cache-plugin-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-yarn-timeline-history-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-yarn-timeline-history-with-acls-0.8.4.2.6.5.0-292.jar,/usr/hdp/2.6.5.0-292/tez_hive2/tez-yarn-timeline-history-with-fs-0.8.4.2.6.5.0-292.jar
After that, we were able to insert data into the Hive table.
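For reference, a quick way to regenerate that comma-separated list on a node (a sketch only, assuming the same HDP 2.6.5 path; adjust the version directory for your cluster) is:
# ls -1 /usr/hdp/2.6.5.0-292/tez_hive2/*.jar | paste -s -d',' -
The resulting string can then be pasted into the Auxiliary Jar list field in the Hive configuration in Ambari.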
... View more
03-28-2019
01:27 PM
@Sergey Shelukhin @gopal @Dennis Connolly @Mahmoud Sabri .... please suggest
... View more
03-28-2019
05:46 AM
@Nitin Shelke @Jay Kumar SenSharma, please suggest.
... View more
03-27-2019
10:56 AM
Hello Team, We are using HDP-2.6.5 in our environment. We are getting the following error while starting the HiveServer2 Interactive (LLAP) server:
2019-03-27T04:33:05,291 ERROR [main ()] org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Failed to start LLAP Daemon with exception
java.lang.NoClassDefFoundError: org/apache/tez/hadoop/shim/HadoopShimsLoader
at org.apache.hadoop.hive.llap.daemon.impl.ContainerRunnerImpl.<init>(ContainerRunnerImpl.java:157) ~[hive-llap-server-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.<init>(LlapDaemon.java:291) ~[hive-llap-server-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
at org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.main(LlapDaemon.java:529) [hive-llap-server-2.1.0.2.6.5.0-292.jar:2.1.0.2.6.5.0-292]
Caused by: java.lang.ClassNotFoundException: org.apache.tez.hadoop.shim.HadoopShimsLoader
at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[?:1.8.0_192]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_192]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) ~[?:1.8.0_192]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_192]
... 3 more
On all the cluster nodes, the hadoop-shim* jars are present, as shown below:
[root@p-hdp-01 ~]# cd /usr/hdp/2.6.5.0-292/tez_hive2/
[root@p-hdp-01 tez_hive2]# ls -lrth
total 4.0M
-rw-r--r-- 1 root root 23K May 11 2018 tez-yarn-timeline-history-with-fs-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 7.7K May 11 2018 tez-yarn-timeline-history-with-acls-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 28K May 11 2018 tez-yarn-timeline-history-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 12K May 11 2018 tez-yarn-timeline-cache-plugin-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 155K May 11 2018 tez-tests-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 642K May 11 2018 tez-runtime-library-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 192K May 11 2018 tez-runtime-internals-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 284K May 11 2018 tez-mapreduce-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 72K May 11 2018 tez-job-analyzer-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 15K May 11 2018 tez-javadoc-tools-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 77K May 11 2018 tez-history-parser-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 106K May 11 2018 tez-ext-service-tests-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 56K May 11 2018 tez-examples-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 1.3M May 11 2018 tez-dag-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 76K May 11 2018 tez-common-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 975K May 11 2018 tez-api-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 5.4K May 11 2018 hadoop-shim-hdp-0.8.4.2.6.5.0-292.jar
-rw-r--r-- 1 root root 8.7K May 11 2018 hadoop-shim-0.8.4.2.6.5.0-292.jar
drwxr-xr-x 2 root root 6 May 11 2018 doc
lrwxrwxrwx 1 root root 19 Mar 25 06:16 conf -> /etc/tez_hive2/conf
drwxr-xr-x 3 root root 18 Mar 25 06:16 man
drwxr-xr-x 2 root root 4.0K Mar 25 06:16 lib
drwxr-xr-x 2 root root 42 Mar 25 06:16 ui
[root@p-hdp-01 tez_hive2]
How to resolve this? Please suggest. Thanks, Bhushan
... View more
03-14-2019
12:32 PM
@Rafael Leon, I am also facing the same issue. Have you resolved it? Could you please suggest?
... View more
03-14-2019
12:30 PM
@amarnath reddy pappu, I followed these steps, but when I log in to Ambari it successfully redirects to the Knox gateway; after I enter credentials it goes to the Ambari UI and then comes back to the Knox gateway login screen. Could you please suggest? I have also opened a question in the community: https://community.hortonworks.com/questions/242895/knox-sso-not-working-for-ambari.html
... View more
03-14-2019
11:49 AM
Hello Team, Our environment consists of Ambari-2.7 and HDP-3.1. We have synced AD/LDAP users into Ambari and set up KnoxSSO for Ambari using the 'ambari-server setup-sso' command. But when I log in to Ambari, it successfully redirects to the Knox gateway; after I enter credentials it goes to the Ambari UI and then comes back to the Knox gateway UI screen, as shown below. The gateway.log shows an "Authentication successful" message, but it still redirects back to the login page. Here is the content of the gateway.log file:
2019-03-14 11:26:06,049 DEBUG authc.BasicHttpAuthenticationFilter (BasicHttpAuthenticationFilter.java:createToken(308)) - Attempting to execute login with headers [Basic aGRwdXNlcjpSZWRoYXRAMTIz]
2019-03-14 11:26:06,066 DEBUG ldap.JndiLdapRealm (JndiLdapRealm.java:queryForAuthenticationInfo(369)) - Authenticating user 'hdpuser' through LDAP
2019-03-14 11:26:06,066 DEBUG ldap.JndiLdapContextFactory (JndiLdapContextFactory.java:getLdapContext(488)) - Initializing LDAP context using URL [ldap://WIN-N66EE.hdp.com:389] and principal [cn=hdpuser,ou=hdpcloud,dc=hdp,dc=com] with pooling disabled
2019-03-14 11:26:06,400 DEBUG realm.AuthenticatingRealm (AuthenticatingRealm.java:getAuthenticationInfo(569)) - Looked up AuthenticationInfo [hdpuser] from doGetAuthenticationInfo
2019-03-14 11:26:06,400 DEBUG credential.SimpleCredentialsMatcher (SimpleCredentialsMatcher.java:equals(95)) - Performing credentials equality check for tokenCredentials of type [org.apache.shiro.crypto.hash.SimpleHash and accountCredentials of type [org.apache.shiro.crypto.hash.SimpleHash]
2019-03-14 11:26:06,401 DEBUG credential.SimpleCredentialsMatcher (SimpleCredentialsMatcher.java:equals(101)) - Both credentials arguments can be easily converted to byte arrays. Performing array equals comparison
2019-03-14 11:26:06,401 DEBUG authc.AbstractAuthenticator (AbstractAuthenticator.java:authenticate(233)) - Authentication successful for token [org.apache.shiro.authc.UsernamePasswordToken - hdpuser, rememberMe=false (202.149.217.138)]. Returned account [hdpuser]
2019-03-14 11:26:06,401 DEBUG support.DefaultSubjectContext (DefaultSubjectContext.java:resolveSecurityManager(102)) - No SecurityManager available in subject context map. Falling back to SecurityUtils.getSecurityManager() lookup.
2019-03-14 11:26:06,402 DEBUG support.DefaultSubjectContext (DefaultSubjectContext.java:resolveSecurityManager(102)) - No SecurityManager available in subject context map. Falling back to SecurityUtils.getSecurityManager() lookup.
2019-03-14 11:26:06,539 DEBUG servlet.SimpleCookie (SimpleCookie.java:addCookieHeader(226)) - Added HttpServletResponse Cookie [rememberMe=deleteMe; Path=/gateway/knoxsso; Max-Age=0; Expires=Wed, 13-Mar-2019 11:26:06 GMT]
2019-03-14 11:26:06,539 DEBUG mgt.AbstractRememberMeManager (AbstractRememberMeManager.java:onSuccessfulLogin(290)) - AuthenticationToken did not indicate RememberMe is requested. RememberMe functionality will not be executed for corresponding account.
2019-03-14 11:26:06,540 DEBUG realm.AuthorizingRealm (AuthorizingRealm.java:getAuthorizationCacheLazy(234)) - No authorizationCache instance set. Checking for a cacheManager...
2019-03-14 11:26:06,557 INFO realm.AuthorizingRealm (AuthorizingRealm.java:getAuthorizationCacheLazy(248)) - No cache or cacheManager properties have been set. Authorization cache cannot be obtained.
2019-03-14 11:26:35,316 DEBUG authc.BasicHttpAuthenticationFilter (BasicHttpAuthenticationFilter.java:sendChallenge(274)) - Authentication required: sending 401 Authentication challenge response.
Attached the KnoxSSO topology file for reference: knoxsso.txt
How to resolve it? Please suggest. Thanks, Bhushan
... View more
Labels:
- Apache Ambari
- Apache Knox
02-22-2019
10:09 AM
Hi Team, We have upgraded HDP-2.6.5 to HDP-3.1.0. The upgrade was successful, but afterwards Hive queries hang indefinitely whenever we run an INSERT or SELECT COUNT(*) on any Hive table:
Beeline version 3.1.0.3.1.0.0-78 by Apache Hive
0: jdbc:hive2://node1.carpenter.com:2181,node> CREATE TABLE tm (a int, b int) TBLPROPERTIES
. . . . . . . . . . . . . . . . . . . . . . .> ('transactional'='true',
. . . . . . . . . . . . . . . . . . . . . . .> 'transactional_properties'='insert_only');
INFO : Compiling command(queryId=hive_20190222095913_ac751fe4-ceda-4fbc-9623-49404106ad91): CREATE TABLE tm (a int, b int) TBLPROPERTIES
('transactional'='true',
'transactional_properties'='insert_only')
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20190222095913_ac751fe4-ceda-4fbc-9623-49404106ad91); Time taken: 0.044 seconds
INFO : Executing command(queryId=hive_20190222095913_ac751fe4-ceda-4fbc-9623-49404106ad91): CREATE TABLE tm (a int, b int) TBLPROPERTIES
('transactional'='true',
'transactional_properties'='insert_only')
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20190222095913_ac751fe4-ceda-4fbc-9623-49404106ad91); Time taken: 0.164 seconds
INFO : OK
No rows affected (0.47 seconds)
0: jdbc:hive2://node1.carpenter.com:2181,node> INSERT INTO tm VALUES(1,1);
INFO : Compiling command(queryId=hive_20190222095925_56b8233d-f6a4-412c-b87f-92121da8af3a): INSERT INTO tm VALUES(1,1)
INFO : Semantic Analysis Completed (retrial = false)
INFO : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:col1, type:int, comment:null), FieldSchema(name:col2, type:int, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20190222095925_56b8233d-f6a4-412c-b87f-92121da8af3a); Time taken: 0.718 seconds
INFO : Executing command(queryId=hive_20190222095925_56b8233d-f6a4-412c-b87f-92121da8af3a): INSERT INTO tm VALUES(1,1)
INFO : Query ID = hive_20190222095925_56b8233d-f6a4-412c-b87f-92121da8af3a
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode
INFO : Subscribed to counters: [] for queryId: hive_20190222095925_56b8233d-f6a4-412c-b87f-92121da8af3a
INFO : Tez session hasn't been created yet. Opening session
The query then hangs. How to solve it? Please suggest. Thanks, Bhushan
... View more
01-29-2019
12:42 PM
@Giorgi Chitashvili, I followed the same steps but am still getting the same issue. Could you please suggest?
... View more
01-29-2019
12:12 PM
@Giorgi Chitashvili, I am still getting the same issue. Could you please suggest?
... View more
01-29-2019
12:11 PM
@Prabhjot Singh, I am also facing the same issue. Did you find a resolution? Could you please suggest?
... View more
01-28-2019
08:41 AM
This issue occurred because we had used a load balancer while implementing Ranger HA. Breaking HA and keeping only one Ranger Admin resolved the issue for us.
... View more
01-28-2019
08:35 AM
Resolved the issue by installing the Hive client on the Hive Metastore machine.
... View more
01-27-2019
07:38 AM
@Sindhu @Geoffrey Shelton Okot @Sandeep Nemuri ... please suggest
... View more
01-25-2019
11:33 AM
Hello Team, After enabling Kerberos on HDP-2.6, the Hive Metastore server is failing. We are getting the following error in the Hive Metastore log:
2019-01-25 03:58:28,880 ERROR [pool-7-thread-3]: server.TThreadPoolServer (TThreadPoolServer.java:run(297)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:609)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:606)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1849)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:606)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
How to solve this? Please suggest. Thanks, Bhushan
... View more
01-25-2019
11:29 AM
Hello Team, After enabling Kerberos on the HDP-2.6 cluster, the Atlas Metadata Server is not starting. We are getting the following error:
java exception
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.coprocessor.CoprocessorException: HTTP 503 Error: HTTP 503
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.grant(RangerAuthorizationCoprocessor.java:1236)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7857)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1999)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1981)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32389)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
How to solve this? Please suggest. Thanks, Bhushan
... View more
12-24-2018
10:57 AM
@Gulshad Ansari, an A record is configured in DNS, but we are still getting the same exception. Please suggest.
# dig p-hdp-d1.hdp.prod.test.com
; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7_5.1 <<>> p-hdp-d1.hdp.prod.test.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31190
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 3
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;p-hdp-d1.hdp.prod.test.com. IN A
;; ANSWER SECTION:
p-hdp-d1.hdp.prod.test.com. 1200 IN A 10.10.33.46
;; AUTHORITY SECTION:
hdp.prod.test.com. 86400 IN NS vc-ipa001a.apps.test.com.
hdp.prod.test.com. 86400 IN NS vc-ipa001b.apps.test.com.
;; ADDITIONAL SECTION:
vc-ipa001a.apps.test.com. 1200 IN A 10.10.70.40
vc-ipa001b.apps.test.com. 1200 IN A 10.10.70.41
;; Query time: 1 msec
;; SERVER: 10.10.70.40#53(10.10.70.40)
;; WHEN: Mon Dec 24 02:27:37 EST 2018
;; MSG SIZE rcvd: 168
Output of the nslookup command:
# nslookup p-hdp-d1.hdp.prod.test.com
Server: 10.10.70.40
Address: 10.10.70.40#53
Name: p-hdp-d1.hdp.prod.test.com
Address: 10.10.33.46
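If FreeIPA manages the hdp.prod.test.com zone, one additional check that may help (zone and host names taken from the dig output above) is to confirm the record also exists in the IPA-managed zone itself:
# ipa dnsrecord-show hdp.prod.test.com p-hdp-d1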
... View more
12-18-2018
05:30 PM
Hi Team, We are configuring Kerberos using FreeIPA. While installing Kerberos through the Ambari wizard, we are getting the following error:
ERROR [Server Action Executor Worker 2824] IPAKerberosOperationHandler:303 - Failed to execute ipa query: service-add --ok-as-delegate=TRUE HTTP/p-hdp-d1.hdp.prod.test.com@APPS.TEST.COM
STDOUT:
STDERR: ipa: ERROR: Host 'p-hdp-d1.hdp.prod.test.com' does not have corresponding DNS A/AAAA record
But ping to this server works fine:
-sh-4.2$ ping p-hdp-d1.hdp.prod.test.com
PING p-hdp-d1.hdp.prod.test.com (10.10.33.21) 56(84) bytes of data.
64 bytes from p-hdp-d1.hdp.prod.test.com (10.10.33.21): icmp_seq=1 ttl=64 time=0.092 ms
64 bytes from p-hdp-d1.hdp.prod.test.com (10.10.33.21): icmp_seq=2 ttl=64 time=0.090 ms
64 bytes from p-hdp-d1.hdp.prod.test.com (10.10.33.21): icmp_seq=3 ttl=64 time=0.095 ms
How to resolve this? Please suggest. Thanks in advance, Bhushan
... View more
Labels:
- Hortonworks Data Platform (HDP)
12-18-2018
05:42 AM
Thanks @Geoffrey Shelton Okot for researching this. I resolved the issue by following the instructions at this link: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/configuring-atlas-sqoop-hook.html
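For anyone hitting the same ClassNotFoundException, the gist of that doc is roughly the following (a sketch only; the HDP 2.6.5 paths here are assumptions, so verify them against the linked doc and your installation):
# 1. Register the hook in sqoop-site.xml (via Ambari or manually):
#    sqoop.job.data.publish.class = org.apache.atlas.sqoop.hook.SqoopHook
# 2. Give Sqoop the Atlas client configuration:
cp /usr/hdp/current/atlas-server/conf/atlas-application.properties /etc/sqoop/conf/
# 3. Make the Atlas Sqoop hook jars visible on Sqoop's classpath:
ln -s /usr/hdp/current/atlas-server/hook/sqoop/*.jar /usr/hdp/current/sqoop-client/lib/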
... View more
12-17-2018
11:00 AM
Hi Team, We are using HDP-2.6.5 and are configuring Sqoop and Hive lineage using this doc: https://hortonworks.com/tutorial/cross-component-lineage-with-apache-atlas-across-apache-sqoop-hive-kafka-storm/#sqoop-and-hive-lineage
While running a sqoop import, we get the ClassNotFoundException below:
sqoop import --connect jdbc:mysql://vc-hdp-db001a.hdp.test.com/test --table test_table_sqoop1 --hive-import --hive-table test_hive_table4 --username root -P -m 1 --fetch-size 1
Warning: /usr/hdp/2.6.5.0-292/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/12/17 05:50:21 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.5.0-292
Enter password:
18/12/17 05:50:28 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
18/12/17 05:50:28 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
18/12/17 05:50:28 INFO manager.MySQLManager: Argument '--fetch-size 1' will probably get ignored by MySQL JDBC driver.
18/12/17 05:50:28 INFO tool.CodeGenTool: Beginning code generation
18/12/17 05:50:28 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test_table_sqoop1` AS t LIMIT 1
18/12/17 05:50:28 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test_table_sqoop1` AS t LIMIT 1
18/12/17 05:50:28 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.6.5.0-292/hadoop-mapreduce
Note: /tmp/sqoop-hdfs/compile/90ee7535be590b2e48c64709e9c0127d/test_table_sqoop1.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/12/17 05:50:29 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/90ee7535be590b2e48c64709e9c0127d/test_table_sqoop1.jar
18/12/17 05:50:29 WARN manager.MySQLManager: It looks like you are importing from mysql.
18/12/17 05:50:29 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
18/12/17 05:50:29 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
18/12/17 05:50:29 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
18/12/17 05:50:29 INFO mapreduce.ImportJobBase: Beginning import of test_table_sqoop1
18/12/17 05:50:30 INFO client.AHSProxy: Connecting to Application History server at p-hdp-m-r08-02.hdp.test.com/10.10.33.22:10200
18/12/17 05:50:30 INFO client.RequestHedgingRMFailoverProxyProvider: Looking for the active RM in [rm1, rm2]...
18/12/17 05:50:30 INFO client.RequestHedgingRMFailoverProxyProvider: Found active RM [rm1]
18/12/17 05:50:31 INFO db.DBInputFormat: Using read commited transaction isolation
18/12/17 05:50:31 INFO mapreduce.JobSubmitter: number of splits:1
18/12/17 05:50:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1544603908449_0008
18/12/17 05:50:32 INFO impl.YarnClientImpl: Submitted application application_1544603908449_0008
18/12/17 05:50:32 INFO mapreduce.Job: The url to track the job: http://p-hdp-m-r09-01.hdp.test.com:8088/proxy/application_1544603908449_0008/
18/12/17 05:50:32 INFO mapreduce.Job: Running job: job_1544603908449_0008
18/12/17 05:50:40 INFO mapreduce.Job: Job job_1544603908449_0008 running in uber mode : false
18/12/17 05:50:40 INFO mapreduce.Job: map 0% reduce 0%
18/12/17 05:50:48 INFO mapreduce.Job: map 100% reduce 0%
18/12/17 05:50:48 INFO mapreduce.Job: Job job_1544603908449_0008 completed successfully
18/12/17 05:50:48 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=172085
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=87
HDFS: Number of bytes written=172
HDFS: Number of read operations=4
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=6151
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=6151
Total vcore-milliseconds taken by all map tasks=6151
Total megabyte-milliseconds taken by all map tasks=25194496
Map-Reduce Framework
Map input records=6
Map output records=6
Input split bytes=87
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=68
CPU time spent (ms)=1220
Physical memory (bytes) snapshot=392228864
Virtual memory (bytes) snapshot=6079295488
Total committed heap usage (bytes)=610795520
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=172
18/12/17 05:50:48 INFO mapreduce.ImportJobBase: Transferred 172 bytes in 18.2966 seconds (9.4006 bytes/sec)
18/12/17 05:50:48 INFO mapreduce.ImportJobBase: Retrieved 6 records.
18/12/17 05:50:48 INFO mapreduce.ImportJobBase: Publishing Hive/Hcat import job data to Listeners
18/12/17 05:50:48 WARN mapreduce.PublishJobData: Unable to publish import data to publisher org.apache.atlas.sqoop.hook.SqoopHook
java.lang.ClassNotFoundException: org.apache.atlas.sqoop.hook.SqoopHook
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.sqoop.mapreduce.PublishJobData.publishJobData(PublishJobData.java:46)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:284)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:127)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:507)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
18/12/17 05:50:48 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `test_table_sqoop1` AS t LIMIT 1
18/12/17 05:50:48 INFO hive.HiveImport: Loading uploaded data into Hive
Logging initialized using configuration in jar:file:/usr/hdp/2.6.5.0-292/hive/lib/hive-common-1.2.1000.2.6.5.0-292.jar!/hive-log4j.properties
OK
Time taken: 4.355 seconds
Loading data to table default.test_hive_table4
Table default.test_hive_table4 stats: [numFiles=1, numRows=0, totalSize=172, rawDataSize=0]
OK
Time taken: 3.085 seconds
How to resolve it? Please suggest. Thanks in advance, Bhushan
... View more
Labels:
- Apache Atlas
- Apache Hive
- Apache Sqoop
11-29-2018
12:59 PM
@Jay Kumar SenSharma @Sandeep Nemuri @Arti Wadhwani...please suggest.
... View more
11-29-2018
12:46 PM
Hi @Nitin Shelke, I followed the steps mentioned at https://community.hortonworks.com/articles/42927/adding-kdc-administrator-credentials-to-the-ambari.html, but I am still getting the same error in the Ambari server log. Please suggest.
... View more
11-28-2018
07:27 AM
@Sampath Kumar, I do not have kadmin.local access. Is there any other way I can check for this?
... View more
11-28-2018
07:06 AM
@naveen sangam, I am also facing the same issue. Have you resolved it? Please suggest.
... View more
11-28-2018
06:46 AM
Hello Team, We are using HDP-2.6.5.0 and Ambari-2.6.2.2. While enabling Kerberos, we get the following error in the Ambari UI: "Admin session expiration error. Missing KDC administrator credentials. Please enter admin principal and password." Screenshot attached: kerb-admin-cred.png. At the same time, the Ambari server log shows the following error message:
28 Nov 2018 01:35:46,078 INFO [ambari-client-thread-39] AmbariManagementControllerImpl:4173 - Received action execution request, clusterName=hdpmedacist, request=isCommand :true, action :null, command :KERBEROS_SERVICE_CHECK, inputs :{HAS_RESOURCE_FILTERS=true}, resourceFilters: [RequestResourceFilter{serviceName='KERBEROS', componentName='null', hostNames=[]}], exclusive: false, clusterName :hdpmedacist
28 Nov 2018 01:35:47,516 ERROR [ambari-client-thread-39] KerberosHelperImpl:2232 - Cannot validate credentials: org.apache.ambari.server.serveraction.kerberos.KerberosMissingAdminCredentialsException: Missing KDC administrator credentials.
The KDC administrator credentials must be set as a persisted or temporary credential resource.This may be done by issuing a POST to the /api/v1/clusters/:clusterName/credentials/kdc.admin.credential API entry point with the following payload:
{
"Credential" : {
"principal" : "(PRINCIPAL)", "key" : "(PASSWORD)", "type" : "(persisted|temporary)"}
}
}
28 Nov 2018 01:35:47,516 ERROR [ambari-client-thread-39] BaseManagementHandler:67 - Bad request received: Missing KDC administrator credentials.
The KDC administrator credentials must be set as a persisted or temporary credential resource.This may be done by issuing a POST to the /api/v1/clusters/:clusterName/credentials/kdc.admin.credential API entry point with the following payload:
{
"Credential" : {
"principal" : "(PRINCIPAL)", "key" : "(PASSWORD)", "type" : "(persisted|temporary)"}
}
}
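For reference, the POST described in the log message might look like the following (the cluster name hdpmedacist comes from the log; the Ambari host, admin login, KDC admin principal, and password are placeholders):
curl -u admin -H "X-Requested-By: ambari" -X POST \
  -d '{ "Credential" : { "principal" : "admin/admin@EXAMPLE.COM", "key" : "kdc-admin-password", "type" : "temporary" } }' \
  http://ambari-server:8080/api/v1/clusters/hdpmedacist/credentials/kdc.admin.credential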
How to resolve this? Our Kerberos installation is stuck because of this. Please suggest. Thanks in advance, Bhushan
... View more
10-15-2018
12:54 PM
Hi All, We are using FreeIPA as our identity management system. We used the steps below to set up LDAP:
[root@ip-172-10-3-5 ~]# ambari-server setup-ldap
Using python /usr/bin/python
Setting up LDAP properties...
Primary URL* {host:port} (ip-172-10-21-121.us-west-2.compute.internal:389):
Secondary URL {host:port} :
Use SSL* [true/false] (false):
User object class* (posixAccount):
User name attribute* (uid):
Group object class* (posixGroup):
Group name attribute* (cn):
Group member attribute* (memberUid):
Distinguished name attribute* (dn):
Base DN* (dc=test,dc=freeipas,dc=com):
Referral method [follow/ignore] :
Bind anonymously* [true/false] (false):
Handling behavior for username collisions [convert/skip] for LDAP sync* (convert):
Manager DN* (uid=admin,cn=users,cn=accounts,dc=test,dc=freeipas,dc=com):
Enter Manager Password* :
Re-enter password:
====================
Review Settings
====================
authentication.ldap.managerDn: uid=admin,cn=users,cn=accounts,dc=test,dc=freeipas,dc=com
authentication.ldap.managerPassword: *****
Save settings [y/n] (y)? y
Saving...done
Ambari Server 'setup-ldap' completed successfully.
[root@ip-172-10-3-5 ~]#
While syncing users/groups from FreeIPA to Ambari, we are getting the following error:
[root@ip-172-10-3-5 ~]# ambari-server sync-ldap --all
Using python /usr/bin/python
Syncing with LDAP...
Enter Ambari Admin login: admin
Enter Ambari Admin password:
Syncing all.ERROR: Exiting with exit code 1.
REASON: Sync event creation failed. Error details: HTTP Error 403: Login Failed: More than one user with that username found, please work with your Ambari Administrator to adjust your LDAP configuration
[root@ip-172-10-3-5 ~]#
Also, at the Ambari web UI login we get the following error:
Login Failed: More than one user with that username found, please work with your Ambari Administrator to adjust your LDAP configuration
Screenshot attached: freeipa-admin.png
How should we resolve this error? Please suggest. Thanks in advance, Bhushan
... View more
Labels:
- Apache Ambari
08-31-2018
07:09 AM
@Felix Albani Please suggest.
... View more
08-29-2018
05:44 AM
@Felix Albani, our 2-way SSL is working properly. Also, the Hive public certificate is present in the Ranger Admin truststore. The Ranger Hive repo screenshot is attached: hive-repo.png. Please suggest.
... View more
08-29-2018
05:39 AM
Thanks @Felix Albani, I am able to configure 2-way SSL, but 1-way SSL is not working in HDP-2.5.6. Also, we have configured HiveServer2 HA. What should the Common Name For Certificate value be in the Ranger Policy Manager UI for the Hive repository? Currently, the CN for one HiveServer2 is hmaster.test.org and for the other it is hmaster2.test.org. Please suggest.
... View more
08-28-2018
10:46 AM
Hello Team, We are using HDP-2.5.6 without Kerberos security. We have configured SSL for the HiveServer2 daemon and enabled the Ranger plugin for the Hive service. When we click Test Connection in Ranger's Hive repository, it gives the following error:
---------------------------------------------------------------------------------------
Connection Failed.
Unable to retrieve any files using given parameters, You can still save the repository and start creating policies, but you would not be able to use autocomplete for resource names. Check ranger_admin.log for more info.
org.apache.ranger.plugin.client.HadoopException: Unable to connect to Hive Thrift Server instance.. Unable to connect to Hive Thrift Server instance.. Could not open client transport with JDBC Uri: jdbc:hive2://hmaster.test.com:10001: null.
-------------------------------------------------------------------------------------
Because of this, Ranger's resource-name autocomplete feature is not working. How to resolve it? Please suggest. Thanks, Bhushan
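For context, when HiveServer2 runs with SSL, the repository's JDBC URL usually has to carry the SSL parameters explicitly; a hypothetical example for this setup (the truststore path and password are placeholders) is:
jdbc:hive2://hmaster.test.com:10001/;ssl=true;sslTrustStore=/etc/security/serverKeys/truststore.jks;trustStorePassword=changeit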
... View more
Labels:
- Apache Hive
- Apache Ranger