Member since: 07-17-2017
Posts: 20
Kudos Received: 0
Solutions: 0
10-23-2018
08:09 AM
Hi Friends, I am trying to run an Oozie workflow with Hive actions. While running the workflow, Oozie identifies the user as "userId@domain.com" instead of "userid", and it tries to create an inode on HDFS for that user, i.e. hdfs://Hadoop1/user/userId@domain.com instead of hdfs://Hadoop1/user/userId. How can I explicitly point Oozie to use hdfs://Hadoop1/user/userId as the inode/Oozie temp location?
Error log:
Caused by: org.apache.hadoop.ipc.RemoteException (org.apache.hadoop.security.AccessControlException): Permission denied: user=d12345, access=WRITE, inode="/user/412345@abc.com/oozie-hdp1/0000-00-oozie-hdp1-W/create_external_table--hive.tmp":hdfs:hdfs:drwxr-xr-x
Thanks in advance!
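A common way to make Hadoop (and therefore Oozie) resolve a principal like "userId@domain.com" to its short name is an auth_to_local mapping in core-site.xml. The snippet below is only an illustrative sketch: the realm ABC.COM is a placeholder, and the rule must be adapted to the actual principal format in your environment.

```xml
<!-- core-site.xml: illustrative sketch; ABC.COM is a placeholder realm -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@ABC.COM)s/@ABC.COM//
    DEFAULT
  </value>
</property>
```

With such a rule in place, the short name would be used when resolving the user's HDFS home directory.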
Labels:
- Apache Hadoop
- Apache Oozie
10-04-2018
07:19 AM
Getting null values while parsing, like: {"longitude":"\"{'longitude': '-118.23721'","latitude":null,"needs_recoding":null}
10-04-2018
07:05 AM
Not able to parse the field "Starting Lat-Long" with "struct<longitude:string,latitude:string,needs_recoding:string>".
hive> describe formatted hivelearn1;
OK
col_name data_type comment
# col_name data_type comment
trip_id int
duration bigint
starttime string
endtime string
starting_station_id bigint
start_station_lat double
start_st_long double
ending_st_id bigint
ending_st_lat double
ending_st_long double
bike_id int
plan_duration bigint
trip_route string
pass_type string
start_lat_long struct<longitude:string,latitude:string,needs_recoding:string>
# Detailed Table Information
Database: shareride
Owner: hdf62-hdfs
CreateTime: Thu Oct 04 07:12:16 BST 2018
LastAccessTime: UNKNOWN
Protect Mode: None
Retention: 0
Location: hdfs://hdf62/apps/hive/warehouse/shareride.db/motorshare
Table Type: MANAGED_TABLE
Table Parameters:
COLUMN_STATS_ACCURATE false
last_modified_by hdf62-hdfs
last_modified_time 1538636103
numFiles 1
numRows -1
rawDataSize -1
skip.header.line.count 1
totalSize 36376430
transient_lastDdlTime 1538636103
# Storage Information
SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat: org.apache.hadoop.mapred.TextInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
field.delim ,
serialization.format ,
Sample data (header and three records):
Trip ID,Duration,Start Time,End Time,Starting Station ID,Starting Station Latitude,Starting Station Longitude,Ending Station ID,Ending Station Latitude,Ending Station Longitude,Bike ID,Plan Duration,Trip Route Category,Passholder Type,Starting Lat-Long
1912818,180,2016-07-07T04:17:00,2016-07-07T04:20:00,3014,34.05661,-118.237,3014,34.05661,-118.237,6281,30,Round Trip,Monthly Pass,{'longitude': '-118.23721', 'latitude': '34.0566101', 'needs_recoding': False}
1919661,1980,2016-07-07T06:00:00,2016-07-07T06:33:00,3014,34.05661,-118.237,3014,34.05661,-118.237,6281,30,Round Trip,Monthly Pass,{'longitude': '-118.23721', 'latitude': '34.0566101', 'needs_recoding': False}
1933383,300,2016-07-07T10:32:00,2016-07-07T10:37:00,3016,34.052898,-118.242,3016,34.0529,-118.242,5861,365,Round Trip,Flex Pass,{'longitude': '-118.24156', 'latitude': '34.0528984', 'needs_recoding': False}
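The nulls in the struct column are consistent with how a plain comma delimiter treats this data: the commas inside the {'longitude': ...} value also count as field separators, so the last column is torn apart. A small Python sketch (illustrative only, not Hive code) reproduces the effect on the first record:

```python
# Mimic splitting on field.delim=',' as a delimited SerDe effectively
# does. The dict in the last column contains two commas, so a
# 15-column record splits into 17 pieces.
row = ("1912818,180,2016-07-07T04:17:00,2016-07-07T04:20:00,3014,"
       "34.05661,-118.237,3014,34.05661,-118.237,6281,30,Round Trip,"
       "Monthly Pass,{'longitude': '-118.23721', "
       "'latitude': '34.0566101', 'needs_recoding': False}")

fields = row.split(",")
print(len(fields))   # 17, not 15
print(fields[14])    # {'longitude': '-118.23721'  -- truncated struct value
```

This is why the struct's latitude and needs_recoding members come back null: the column value handed to the struct parser is already cut off at the first embedded comma.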
Labels:
- Apache Hive
09-27-2018
08:05 AM
@Artem Ervits Could you please help me here.
09-26-2018
09:45 PM
Hi folks, could you please help me find a Hive DDL for the following type of records? It is a CSV file that contains JSON:
Header ==> field1, field2, field3, field4
Data ====> value1, value2, {'field31':'value31','field32':'value32'}, value4
Thanks!
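For illustration only, one pre-processing idea is to split each line on commas that are not inside {...}, so the embedded JSON survives as a single column before it reaches Hive (for example, to re-delimit the file with a character that never appears in the data). The function below is a naive sketch and does not handle nested braces or commas in quoted strings outside the braces:

```python
import re

# Hypothetical pre-processing sketch: split on commas that are NOT
# followed (without an intervening '{') by a closing '}', i.e. commas
# outside any {...} span.
def split_outside_braces(line):
    return re.split(r",\s*(?![^{]*\})", line)

data = "value1, value2, {'field31':'value31','field32':'value32'}, value4"
print(split_outside_braces(data))
# ['value1', 'value2', "{'field31':'value31','field32':'value32'}", 'value4']
```

Once the JSON is isolated as one column, it can be kept as a string in Hive and queried with JSON functions, or re-delimited so that a MAP column can be parsed.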
Labels:
- Apache Hive
09-26-2018
09:43 PM
Hi folks, I am trying to load the following data into a Hive table, but loading the JSON value (the host_attributes column, which is a JSON object) fails:
host_id,host_name,cpu_count,ph_cpu_count,cpu_info,discovery_status,host_attributes,ipv4,ipv6,public_host_name,last_registration_time,os_arch,os_info,os_type,rack_info,total_mem
1,abc1103.xyz.com,8,8,,,{"interfaces":"ens192,lo","os_family":"redhat","kernel":"Linux","timezone":"GMT","kernel_release":"3.10.0-862.11.6.el7.x86_64","os_release_version":"7.5","physicalprocessors_count":"8","hardware_isa":"x86_64","kernel_majorversion":"3.10","kernel_version":"3.10.0","netmask":"255.255.255.0","mac_address":"xx:xx:xa:aa","swap_free":"8.00 GB","swap_size":"8.00 GB","selinux_enabled":"true","hardware_model":"x86_64","processors_count":"8"},1.2.3.4,0.0.0.0,abc1103.xyz.com,1536923595397,x86_64,,redhat7,/926,61680136
I have tried the following DDL:
create external table hosts ( host_id int, hostname string, cpu_count int, ph_cpu_count int, cpu_info string, discovery_status string, host_attributes MAP<string,string>, ipv4 string, ipv6 string, public_host_name string, last_registration_time bigint, os_arch string, os_info string, os_type string, rack_info string, total_mem bigint )
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' COLLECTION ITEMS TERMINATED BY ',' MAP KEYS TERMINATED BY ':' LINES TERMINATED BY '\n' LOCATION '/user/test'
I also tried altering host_attributes with the following query:
alter table hosts change host_attributes host_attributes struct<interfaces:string,os_family:string,kernel:string,timezone:string,kernel_release:string,os_release_version:float,physicalprocessors_count:int,hardware_isa:string,kernel_majorversion:string,kernel_version:string,netmask:string,mac_address:string,swap_free:string,swap_size:string,selinux_enabled:string,hardware_model:string,processors_count:int> after discovery_status;
What is the appropriate Hive DDL for this data? Thanks in advance!
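As an illustration of why a fixed MAP delimiter struggles here: the host_attributes payload is real JSON whose values themselves contain ',' and ':', so splitting on those characters mangles it, while a JSON parser reads it cleanly. The Python sketch below (using an abridged copy of the row, not Hive code) extracts the {...} span and parses it:

```python
import json

# Abridged copy of the row above (shortened for clarity). The
# host_attributes JSON is the span between the outermost braces;
# json.loads parses it even though its values contain ',' and ':'.
row = ('1,abc1103.xyz.com,8,8,,,{"interfaces":"ens192,lo",'
       '"os_family":"redhat","kernel":"Linux"},1.2.3.4,0.0.0.0')

start, end = row.index("{"), row.rindex("}") + 1
attrs = json.loads(row[start:end])
print(attrs["interfaces"])   # ens192,lo  -- the embedded comma survives
```

A MAP KEYS TERMINATED BY ':' definition, by contrast, would split "interfaces":"ens192,lo" at the wrong comma and colon boundaries, which matches the load failure described above.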
Labels:
- Apache Hive
09-21-2018
10:58 AM
Hi, to install the Spark2 components below, I need to choose a server for each of them (master/slave/edge node):
SPARK2_JOBHISTORYSERVER
LIVY2_SERVER
SPARK2_THRIFTSERVER
We have 3 master nodes on our cluster. Can anyone help me understand which component has to be installed on which node? Can LIVY2_SERVER be installed on edge nodes? Thanks.
Labels:
- Apache Spark
06-22-2018
04:10 PM
I am failing to start HBase because of a directory permission issue. I get the error: "org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=hbase, access=EXECUTE, inode="/apps/hbase/data/hbase.version":pu86-hbase:pu86-app-hdfs:drwx------". I have already set up my HBase service account [hbase-env/hbase_user] and HBase supergroup as "pu86-hbase", but while accessing the inode it uses "hbase" as the user. What config changes are expected so that the "pu86-hbase" user is used for accessing the inode "/apps/hbase/data/hbase.version"?
Labels:
- Apache HBase
06-14-2018
12:25 PM
Thanks @vperiasamy. We are restricted to HDP 2.4.3.0 and it is a Kerberos environment. I have followed the same document for configuration.
06-12-2018
08:46 AM
Hi Friends, the HBase master fails to start after enabling Ranger plugin SSL, giving the error "No trusted certificate found". Please help me understand which keystore/truststore properties/configs/files I should look into. Please find the detailed log below:
2018-06-11 08:08:11,891 ERROR [Thread-106] util.PolicyRefresher: PolicyRefresher(serviceName=hdp_hbase): failed to refresh policies. Will continue to use last known version of policies (-1)
com.sun.jersey.api.client.ClientHandlerException: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: No trusted certificate found
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
at com.sun.jersey.api.client.Client.handle(Client.java:648)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at com.sun.jersey.api.client.WebResource$Builder.get(WebResource.java:503)
at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:73)
at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:215)
at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:183)
at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:156)
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: No trusted certificate found
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1959)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:296)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1514)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:216)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1026)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:961)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1072)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1413)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1397)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1546)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:240)
at com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
... 8 more
Caused by: sun.security.validator.ValidatorException: No trusted certificate found
at sun.security.validator.SimpleValidator.buildTrustedChain(SimpleValidator.java:397)
at sun.security.validator.SimpleValidator.engineValidate(SimpleValidator.java:134)
at sun.security.validator.Validator.validate(Validator.java:260)
at sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:324)
at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:229)
at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:124)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1496)
... 23 more
Labels:
- Apache HBase
- Apache Ranger
09-27-2017
08:52 AM
Hi @Shahrukh Khan, thanks for your reply. After referring to the documents mentioned above, I am able to start my Storm and Kafka services; the issues were solved by properly configuring the JAAS files. But the Ambari Metrics Collector still does not start, and its HBase master stops soon after starting. For the Metrics Collector we get the following logs repeatedly:
/var/log/ambari-metrics-collector/ambari-metrics-collector.log
2017-09-27 09:43:00,001 ERROR org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer: RECEIVED SIGNAL 15: SIGTERM (not sure what this error indicates)
2017-09-27 09:47:40,572 INFO org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception, tries=21, retries=35, started=270488 ms ago, cancelled=false, msg=
2017-09-27 09:48:00,659 INFO org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception, tries=22, retries=35, started=290575 ms ago, cancelled=false, msg=
2017-09-27 09:48:20,693 INFO org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception, tries=23, retries=35, started=310609 ms ago, cancelled=false, msg=
/var/log/ambari-metrics-collector/hbase-master.log
2017-09-27 09:43:06,975 ERROR [main] master.HMasterCommandLine: Master exiting
java.io.IOException: Could not start ZK with 3 ZK servers in local mode deployment. Aborting as clients (e.g. shell) will not be able to find this ZK quorum.
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:175)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2451)
09-21-2017
01:31 PM
I have kerberized an HDP 2.4 cluster using the Ambari REST API, and all services are running fine except Storm, Kafka and the Ambari Metrics Collector. All keytabs are available and properly placed on their respective hosts. From the logs, what I understand is that these services fail to connect to the zkclient or ZK quorum. All ZooKeeper servers have been running fine for a long time, and if I telnet I can connect to the ZK quorum on the same port (2181). So somewhere I am missing some configuration for these services connecting to ZooKeeper in a kerberized environment (or SASL configurations).
ZooKeeper logs:
2017-09-21 12:14:47,271 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /128.160.120.21:41906 (no session established for client)
2017-09-21 12:15:43,963 - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@494] - Processed session termination for sessionid: 0x35ea3480411005c
2017-09-21 12:15:47,309 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /128.160.120.21:42000
2017-09-21 12:15:47,310 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x0, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:748)
2017-09-21 12:15:47,310 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /128.160.120.21:42000 (no session established for client)
2017-09-21 12:16:47,273 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@197] - Accepted socket connection from /128.160.120.21:42122
2017-09-21 12:16:47,275 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x0, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:748)
2017-09-21 12:16:47,275 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for client /128.160.120.21:42122 (no session established for client)
Kafka server logs:
advertised.listeners = PLAINTEXTSASL://abctestlab0515.bdaas.com:6667
leader.imbalance.per.broker.percentage = 10
(kafka.server.KafkaConfig)
[2017-09-21 12:07:30,276] INFO starting (kafka.server.KafkaServer)
[2017-09-21 12:07:30,291] INFO Connecting to zookeeper on abctestlab0512.bdaas.com:2181,abctestlab0515.bdaas.com:2181,abctestlab0513.bdaas.com:2181 (kafka.server.KafkaServer)
[2017-09-21 12:11:40,363] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 250000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:155)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:129)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:278)
at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-09-21 12:11:40,364] INFO shutting down (kafka.server.KafkaServer)
[2017-09-21 12:11:40,370] INFO shut down completed (kafka.server.KafkaServer)
[2017-09-21 12:11:40,370] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 250000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:155)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:129)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:278)
at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-09-21 12:11:40,372] INFO shutting down (kafka.server.KafkaServer)
Storm DRPC logs:
2017-09-20 14:21:06.114 o.a.s.z.s.ZooKeeperServer [INFO] Server environment:user.dir=/home/hdp44-storm
2017-09-20 14:21:07.304 b.s.u.Utils [INFO] Using defaults.yaml from resources
2017-09-20 14:21:07.324 b.s.u.Utils [INFO] Using storm.yaml from resources
2017-09-20 14:21:07.373 b.s.d.drpc [INFO] Starting Distributed RPC servers...
2017-09-20 14:21:07.450 b.s.s.a.k.ServerCallbackHandler [WARN] No password found for user: null
2017-09-20 14:21:07.452 b.s.s.a.k.KerberosSaslTransportPlugin [ERROR] Server failed to login in principal:javax.security.auth.login.LoginException: No password provided
javax.security.auth.login.LoginException: No password provided
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:919) ~[?:1.8.0_131]
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:760) ~[?:1.8.0_131]
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617) ~[?:1.8.0_131]
Labels:
- Apache Ambari
- Apache Kafka
- Apache Storm
09-21-2017
07:57 AM
Hi @ssharma, the issue is resolved; it was fixed when I added the following property to core-site.xml: ipc.client.fallback-to-simple-auth-allowed=true. I think this issue occurs when you are dealing with a cluster which was earlier kerberized and then de-kerberized, or vice versa. Thanks, Ajit
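For reference, the property mentioned above as it would appear in core-site.xml (setting it to true allows a client configured for Kerberos to fall back to SIMPLE auth when the server it contacts is not kerberized):

```xml
<property>
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>
```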
09-19-2017
07:54 AM
I am trying to kerberize the cluster, and all machines are joined via Centrify. I checked that without kerberization all services run fine, but after kerberizing the cluster the server logs show the following error: Failed on local exception:
java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.; Host Details : local host is ; destination host is
Labels:
- Apache Hadoop
09-19-2017
07:45 AM
Thanks @Robert Levas, problem solved with your solution.
09-18-2017
12:51 PM
I am trying to start ZKFC from Ambari, but it fails while executing the command: hdfs zkfc -formatZK -nonInteractive. All 3 ZooKeeper servers are in the running state per the Ambari dashboard, and port 2181 is listening. But when I tried:
telnet abctestlab0512.bdaas.com 2181
Trying x.x.x.x...
Connected to abctestlab0512.bdaas.com
Escape character is '^]'.
Connection closed by foreign host.
It seems the connection is being closed immediately. Check the following ZKFC logs:
17/09/18 07:36:34 INFO zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.4.3.0-227/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.3.0-227/hadoop/lib/native
17/09/18 07:36:34 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
17/09/18 07:36:34 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
17/09/18 07:36:34 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
17/09/18 07:36:34 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
17/09/18 07:36:34 INFO zookeeper.ZooKeeper: Client environment:os.version=3.10.0-514.16.1.el7.x86_64
17/09/18 07:36:34 INFO zookeeper.ZooKeeper: Client environment:user.name=root
17/09/18 07:36:34 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
17/09/18 07:36:34 INFO zookeeper.ZooKeeper: Client environment:user.dir=/var/log/hadoop/hdp44-hdfs
17/09/18 07:36:34 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=abctestlab0512.bdaas.com:2181,abctestlab0513.bdaas.com:2181,abctestlab0515.bdaas.com:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@610f7aa
17/09/18 07:36:34 INFO zookeeper.ClientCnxn: Opening socket connection to server abctestlab0515.bdaas.com/192.120.10.24:2181. Will not attempt to authenticate using SASL (unknown error)
17/09/18 07:36:34 INFO zookeeper.ClientCnxn: Socket connection established to abctestlab0515.bdaas.com/192.120.10.24:2181, initiating session
17/09/18 07:36:34 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
17/09/18 07:36:34 INFO zookeeper.ClientCnxn: Opening socket connection to server abctestlab0513.bdaas.com/192.120.10.22:2181. Will not attempt to authenticate using SASL (unknown error)
17/09/18 07:36:34 INFO zookeeper.ClientCnxn: Socket connection established to abctestlab0513.bdaas.com/192.120.10.22:2181, initiating session
17/09/18 07:36:34 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
17/09/18 07:36:35 INFO zookeeper.ClientCnxn: Opening socket connection to server abctestlab0512.bdaas.com/192.120.10.21:2181. Will not attempt to authenticate using SASL (unknown error)
17/09/18 07:36:35 INFO zookeeper.ClientCnxn: Socket connection established to abctestlab0512.bdaas.com/192.120.10.21:2181, initiating session
17/09/18 07:36:35 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
17/09/18 07:36:37 INFO zookeeper.ClientCnxn: Opening socket connection to server abctestlab0515.bdaas.com/192.120.10.24:2181. Will not attempt to authenticate using SASL (unknown error)
17/09/18 07:36:37 INFO zookeeper.ClientCnxn: Socket connection established to abctestlab0515.bdaas.com/192.120.10.24:2181, initiating session
17/09/18 07:36:37 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
17/09/18 07:36:38 INFO zookeeper.ClientCnxn: Opening socket connection to server abctestlab0513.bdaas.com/192.120.10.22:2181. Will not attempt to authenticate using SASL (unknown error)
17/09/18 07:36:38 INFO zookeeper.ClientCnxn: Socket connection established to abctestlab0513.bdaas.com/192.120.10.22:2181, initiating session
17/09/18 07:36:38 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
17/09/18 07:36:38 INFO zookeeper.ClientCnxn: Opening socket connection to server abctestlab0512.bdaas.com/192.120.10.21:2181. Will not attempt to authenticate using SASL (unknown error)
17/09/18 07:36:38 INFO zookeeper.ClientCnxn: Socket connection established to abctestlab0512.bdaas.com/192.120.10.21:2181, initiating session
17/09/18 07:36:38 INFO zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
17/09/18 07:36:39 ERROR ha.ActiveStandbyElector: Connection timed out: couldn't connect to ZooKeeper in 5000 milliseconds
17/09/18 07:36:39 INFO zookeeper.ZooKeeper: Session: 0x0 closed
17/09/18 07:36:39 FATAL ha.ZKFailoverController: Unable to start failover controller. Unable to connect to ZooKeeper quorum at abctestlab0512.bdaas.com:2181,abctestlab0513.bdaas.com:2181,abctestlab0515.bdaas.com:2181. Please check the configured value for ha.zookeeper.quorum and ensure that ZooKeeper is running.
17/09/18 07:36:39 INFO zookeeper.ClientCnxn: EventThread shut down
17/09/18 07:36:39 INFO tools.DFSZKFailoverController: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DFSZKFailoverController at abctestlab0512.bdaas.com/192.120.10.21
************************************************************/
Labels:
- Apache Hadoop
07-23-2017
05:48 PM
Thanks Robert for your quick reply. Is there any REST API or Ambari Blueprint option which supports the "Manual" way of kerberization?
07-17-2017
10:45 AM
Hi All, we are trying to kerberize the cluster using Centrify with pre-created AD accounts and keytabs. So far we are able to kerberize with the following approach:
1. Generate computer accounts in AD and Centrify using APIs (we can access AD or Centrify only through APIs).
2. Do "adjoin" after creating the computer accounts in AD and Centrify.
3. Create principals and keytabs for users and services in AD/Centrify.
4. Place the user and service keytabs on the respective hosts in /etc/security/keytabs.
5. From the Ambari UI: Enable Security -> Existing Active Directory.
But it reaches the point of creating principals and then fails. So, is there any procedure which can skip the "create principal" and "create keytabs" steps, as these are already created and placed on the respective hosts?
Labels:
- Apache Hadoop