Member since: 10-29-2015
Posts: 41
Kudos Received: 16
Solution: 1

My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 961 | 06-21-2016 03:31 AM |
04-09-2017
12:06 PM
Hi all, I can't find where to run the unit tests for row-level filter and column masking, and when I add a row-level filter policy to the hive-agent test resources, it doesn't work. How can I do this? I'm using Ranger 0.6.
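For reference, a minimal sketch of running the hive-agent module's unit tests from an Apache Ranger source checkout (standard Maven flags; the module name is taken from the question above, the checkout path is an assumption):

cd ranger                      # path to the Ranger 0.6 source tree (assumption)
mvn -pl hive-agent -am test    # build required modules (-am) and run only the hive-agent tests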
Labels:
- Apache Ranger
03-01-2017
02:37 AM
Thanks. I'm sure I've added the same properties and I'm using the hbase user, but I find that policy updates for other services also don't work. Here is the Ranger access log:
192.168.55.205 - - [01/Mar/2017:10:00:03 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_hbase?lastKnownVersion=4&pluginId=hbaseMaster@bigdata6-venus_bigdata_hbase HTTP/1.1" 401 -
192.168.55.205 - - [01/Mar/2017:10:00:03 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_hive?lastKnownVersion=3&pluginId=hiveServer2@bigdata6-venus_bigdata_hive HTTP/1.1" 401 -
192.168.55.206 - - [01/Mar/2017:10:00:04 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_hbase?lastKnownVersion=4&pluginId=hbaseRegional@bigdata7-venus_bigdata_hbase HTTP/1.1" 401 -
192.168.55.207 - - [01/Mar/2017:10:00:04 +0800] "GET /service/plugins/policies/download/venus_bigdata_storm?lastKnownVersion=3&pluginId=storm@bigdata8-venus_bigdata_storm HTTP/1.1" 304 -
192.168.55.205 - - [01/Mar/2017:10:00:05 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_hadoop?lastKnownVersion=2&pluginId=hdfs@bigdata6-venus_bigdata_hadoop HTTP/1.1" 401 -
192.168.55.206 - - [01/Mar/2017:10:00:05 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_hadoop?lastKnownVersion=2&pluginId=hdfs@bigdata7-venus_bigdata_hadoop HTTP/1.1" 401 -
192.168.55.207 - - [01/Mar/2017:10:00:05 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_hbase?lastKnownVersion=4&pluginId=hbaseRegional@bigdata8-venus_bigdata_hbase HTTP/1.1" 401 -
192.168.55.205 - - [01/Mar/2017:10:00:05 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_yarn?lastKnownVersion=2&pluginId=yarn@bigdata6-venus_bigdata_yarn HTTP/1.1" 401 -
192.168.55.208 - - [01/Mar/2017:10:00:07 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_hbase?lastKnownVersion=4&pluginId=hbaseRegional@bigdata9-venus_bigdata_hbase HTTP/1.1" 401 -
192.168.55.205 - - [01/Mar/2017:10:00:08 +0800] "GET /login.jsp HTTP/1.1" 200 3325
192.168.55.207 - - [01/Mar/2017:10:00:11 +0800] "GET /service/plugins/policies/download/venus_bigdata_storm?lastKnownVersion=3&pluginId=storm@bigdata8-venus_bigdata_storm HTTP/1.1" 304 -
192.168.55.208 - - [01/Mar/2017:10:00:13 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_kafka?lastKnownVersion=4&pluginId=kafka@bigdata9-venus_bigdata_kafka HTTP/1.1" 401 -
192.168.55.206 - - [01/Mar/2017:10:00:13 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_kafka?lastKnownVersion=4&pluginId=kafka@bigdata7-venus_bigdata_kafka HTTP/1.1" 401 -
192.168.55.208 - - [01/Mar/2017:10:00:13 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_kafka?lastKnownVersion=4&pluginId=kafka@bigdata9-venus_bigdata_kafka HTTP/1.1" 304 -
192.168.55.206 - - [01/Mar/2017:10:00:13 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_kafka?lastKnownVersion=4&pluginId=kafka@bigdata7-venus_bigdata_kafka HTTP/1.1" 304 -
192.168.55.205 - - [01/Mar/2017:10:00:33 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_hbase?lastKnownVersion=4&pluginId=hbaseMaster@bigdata6-venus_bigdata_hbase HTTP/1.1" 401 -
192.168.55.205 - - [01/Mar/2017:10:00:33 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_hive?lastKnownVersion=3&pluginId=hiveServer2@bigdata6-venus_bigdata_hive HTTP/1.1" 401 -
02-28-2017
10:03 AM
I use HDP 2.5.3.0 and the cluster is Kerberized. The properties you mentioned have been added, but it doesn't work.
02-28-2017
09:09 AM
When I send the request with curl -i --negotiate -u hbase "http://bigdata6:6080/service/plugins/secure/policies/download/venus_bigdata_hbase?lastKnownVersion=4&pluginId=hbaseMaster@bigdata6-venus_bigdata_hbase", it returns the results, so I don't understand how the HBase plugin's request differs.
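For comparison, the HBase plugin downloads policies as the HBase service principal rather than a headless user (the later HBase log shows user=hbase/bigdata6@VENUS.COM), so a closer reproduction of its request is the sketch below; the keytab path is an assumption based on a typical HDP Kerberos layout:

kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/bigdata6@VENUS.COM
curl -i --negotiate -u : "http://bigdata6:6080/service/plugins/secure/policies/download/venus_bigdata_hbase?lastKnownVersion=4&pluginId=hbaseMaster@bigdata6-venus_bigdata_hbase"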
02-28-2017
07:53 AM
Thanks @Deepak Sharma. I found the problem in the HBase log:
2017-02-28 15:48:58,330 ERROR [Thread-74] client.RangerAdminRESTClient: Error getting policies. secureMode=true, user=hbase/bigdata6@VENUS.COM (auth:KERBEROS), response={"httpStatusCode":401,"statusCode":0}, serviceName=venus_bigdata_hbase
2017-02-28 15:48:58,330 ERROR [Thread-74] util.PolicyRefresher: PolicyRefresher(serviceName=venus_bigdata_hbase): failed to refresh policies. Will continue to use last known version of policies (4)
java.lang.Exception: HTTP 401
at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:126)
at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:232)
at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:188)
at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:158)
And the Ranger access log:
192.168.55.205 - - [28/Feb/2017:15:18:24 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_hbase?lastKnownVersion=4&pluginId=hbaseMaster@bigdata6-venus_bigdata_hbase HTTP/1.1" 401 -
192.168.55.206 - - [28/Feb/2017:15:18:24 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_hbase?lastKnownVersion=4&pluginId=hbaseRegional@bigdata7-venus_bigdata_hbase HTTP/1.1" 401 -
192.168.55.207 - - [28/Feb/2017:15:18:25 +0800] "GET /service/plugins/secure/policies/download/venus_bigdata_hbase?lastKnownVersion=4&pluginId=hbaseRegional@bigdata8-venus_bigdata_hbase HTTP/1.1" 401 -
The Ranger audit page has no recent edit history, and the same is true for the other services. I don't know how to solve this. Please give me some advice, thanks.
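As a diagnostic sketch for the 401s on the secure download endpoint, one thing worth checking is Ranger Admin's own SPNEGO principal and keytab (paths and property names below assume an Ambari-managed HDP layout and may differ):

grep -A1 'ranger.spnego.kerberos' /etc/ranger/admin/conf/ranger-admin-site.xml    # SPNEGO principal/keytab used by Ranger Admin
klist -kt /etc/security/keytabs/spnego.service.keytab                             # confirm the keytab contains HTTP/<ranger-host> entries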
02-27-2017
01:42 AM
How do I check that? I've restarted Ranger many times, and it doesn't work.
02-24-2017
09:51 AM
2 Kudos
Hi, the Atlas Metadata Server fails to start, and I found that the reason is that the HBase table grant operation was denied by Ranger. The doc says that the permissions do not include grant. I don't know why. The audit log and Ranger policy are attached; here is the log:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py", line 231, in <module>
MetadataServer().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 720, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py", line 92, in start
user=params.hbase_user
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-venus_bigdata@VENUS.COM; cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1.
atlas_titan
ATLAS_ENTITY_AUDIT_EVENTS
atlas
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
access_tracker
alertDataSource
alertExecutor
alertStream
alertStreamSchema
alertdef
alertdetail
atlas_titan
eagle_metric
eaglehdfs_alert
enrichment
fileSensitivity
hiveResourceSensitivity
ipzone
mlmodel
pcap
pcapfiles
streamMetadata
streamdef
t
threatintel
userprofile
23 row(s) in 0.3190 seconds
nil
TABLE
ATLAS_ENTITY_AUDIT_EVENTS
access_tracker
alertDataSource
alertExecutor
alertStream
alertStreamSchema
alertdef
alertdetail
atlas_titan
eagle_metric
eaglehdfs_alert
enrichment
fileSensitivity
hiveResourceSensitivity
ipzone
mlmodel
pcap
pcapfiles
streamMetadata
streamdef
t
threatintel
userprofile
23 row(s) in 0.0170 seconds
nil
java exception
ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.security.AccessDeniedException: org.apache.hadoop.security.AccessControlException: Permission denied.
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.grant(RangerAuthorizationCoprocessor.java:1168)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.grant(AccessControlProtos.java:9933)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10097)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7717)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1897)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1879)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32299)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied.
at org.apache.ranger.admin.client.RangerAdminRESTClient.grantAccess(RangerAdminRESTClient.java:168)
at org.apache.ranger.plugin.service.RangerBasePlugin.grantAccess(RangerBasePlugin.java:308)
at org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor.grant(RangerAuthorizationCoprocessor.java:1161)
... 11 more
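For context, a sketch of the kind of grant the Atlas setup script issues through "hbase shell -n" (the exact user and permission string are assumptions, not taken from atlas_hbase_setup.rb); it is this grant call that Ranger's coprocessor rejects unless the calling user is authorized for admin on the table in the Ranger HBase policy:

echo "grant 'atlas', 'RWXCA', 'atlas_titan'" | hbase shell -n    # illustrative only; user, permissions, and table are assumptions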
Labels:
- Apache Atlas
- Apache HBase
- Apache Ranger
02-23-2017
02:18 AM
Yes, thank you. The Knox host and Ambari host should have the same domain suffix. I've solved this.
02-22-2017
09:29 AM
17/02/22 17:46:07 ||f67516cd-e553-43c8-9666-4dfd95b63a3c|audit|KNOXSSO||||access|uri|/gateway/knoxsso/api/v1/websso?originalUrl=http://bigdata6:8080/|unavailable|Request method: POST
17/02/22 17:46:07 ||f67516cd-e553-43c8-9666-4dfd95b63a3c|audit|KNOXSSO|venus|||authentication|uri|/gateway/knoxsso/api/v1/websso?originalUrl=http://bigdata6:8080/|success|
17/02/22 17:46:07 ||f67516cd-e553-43c8-9666-4dfd95b63a3c|audit|KNOXSSO|venus|||authentication|uri|/gateway/knoxsso/api/v1/websso?originalUrl=http://bigdata6:8080/|success|Groups: []
17/02/22 17:46:07 ||f67516cd-e553-43c8-9666-4dfd95b63a3c|audit|KNOXSSO|venus|||access|uri|/gateway/knoxsso/api/v1/websso?originalUrl=http://bigdata6:8080/|success|Response status: 303
17/02/22 17:46:07 ||cc006ac5-1b98-4d20-bbdd-03a30f26fda4|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/redirecting.html?originalUrl=http://bigdata6:8080/|unavailable|Request method: GET
17/02/22 17:46:07 ||cc006ac5-1b98-4d20-bbdd-03a30f26fda4|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/redirecting.html?originalUrl=http://bigdata6:8080/|success|Response status: 200
17/02/22 17:46:07 ||2f023049-55b3-4bd9-879d-2430bde60f1f|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/styles/bootstrap.min.css|unavailable|Request method: GET
17/02/22 17:46:07 ||2f023049-55b3-4bd9-879d-2430bde60f1f|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/styles/bootstrap.min.css|success|Response status: 200
17/02/22 17:46:07 ||a0848c4b-637b-4699-8cec-efc85f425f6f|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/styles/knox.css|unavailable|Request method: GET
17/02/22 17:46:07 ||a0848c4b-637b-4699-8cec-efc85f425f6f|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/styles/knox.css|success|Response status: 200
17/02/22 17:46:07 ||cd5a3a24-5332-45c2-80b6-edbb8298cd07|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/images/loading.gif|unavailable|Request method: GET
17/02/22 17:46:07 ||cd5a3a24-5332-45c2-80b6-edbb8298cd07|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/images/loading.gif|success|Response status: 200
17/02/22 17:46:08 ||ded2eb86-5184-4c17-bfe2-ca557ae16fac|audit|KNOXSSO||||access|uri|/gateway/knoxsso/api/v1/websso?originalUrl=http%3A%2F%2Fbigdata6%3A8080%2F%23%2Flogin?redirected=true|unavailable|Request method: GET
17/02/22 17:46:08 ||ded2eb86-5184-4c17-bfe2-ca557ae16fac|audit|KNOXSSO||||access|uri|/gateway/knoxsso/api/v1/websso?originalUrl=http%3A%2F%2Fbigdata6%3A8080%2F%23%2Flogin?redirected=true|success|Response status: 401
17/02/22 17:46:08 ||6eb6e25a-4321-4c69-a7f5-aa7ea15ceb57|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/login.html?originalUrl=http%3A%2F%2Fbigdata6%3A8080%2F%23%2Flogin?redirected=true|unavailable|Request method: GET
17/02/22 17:46:08 ||6eb6e25a-4321-4c69-a7f5-aa7ea15ceb57|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/login.html?originalUrl=http%3A%2F%2Fbigdata6%3A8080%2F%23%2Flogin?redirected=true|success|Response status: 200
17/02/22 17:46:08 ||6e3bca36-1991-40bc-9587-fe35c3ecc61d|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/styles/bootstrap.min.css|unavailable|Request method: GET
17/02/22 17:46:08 ||f355e30a-2159-42b9-8659-043dc3ef9496|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/styles/knox.css|unavailable|Request method: GET
17/02/22 17:46:08 ||f355e30a-2159-42b9-8659-043dc3ef9496|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/styles/knox.css|success|Response status: 200
17/02/22 17:46:08 ||6e3bca36-1991-40bc-9587-fe35c3ecc61d|audit|knoxauth||||access|uri|/gateway/knoxsso/knoxauth/styles/bootstrap.min.css|success|Response status: 200
This is the log from a single visit.
02-22-2017
09:27 AM
<topology>
<gateway>
<provider>
<role>webappsec</role>
<name>WebAppSec</name>
<enabled>true</enabled>
<param><name>xframe.options.enabled</name><value>true</value></param>
</provider>
<provider>
<role>authentication</role>
<name>ShiroProvider</name>
<enabled>true</enabled>
<param>
<name>sessionTimeout</name>
<value>30</value>
</param>
<param>
<name>redirectToUrl</name>
<value>/gateway/knoxsso/knoxauth/login.html</value>
</param>
<param>
<name>restrictedCookies</name>
<value>rememberme,WWW-Authenticate</value>
</param>
<param>
<name>main.ldapRealm</name>
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
</param>
<param>
<name>main.ldapContextFactory</name>
<value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
</param>
<param>
<name>main.ldapRealm.contextFactory</name>
<value>$ldapContextFactory</value>
</param>
<param>
<name>main.ldapRealm.userDnTemplate</name>
<value>uid={0},ou=people,dc=VENUS,dc=COM</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.url</name>
<value>ldap://bigdata7:389</value>
</param>
<param>
<name>main.ldapRealm.authenticationCachingEnabled</name>
<value>false</value>
</param>
<param>
<name>main.ldapRealm.contextFactory.authenticationMechanism</name>
<value>simple</value>
</param>
<param>
<name>urls./**</name>
<value>authcBasic</value>
</param>
</provider>
<provider>
<role>identity-assertion</role>
<name>Default</name>
<enabled>true</enabled>
</provider>
</gateway>
<application>
<name>knoxauth</name>
</application>
<service>
<role>KNOXSSO</role>
<param>
<name>knoxsso.cookie.secure.only</name>
<value>false</value>
</param>
<param>
<name>knoxsso.token.ttl</name>
<value>30000</value>
</param>
<param>
<name>knoxsso.redirect.whitelist.regex</name>
<value>^https?:\/\/(bigdata[0-9]|localhost|127\.0\.0\.1|0:0:0:0:0:0:0:1|::1):[0-9].*$</value>
</param>
</service>
</topology>
This is my Knox SSO topology. My Knox gateway and ambari-server are not on the same machine.
02-22-2017
03:01 AM
Hi all, I configured Knox SSO for Ambari using this doc: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_security/content/setting_up_knox_sso_for_ambari.html. But when I submit the login page, the page redirects to the Ambari login page, and then redirects back again. Here is the ambari-server.log:
User(null), RemoteIp(192.168.XX.XX), Operation(User login), Roles(
), Status(Failed), Reason(Authentication required).
And the Knox gateway.log:
ed310ab8-e377-4781-adfb-27f94d472e90|audit|KNOXSSO||||access|uri|/gateway/knoxsso/api/v1/websso?originalUrl=http%3A%2F%2Fbigdata%3A8080%2F%23%2Flogin?redirected=true|success|Response status: 401
06-21-2016
03:31 AM
1 Kudo
You can set scp_if_ssh=True under [ssh_connection] in the ansible.cfg in the full-dev-platform directory, and then it works.
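Concretely, the setting above looks like this (run inside the full-dev-platform directory mentioned in the post):

# append; merge into an existing [ssh_connection] section if one is already present
cat >> ansible.cfg <<'EOF'
[ssh_connection]
scp_if_ssh = True
EOF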
06-14-2016
07:05 AM
@Michael Miklavcic thank you for the reply, but when I use the latest master branch there are no panels in the dashboard. How can I configure Kibana 4 to view the data like in the old metron-ui?
05-31-2016
10:20 AM
Hi all, I use Kibana to query the pcap data through pcap_service, as described in this link: https://cwiki.apache.org/confluence/display/METRON/PCAP+Service. The pcap data has been written to HDFS, but in the pcap panel of the Kibana dashboard there is no search button like the one the link shows.
Tags:
- CyberSecurity
- Metron
Labels:
- Apache Metron
05-30-2016
03:15 AM
@Dave thanks for your reply. I have installed Metron successfully with the playbooks. Now I need to read some tutorials to learn how to use it.
05-23-2016
07:13 AM
Hi, how do I install Metron into an existing Ambari and Elasticsearch cluster? When I use the Ansible playbooks, they always show me various errors. Because I have already installed Ambari and Elasticsearch, there are many redundant steps in the playbooks. Is there a simpler way to install it when I already have the Ambari and Elasticsearch clusters? Thanks.
03-11-2016
05:13 AM
1 Kudo
@Umair Khan, thanks for the reply, but after I chmod 777 on that directory, it still cannot start.
02-29-2016
08:25 AM
1 Kudo
Thank you for the reply. After I created the directory /tmp/hive/hive as the hive account, HiveServer2 still doesn't work:
java.lang.Error: Max start attempts 30 exhausted
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:374)
Caused by: java.lang.NoSuchMethodError: org.apache.curator.utils.PathUtils.validatePath(Ljava/lang/String;)Ljava/lang/String;
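A java.lang.NoSuchMethodError on Curator's PathUtils.validatePath usually means an older Curator jar is shadowing the expected one on the HiveServer2 classpath. A diagnostic sketch (paths assume an HDP-style layout):

find /usr/hdp -name 'curator-*.jar' 2>/dev/null | sort    # look for multiple/conflicting Curator versions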
02-24-2016
02:14 AM
1 Kudo
There are no more error logs, but there is also a warning:
session.HiveSessionImpl (HiveSessionImpl.java:setOperationLogSessionDir(230)) - Unable to create operation log session directory: /tmp/hive/operation_logs/6cb7c752-3742-4a66-8f5a-ecb87abd78b9
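A possible fix sketch for that warning is to pre-create the parent operation-log directory with ownership the HiveServer2 user can write to (the hive:hadoop owner/group below is an assumption):

mkdir -p /tmp/hive/operation_logs
chown hive:hadoop /tmp/hive/operation_logs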
02-24-2016
01:44 AM
1 Kudo
The HiveServer2 service always shuts down after I start it. The errors follow:
2016-02-24 09:31:57,949 INFO [main]: service.AbstractService (AbstractService.java:stop(125)) - Service:CLIService is stopped.
2016-02-24 09:31:57,950 INFO [main]: service.AbstractService (AbstractService.java:stop(125)) - Service:HiveServer2 is stopped.
2016-02-24 09:31:57,965 INFO [main]: zookeeper.ZooKeeper (ZooKeeper.java:close(684)) - Session: 0x3530737a5b706d0 closed
2016-02-24 09:31:57,965 INFO [main-EventThread]: zookeeper.ClientCnxn (ClientCnxn.java:run(524)) - EventThread shut down
2016-02-24 09:31:57,965 INFO [main]: server.HiveServer2 (HiveServer2.java:removeServerInstanceFromZooKeeper(279)) - Server instance removed from ZooKeeper.
2016-02-24 09:31:57,967 WARN [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(376)) - Error starting HiveServer2 on attempt 1, will retry in 60 seconds
java.lang.NoSuchMethodError: org.apache.curator.utils.PathUtils.validatePath(Ljava/lang/String;)Ljava/lang/String;
at org.apache.curator.framework.recipes.nodes.PersistentEphemeralNode.<init>(PersistentEphemeralNode.java:194)
at org.apache.hive.service.server.HiveServer2.addServerInstanceToZooKeeper(HiveServer2.java:194)
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:351)
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:74)
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:588)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:461)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
2016-02-24 09:32:09,219 INFO [HiveServer2-Handler-Pool: Thread-53]: thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(294)) - Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V8
2016-02-24 09:32:09,266 INFO [HiveServer2-Handler-Pool: Thread-53]: session.SessionState (SessionState.java:createPath(641)) - Created local directory: /tmp/6cb7c752-3742-4a66-8f5a-ecb87abd78b9_resources
2016-02-24 09:32:09,282 INFO [HiveServer2-Handler-Pool: Thread-53]: session.SessionState (SessionState.java:createPath(641)) - Created HDFS directory: /tmp/hive/hive/6cb7c752-3742-4a66-8f5a-ecb87abd78b9
2016-02-24 09:32:09,284 INFO [HiveServer2-Handler-Pool: Thread-53]: session.SessionState (SessionState.java:createPath(641)) - Created local directory: /tmp/hive/6cb7c752-3742-4a66-8f5a-ecb87abd78b9
2016-02-24 09:32:09,290 INFO [HiveServer2-Handler-Pool: Thread-53]: session.SessionState (SessionState.java:createPath(641)) - Created HDFS directory: /tmp/hive/hive/6cb7c752-3742-4a66-8f5a-ecb87abd78b9/_tmp_space.db
2016-02-24 09:32:09,294 WARN [HiveServer2-Handler-Pool: Thread-53]: session.HiveSessionImpl (HiveSessionImpl.java:setOperationLogSessionDir(230)) - Unable to create operation log session directory: /tmp/hive/operation_logs/6cb7c752-3742-4a66-8f5a-ecb87abd78b9
2016-02-24 09:32:09,569 INFO [HiveServer2-Handler-Pool: Thread-53]: service.CompositeService (SessionManager.java:closeSession(300)) - This instance of HiveServer2 has been removed from the list of server instances available for dynamic service discovery. The last client session has ended - will shutdown now.
2016-02-24 09:32:09,570 INFO [Thread-23]: server.HiveServer2 (HiveServer2.java:stop(305)) - Shutting down HiveServer2
2016-02-24 09:32:09,570 INFO [Thread-23]: server.HiveServer2 (HiveServer2.java:removeServerInstanceFromZooKeeper(279)) - Server instance removed from ZooKeeper.
01-20-2016
06:46 AM
Yes, you are right. After I installed the KMS service, it works!
01-19-2016
08:22 AM
When I test with the MapReduce example, the error also appears.
01-19-2016
07:53 AM
When I use spark-submit to run some code, Spark does not work. The error follows:
Traceback (most recent call last):
File "/home/lizhen/test.py", line 27, in <module>
abc = raw_data.count()
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1006, in count
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 997, in sum
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 871, in fold
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 773, in collect
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/usr/hdp/2.3.4.0-3485/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: java.io.IOException: java.net.ConnectException: Connection refused
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:888)
at org.apache.hadoop.crypto.key.KeyProviderDelegationTokenExtension.addDelegationTokens(KeyProviderDelegationTokenExtension.java:86)
at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2243)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:206)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:58)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
at org.apache.spark.rdd.RDD.collect(RDD.scala:908)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:190)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:128)
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.doDelegationTokenOperation(DelegationTokenAuthenticator.java:285)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.getDelegationToken(DelegationTokenAuthenticator.java:166)
at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.getDelegationToken(DelegationTokenAuthenticatedURL.java:371)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$2.run(KMSClientProvider.java:875)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$2.run(KMSClientProvider.java:870)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.crypto.key.kms.KMSClientProvider.addDelegationTokens(KMSClientProvider.java:870)
... 41 more
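Since the failure is KMSClientProvider getting "Connection refused", a quick diagnostic sketch is to check whether the KMS endpoint configured for the cluster is actually up and reachable (the host and port below are placeholders, not taken from this cluster):

KMS_HOST=bigdata6    # placeholder; use the host from hadoop.security.key.provider.path / dfs.encryption.key.provider.uri
curl -i --negotiate -u : "http://${KMS_HOST}:9292/kms/v1/keys/names"    # 9292 as the Ranger KMS port is an assumption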
Labels:
- Apache Spark