Member since: 02-29-2016
Posts: 108
Kudos Received: 213
Solutions: 14
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2026 | 08-18-2017 02:09 PM
 | 3534 | 06-16-2017 08:04 PM
 | 3282 | 01-20-2017 03:36 AM
 | 8910 | 01-04-2017 03:06 AM
 | 4449 | 12-09-2016 08:27 PM
11-28-2016
09:32 PM
Storm is installed; see the screenshot.
11-28-2016
08:27 PM
2 Kudos
I installed a Metron cluster with Ambari, following the tutorial at https://community.hortonworks.com/content/kbentry/60805/deploying-a-fresh-metron-cluster-using-ambari-serv.html. The installation went very smoothly, but when starting services, the Metron components won't start; all other components started successfully. Looking at the error log, the failure is in the following script:

/usr/metron/0.3.0/bin/start_enrichment_topology.sh -s enrichment -z zk1:2181,zk2:2181,zk3:2181

The error message is related to Storm:

+- Apache Storm -+
+- data FLow User eXperience -+
Version: 1.0.1
Parsing file: /usr/metron/0.3.0/flux/enrichment/remote.yaml
20:18:44.691 [main] INFO o.a.s.f.p.FluxParser - loading YAML from input stream...
20:18:44.702 [main] INFO o.a.s.f.p.FluxParser - Performing property substitution.
20:18:44.716 [main] INFO o.a.s.f.p.FluxParser - Not performing environment variable substitution.
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/storm/Config
at org.apache.storm.flux.FluxBuilder.buildConfig(FluxBuilder.java:45)
at org.apache.storm.flux.Flux.runCli(Flux.java:151)
at org.apache.storm.flux.Flux.main(Flux.java:98)
Caused by: java.lang.ClassNotFoundException: org.apache.storm.Config
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 3 more
Googling the error message suggests it is related to the Storm dependency scope in the pom, but I have no idea where that is defined in Metron. The Storm supervisor and other related components are installed on the same node as the Metron components, as described in the tutorial.
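The NoClassDefFoundError means the Flux launcher cannot see the Storm client classes at runtime. A quick sanity check is to confirm the storm-core jar exists and actually contains the missing class; this is only a sketch, and the STORM_LIB path below is an assumed HDP layout that you should adjust to your install:

```shell
# Sketch (assumed HDP jar layout): check whether the Storm client jar that
# provides org.apache.storm.Config is present where the launcher can find it.
STORM_LIB=/usr/hdp/current/storm-client/lib   # assumed path; adjust as needed
if ls "$STORM_LIB"/storm-core-*.jar >/dev/null 2>&1; then
  # Confirm the missing class is actually inside the jar
  unzip -l "$STORM_LIB"/storm-core-*.jar | grep -q 'org/apache/storm/Config.class' \
    && echo "storm-core jar present and contains org.apache.storm.Config"
else
  echo "storm-core jar not found under $STORM_LIB"
fi
```

If the jar is present but the script still fails, the launch script may simply not be putting it on the classpath, which would match the "pom scope" explanation (a `provided`-scope Storm dependency is expected on the classpath at runtime but is not bundled).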
Labels:
- Apache Metron
11-19-2016
04:07 AM
Yes, I saw that. The HBase policy was created but not the Kafka ones, so it was quite confusing. Also, is there an updated SchemaLayoutView.js?
11-19-2016
03:49 AM
2 Kudos
I tried to install HDP 2.5 with Atlas 0.7.0 on a Kerberized cluster, following the instructions in Eric Maxwell's GitHub repo: https://github.com/emaxwell-hw/Atlas-Ranger-Tag-Security. When checking the installation, I found that the two Kafka policies for ATLAS_HOOK and ATLAS_ENTITIES are missing, and I had to add them manually to make things work. Also, even with the provided SchemaLayoutView.js, the layout is still missing the schema part. Are these bugs in the installer, or did I miss something in the process? I used MIT KDC with an OpenLDAP backend. Everything else seems to be working fine, and tag-based policies work after all the manual steps are done.
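For anyone hitting the same gap, the missing Kafka policies can be added through Ranger's public REST API instead of clicking through the UI. This is only a sketch: the service name `cluster_kafka`, the `atlas` user, the admin credentials, and the Ranger host are all assumptions to adjust for your environment:

```shell
# Sketch, assuming Ranger's public v2 REST API, a Kafka repo named
# "cluster_kafka", and an "atlas" user (all hypothetical names).
cat > atlas_hook_policy.json <<'EOF'
{
  "service": "cluster_kafka",
  "name": "ATLAS_HOOK",
  "resources": { "topic": { "values": ["ATLAS_HOOK"] } },
  "policyItems": [
    {
      "users": ["atlas"],
      "accesses": [ { "type": "publish", "isAllowed": true },
                    { "type": "consume", "isAllowed": true } ]
    }
  ]
}
EOF
# Validate the JSON locally before posting it
python3 -m json.tool atlas_hook_policy.json >/dev/null && echo "policy JSON ok"
# Then post it to Ranger admin (hypothetical host/credentials):
# curl -u admin:admin -H 'Content-Type: application/json' \
#      -X POST -d @atlas_hook_policy.json \
#      http://ranger-host:6080/service/public/v2/api/policy
```

A second, analogous policy file would be needed for the ATLAS_ENTITIES topic.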
Labels:
- Apache Atlas
11-18-2016
04:03 PM
I have not tried this yet, but the link you provided is sufficient for my reference. I was trying to troubleshoot Infra, but the underlying problem was fixed after installing Log Search, so it is fine now.
11-17-2016
06:02 PM
2 Kudos
After Kerberizing with MIT KDC, I get an error when trying to access the Ambari Infra UI at http://<server>:8886/solr/:

HTTP ERROR 401
Problem accessing /solr/. Reason: Authentication required

kinit with the infra and HTTP keytabs worked fine:

[root@qwang-hdp ~]# kinit -kt /etc/security/keytabs/ambari-infra-solr.service.keytab infra-solr/qwang-hdp.field.hortonworks.com@FIELD.HORTONWORKS.COM
[root@qwang-hdp ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: infra-solr/qwang-hdp.field.hortonworks.com@FIELD.HORTONWORKS.COM
Valid starting Expires Service principal
11/17/2016 17:53:33 11/18/2016 17:53:33 krbtgt/FIELD.HORTONWORKS.COM@FIELD.HORTONWORKS.COM
[root@qwang-hdp ~]# kinit -kt /etc/security/keytabs/spnego.service.keytab HTTP/qwang-hdp.field.hortonworks.com@FIELD.HORTONWORKS.COM
[root@qwang-hdp ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: HTTP/qwang-hdp.field.hortonworks.com@FIELD.HORTONWORKS.COM
Valid starting Expires Service principal
11/17/2016 18:01:33 11/18/2016 18:01:33 krbtgt/FIELD.HORTONWORKS.COM@FIELD.HORTONWORKS.COM
Are there any other keytabs I need to check, or anything else that could cause the problem?
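Since the keytabs kinit cleanly, one way to narrow this down is to reproduce the SPNEGO handshake from the command line with the ticket you already have. A minimal sketch, using the hostname from the session above (curl must be built with GSS/Kerberos support):

```shell
# Sketch: after kinit, exercise the SPNEGO handshake a browser would do.
# "--negotiate -u :" tells curl to authenticate with the current Kerberos
# ticket; -w prints just the HTTP status code.
curl -s --negotiate -u : -o /dev/null -w '%{http_code}\n' \
  "http://qwang-hdp.field.hortonworks.com:8886/solr/" \
  || echo "request failed (is the host reachable from here?)"
```

If curl also gets a 401 despite valid tickets, the client side is probably fine, and the issue is more likely the server-side SPNEGO configuration (e.g. which HTTP principal Infra Solr's authentication filter is configured with) rather than a missing keytab.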
Labels:
- Apache Ambari
11-15-2016
09:43 PM
3 Kudos
@emaxwell I am following your HCC tutorial on Kerberizing a cluster with FreeIPA (https://community.hortonworks.com/content/kbentry/59645/ambari-24-kerberos-with-freeipa.html) and ran into an error where the password for the test principal always expires:

Performing kinit using qi1-111516@FIELD.HORTONWORKS.COM
2016-11-15 21:33:42,394 - Execute['/usr/bin/kinit -c /var/lib/ambari-agent/tmp/kerberos_service_check_cc_79b5f4cfa04c21fdbd26a3e07b45366e -kt /etc/security/keytabs/kerberos.service_check.111516.keytab qi1-111516@FIELD.HORTONWORKS.COM'] {'user': 'ambari-qa'}
2016-11-15 21:33:42,460 - File['/var/lib/ambari-agent/tmp/kerberos_service_check_cc_79b5f4cfa04c21fdbd26a3e07b45366e'] {'action': ['delete']}
Command failed after 1 tries

I updated the global password policy so passwords never expire, and the user is using that policy:

ipa pwpolicy-mod --maxlife=0 --minlife=0 global_policy
[root@qwang-hdp ~]# ipa pwpolicy-show --user=qi1-111516
Group: global_policy
Max lifetime (days): 0
Min lifetime (hours): 0
History size: 0
Character classes: 0
Min length: 8
Max failures: 6
Failure reset interval: 60
Lockout duration: 600
But if I kinit as the user, it asks me to reset the password anyway. This seems to be related to the second requirement of the wizard, but I can't make it work. I would greatly appreciate any advice you could provide.
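One likely explanation: FreeIPA deliberately expires any password that was set by an administrator, so the first kinit forces a reset regardless of the pwpolicy maxlife. A workaround is to push the user's `krbPasswordExpiration` attribute into the future. This is a sketch only; the DN layout below is the standard IPA tree guessed from this realm, and applying it requires Directory Manager credentials on the IPA server:

```shell
# Sketch: write an LDIF that moves the Kerberos password expiration far into
# the future for the test user (assumed DN layout for this realm).
cat > fix_expiry.ldif <<'EOF'
dn: uid=qi1-111516,cn=users,cn=accounts,dc=field,dc=hortonworks,dc=com
changetype: modify
replace: krbPasswordExpiration
krbPasswordExpiration: 20380101000000Z
EOF
# Apply on the IPA server with:
#   ldapmodify -x -D 'cn=Directory Manager' -W -f fix_expiry.ldif
echo "wrote fix_expiry.ldif"
```

Alternatively, doing a one-time interactive password change as the user (kinit, then supply a new password) clears the forced-reset state, after which the never-expire policy should take over.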
Labels:
- Apache Ambari
11-10-2016
12:42 AM
I wouldn't assume the package is available; better to find a way to do that in Python.
11-04-2016
05:16 PM
2 Kudos
I have some questions about the Hive JDBC connection string for an AD-Kerberized cluster.

Hive server: qwang-hdp2
Hive clients: qwang-hdp0, qwang-hdp2, qwang-hdp4

I can connect with beeline using the following connection string:

beeline -u "jdbc:hive2://qwang-hdp2:10000/default;principal=hive/qwang-hdp2@REALM.NAME"

But not with this one:

beeline -u "jdbc:hive2://qwang-hdp2:10000/default;principal=hive/qwang-hdp0@REALM.NAME"

The only difference is the hive principal. I get the following error:

Error: Could not open client transport with JDBC Uri: jdbc:hive2://qwang-hdp2:10000/default;principal=hive/qwang-hdp0@REALM.NAME: Peer indicated failure: GSS initiate failed (state=08S01,code=0)

Root is under the hadoopadmin principal:

[root@qwang-hdp0 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hadoopadmin@REALM.NAME
The keytabs are also available:

[root@qwang-hdp0 ~]# klist -kt /etc/security/keytabs/hive.service.keytab
Keytab name: FILE:/etc/security/keytabs/hive.service.keytab
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
0 11/02/2016 20:35:50 hive/qwang-hdp0@REALM.NAME
0 11/02/2016 20:35:50 hive/qwang-hdp0@REALM.NAME
0 11/02/2016 20:35:50 hive/qwang-hdp0@REALM.NAME
0 11/02/2016 20:35:50 hive/qwang-hdp0@REALM.NAME
0 11/02/2016 20:35:50 hive/qwang-hdp0@REALM.NAME
Could you suggest any way to troubleshoot why this is happening?
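The behavior described is actually expected: the `principal=` in a HiveServer2 JDBC URL must be the service principal of the server host you are connecting to (here `hive/qwang-hdp2@REALM.NAME`), not the principal of whichever client host runs beeline. Using another host's principal makes the GSS handshake fail exactly as shown. As a small sketch, this snippet pulls the server host and the principal's host out of a URL and flags a mismatch:

```shell
# Sketch: check that the host inside the Kerberos principal matches the
# server host in the JDBC URL (the failing URL from the question).
url='jdbc:hive2://qwang-hdp2:10000/default;principal=hive/qwang-hdp0@REALM.NAME'
server=$(printf '%s' "$url" | sed -n 's|jdbc:hive2://\([^:/;]*\).*|\1|p')
principal=$(printf '%s' "$url" | sed -n 's/.*principal=\([^;]*\).*/\1/p')
phost=${principal#*/}; phost=${phost%@*}   # strip "hive/" prefix and "@REALM" suffix
if [ "$phost" = "$server" ]; then
  echo "principal host matches server: $server"
else
  echo "mismatch: URL targets $server but principal is for $phost (expect GSS initiate failed)"
fi
```

The keytab listing above is consistent with this: on qwang-hdp0 the hive keytab holds `hive/qwang-hdp0`, which is the principal HiveServer2 would use if it ran there, but the server in the URL is qwang-hdp2, so only its principal is accepted.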
Labels:
- Apache Hive