Member since: 04-03-2017 · 5 Posts · 6 Kudos Received · 1 Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 984 | 05-18-2017 11:14 AM
07-20-2018
09:17 AM
3 Kudos
Short Description: This article walks you through deploying a custom parser on a kerberized cluster in Metron.

Article

When adding a net-new data source to Metron, the first step is to create a Kafka topic (e.g. 'sensor_topic') to push the events from the new telemetry data source into Metron. The second step is to configure Metron to parse the telemetry data source so that downstream processing can be done on it. This article walks you through both steps, including the extra work required on a kerberized cluster: there, we need to add the required ACLs for the metron user and the parser consumer group on the created topic, and we need to provide the Storm authorization configuration when deploying the topology.
1. Switch to the kafka user.
su kafka
2. Create the sensor topic.
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper $ZOOKEEPER --create --topic sensor_topic_name --partitions 1 --replication-factor 1
3. Add the required ACL for the metron user.
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZOOKEEPER --add --allow-principal User:metron --operation All --topic 'sensor_topic_name' --cluster
4. Add the required ACL for the parser consumer group.
/usr/hdp/current/kafka-broker/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=$ZOOKEEPER --add --allow-principal User:metron --group sensor_topic_name_parser
5. Switch to the metron user and acquire a Kerberos ticket for the metron principal.
su metron
kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM
6. Deploy the new parser topology.
/usr/hcp/1.6.0.0-1/metron/bin/start_parser_topology.sh -k $KAFKA_BROKER -z $ZOOKEEPER -s sensor_name -ksp PLAINTEXTSASL -e ~metron/.storm/storm.config
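The Kafka-side setup in steps 2-4 can be sketched as a single script. This is a hypothetical dry-run wrapper (not part of Metron): it echoes each command instead of executing it, so you can review the flow before touching the cluster.

```shell
#!/usr/bin/env bash
# Hypothetical dry-run of steps 2-4 above. Swap 'echo' for 'eval' in run()
# to execute the commands for real on the cluster.
SENSOR="sensor_topic_name"
ZOOKEEPER="${ZOOKEEPER:-zk-host:2181}"

run() { echo "$*"; }

# Step 2: create the sensor topic
run /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper "$ZOOKEEPER" \
  --create --topic "$SENSOR" --partitions 1 --replication-factor 1

# Step 3: ACL for the metron user on the topic
run /usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --authorizer-properties "zookeeper.connect=$ZOOKEEPER" --add \
  --allow-principal User:metron --operation All --topic "$SENSOR" --cluster

# Step 4: ACL for the parser consumer group (the parser topology's group
# name is the sensor name with "_parser" appended)
run /usr/hdp/current/kafka-broker/bin/kafka-acls.sh \
  --authorizer-properties "zookeeper.connect=$ZOOKEEPER" --add \
  --allow-principal User:metron --group "${SENSOR}_parser"
```

Keeping the sensor name in one variable avoids the topic and consumer-group ACLs drifting apart when you add further sensors.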
04-03-2018
11:07 AM
@Wang Ao Since you used the metron-rest deb to install, you can try manually creating those users. Refer to the doc below: https://docs.hortonworks.com/HDPDocuments/HCP1/HCP-1.4.1/bk_installation/content/installing_rest_app_manually.html
05-18-2017
11:14 AM
1 Kudo
I found the solution. The problem was that the users were created before I configured the KDC to issue renewable tickets. I was under the impression that setting max_life and max_renewable_life in /var/kerberos/krb5kdc/kdc.conf and restarting the kadmin and krb5kdc services would be enough, but since the values were already stored in the KDC, it didn't work. So, as a quick fix, I set the renew lifetime for the existing user and the krbtgt principal of the realm. I think I need to recreate the KDB using "kdb5_util create -s", as even for new users I see max_renewable_life set to 0. Below are the commands to set the renew lifetime for the existing principals; adjust the parameters to match your desired KDC settings:
kadmin.local -q "modprinc -maxlife 1days -maxrenewlife 7days +allow_renewable krbtgt/EXAMPLE.COM@EXAMPLE.COM"
kadmin.local -q "modprinc -maxlife 1days -maxrenewlife 7days +allow_renewable metron@EXAMPLE.COM"
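For reference, a minimal sketch of the realm settings in /var/kerberos/krb5kdc/kdc.conf that set the defaults for newly created principals (the realm name and lifetimes here mirror this thread; yours may differ):

```
[realms]
 EXAMPLE.COM = {
  max_life = 1d
  max_renewable_life = 7d
 }
```

As noted above, these defaults only apply to principals created (or modified) afterwards; principals that already exist keep their stored maxrenewlife until it is changed with modprinc.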
05-15-2017
03:45 PM
Hi, I am enabling Kerberos on a 12-node cluster. I completed the KDC installation, set all the required properties in the conf files, and added the required principals. While enabling Kerberos through the Ambari wizard, it fails at 'Start and Test Services'; the failing task is 'Metron Enrichment Start'. Below is the trace; the exception is: Caused by: java.lang.RuntimeException: The TGT found is not renewable. I have set 'max_renewable_life = 7d' in the realm section of /var/kerberos/krb5kdc/kdc.conf. If the KDC cannot issue renewable tickets, should I remove this property and proceed?
stderr: /var/lib/ambari-agent/data/errors-1174.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/METRON/0.4.0.1.1.0.0/package/scripts/enrichment_master.py", line 113, in <module>
Enrichment().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/METRON/0.4.0.1.1.0.0/package/scripts/enrichment_master.py", line 74, in start
commands.start_enrichment_topology()
File "/var/lib/ambari-agent/cache/common-services/METRON/0.4.0.1.1.0.0/package/scripts/enrichment_commands.py", line 146, in start_enrichment_topology
user=self.__params.metron_user)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/hcp/1.1.0.0-71/metron/bin/start_enrichment_topology.sh -s enrichment -z hcpa-11.openstacklocal:2181,hcpa-12.openstacklocal:2181,hcpa-10.openstacklocal:2181' returned 1. Running: /usr/jdk64/jdk1.8.0_77/bin/java -server -Ddaemon.name= -Dstorm.options= -Dstorm.home=/grid/0/hdp/2.5.3.0-37/storm -Dstorm.log.dir=/var/log/storm -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /grid/0/hdp/2.5.3.0-37/storm/lib/zookeeper.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/storm-core-1.0.1.2.5.3.0-37.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/kryo-3.0.3.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/log4j-slf4j-impl-2.1.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/log4j-core-2.1.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/ring-cors-0.1.5.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/log4j-api-2.1.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/servlet-api-2.5.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/minlog-1.3.0.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/log4j-over-slf4j-1.6.6.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/objenesis-2.1.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/asm-5.0.3.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/clojure-1.7.0.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/storm-rename-hack-1.0.1.2.5.3.0-37.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/disruptor-3.3.2.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/slf4j-api-1.7.7.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/reflectasm-1.10.1.jar org.apache.storm.daemon.ClientJarTransformerRunner org.apache.storm.hack.StormShadeTransformer /usr/hcp/1.1.0.0-71/metron/lib/metron-enrichment-0.4.0.1.1.0.0-71-uber.jar /tmp/07366eac398511e79f57fa163e0f2645.jar
Running: /usr/jdk64/jdk1.8.0_77/bin/java -client -Ddaemon.name= -Dstorm.options= -Dstorm.home=/grid/0/hdp/2.5.3.0-37/storm -Dstorm.log.dir=/var/log/storm -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /grid/0/hdp/2.5.3.0-37/storm/lib/zookeeper.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/storm-core-1.0.1.2.5.3.0-37.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/kryo-3.0.3.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/log4j-slf4j-impl-2.1.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/log4j-core-2.1.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/ring-cors-0.1.5.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/log4j-api-2.1.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/servlet-api-2.5.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/minlog-1.3.0.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/log4j-over-slf4j-1.6.6.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/objenesis-2.1.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/asm-5.0.3.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/clojure-1.7.0.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/storm-rename-hack-1.0.1.2.5.3.0-37.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/disruptor-3.3.2.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/slf4j-api-1.7.7.jar:/grid/0/hdp/2.5.3.0-37/storm/lib/reflectasm-1.10.1.jar:/tmp/07366eac398511e79f57fa163e0f2645.jar:/home/metron/.storm:/grid/0/hdp/2.5.3.0-37/storm/bin -Dstorm.jar=/tmp/07366eac398511e79f57fa163e0f2645.jar org.apache.storm.flux.Flux --remote /usr/hcp/1.1.0.0-71/metron/flux/enrichment/remote.yaml --filter /usr/hcp/1.1.0.0-71/metron/config/enrichment.properties
███████╗██╗ ██╗ ██╗██╗ ██╗
██╔════╝██║ ██║ ██║╚██╗██╔╝
█████╗ ██║ ██║ ██║ ╚███╔╝
██╔══╝ ██║ ██║ ██║ ██╔██╗
██║ ███████╗╚██████╔╝██╔╝ ██╗
╚═╝ ╚══════╝ ╚═════╝ ╚═╝ ╚═╝
+- Apache Storm -+
+- data FLow User eXperience -+
Version: 1.0.1
Parsing file: /usr/hcp/1.1.0.0-71/metron/flux/enrichment/remote.yaml
655 [main] INFO o.a.s.f.p.FluxParser - loading YAML from input stream...
666 [main] INFO o.a.s.f.p.FluxParser - Performing property substitution.
692 [main] INFO o.a.s.f.p.FluxParser - Not performing environment variable substitution.
994 [main] INFO o.a.c.f.i.CuratorFrameworkImpl - Starting
1111 [main-EventThread] INFO o.a.c.f.s.ConnectionStateManager - State change: CONNECTED
1436 [main] INFO o.a.s.f.FluxBuilder - Detected DSL topology...
1823 [main] INFO o.a.s.k.s.KafkaSpoutStream - Declared [streamId = default], [outputFields = [value]] for [topic = enrichments]
---------- TOPOLOGY DETAILS ----------
Topology Name: enrichment
--------------- SPOUTS ---------------
kafkaSpout [1] (org.apache.metron.storm.kafka.flux.StormKafkaSpout)
---------------- BOLTS ---------------
enrichmentSplitBolt [1] (org.apache.metron.enrichment.bolt.EnrichmentSplitterBolt)
geoEnrichmentBolt [1] (org.apache.metron.enrichment.bolt.GenericEnrichmentBolt)
stellarEnrichmentBolt [1] (org.apache.metron.enrichment.bolt.GenericEnrichmentBolt)
hostEnrichmentBolt [1] (org.apache.metron.enrichment.bolt.GenericEnrichmentBolt)
simpleHBaseEnrichmentBolt [1] (org.apache.metron.enrichment.bolt.GenericEnrichmentBolt)
enrichmentJoinBolt [1] (org.apache.metron.enrichment.bolt.EnrichmentJoinBolt)
enrichmentErrorOutputBolt [1] (org.apache.metron.writer.bolt.BulkMessageWriterBolt)
threatIntelSplitBolt [1] (org.apache.metron.enrichment.bolt.ThreatIntelSplitterBolt)
simpleHBaseThreatIntelBolt [1] (org.apache.metron.enrichment.bolt.GenericEnrichmentBolt)
stellarThreatIntelBolt [1] (org.apache.metron.enrichment.bolt.GenericEnrichmentBolt)
threatIntelJoinBolt [1] (org.apache.metron.enrichment.bolt.ThreatIntelJoinBolt)
threatIntelErrorOutputBolt [1] (org.apache.metron.writer.bolt.BulkMessageWriterBolt)
outputBolt [1] (org.apache.metron.writer.bolt.BulkMessageWriterBolt)
--------------- STREAMS ---------------
kafkaSpout --SHUFFLE--> enrichmentSplitBolt
enrichmentSplitBolt --FIELDS--> hostEnrichmentBolt
enrichmentSplitBolt --FIELDS--> geoEnrichmentBolt
enrichmentSplitBolt --FIELDS--> stellarEnrichmentBolt
enrichmentSplitBolt --FIELDS--> simpleHBaseEnrichmentBolt
enrichmentSplitBolt --FIELDS--> enrichmentJoinBolt
geoEnrichmentBolt --FIELDS--> enrichmentJoinBolt
stellarEnrichmentBolt --FIELDS--> enrichmentJoinBolt
simpleHBaseEnrichmentBolt --FIELDS--> enrichmentJoinBolt
hostEnrichmentBolt --FIELDS--> enrichmentJoinBolt
geoEnrichmentBolt --FIELDS--> enrichmentErrorOutputBolt
stellarEnrichmentBolt --FIELDS--> enrichmentErrorOutputBolt
hostEnrichmentBolt --FIELDS--> enrichmentErrorOutputBolt
simpleHBaseEnrichmentBolt --FIELDS--> enrichmentErrorOutputBolt
enrichmentJoinBolt --FIELDS--> threatIntelSplitBolt
threatIntelSplitBolt --FIELDS--> simpleHBaseThreatIntelBolt
threatIntelSplitBolt --FIELDS--> stellarThreatIntelBolt
simpleHBaseThreatIntelBolt --FIELDS--> threatIntelJoinBolt
stellarThreatIntelBolt --FIELDS--> threatIntelJoinBolt
threatIntelSplitBolt --FIELDS--> threatIntelJoinBolt
threatIntelJoinBolt --FIELDS--> outputBolt
simpleHBaseThreatIntelBolt --FIELDS--> threatIntelErrorOutputBolt
stellarThreatIntelBolt --FIELDS--> threatIntelErrorOutputBolt
--------------------------------------
1876 [main] INFO o.a.s.f.Flux - Running remotely...
1876 [main] INFO o.a.s.f.Flux - Deploying topology in an ACTIVE state...
1911 [main] INFO o.a.s.StormSubmitter - Generated ZooKeeper secret payload for MD5-digest: -4812787568915311395:-5778894691446041368
2027 [main] INFO o.a.s.s.a.AuthUtils - Got AutoCreds [org.apache.storm.security.auth.kerberos.AutoTGT@798256c5]
2027 [main] INFO o.a.s.StormSubmitter - Running org.apache.storm.security.auth.kerberos.AutoTGT@798256c5
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: The TGT found is not renewable
at org.apache.storm.security.auth.kerberos.AutoTGT.populateCredentials(AutoTGT.java:103)
at org.apache.storm.StormSubmitter.populateCredentials(StormSubmitter.java:94)
at org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:214)
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:310)
at org.apache.storm.flux.Flux.runCli(Flux.java:171)
at org.apache.storm.flux.Flux.main(Flux.java:98)
Caused by: java.lang.RuntimeException: The TGT found is not renewable
at org.apache.storm.security.auth.kerberos.AutoTGT.populateCredentials(AutoTGT.java:94)
... 5 more
stdout: /var/lib/ambari-agent/data/output-1174.txt
2017-05-15 15:41:52,819 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-15 15:41:52,986 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-15 15:41:52,987 - Group['metron'] {}
2017-05-15 15:41:52,988 - Group['livy'] {}
2017-05-15 15:41:52,988 - Group['elasticsearch'] {}
2017-05-15 15:41:52,988 - Group['spark'] {}
2017-05-15 15:41:52,988 - Group['zeppelin'] {}
2017-05-15 15:41:52,989 - Group['hadoop'] {}
2017-05-15 15:41:52,989 - Group['kibana'] {}
2017-05-15 15:41:52,989 - Group['users'] {}
2017-05-15 15:41:52,989 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,990 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,991 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,991 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-15 15:41:52,992 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,992 - User['metron'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,993 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,993 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,994 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,995 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-15 15:41:52,995 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,996 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,996 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,997 - User['kibana'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,997 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,998 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,999 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-15 15:41:52,999 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-15 15:41:53,001 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-05-15 15:41:53,023 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-05-15 15:41:53,024 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-05-15 15:41:53,025 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-15 15:41:53,027 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-05-15 15:41:53,045 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-05-15 15:41:53,045 - Group['hdfs'] {}
2017-05-15 15:41:53,045 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-05-15 15:41:53,046 - FS Type:
2017-05-15 15:41:53,046 - Directory['/etc/hadoop'] {'mode': 0755}
2017-05-15 15:41:53,062 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2017-05-15 15:41:53,063 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-05-15 15:41:53,080 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-05-15 15:41:53,117 - Skipping Execute[('setenforce', '0')] due to only_if
2017-05-15 15:41:53,118 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-05-15 15:41:53,121 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-05-15 15:41:53,122 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-05-15 15:41:53,126 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2017-05-15 15:41:53,128 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
2017-05-15 15:41:53,129 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-05-15 15:41:53,140 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-05-15 15:41:53,140 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-05-15 15:41:53,141 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-05-15 15:41:53,145 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-05-15 15:41:53,163 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-05-15 15:41:53,414 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-15 15:41:53,417 - Running enrichment configure
2017-05-15 15:41:53,422 - File['/usr/hcp/1.1.0.0-71/metron/config/enrichment.properties'] {'owner': 'metron', 'content': Template('enrichment.properties.j2'), 'group': 'metron'}
2017-05-15 15:41:53,424 - Calling security setup
2017-05-15 15:41:53,425 - Directory['/usr/hcp/1.1.0.0-71/metron'] {'owner': 'metron', 'group': 'metron', 'create_parents': True, 'mode': 0755}
2017-05-15 15:41:53,425 - Directory['/home/metron/.storm'] {'owner': 'metron', 'group': 'metron', 'mode': 0755}
2017-05-15 15:41:53,427 - File['/usr/hcp/1.1.0.0-71/metron/client_jaas.conf'] {'owner': 'metron', 'content': Template('client_jaas.conf.j2'), 'group': 'metron', 'mode': 0755}
2017-05-15 15:41:53,429 - File['/home/metron/.storm/storm.yaml'] {'owner': 'metron', 'content': Template('storm.yaml.j2'), 'group': 'metron', 'mode': 0755}
2017-05-15 15:41:53,430 - File['/home/metron/.storm/storm.config'] {'owner': 'metron', 'content': Template('storm.config.j2'), 'group': 'metron', 'mode': 0755}
2017-05-15 15:41:53,431 - kinit command: /usr/bin/kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM; as user: metron
2017-05-15 15:41:53,431 - Execute['/usr/bin/kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM; '] {'user': 'metron'}
2017-05-15 15:41:53,509 - Create Metron Local Config Directory
2017-05-15 15:41:53,509 - Configure Metron global.json
2017-05-15 15:41:53,510 - Directory['/usr/hcp/1.1.0.0-71/metron/config/zookeeper'] {'owner': 'metron', 'group': 'metron', 'mode': 0755}
2017-05-15 15:41:53,514 - File['/usr/hcp/1.1.0.0-71/metron/config/zookeeper/global.json'] {'content': InlineTemplate(...), 'owner': 'metron'}
2017-05-15 15:41:53,518 - File['/usr/hcp/1.1.0.0-71/metron/config/zookeeper/../elasticsearch.properties'] {'content': InlineTemplate(...), 'owner': 'metron'}
2017-05-15 15:41:53,519 - Loading config into ZooKeeper
2017-05-15 15:41:53,519 - Execute['/usr/hcp/1.1.0.0-71/metron/bin/zk_load_configs.sh --mode PUSH -i /usr/hcp/1.1.0.0-71/metron/config/zookeeper -z hcpa-11.openstacklocal:2181,hcpa-12.openstacklocal:2181,hcpa-10.openstacklocal:2181'] {'path': [u'/usr/jdk64/jdk1.8.0_77/bin']}
2017-05-15 15:41:55,190 - Starting Metron enrichment topology: enrichment
2017-05-15 15:41:55,190 - Starting enrichment
2017-05-15 15:41:55,190 - Execute['/usr/hcp/1.1.0.0-71/metron/bin/start_enrichment_topology.sh -s enrichment -z hcpa-11.openstacklocal:2181,hcpa-12.openstacklocal:2181,hcpa-10.openstacklocal:2181'] {'user': 'metron'}
Command failed after 1 tries
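The failure above boils down to Storm's AutoTGT check: the ticket kinit obtained has no renew time. One way to see this is to run klist as the metron user after kinit and look for a "renew until" line; a sketch of that check against a hypothetical sample of non-renewable klist output:

```shell
# Hypothetical sample of klist output for a NON-renewable TGT: there is no
# "renew until" line, which is what AutoTGT.populateCredentials rejects.
sample_klist='Valid starting     Expires            Service principal
05/15/17 15:41:53  05/16/17 15:41:53  krbtgt/EXAMPLE.COM@EXAMPLE.COM'

if printf '%s\n' "$sample_klist" | grep -q 'renew until'; then
  echo "TGT is renewable"
else
  echo "TGT is NOT renewable"
fi
```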
Labels: Apache Metron