Member since: 03-28-2016
99 Posts
9 Kudos Received
7 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 940 | 05-08-2018 06:39 AM
| 632 | 04-27-2018 10:12 AM
| 1458 | 09-11-2017 01:07 PM
| 15459 | 03-14-2017 10:00 AM
| 3438 | 02-10-2017 08:40 AM
06-25-2019
06:33 AM
After configuring multiple versions of the Spark client, the manually configured Spark client throws an exception similar to the one described in the link below: https://community.hortonworks.com/questions/155634/issue-with-launching-application-master.html
06-24-2019
09:16 AM
Need help configuring multiple versions of the Spark client on the same edge node running on YARN.
Labels:
- Apache Spark
02-27-2019
06:13 AM
1 Kudo
Finally, the parser started working! I cleared the parser's ZooKeeper metadata directory and restarted the Metron parser.
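For anyone hitting the same state, a rough sketch of that clean-up. The znode layout shown (a `/metron/topology` root with a parsers child) is an assumption about a default Metron install, not taken from this thread — always inspect with `ls` before removing anything, and restart the parser afterwards so a fresh config is pushed back to ZooKeeper:

```
# Sketch only: the znode paths below are assumptions -- verify with `ls` first.
/usr/hdp/current/zookeeper-server/bin/zkCli.sh -server <zk_host>:2181
[zk: <zk_host>:2181(CONNECTED) 0] ls /metron/topology
[zk: <zk_host>:2181(CONNECTED) 1] rmr /metron/topology/parsers
```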
01-04-2019
12:09 PM
@Harshali Patel The NameNode service stores its metadata in the directory configured by the "dfs.namenode.name.dir" property in hdfs-site.xml. This directory holds the metadata in the form of the fsimage (and edit logs) and is also used for checkpointing. For more details, go through the link posted above by Jay.
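For reference, a minimal sketch of that property in hdfs-site.xml — the value shown is an example path, not a recommendation for any particular cluster:

```xml
<!-- hdfs-site.xml: where the NameNode keeps its fsimage and edit logs.
     The path below is illustrative; use your cluster's actual layout. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/hadoop/hdfs/namenode</value>
</property>
```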
01-02-2019
07:57 AM
Execute['/usr/hcp/1.6.0.0-7/metron/bin/start_parser_topology.sh -k <server>:6667 -z <server>:2181 -s bro -ksp PLAINTEXT'] {'logoutput': True, 'tries': 3, 'user': 'metron', 'try_sleep': 5}
Submitting parser topology; args='-k <server>:6667 -z <server>:2181 -s bro -ksp PLAINTEXT'
Running: /usr/jdk64/jdk1.8.0_112/bin/java -server -Ddaemon.name= -Dstorm.options= -Dstorm.home=/usr/hdp/2.6.5.0-292/storm -Dstorm.log.dir=/var/log/storm -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /usr/hdp/2.6.5.0-292/storm/lib/asm-5.0.3.jar:/usr/hdp/2.6.5.0-292/storm/lib/clojure-1.7.0.jar:/usr/hdp/2.6.5.0-292/storm/lib/disruptor-3.3.2.jar:/usr/hdp/2.6.5.0-292/storm/lib/kryo-3.0.3.jar:/usr/hdp/2.6.5.0-292/storm/lib/log4j-api-2.8.2.jar:/usr/hdp/2.6.5.0-292/storm/lib/log4j-core-2.8.2.jar:/usr/hdp/2.6.5.0-292/storm/lib/log4j-over-slf4j-1.6.6.jar:/usr/hdp/2.6.5.0-292/storm/lib/log4j-slf4j-impl-2.8.2.jar:/usr/hdp/2.6.5.0-292/storm/lib/minlog-1.3.0.jar:/usr/hdp/2.6.5.0-292/storm/lib/objenesis-2.1.jar:/usr/hdp/2.6.5.0-292/storm/lib/reflectasm-1.10.1.jar:/usr/hdp/2.6.5.0-292/storm/lib/ring-cors-0.1.5.jar:/usr/hdp/2.6.5.0-292/storm/lib/servlet-api-2.5.jar:/usr/hdp/2.6.5.0-292/storm/lib/slf4j-api-1.7.21.jar:/usr/hdp/2.6.5.0-292/storm/lib/storm-core-1.1.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/storm/lib/storm-rename-hack-1.1.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/storm/lib/zookeeper.jar:/usr/hdp/2.6.5.0-292/storm/lib/ambari-metrics-storm-sink.jar org.apache.storm.daemon.ClientJarTransformerRunner org.apache.storm.hack.StormShadeTransformer /usr/hcp/1.6.0.0-7/metron/lib/metron-parsers-0.5.1.1.6.0.0-7-uber.jar /tmp/1deecdb80e6011e9b9db005056a4f2b7.jar
Running: /usr/jdk64/jdk1.8.0_112/bin/java -Ddaemon.name= -Dstorm.options= -Dstorm.home=/usr/hdp/2.6.5.0-292/storm -Dstorm.log.dir=/var/log/storm -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib:/usr/hdp/current/storm-client/lib -Dstorm.conf.file= -cp /usr/hdp/2.6.5.0-292/storm/lib/asm-5.0.3.jar:/usr/hdp/2.6.5.0-292/storm/lib/clojure-1.7.0.jar:/usr/hdp/2.6.5.0-292/storm/lib/disruptor-3.3.2.jar:/usr/hdp/2.6.5.0-292/storm/lib/kryo-3.0.3.jar:/usr/hdp/2.6.5.0-292/storm/lib/log4j-api-2.8.2.jar:/usr/hdp/2.6.5.0-292/storm/lib/log4j-core-2.8.2.jar:/usr/hdp/2.6.5.0-292/storm/lib/log4j-over-slf4j-1.6.6.jar:/usr/hdp/2.6.5.0-292/storm/lib/log4j-slf4j-impl-2.8.2.jar:/usr/hdp/2.6.5.0-292/storm/lib/minlog-1.3.0.jar:/usr/hdp/2.6.5.0-292/storm/lib/objenesis-2.1.jar:/usr/hdp/2.6.5.0-292/storm/lib/reflectasm-1.10.1.jar:/usr/hdp/2.6.5.0-292/storm/lib/ring-cors-0.1.5.jar:/usr/hdp/2.6.5.0-292/storm/lib/servlet-api-2.5.jar:/usr/hdp/2.6.5.0-292/storm/lib/slf4j-api-1.7.21.jar:/usr/hdp/2.6.5.0-292/storm/lib/storm-core-1.1.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/storm/lib/storm-rename-hack-1.1.0.2.6.5.0-292.jar:/usr/hdp/2.6.5.0-292/storm/lib/zookeeper.jar:/usr/hdp/2.6.5.0-292/storm/lib/ambari-metrics-storm-sink.jar:/tmp/1deecdb80e6011e9b9db005056a4f2b7.jar:/usr/hdp/current/storm-supervisor/conf:/usr/hdp/2.6.5.0-292/storm/bin -Dstorm.jar=/tmp/1deecdb80e6011e9b9db005056a4f2b7.jar -Dstorm.dependency.jars= -Dstorm.dependency.artifacts={} org.apache.metron.parsers.topology.ParserTopologyCLI -k <server>:6667 -z <server>:2181 -s bro -ksp PLAINTEXT
628 [main] INFO o.a.c.f.i.CuratorFrameworkImpl - Starting
681 [main-EventThread] INFO o.a.c.f.s.ConnectionStateManager - State change: CONNECTED
org.apache.metron.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "invalidWriterClassName" (class org.apache.metron.common.configuration.SensorParserConfig), not marked as ignorable (25 known properties: "errorWriterParallelism", "numAckers", "parserClassName", "readMetadata", "errorTopic", "errorWriterClassName", "filterClassName", "outputTopic", "numWorkers", "writerClassName", "rawMessageStrategy", "spoutConfig", "rawMessageStrategyConfig", "sensorTopic", "securityProtocol", "spoutNumTasks", "parserParallelism", "parserConfig", "errorWriterNumTasks", "cacheConfig", "mergeMetadata", "spoutParallelism", "fieldTransformations", "parserNumTasks", "stormConfig"])
at [Source: java.io.ByteArrayInputStream@33ecda92; line: 1, column: 192] (through reference chain: org.apache.metron.common.configuration.SensorParserConfig["invalidWriterClassName"])
at org.apache.metron.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:62)
at org.apache.metron.jackson.databind.DeserializationContext.reportUnknownProperty(DeserializationContext.java:851)
at org.apache.metron.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1085)
at org.apache.metron.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1389)
at org.apache.metron.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1367)
at org.apache.metron.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:266)
at org.apache.metron.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:125)
at org.apache.metron.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3807)
at org.apache.metron.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2867)
at org.apache.metron.common.utils.JSONUtils.load(JSONUtils.java:102)
at org.apache.metron.common.configuration.ParserConfigurations.updateSensorParserConfig(ParserConfigurations.java:40)
at org.apache.metron.common.configuration.ParserConfigurations.updateSensorParserConfig(ParserConfigurations.java:36)
at org.apache.metron.common.configuration.ConfigurationsUtils.lambda$updateParserConfigsFromZookeeper$0(ConfigurationsUtils.java:219)
at org.apache.metron.common.configuration.ConfigurationsUtils.updateConfigsFromZookeeper(ConfigurationsUtils.java:209)
at org.apache.metron.common.configuration.ConfigurationsUtils.updateParserConfigsFromZookeeper(ConfigurationsUtils.java:217)
at org.apache.metron.parsers.topology.ParserTopologyBuilder.getSensorParserConfig(ParserTopologyBuilder.java:378)
at org.apache.metron.parsers.topology.ParserTopologyBuilder.build(ParserTopologyBuilder.java:120)
at org.apache.metron.parsers.topology.ParserTopologyCLI.getParserTopology(ParserTopologyCLI.java:571)
at org.apache.metron.parsers.topology.ParserTopologyCLI.createParserTopology(ParserTopologyCLI.java:540)
at org.apache.metron.parsers.topology.ParserTopologyCLI.main(ParserTopologyCLI.java:601)
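The root cause is visible in the exception itself: the sensor parser config stored in ZooKeeper contains the field "invalidWriterClassName", which the strict deserializer rejects. A hypothetical Python sketch of the same strict-validation idea, using the known-property list taken verbatim from the error message — this is an illustration, not Metron code:

```python
# Illustration of strict config validation, mirroring what Jackson does when
# unknown properties are not ignorable. Property list copied from the error.
KNOWN_PROPERTIES = {
    "errorWriterParallelism", "numAckers", "parserClassName", "readMetadata",
    "errorTopic", "errorWriterClassName", "filterClassName", "outputTopic",
    "numWorkers", "writerClassName", "rawMessageStrategy", "spoutConfig",
    "rawMessageStrategyConfig", "sensorTopic", "securityProtocol",
    "spoutNumTasks", "parserParallelism", "parserConfig",
    "errorWriterNumTasks", "cacheConfig", "mergeMetadata",
    "spoutParallelism", "fieldTransformations", "parserNumTasks",
    "stormConfig",
}

def validate_parser_config(config: dict) -> list:
    """Return the sorted list of unrecognized field names in a config."""
    return sorted(k for k in config if k not in KNOWN_PROPERTIES)

# A config containing the bad key from the stack trace fails validation:
bad = {
    "parserClassName": "org.apache.metron.parsers.bro.BasicBroParser",
    "invalidWriterClassName": "x",
}
```

Renaming or removing the offending key in the stored config (or resetting the parser's ZooKeeper metadata, as noted in the later post) clears the error.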
Labels:
- Apache Metron
07-05-2018
09:56 AM
My stack is: HCP 1.4.2.0 with Metron 0.4.1.4.2.0 (i.e., Metron 0.4.1), HDP 2.6.5.0-292, Kafka 1.0.0, Storm 1.1.0. Do I have to downgrade Kafka? If yes, please suggest how.
07-05-2018
09:36 AM
@Sindhu It's in the Metron Alerts UI log, but there is no error in the Metron Management or REST logs. I changed the port to the 6000 series and that error is resolved, but now the problem is different: the Metron service on my cluster shows green, but Storm, which Metron uses internally, is not able to read the Kafka topic, and there are no errors in any of the logs. Can you please suggest a Metron version I can use with HDP 2.6.5.0, which includes Kafka 1.0.0 and Storm 1.1.0?
06-29-2018
02:47 PM
@nallen True, but after registering the HCP 1.5.0 Ambari mpack on HDP 2.6.5.0 I only get the option to install Metron; there is no option to add Kibana and Elasticsearch.
06-28-2018
07:44 PM
I need to unregister both the HDF and HCP mpacks registered on the cluster. Both the ambari-server --uninstall-mpack and --upgrade-mpack options say there is no management pack to uninstall.
Labels:
- Apache Ambari
06-08-2018
10:12 AM
@Mathi Murugan
1. Install the Kerberos workstation and client packages on the host: yum install krb5-workstation krb5-client
2. Copy the Kerberos config file (/etc/krb5.conf) to the new host.
3. Add the host, choosing the proper template, using Ambari: https://hortonworks.com/hadoop-tutorial/using-apache-ambari-add-new-nodes-existing-cluster/
05-23-2018
09:24 AM
@brahmasree b Connect to ZooKeeper, delete the znode related to the Storm service, and then restart both the ZooKeeper and Storm services.
/usr/hdp/current/zookeeper-server/bin/zkCli.sh -server <hostname>:2181
[zk: <hostname>:2181(CONNECTED) 0] ls /
[zk: <hostname>:2181(CONNECTED) 1] rmr /storm
05-23-2018
06:29 AM
@Matthias Tewordt The link below should help you: https://stackoverflow.com/questions/2983248/com-mysql-jdbc-exceptions-jdbc4-communicationsexception-communications-link-fai
05-16-2018
10:39 AM
@Mudassar Hussain The links below may help with your practice: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_yarn-resource-management/content/ch_capacity_scheduler.html https://hortonworks.com/tutorial/configuring-yarn-capacity-scheduler-with-apache-ambari/
05-16-2018
10:23 AM
@omkar powar Stop the Hadoop services first, then delete the temporary Secondary NameNode directory (the "hadoop.tmp.dir" property gives the path to the Secondary NameNode data directory). After this, start the services again and the issue should be fixed.
05-16-2018
10:07 AM
@Ankita Ghate Disable the proxy on the server and try starting the ResourceManager from Ambari.
05-15-2018
01:27 PM
@Hardeep Singh Check whether all the disk partitions still exist after the reboot.
05-15-2018
09:49 AM
@Praveen Patel Try the HCatalog partition key options:
--hcatalog-database <database> --hcatalog-table <table_name> --map-column-java <column_name>=String --hcatalog-partition-keys 'col1,col2' --hcatalog-partition-values 'XXXXXXXX' --verbose -m X --split-by XXXXXX
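A sketch of how those options fit into a full sqoop import invocation — the connection string, credentials, table, and column names below are placeholders, not values from this thread:

```
sqoop import \
  --connect jdbc:mysql://<db_host>:3306/<source_db> \
  --username <user> -P \
  --table <source_table> \
  --hcatalog-database <database> \
  --hcatalog-table <table_name> \
  --map-column-java <column_name>=String \
  --hcatalog-partition-keys 'col1,col2' \
  --hcatalog-partition-values 'val1,val2' \
  --split-by <numeric_column> \
  -m 4 --verbose
```

Note that --hcatalog-partition-values must list one value per key named in --hcatalog-partition-keys, in the same order.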
05-09-2018
06:31 AM
1 Kudo
@Kumar Deepak The "spark" user is a service user, hence it has no password. You should switch to it from root or a sudo-capable user.
05-08-2018
10:08 AM
1 Kudo
The HDF 3.1.x stack works with Ambari 2.6.1 and higher. Please read the support matrix of the HDF stack you are using.
05-08-2018
09:17 AM
@Vinay K Which version of Ambari are you using? And are you using an mpack?
05-08-2018
06:39 AM
1 Kudo
@Vinay K You can use HDF with HDP 2.6.2. Try downloading the suitable Ambari mpack and installing it. Any version higher than HDF 3.0.1.1 should be compatible with your requirement. Link to install the Ambari mpack for HDF: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.1/bk_installing-hdf-and-hdp/content/ch_install-mpack.html
04-27-2018
12:51 PM
I faced the same problem while upgrading the HDP version. I logged into the Ranger UI as the admin user, reset the same password for the "admin" and "amb_ranger_admin" users in the user section, and restarted the Ranger service. This worked for me.
04-27-2018
10:18 AM
@Geoffrey Shelton Okot Using a Version Definition File, I am not able to register a version with a local repository: the "Save" button does not get enabled. The release notes mention that this issue was fixed in Ambari 2.6.1.3 or later.
04-27-2018
10:12 AM
@Zsolt Szabo Try using HCP 1.4.1; it comes with Metron 0.4.2, so there is no need to build Metron yourself.
04-27-2018
09:21 AM
Ambari 2.6.1.5 is not able to register new HDP versions.
Labels:
- Apache Ambari
04-25-2018
09:45 AM
@Jan De Luyck Set the parameters below in the "Custom spark-defaults" config section in Ambari (or in your spark-defaults.conf) to take care of these massive logs:
spark.history.fs.cleaner.enabled=true
spark.history.fs.cleaner.interval=1d
spark.history.fs.cleaner.maxAge=5d
04-25-2018
09:40 AM
@Yishai Bouganim Use Java 1.7 with the sqljdbc41 driver.
01-19-2018
09:35 AM
@Alberto Rodriguez I got the same issue; I just disabled the proxy and started the service. Running curl with "--noproxy" is the better option; I will try that.