Member since: 01-03-2017
Posts: 181
Kudos Received: 44
Solutions: 24
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1852 | 12-02-2018 11:49 PM |
| | 2474 | 04-13-2018 06:41 AM |
| | 2043 | 04-06-2018 01:52 AM |
| | 2349 | 01-07-2018 09:04 PM |
| | 5696 | 12-20-2017 10:58 PM |
10-05-2017
02:27 PM
Hi @Sumit Sharma, For the given data, the ReplaceText processor will do the job: it tokenizes the data with the given regular expression and replaces the text. The regex:

(?s)(^\[.*\]) :(.*?):(.*?):(.*?):(.*?):(.*?): Receiver Node(.*?), Sender Node(.*?), Message(.*?)$

and the replacement text for the same (group 7 is the receiver, group 8 the sender, group 9 the message):

Date: $1, Receiver: $7, Sender: $8, Message: $9

(see the attached screenshots: nifi-replacetext.png, nifi-output-data.png) Hope this helps !!
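Outside NiFi, the same capture-and-replace idea can be sketched with sed. Note that sed's extended regex supports neither the non-greedy `(.*?)` quantifier nor the `(?s)` flag, so this greedy variant only behaves the same when the fields themselves contain no colons; the sample log line below is invented purely for illustration:

```shell
# Greedy sed approximation of the ReplaceText regex (assumes colon-free fields;
# the input line is a made-up example, not real data from the question)
line='[2017-10-05] :f1:f2:f3:f4:f5: Receiver NodeA, Sender NodeB, MessageHello'
echo "$line" | sed -E \
  's/^(\[.*\]) :(.*):(.*):(.*):(.*):(.*): Receiver Node(.*), Sender Node(.*), Message(.*)$/Date: \1, Receiver: \7, Sender: \8, Message: \9/'
# → Date: [2017-10-05], Receiver: A, Sender: B, Message: Hello
```

In NiFi itself the Java regex engine is used, so the original non-greedy pattern works as written.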
10-05-2017
01:06 PM
Hi @Ashnee Sharma, It looks like you recently enabled Phoenix but it is not configured properly for Spark. Can you please ensure that the libraries (jars) in the directory "/usr/hdp/current/phoenix-client" are added to the Spark CLASSPATH? hive-phoenix-handler-<version>.jar is the library Spark is complaining about, so make sure that particular jar is available, e.g. via the spark-env.sh template in Ambari:
CLASSPATH=$CLASSPATH:/usr/hdp/current/phoenix-client/*
Alternatively, you can pass the following options to spark-submit, provided these paths contain the Phoenix client jars: --conf spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/* --conf spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/* Hope that helps !!
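As a quick sanity check of what the spark-env.sh line does, here is a minimal sketch; the /usr/hdp path is the assumed HDP layout, and the demo just inspects the resulting variable rather than launching Spark (the trailing `/*` is a JVM classpath wildcard, deliberately left unexpanded by the shell):

```shell
# Sketch: build the classpath entry spark-env.sh would export
# (path is the assumed HDP symlink layout, not verified here)
PHOENIX_CLIENT_DIR=/usr/hdp/current/phoenix-client
CLASSPATH="$CLASSPATH:$PHOENIX_CLIENT_DIR/*"
echo "$CLASSPATH"   # the quoted "*" is passed through to the JVM, not globbed
```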
10-05-2017
09:18 AM
Hi @badr bakkou, Can you please check the Ambari version (both server and agents) which you have chosen? It must be 2.5.1. At the same time, please ensure that you have the appropriate management pack as well (compatible version mentioned below). Please find the compatibility matrix here. Hope that helps !!
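For reference, management packs are installed on the Ambari server host with the `ambari-server install-mpack` command; this is a CLI fragment to run against your own server, and the tarball path and version below are placeholders for whichever mpack matches your Ambari 2.5.1:

```shell
# Check the installed Ambari server version (RPM-based OS assumed)
rpm -q ambari-server

# Install the matching management pack (path and version are placeholders)
ambari-server install-mpack --mpack=/tmp/hdf-ambari-mpack-<version>.tar.gz --verbose
ambari-server restart
```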
10-05-2017
12:06 AM
Hi @Vijay Kumar, You can use try/catch to handle the exception and control the execution flow depending on the error. For instance:

try {
  // code to be executed
} catch {
  case e: Exception =>
    println(s"error occurred while processing the data frame: ${e.getMessage}")
    System.exit(1)
}

Hope this helps!!
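Since System.exit(1) surfaces as a non-zero exit status of the driver process, a wrapper script can branch on it. A minimal sketch, where `sh -c 'exit 1'` is a stand-in for the real spark-submit invocation (which is assumed, not shown, here):

```shell
# Hypothetical wrapper: `sh -c 'exit 1'` stands in for: spark-submit --class Main app.jar
if sh -c 'exit 1'; then
  echo "job succeeded"
else
  # $? still holds the tested command's status at this point
  echo "job failed with status $?"
fi
```

This lets an orchestration tool (cron, Oozie, Airflow) distinguish a failed Spark job from a successful one.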
10-03-2017
09:02 AM
Hi @Hoang Le, Not sure if you hit this bug in your version: https://issues.apache.org/jira/browse/SPARK-14261. If so, an upgrade may help !!
10-03-2017
08:36 AM
Hi @raouia, The error mentions an unknown host exception (ambari-agent1/2), which means those nodes cannot be resolved by name. If you do not have fully qualified domain names in DNS, you can update your /etc/hosts file with the respective IP addresses so the hosts can be discovered. That means appending the following to /etc/hosts on all hosts, so that every host can resolve the others:

<ip of ambari-agent1> ambari-agent1
<ip of ambari-agent2> ambari-agent2
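A sketch of the append, written against a scratch file so it is safe to run anywhere; the 192.168.x addresses are invented placeholders for your agents' real IPs:

```shell
# Append static host entries (demo writes to a scratch file; in real use set HOSTS=/etc/hosts
# and drop the truncation line -- you only ever append to the real file)
HOSTS=/tmp/hosts.demo
: > "$HOSTS"
cat >> "$HOSTS" <<'EOF'
192.168.1.11 ambari-agent1
192.168.1.12 ambari-agent2
EOF
grep ambari-agent1 "$HOSTS"
# → 192.168.1.11 ambari-agent1
```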
10-03-2017
08:28 AM
Hi @uri ben-ari, The zookeeper name can be found in Ambari (any of the zookeeper servers will do). The Kafka topic name is the directory name without the partition index (the part after kafka-logs, e.g. mmno.aso.prpl.proces). On another note: these logs are Kafka messages, not application logs, so please look for an option to reduce the topic's retention, which will purge some of the unused messages from the topic.
10-03-2017
07:53 AM
Hi @uri ben-ari, Yes, that is possible, given the following:

Partitions: Increasing the partitions will spread the data across more log files, which gives the benefit of increased parallelism and reduces the size of each log file (by increasing the number of files). Note that this will not reduce the data volume on disk; it only splits it into multiple files. Procedure to increase the partitions:

# Increase the partition count of an existing topic
bin/kafka-topics.sh --alter --zookeeper <Zookeeper_server>:2181 --topic <topicName> --partitions <new_partition_count>

and then ensure the partition reassignment script is executed with the --execute option: bin/kafka-reassign-partitions.sh. More on these utilities, with syntax and examples, can be found here.

Data Retention: If you do not need to hold the data, it can be purged once the retention limit is reached. This can be set at topic creation time with the --config option (retention.bytes or retention.ms), or on an existing topic:

# Example
bin/kafka-configs.sh --zookeeper <zookeeper_server>:2181 --entity-type topics --alter --add-config retention.ms=86400000 --entity-name <topic_name>

Hope this helps!!
10-03-2017
05:55 AM
Hi @Gulshan Agivetova, To remove the zookeeper server from a host, the simplest method is the REST API, followed by a restart of the remaining zookeeper servers. Command to remove the component:

curl -i -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://<ambari-Server>:<ambari-port>/api/v1/clusters/<cluster-name>/hosts/<fully_qualified_host_name(to_be_removed)>/host_components/ZOOKEEPER_SERVER

I presume at least the other hosts in the zookeeper quorum are alive (to retain the data). On another note, it is always good to have an odd number of zookeeper nodes in the cluster, so you can add a new zookeeper server by doing: go to Hosts in Ambari -> select the new node to be installed with Zookeeper Server -> click Components Add+ -> select Zookeeper Server and install (same for ZOOKEEPER_CLIENT). Hope this helps!
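One detail worth noting: Ambari generally refuses to delete a component that is still running, so a hedged sketch of the full sequence stops it first by putting it into the INSTALLED (stopped) state. This is a CLI fragment to run against your own cluster; credentials, port, cluster and host names are placeholders:

```shell
# 1) Stop the ZOOKEEPER_SERVER component on the target host (state INSTALLED = stopped)
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"HostRoles": {"state": "INSTALLED"}}' \
  http://<ambari-server>:8080/api/v1/clusters/<cluster>/hosts/<host_fqdn>/host_components/ZOOKEEPER_SERVER

# 2) Delete the now-stopped component
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  http://<ambari-server>:8080/api/v1/clusters/<cluster>/hosts/<host_fqdn>/host_components/ZOOKEEPER_SERVER
```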