Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Views | Posted |
|---|---|
| 2725 | 04-27-2020 03:48 AM |
| 5285 | 04-26-2020 06:18 PM |
| 4450 | 04-26-2020 06:05 PM |
| 3576 | 04-13-2020 08:53 PM |
| 5380 | 03-31-2020 02:10 AM |
05-26-2018
10:34 PM
@Moustapha MOUSSA SALEY It seems your Windows machine is missing winutils.exe. Can you try this:
1. Download winutils.exe from http://public-repo-1.hortonworks.com/hdp-win-alpha/winutils.exe.
2. Set the HADOOP_HOME environment variable at the OS level to the full path of the bin folder that contains winutils.exe.
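A minimal sketch of step 2. Because these are Windows cmd.exe commands, they are echoed here as strings; C:\hadoop is an assumed install location — use wherever you placed the bin folder containing winutils.exe.

```shell
#!/bin/sh
# The cmd.exe commands for step 2, printed as strings.
# C:\hadoop is an assumed location; substitute your own path.
echo 'setx HADOOP_HOME "C:\hadoop"'
echo 'setx PATH "%PATH%;%HADOOP_HOME%\bin"'
```

After running these in a new cmd.exe session, a freshly started Spark/Hadoop process should resolve %HADOOP_HOME%\bin\winutils.exe.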
05-24-2018
12:44 PM
@Michael Bronson Please check your environment variables to see whether the KAFKA_HEAP_OPTS variable is defined twice, or whether the script is being invoked three times. Practically it will not cause harm, because duplicate JVM flags are allowed — only the last occurrence of a duplicated flag takes precedence. However, to fix it, remove "$KAFKA_HEAP_OPTS" from the previously mentioned line, then restart the broker:

# Set KAFKA specific environment variables here.
export KAFKA_HEAP_OPTS="-Xms3g -Xmx3g"
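A quick way to see whether the broker really started with duplicated heap flags is to count them in its process command line. This is a sketch: CMDLINE below is a hypothetical stand-in for a line of `ps -ef | grep -i kafka` output.

```shell
#!/bin/sh
# CMDLINE stands in for a broker command line captured via `ps -ef`.
CMDLINE="java -Xmx1g -Xms1g -Xmx3g -Xms3g kafka.Kafka config/server.properties"
# Split the command line into tokens and count the -Xmx occurrences;
# more than one means the flag is duplicated (the last one wins).
DUPES=$(echo "$CMDLINE" | tr ' ' '\n' | grep -c '^-Xmx')
if [ "$DUPES" -gt 1 ]; then
  echo "duplicate -Xmx flags found ($DUPES); the last one wins"
fi
```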
05-24-2018
12:22 PM
@Michael Bronson Regarding your query: "Why not update the script /usr/hdp/2.6.4/kafka/bin/kafka-server-start on each Kafka broker?" Editing the .sh file on every broker host is also possible, but it is error-prone and requires manual effort on each host. You have both options, but Ambari provides a better, centralized way to control the configuration and also manages the config history.
05-24-2018
11:25 AM
1 Kudo
In addition to the above comment, there are very good articles on Kafka best practices and tuning that you might want to refer to:
1. https://community.hortonworks.com/articles/80813/kafka-best-practices-1.html
2. https://community.hortonworks.com/articles/80813/kafka-best-practices-2.html
05-24-2018
11:20 AM
1 Kudo
@Michael Bronson Kafka relies heavily on the filesystem for storing and caching messages. All data is immediately written to a persistent log on the filesystem, without necessarily flushing to disk. Kafka uses heap space very carefully and does not require heap sizes larger than 5 GB. Kafka uses page-cache memory as a buffer for active writers and readers, so after you specify the JVM size (using the -Xmx and -Xms Java options), leave the remaining RAM available to the operating system for page caching. Set the "KAFKA_HEAP_OPTS" option inside "Advanced kafka-env" to a value larger than the 1 GB default and then restart Kafka: Ambari UI --> Configs --> Advanced --> "Advanced kafka-env" --> kafka-env template
# Set KAFKA specific environment variables here.
export KAFKA_HEAP_OPTS="$KAFKA_HEAP_OPTS -Xms3g -Xmx3g"

Please note that -Xms3g and -Xmx3g are just example values I picked; you can increase them based on your requirements / GC log analysis. Then restart the Kafka brokers and verify whether the new settings took effect:

# ps -ef | grep -i kafka
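To make the verification step easier to read, you can pull just the heap flags out of the process listing. A sketch (CMDLINE is a hypothetical stand-in for a line of `ps -ef | grep -i kafka` output):

```shell
#!/bin/sh
# CMDLINE stands in for a running broker's command line from `ps -ef`.
CMDLINE="java -Xms3g -Xmx3g kafka.Kafka /etc/kafka/conf/server.properties"
# Print only the heap-related flags so the effective sizes are obvious.
HEAP=$(echo "$CMDLINE" | tr ' ' '\n' | grep '^-Xm')
echo "$HEAP"
```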
05-24-2018
10:04 AM
@Michael Bronson The command you are using just shows the default values the JVM has picked and would start with:

[root@kafka01 ~]# java -XX:+PrintFlagsFinal -version | grep HeapSize

However, seeing 32 GB looks strange, because the JVM does not start with such a huge value by default unless you have an environment variable defined globally, like "_JAVA_OPTIONS" or "JAVA_OPTIONS". So please check the output of the same command after unsetting those global variables:

[root@kafka01 ~]# unset _JAVA_OPTIONS
[root@kafka01 ~]# unset JAVA_OPTIONS
[root@kafka01 ~]# java -XX:+PrintFlagsFinal -version | grep HeapSize
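The effect of a global `_JAVA_OPTIONS` can be reproduced and undone in a plain shell session; this sketch just sets and unsets the variable (the -Xmx32g value is illustrative) without needing a JVM installed:

```shell
#!/bin/sh
# Simulate the problematic global setting; every JVM launched from this
# shell would pick it up and print "Picked up _JAVA_OPTIONS: ..." to stderr.
_JAVA_OPTIONS="-Xmx32g"; export _JAVA_OPTIONS
echo "before unset: ${_JAVA_OPTIONS:-<empty>}"
# Remove it so subsequent `java` invocations fall back to JVM defaults.
unset _JAVA_OPTIONS
echo "after unset: ${_JAVA_OPTIONS:-<empty>}"
```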
05-22-2018
12:17 PM
1 Kudo
@Michael Bronson Yes, the complete Kafka service (including all the brokers) will be deleted when we make the 2nd call. If you want to delete only a particular Kafka broker, you will need to make the API call on the individual host_component instead:

# curl -i -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/hosts/kafkahost1.example.com/host_components/KAFKA_BROKER
05-22-2018
10:47 AM
1 Kudo
@Michael Bronson
1. First stop the Kafka service:

# curl -i -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"_PARSE_.STOP.KAFKA","operation_level":{"level":"SERVICE","cluster_name":"NewCluster","service_name":"KAFKA"}},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/services/KAFKA

2. Delete the Kafka service:

# curl -i -u admin:admin -H "X-Requested-By: ambari" -X DELETE http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/services/KAFKA
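The same two calls, parameterised so they are easier to adapt. This is a dry-run sketch: the commands are echoed rather than executed, and the host and cluster names are the example values from above — substitute your own.

```shell
#!/bin/sh
# Example values from the answer above; substitute your own.
AMBARI="newhwx1.example.com:8080"
CLUSTER="NewCluster"
BASE="http://$AMBARI/api/v1/clusters/$CLUSTER/services/KAFKA"
# 1. Stop the service (PUT state=INSTALLED), 2. then delete it.
echo "curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT $BASE"
echo "curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE $BASE"
```

Remove the echo wrappers (and re-add the -d payload to the PUT) to run the calls for real; the order matters because Ambari refuses to delete a service that is still running.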
05-17-2018
05:40 AM
@Mike Wong As you are selecting "Existing PostgreSQL Database", you will need to make sure you have completed the pre-setup for Hive to use the external, existing Postgres. When you use the "Existing" database option, the database must already be installed on the mentioned host and running fine, with its port (5432) accessible and bound:

# netstat -tnlpa | grep 5432
# ps -ef | grep postgres

Please see the doc on how to use Oozie with an existing database, other than the Derby database instance that Ambari installs by default: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-administration/content/using_oozie_with_postgresql.html
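A quick reachability check for the Postgres port can be scripted. This is a sketch using bash's /dev/tcp redirection; localhost and 5432 are the assumed defaults — pass your database host instead.

```shell
#!/bin/bash
# Sketch: report whether something accepts connections on the Postgres port.
check_pg() {
  host=${1:-localhost}; port=${2:-5432}
  # Opening fd 3 on /dev/tcp/<host>/<port> succeeds only if the port accepts.
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "port $port open on $host"
  else
    echo "port $port NOT reachable on $host"
  fi
}
check_pg localhost 5432
```

If the port is reported as not reachable, check that Postgres is running and that listen_addresses / pg_hba.conf allow the Ambari and Hive hosts to connect.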
05-17-2018
05:29 AM
@Michael Bronson I see the error in your command: "WatchedEvent state:SyncConnected type:None path:null". This happens because you have not supplied the "cmd" (like get or set) and its "args" (like a znode path).
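For illustration, the shape of a complete zookeeper-client invocation with both a cmd and its args. The client path (typical HDP layout) and the znode (/brokers/ids/0, where Kafka registers broker 0 by default) are assumptions; the command is echoed here rather than executed.

```shell
#!/bin/sh
# ZKCLI path is an assumption (typical HDP layout); adjust for your install.
ZKCLI="/usr/hdp/current/zookeeper-client/bin/zkCli.sh"
# A full invocation supplies the server, a cmd ("get"), and args (a znode path).
echo "$ZKCLI -server localhost:2181 get /brokers/ids/0"
```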