Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2725 | 04-27-2020 03:48 AM |
| | 5285 | 04-26-2020 06:18 PM |
| | 4449 | 04-26-2020 06:05 PM |
| | 3576 | 04-13-2020 08:53 PM |
| | 5377 | 03-31-2020 02:10 AM |
05-31-2018
11:36 AM
There was a slight typo in the above article, which was identified and fixed based on feedback provided by an HCC user on this thread: https://community.hortonworks.com/questions/194177/kafka-best-practices-kafka-jvm-performance-opts.html
05-31-2018
11:31 AM
@Michael Bronson Thank you for your feedback. I have fixed the typo in the referenced article.
05-31-2018
10:19 AM
1 Kudo
@Michael Bronson You need a space between any two JVM parameters. There is no space between the following parameters:
1. -XX:+UseG1GC-XX:MaxGCPauseMillis=20
2. -XX:G1HeapRegionSize=16M-XX:MinMetaspaceFreeRatio=50
The correct option should be:
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80"
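For reference, one way to confirm that a running broker actually picked up these flags is to inspect the JVM arguments of its process. This is only a sketch: it assumes the broker is running on the same host and that its process shows the usual kafka.Kafka main class.

```bash
# List the -XX flags of the running Kafka broker process
# (assumes the broker main class kafka.Kafka appears in the command line; adjust the pattern for your install)
ps -ef | grep -i 'kafka\.Kafka' | grep -v grep | tr ' ' '\n' | grep '^-XX'
```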
05-31-2018
09:28 AM
@Mohamed Farook Please check whether the directory is present and has the proper permissions so that the user running the Ambari server can read its contents. Are you starting ambari-server as the "root" user?
# ls -ld /var/lib/ambari-server/resources/common-services/HDFS
# ls -ld /var/lib/ambari-server/resources/common-services/HDFS/2.1.0.2.0
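If you are not sure which user the Ambari server is running as, the checks below are one rough way to find out. They are illustrative only and assume a default installation layout; the ambari-server.user property may not be present on every setup.

```bash
# Show the owner of the running ambari-server process (pattern is illustrative)
ps -eo user,pid,cmd | grep -i ambari-server | grep -v grep

# Check the configured daemon user, if set (assumes the default ambari.properties location)
grep 'ambari-server.user' /etc/ambari-server/conf/ambari.properties
```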
05-31-2018
08:16 AM
@Ian Bradshaw By any chance are you running any other service on port 8080? Can you please try the following API call to see if it returns results from the Ambari server (using your own Ambari credentials that you have reset)?
# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X GET http://127.0.0.1:8080/api/v1/services/AMBARI/components/AMBARI_SERVER
Also, do we see any error inside the ambari-server.log? If yes, can you please share the output of the above call and the server log?
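To quickly check whether some other process is already listening on port 8080, something like the following can help. Which of netstat or ss is available depends on your OS, so this is just an illustration; run it as root to see process names.

```bash
# Show any listener on TCP port 8080 (use whichever tool your OS provides)
netstat -tnlp | grep ':8080 ' || ss -tnlp | grep ':8080'
```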
05-31-2018
08:07 AM
@Mohamed Farook Based on the stack trace that you posted here, it looks like the "HDFS" directory is missing in the following location:
# ls -ld /var/lib/ambari-server/resources/common-services/HDFS
(OR)
# ls -ld /var/lib/ambari-server/resources/common-services/HDFS/2.1.0.2.0
(OR)
# ls /var/lib/ambari-server/resources/common-services/HDFS/2.1.0.2.0
alerts.json configuration kerberos.json metainfo.xml metrics.json package widgets.json
Please check whether the directory is present and has the proper permissions so that the user running the Ambari server can read its contents.
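If the directory does exist but the listing fails because of permissions, the commands below are a hedged sketch of how to make it readable again. The root:root ownership and 755 mode are assumptions based on a default, root-run Ambari server; adapt them to your own setup.

```bash
# Restore ownership and read/execute permissions on the common-services tree
# (root:root and 755 are assumptions for a default, root-run Ambari server)
chown -R root:root /var/lib/ambari-server/resources/common-services/HDFS
chmod -R 755 /var/lib/ambari-server/resources/common-services/HDFS
```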
05-31-2018
03:00 AM
@Mohamed Farook Can you please share the complete ambari-server.log? The complete stack trace of the error helps. In the above case we see some errors, but they may or may not be related to your issue, so we will need to see the complete ambari-server.log (and ambari-server.out).
05-30-2018
09:43 PM
@Anpan K Ambari Server uses Version Definition Files (VDFs) to understand which product and component versions are included in a release. A VDF lists the supported operating systems and their repo URLs, and it also provides the name and version of each individual component shipped with that version of the product. Example: HDP 2.6.5 (VDF): http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.5.0/HDP-2.6.5.0-292.xml

Users can also create their own VDFs (following the same format as the example above) by editing the <repoid/> tags for each repository to match their local repository or Satellite/Spacewalk channel names configured previously.

VDFs help minimize the effort of registering a new version. They also help with automated cluster creation, especially with Blueprints: https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-release-notes/content/ambari_relnotes-2.6.0.0-behavioral-changes.html
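As a quick way to see which repo IDs a VDF defines before editing them for a local repository, you can pull down the example VDF and grep it. This is only an illustration using the URL from above; the /tmp file name is arbitrary.

```bash
# Download the example HDP 2.6.5 VDF and list its repo ID entries (illustrative only)
curl -s -o /tmp/HDP-2.6.5.0-292.xml http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.5.0/HDP-2.6.5.0-292.xml
grep -i 'repoid' /tmp/HDP-2.6.5.0-292.xml
```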
05-28-2018
10:07 PM
@Knows NotMuch In the Sandbox it works because the Sandbox includes a pre-configured cluster with pre-configured users and their home directories.
05-28-2018
10:06 PM
@Knows NotMuch A common requirement when initializing user accounts to run Hadoop components is the existence of a unique /user/<username> HDFS home directory. You can enable automated creation of a /user/<username> HDFS home directory for each user that you create. Home directory creation occurs for users created either manually through the Ambari Admin page or through LDAP synchronization.

Please see the link below to learn more about the new ambari.properties property and the script "post-user-creation-hook.sh":
ambari.post.user.creation.hook.enabled=true
Script: ambari.post.user.creation.hook=/var/lib/ambari-server/resources/scripts/post-user-creation-hook.sh
https://docs.hortonworks.com/HDPDocuments/Ambari-2.6.0.0/bk_ambari-administration/content/create_user_home_directory.html
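As a rough sketch of how these properties could be turned on, assuming the default ambari.properties location of /etc/ambari-server/conf/ambari.properties (that path is an assumption; follow the linked documentation for your version):

```bash
# Hedged sketch: enable the post-user-creation hook, then restart Ambari Server
# /etc/ambari-server/conf/ambari.properties is the assumed default location of ambari.properties
echo "ambari.post.user.creation.hook.enabled=true" >> /etc/ambari-server/conf/ambari.properties
echo "ambari.post.user.creation.hook=/var/lib/ambari-server/resources/scripts/post-user-creation-hook.sh" >> /etc/ambari-server/conf/ambari.properties
ambari-server restart
```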