Member since: 11-07-2016
Posts: 637
Kudos Received: 253
Solutions: 144
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2286 | 12-06-2018 12:25 PM
 | 2341 | 11-27-2018 06:00 PM
 | 1812 | 11-22-2018 03:42 PM
 | 2881 | 11-20-2018 02:00 PM
 | 5228 | 11-19-2018 03:24 PM
05-29-2018
07:26 AM
1 Kudo
@bharat sharma, You can do it in either of two ways: my_list[0].__getitem__("col name") or my_list[0].asDict()["col name"]. Please "Accept" the answer if this works. -Aditya
05-28-2018
11:01 AM
2 Kudos
@Henry Luo, This is done by the Ambari stack advisor. The stack advisor is invoked whenever a service is added or deleted or any config change is made, and it recommends the set of configs to be used. If you want to tweak something beyond what it recommends, you can do it in stack_advisor.py https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.6/services/stack_advisor.py. If you are using HDP, the path for stack_advisor.py is /var/lib/ambari-server/resources/stacks/HDP/{HDP-version}/services/stack_advisor.py. You can make the changes in that file and it should work. You need to restart the Ambari server for the new changes to be reflected. Ambari agents also cache these files, so you need to restart the agents as well for the latest changes to be picked up. Additional reference: https://community.hortonworks.com/questions/141855/stack-advisor-how-to-use-it.html?childToView=141892 Please "Accept" the answer if you find this helpful, or revert back if you need help 🙂 -Aditya
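As a quick sketch of the edit-and-restart cycle (the HDP version below is an assumption; substitute the stack version your cluster actually runs):

```shell
# Assumed stack version; substitute the one your cluster runs.
HDP_VERSION="2.6"
SA_PATH="/var/lib/ambari-server/resources/stacks/HDP/${HDP_VERSION}/services/stack_advisor.py"
echo "Edit: ${SA_PATH}"

# After editing, restart so the change is picked up everywhere:
#   ambari-server restart        # server-side
#   ambari-agent restart         # on every host; agents cache these files
```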
05-28-2018
10:43 AM
2 Kudos
@Vinay K, The 2 jobs which are running are Spark Thrift Servers, which run as YARN applications. There is no need to worry; if you stop the Spark Thrift Servers, you won't see them running. The Spark2 thrift server runs with the app name "Thrift JDBC/ODBC Server", while the Spark/Spark1 thrift server runs with the app name "org.apache.spark.sql.hive.thriftserver.HiveThriftServer2". Please "Accept" the answer if this helps. -Aditya
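To verify, you can look for those app names in the YARN application list. Below is a sketch that filters an illustrative, made-up sample; on a live cluster you would pipe the real `yarn application -list` output into the same grep:

```shell
# On a live cluster:
#   yarn application -list | grep -E "Thrift JDBC/ODBC Server|HiveThriftServer2"
# Illustrative sample output (application IDs are made up):
matches=$(cat <<'EOF' | grep -cE "Thrift JDBC/ODBC Server|HiveThriftServer2"
application_1527000000001_0001  Thrift JDBC/ODBC Server                                    SPARK      hive
application_1527000000001_0002  org.apache.spark.sql.hive.thriftserver.HiveThriftServer2  SPARK      hive
application_1527000000001_0003  some-other-yarn-job                                        MAPREDUCE  user1
EOF
)
echo "thrift servers found: $matches"
```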
05-25-2018
03:28 PM
1 Kudo
@Michael Bronson, Yes, both of the above configs look fine, and your final curl call will look like this: curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '[
{
"Clusters": {
"desired_config": [
{
"type": "kafka-env",
"tag": "unique value",
"properties" : {
"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi\nexport KAFKA_HEAP_OPTS=\"-Xmx8g -Xms8g\"\nKAFKA_JVM_PERFORMANCE_OPTS=\"-XX:MetaspaceSize=96m -XX:+UseG1GC-XX:MaxGCPauseMillis=20 - XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M-XX:MinMetaspaceFreeRatio=50 - XX:MaxMetaspaceFreeRatio=80\n"",
"is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536"
},
"service_config_version_note": "New config version"
}
]
}
}
]' "http://master02:8080/api/v1/clusters/HDP"
05-25-2018
11:53 AM
1 Kudo
@Michael Bronson, Yes, the steps are the same for any config change in kafka-env. Note that the key is KAFKA_JVM_PERFORMANCE_OPTS, not KAFKA_HEAP_OPTS.
05-25-2018
11:10 AM
1 Kudo
@Michael Bronson, You can try setting export KAFKA_JVM_PERFORMANCE_OPTS="-Xmx8g -Xms8g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80"
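One thing to watch: each JVM flag must be its own space-separated token (no `-XX:+UseG1GC-XX:...` fusions or stray `- XX:` splits, which the JVM will reject). A quick shell sanity check, as a sketch:

```shell
export KAFKA_JVM_PERFORMANCE_OPTS="-Xmx8g -Xms8g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80"

# Every token should start with "-"; anything else means a flag got
# fused to its neighbour or split by a stray space.
bad=0
for opt in $KAFKA_JVM_PERFORMANCE_OPTS; do
  case "$opt" in
    -*) ;;
    *) echo "malformed token: $opt"; bad=1 ;;
  esac
done
[ "$bad" -eq 0 ] && echo "all tokens look OK"
```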
05-25-2018
10:10 AM
1 Kudo
@Michael Bronson, The config has to be added in the kafka-env template. Yes, this can be done through the API. Please look at this similar question on making the change using the API: https://community.hortonworks.com/questions/193769/how-to-add-variable-in-kafka-env-template-by-api.html -Aditya
05-25-2018
10:07 AM
@Charbel Keyrouz, Yes, you can add multiple TServers to a node. Support was added in Accumulo 1.8: https://accumulo.apache.org/1.9/accumulo_user_manual.html#_running_multiple_tabletservers_on_a_single_node https://issues.apache.org/jira/browse/ACCUMULO-4328 I guess this cannot be done using Ambari. On a side note, the latest HDP 2.6.5 still uses Accumulo 1.7.0: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_release-notes/content/comp_versions.html -Aditya
05-25-2018
09:57 AM
2 Kudos
@Michael Bronson, You can do this in 3 steps. 1) Get the latest version tag for kafka-env by running the curl request below: curl -u admin:admin -H "X-Requested-By: ambari" http://{ambari-host}:{ambari-port}/api/v1/clusters/{cluster-name}?fields=Clusters/desired_configs
Sample response:
{
"href" : "http://localhost:8080/api/v1/clusters/clustername?fields=Clusters/desired_configs",
"Clusters" : {
"cluster_name" : "clustername",
"version" : "HDP-2.6",
"desired_configs" : {
"accumulo-env" : {
"tag" : "version1525370182117",
"version" : 8
},
"accumulo-log4j" : {
"tag" : "version1525368283467",
"version" : 4
},
"accumulo-logsearch-conf" : {
"tag" : "version1525368283467",
"version" : 4
},
"accumulo-site" : {
"tag" : "version1525987821696",
"version" : 9
},
"kafka-env" : {
"tag" : "version1526330057712",
"version" : 1
},
"admin-properties" : {
"tag" : "version1526330057712",
"version" : 1
},
"ams-env" : {
"tag" : "version1",
"version" : 1
},
"ams-grafana-env" : {
"tag" : "version1",
"version" : 1
}
}
}
}
2) Get the tag for kafka-env from the above response. For the example call above, the tag for kafka-env is "version1526330057712". Now get the latest kafka-env config using that tag and the curl call below: curl -u admin:admin -H "X-Requested-By: ambari" "http://{ambari-host}:{ambari-port}/api/v1/clusters/{cluster-name}/configurations?type=kafka-env&tag={tag-version}"
Sample response:
{
"href" : "http://localhost:8080/api/v1/clusters/clustername/configurations?type=kafka-env&tag=version1525370182459",
"items" : [
{
"href" : "http://localhost:8080/api/v1/clusters/clustername/configurations?type=kafka-env&tag=version1525370182459",
"tag" : "version1525370182459",
"type" : "kafka-env",
"version" : 10,
"Config" : {
"cluster_name" : "clustername",
"stack_id" : "HDP-2.6"
},
"properties" : {
"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\n{% if kerberos_security_enabled or kafka_other_sasl_enabled %}\nexport KAFKA_KERBEROS_PARAMS=\"-Djavax.security.auth.useSubjectCredsOnly=false {{kafka_kerberos_params}}\"\n{% else %}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\n{% endif %}\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
"is_supported_kafka_ranger" : "true",
"kafka_keytab" : "/etc/security/keytabs/kafka.service.keytab",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_principal_name" : "kafka/_HOST@KDC_COLO.COM",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536"
}
}
]
}
3) Copy the properties JSON from the above response. Append your config export KAFKA_HEAP_OPTS="-Xms3g -Xmx3g" to the content field under the properties JSON. The new content should look like below:
"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\n{% if kerberos_security_enabled or kafka_other_sasl_enabled %}\nexport KAFKA_KERBEROS_PARAMS=\"-Djavax.security.auth.useSubjectCredsOnly=false {{kafka_kerberos_params}}\"\n{% else %}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\n{% endif %}\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi\nexport KAFKA_HEAP_OPTS=\"-Xms3g -Xmx3g\""
4) Post the new config to Ambari: curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '[
{
"Clusters": {
"desired_config": [
{
"type": "kafka-env",
"tag": "unique value",
"properties": {
"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\n{% if kerberos_security_enabled or kafka_other_sasl_enabled %}\nexport KAFKA_KERBEROS_PARAMS=\"-Djavax.security.auth.useSubjectCredsOnly=false {{kafka_kerberos_params}}\"\n{% else %}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\n{% endif %}\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi\nexport KAFKA_HEAP_OPTS=\"-Xms3g -Xmx3g\"",
"is_supported_kafka_ranger": "true",
"kafka_keytab": "/etc/security/keytabs/kafka.service.keytab",
"kafka_log_dir": "/var/log/kafka",
"kafka_pid_dir": "/var/run/kafka",
"kafka_principal_name": "kafka/_HOST@KDC_COLO.COM",
"kafka_user": "kafka",
"kafka_user_nofile_limit": "128000",
"kafka_user_nproc_limit": "65536"
},
"service_config_version_note": "New config version"
}
]
}
}
]' "http://localhost:8080/api/v1/clusters/clustername" Make sure to give a unique value for the tag key in the above JSON. Add all the properties obtained from step 3 in the above curl call, and add extra config values if you need any. After these steps, the new config will be added to Kafka. Restart Kafka for the changes to take effect. Reference: https://cwiki.apache.org/confluence/display/AMBARI/Modify+configurations -Aditya
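For the unique tag, one option (a sketch; any string not already in use works) is to mint it from the current timestamp, mirroring the `version<timestamp>` tags Ambari itself returns in step 1:

```shell
# Mint a tag in Ambari's usual "version<timestamp>" style.
TAG="version$(date +%s)"
echo "using tag: $TAG"
```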
05-21-2018
10:36 AM
@Khouloud Landari, Did you check if there is enough space in the /tmp folder on all the nodes (workers + master)? Also check the permissions of the folder. -Aditya
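A quick per-node check for both conditions, as a sketch:

```shell
# Free space and ownership/permissions of /tmp on this node.
df -h /tmp
ls -ld /tmp

# Confirm the current user can actually write there.
probe="/tmp/.write_probe.$$"
if touch "$probe" 2>/dev/null; then
  echo "/tmp is writable"
  rm -f "$probe"
else
  echo "/tmp is NOT writable"
fi
```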