Can't start Kafka broker from Ambari

Hi all,

We have 3 Kafka brokers (HDP version 2.6.0.1, Ambari version 2.6.0).

We can't start them from Ambari, and we see the following error in the ambari-agent logs:

The error is: Fail: Configuration parameter 'kafka-env' was not found in configurations dictionary!

What could be the problem here?

Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.py", line 59, in run_file
    imp.load_source('__main__', script)
  File "/var/lib/ambari-agent/cache/common-services/KAFKA/0.8.1/package/scripts/kafka_broker.py", line 141, in <module>
    KafkaBroker().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/KAFKA/0.8.1/package/scripts/kafka_broker.py", line 128, in status
    import status_params
  File "/var/lib/ambari-agent/cache/common-services/KAFKA/0.8.1/package/scripts/status_params.py", line 26, in <module>
    kafka_pid_file = format("{kafka_pid_dir}/kafka.pid")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/format.py", line 95, in format
    return ConfigurationFormatter().format(format_string, args, **result)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/format.py", line 59, in format
    result_protected = self.vformat(format_string, args, all_params)
  File "/usr/lib64/python2.7/string.py", line 549, in vformat
    result = self._vformat(format_string, args, kwargs, used_args, 2)
  File "/usr/lib64/python2.7/string.py", line 582, in _vformat
    result.append(self.format_field(obj, format_spec))
  File "/usr/lib64/python2.7/string.py", line 599, in format_field
    return format(value, format_spec)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
    raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
Fail: Configuration parameter 'kafka-env' was not found in configurations dictionary!
Michael-Bronson
1 ACCEPTED SOLUTION

Master Mentor

@Michael Bronson

What you could do, using configs.py, is export the kafka-env config to /tmp on the working cluster; see below:

# /var/lib/ambari-server/resources/scripts/configs.py --user=admin --password=admin --port=8080 --action=get --host=localhost --cluster={your_clustername} --config-type=kafka-env --file=/tmp/kafka-env.json 

Sample output

2019-01-27 22:27:09,474 INFO ### Performing "get" content:
2019-01-27 22:27:09,474 INFO ### to file "/tmp/kafka.env.json"
2019-01-27 22:27:09,600 INFO ### on (Site:kafka.env, Tag:version1) 

Validate the contents of /tmp/kafka-env.json (a quick syntax check is sketched after the sample output below).

Sample output

{
  "properties": {
    "kafka_user_nproc_limit": "65536",
    "content": "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\n{% if kerberos_security_enabled or kafka_other_sasl_enabled %}\nexport KAFKA_KERBEROS_PARAMS=\"-Djavax.security.auth.useSubjectCredsOnly=false {{kafka_kerberos_params}}\"\n{% else %}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\n{% endif %}\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
    "kafka_log_dir": "/var/log/kafka",
    "kafka_pid_dir": "/var/run/kafka",
    "kafka_user_nofile_limit": "128000",
    "is_supported_kafka_ranger": "true",
    "kafka_user": "kafka"
  }
}

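If you just want a quick syntax check on the exported file before pushing it anywhere, Python's built-in json module is enough (only an illustration; any JSON validator will do):

# verify that the exported file parses as valid JSON
python -m json.tool /tmp/kafka-env.json > /dev/null && echo "valid JSON"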
Copy the file (with scp or similar) over to your problematic cluster, then run the command below with --action=set to update it. Before you start Kafka, check that the properties in /tmp/kafka-env.json (memory settings and the like) match your cluster's configuration.

# /var/lib/ambari-server/resources/scripts/configs.py --user=admin --password=admin --port=8080 --action=set --host=localhost --cluster={your_clustername} --config-type=kafka-env --file=/tmp/kafka-env.json 

Sample output

2019-01-27 22:29:38,568 INFO ### Performing "set":
2019-01-27 22:29:38,568 INFO ### from file /tmp/kafka.env.json
2019-01-27 22:29:38,569 INFO ### PUTting file: "/tmp/kafka.env.json"
2019-01-27 22:29:38,569 INFO ### PUTting json into: doSet_version1.json
2019-01-27 22:29:38,719 INFO ### NEW Site:kafka.env, Tag:version2 

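If you want to double-check that Ambari now has a kafka-env config type registered for the cluster, the REST API can confirm it (an illustration only; adjust the user, password, port and cluster name to your environment):

# kafka-env should appear among the desired configs with the new tag (e.g. version2)
curl -s -u admin:admin -H 'X-Requested-By: ambari' \
  'http://localhost:8080/api/v1/clusters/{your_clustername}?fields=Clusters/desired_configs'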
Start your Kafka from Ambari; this should work.

Please let me know


10 REPLIES

Master Mentor

@Michael Bronson

If you have exhausted all other avenues, then yes:

Step 1

  • Check and compare the /usr/hdp/current/kafka-broker symlinks on the problematic and a working broker host (see the sketch below)
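A minimal way to do that comparison, assuming hdp-select is available (it normally is on HDP 2.6 hosts):

# run on a broker host in the problematic cluster and in the working one, then compare the output
ls -l /usr/hdp/current/kafka-broker
hdp-select status kafka-broker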

Step 2

  • Download the kafka-env configs from both the problematic and the working cluster as backups (the configs.py --action=get command shown above does this)
  • Upload the working cluster's kafka-env to the problematic one with --action=set, since you have a backup
  • Start Kafka through Ambari

Step 3

This disables Python's system-wide certificate verification in /etc/python/cert-verification.cfg, a common workaround when the ambari-agent cannot register with the server over TLS:

sed -i 's/verify=platform_default/verify=disable/' /etc/python/cert-verification.cfg
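After that change the agent usually needs a restart to pick up the new setting and re-register; something like:

# restart the agent so the relaxed certificate verification takes effect
ambari-agent restart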

Step 4

Lastly, if the above steps don't remedy the issue, remove and re-install the ambari-agent, and remember to manually point it to the correct Ambari server in ambari-agent.ini (a rough sketch follows).
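A rough sketch of that last step, assuming a RHEL/CentOS host with the Ambari repository already configured (the server FQDN below is a placeholder you must replace):

# keep a copy of the current agent config before removing the package
cp /etc/ambari-agent/conf/ambari-agent.ini /tmp/ambari-agent.ini.bak
yum remove -y ambari-agent
yum install -y ambari-agent
# point the agent at the correct Ambari server (hostname key in the [server] section)
sed -i 's/^hostname=.*/hostname=your-ambari-server.fqdn/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent start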