
Can't start Kafka broker from Ambari


Hi all,

We have 3 Kafka brokers (HDP version 2.6.0.1, Ambari version 2.6.0).

We can't start them from Ambari, and we see the following error in the ambari-agent logs:

Fail: Configuration parameter 'kafka-env' was not found in configurations dictionary!

What could be the problem here?

Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.py", line 59, in run_file
    imp.load_source('__main__', script)
  File "/var/lib/ambari-agent/cache/common-services/KAFKA/0.8.1/package/scripts/kafka_broker.py", line 141, in <module>
    KafkaBroker().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/KAFKA/0.8.1/package/scripts/kafka_broker.py", line 128, in status
    import status_params
  File "/var/lib/ambari-agent/cache/common-services/KAFKA/0.8.1/package/scripts/status_params.py", line 26, in <module>
    kafka_pid_file = format("{kafka_pid_dir}/kafka.pid")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/format.py", line 95, in format
    return ConfigurationFormatter().format(format_string, args, **result)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/format.py", line 59, in format
    result_protected = self.vformat(format_string, args, all_params)
  File "/usr/lib64/python2.7/string.py", line 549, in vformat
    result = self._vformat(format_string, args, kwargs, used_args, 2)
  File "/usr/lib64/python2.7/string.py", line 582, in _vformat
    result.append(self.format_field(obj, format_spec))
  File "/usr/lib64/python2.7/string.py", line 599, in format_field
    return format(value, format_spec)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
    raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
Fail: Configuration parameter 'kafka-env' was not found in configurations dictionary!
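For context on the error itself: the traceback shows status_params.py expanding {kafka_pid_dir}, which Ambari looks up in the kafka-env config type, and the config dictionary wrapper raises Fail when that config type is absent. A simplified sketch of that behavior (the real code lives in resource_management's config_dictionary.py; this class is an illustrative approximation, not the upstream source):

```python
# Simplified sketch of how Ambari's config dictionary produces the error above.
# Approximated from resource_management/libraries/script/config_dictionary.py;
# not the exact upstream code.

class Fail(Exception):
    pass

class ConfigDictionary(dict):
    """Dict-like config holder that fails loudly on missing config types."""

    def __init__(self, name, raw):
        self.name = name
        super(ConfigDictionary, self).__init__(raw)

    def __getattr__(self, attr):
        # Called only when normal attribute lookup fails.
        if attr in self:
            return self[attr]
        raise Fail("Configuration parameter '" + attr +
                   "' was not found in configurations dictionary!")

# A cluster where the kafka-env config type was never saved:
configs = ConfigDictionary("configurations",
                           {"kafka-broker": {"log.dirs": "/kafka-logs"}})

try:
    getattr(configs, "kafka-env")  # what status_params.py effectively does
except Fail as e:
    print(e)  # -> Configuration parameter 'kafka-env' was not found in configurations dictionary!
```

So the status check fails before Kafka is even touched, which is why the CLI still works while Ambari does not.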
Michael-Bronson
1 ACCEPTED SOLUTION

Master Mentor

@Michael Bronson

What you can do is use configs.py to export the kafka-env config to /tmp on the working cluster; see below:

# /var/lib/ambari-server/resources/scripts/configs.py --user=admin --password=admin --port=8080 --action=get --host=localhost --cluster={your_clustername} --config-type=kafka-env --file=/tmp/kafka-env.json 

Sample output

2019-01-27 22:27:09,474 INFO ### Performing "get" content:
2019-01-27 22:27:09,474 INFO ### to file "/tmp/kafka-env.json"
2019-01-27 22:27:09,600 INFO ### on (Site:kafka-env, Tag:version1)

Validate the contents of the exported /tmp/kafka-env.json.

Sample output

{
  "properties": {
    "kafka_user_nproc_limit": "65536",
    "content": "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\n{% if kerberos_security_enabled or kafka_other_sasl_enabled %}\nexport KAFKA_KERBEROS_PARAMS=\"-Djavax.security.auth.useSubjectCredsOnly=false {{kafka_kerberos_params}}\"\n{% else %}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\n{% endif %}\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
    "kafka_log_dir": "/var/log/kafka",
    "kafka_pid_dir": "/var/run/kafka",
    "kafka_user_nofile_limit": "128000",
    "is_supported_kafka_ranger": "true",
    "kafka_user": "kafka"
  }
}
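One way to validate the export beyond eyeballing it is a small script that loads the JSON and checks for the properties the broker scripts rely on (a sketch; the required-key list is just taken from the sample above):

```python
import json

# Keys taken from the sample kafka-env export above
# (adjust if your stack expects more).
REQUIRED_KEYS = ("kafka_pid_dir", "kafka_log_dir", "kafka_user")

def validate_kafka_env(path):
    """Load an exported kafka-env JSON and verify the expected properties exist."""
    with open(path) as f:
        data = json.load(f)  # raises ValueError on invalid JSON
    props = data.get("properties", {})
    missing = [k for k in REQUIRED_KEYS if k not in props]
    if missing:
        raise ValueError("missing keys in %s: %s" % (path, ", ".join(missing)))
    return props

# Example:
# props = validate_kafka_env("/tmp/kafka-env.json")
# print(props["kafka_pid_dir"])
```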

Copy the file over to your problematic cluster (using scp or similar) and run the command below with --action=set to update it. Before you start Kafka, check that the properties in /tmp/kafka-env.json (e.g. memory settings and directories) match your cluster's configuration.

# /var/lib/ambari-server/resources/scripts/configs.py --user=admin --password=admin --port=8080 --action=set --host=localhost --cluster={your_clustername} --config-type=kafka-env --file=/tmp/kafka-env.json 

Sample output

2019-01-27 22:29:38,568 INFO ### Performing "set":
2019-01-27 22:29:38,568 INFO ### from file /tmp/kafka-env.json
2019-01-27 22:29:38,569 INFO ### PUTting file: "/tmp/kafka-env.json"
2019-01-27 22:29:38,569 INFO ### PUTting json into: doSet_version1.json
2019-01-27 22:29:38,719 INFO ### NEW Site:kafka-env, Tag:version2

Start Kafka from Ambari; this should work.

Please let me know


10 REPLIES

Master Mentor

@Michael Bronson

Do you have a working cluster of the same HDP version? Is it a kerberized environment?


Yes, we have another cluster with the same HDP version. Do you want us to check something?

Michael-Bronson


By the way, we can start and stop the Kafka broker from the CLI, but from Ambari we get: Fail: Configuration parameter 'kafka-env' was not found in configurations dictionary!


Michael-Bronson



@Geoffrey Shelton Okot

Before I run the procedure: I can see in Ambari that kafka-env is already set with the relevant parameters, so I don't understand why it needs to be reconfigured.


99435-capture.png



Michael-Bronson

Master Mentor

@Michael Bronson

If you can start your brokers from the CLI but not from Ambari, that means the env Ambari uses is not set properly, as Ambari depends on that env to successfully start or stop a component.


What you could do is export the env from the problematic cluster and compare it meticulously against the env from the working cluster using the procedures I sent above.

You should be able to see the difference.
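A quick way to do that comparison is to diff the "properties" sections of the two exports programmatically rather than by eye (a sketch; the file names are hypothetical):

```python
import json

def load_properties(path):
    """Return the 'properties' section of an exported Ambari config JSON."""
    with open(path) as f:
        return json.load(f).get("properties", {})

def diff_properties(working_path, broken_path):
    """Compare two exports; return keys that are missing, extra,
    or different on the broken cluster."""
    working = load_properties(working_path)
    broken = load_properties(broken_path)
    missing = sorted(k for k in working if k not in broken)
    extra = sorted(k for k in broken if k not in working)
    differs = sorted(k for k in working if k in broken and working[k] != broken[k])
    return missing, extra, differs

# Example (file names are hypothetical):
# missing, extra, differs = diff_properties("/tmp/kafka-env-working.json",
#                                           "/tmp/kafka-env-broken.json")
# for k in missing:
#     print("missing on broken cluster:", k)
```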

Can you also validate that the symlinks are okay?

99436-bronson.png
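The symlink check can be scripted too; a minimal sketch, assuming the usual HDP layout where /usr/hdp/current/kafka-broker points at the versioned install directory (paths are the typical defaults, adjust to your install):

```python
import os

# Typical HDP 2.x layout (assumed, adjust to your install):
#   /usr/hdp/current/kafka-broker -> /usr/hdp/<version>/kafka
#   /etc/kafka/conf               -> kafka-broker conf directory

def check_symlink(path):
    """Return the resolved target of a healthy symlink, or None if the
    path is not a symlink or the link is dangling."""
    if not os.path.islink(path):
        print("%s is not a symlink" % path)
        return None
    target = os.path.realpath(path)
    if not os.path.exists(target):
        print("%s is a dangling link -> %s" % (path, target))
        return None
    return target

# Example:
# for p in ("/usr/hdp/current/kafka-broker", "/etc/kafka/conf"):
#     print(p, "->", check_symlink(p))
```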



Just for your info, we also see the following in /var/log/messages:

python: detected unhandled python exception in /var/lib/ambari-agent/cache/common-services/KAFKA/0.8.1/package/scripts/kafka_broker.py

and:

package ambari-agent isn't signed with proper key
post-create on /var/spool/abrt/python-2019-01-28-08:34:49-107750 exited with 1
Michael-Bronson


@Geoffrey Shelton Okot, do you recommend reinstalling the ambari-agent on the Kafka machines?

Michael-Bronson


Yes, about the symlinks: they are OK.


Michael-Bronson