Support Questions

Find answers, ask questions, and share your expertise

Kafka Ambari alert configuration when not using port 6667

Rising Star

Currently running HDP 2.3.4.7

Have a Kafka broker running but can't connect to it on port 6667. Turns out this was a switch issue due to port 6667 being blocked. Rather than reconfigure the switch, we've changed the Kafka listener port to 9092.

Everything is working fine but Ambari is raising an alert that it cannot detect that Kafka broker is running. Looking at the alert message, the alert is still trying to connect on port 6667.

Tried restarting Ambari monitoring and Ambari server, but the alert is still picking up port 6667 from somewhere, even though the Kafka config is set to 9092.

Any ideas? Is 6667 hard-coded somewhere or hidden in a jar file?

1 ACCEPTED SOLUTION

Rising Star

Managed to fix this in the end.

Using the REST API (:8080/api/v1/clusters/XXXXX/alert_definitions/59) I could retrieve the actual alert definition, which returned the following:

{
  "href" : "http://XXXX:8080/api/v1/clusters/XXXX/alert_definitions/59",
  "AlertDefinition" : {
    "cluster_name" : "XXXXX",
    "component_name" : "KAFKA_BROKER",
    "description" : "This host-level alert is triggered if the Kafka Broker cannot be determined to be up..",
    "enabled" : true,
    "id" : 59,
    "ignore_host" : false,
    "interval" : 1,
    "label" : "Kafka Broker Process",
    "name" : "kafka_broker_process",
    "scope" : "HOST",
    "service_name" : "KAFKA",
    "source" : {
      "default_port" : 6667.0,
      "reporting" : {
        "ok" : {
          "text" : "TCP OK - {0:.3f}s response on port {1}"
        },
        "warning" : {
          "text" : "TCP OK - {0:.3f}s response on port {1}",
          "value" : 1.5
        },
        "critical" : {
          "text" : "Connection failed: {0} to {1}:{2}",
          "value" : 5.0
        }
      },
      "type" : "PORT",
      "uri" : "{{kafka-broker/port}}"
    }
  }
}

The default port is 6667, but it appears to be looking at kafka-broker/port for the actual port, rather than the listener port set up via Ambari.

Manually changed the default port to 9092 by saving this output to a file, editing it, and then doing a curl PUT. This changed the alert port and the alert went away.
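The save/edit/PUT step above can be sketched in Python instead of hand-editing a file. This is only an illustrative sketch: the host, cluster name, and credentials are placeholders, and dropping the read-only "href" field before the PUT is a cautious assumption, not documented Ambari behavior.

```python
import base64
import json
import urllib.request

# Hypothetical endpoint -- substitute your own host, cluster, and definition id.
AMBARI_URL = "http://ambari-host:8080/api/v1/clusters/mycluster/alert_definitions/59"


def set_default_port(definition, port):
    """Return a copy of an alert definition with source.default_port changed.

    The original dict is left untouched (deep copy via a JSON round-trip).
    """
    updated = json.loads(json.dumps(definition))
    updated["AlertDefinition"]["source"]["default_port"] = float(port)
    # "href" is informational in the GET response; drop it before the PUT
    # (an assumption -- Ambari may simply ignore it).
    updated.pop("href", None)
    return updated


def put_definition(updated, user="admin", password="admin"):
    """Sketch of the PUT back to Ambari; note the X-Requested-By header,
    which Ambari requires on modifying requests."""
    body = json.dumps(updated).encode()
    req = urllib.request.Request(AMBARI_URL, data=body, method="PUT")
    req.add_header("X-Requested-By", "ambari")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return urllib.request.urlopen(req)
```

Fetching the definition with a GET, passing the parsed JSON through `set_default_port(defn, 9092)`, and sending the result with `put_definition` mirrors the manual curl workflow described above.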


12 REPLIES

Super Collaborator

How did you change the port in Ambari? On HDP 2.3, this alert will look at the kafka-broker/listeners property. In older versions of HDP, it used kafka-broker/port.

Please check what these properties are set to and correct them. When they are not set, the alert defaults to 6667. Chances are it's trying to use kafka-broker/listeners and that property doesn't exist.
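The lookup described above (template property first, default_port as the fallback) can be sketched as a small helper. This is a rough reconstruction of the PORT alert's behavior for illustration, not the actual Ambari agent code; the function name and the config dicts are made up.

```python
def resolve_alert_port(alert_source, kafka_broker_config):
    """Rough sketch of how a PORT alert picks its target port:
    resolve the property named in the uri template against the
    kafka-broker config, falling back to default_port if unset."""
    uri = alert_source.get("uri", "")          # e.g. "{{kafka-broker/port}}"
    prop = uri.strip("{}").split("/", 1)[-1]   # -> "port" or "listeners"
    value = kafka_broker_config.get(prop)
    if value is None:
        return int(alert_source.get("default_port", 6667))
    # "listeners" looks like "PLAINTEXT://0.0.0.0:9092"; "port" is bare
    return int(str(value).rsplit(":", 1)[-1])
```

With a uri of {{kafka-broker/listeners}} and no listeners property defined, this falls straight through to the 6667 default, which matches the symptom in the question.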

Rising Star

The port was changed in Ambari: the kafka-broker/listeners property was set to 9092, at which point Kafka clients could connect to the broker.


Is it a fresh alert or a stale alert? We often see stale alerts, which can be generated when the Kafka broker was down for a few minutes before the port change.

Rising Star

Fresh alerts. I can clear the alert by restarting Ambari, but then it comes back. Looking at the log message, it's trying to connect on 6667.

Master Guru

If you still have "kafka-broker/port", can you try setting it to 9092? Since alerts are part of Ambari, it would be good to know your Ambari version and whether you have upgraded Ambari recently. Also, please check the file /etc/kafka/conf/server.properties and make sure "listeners" contains the correct port. In some rare cases Ambari doesn't apply the actual settings to config files.

Rising Star

Ambari 2.2.0.0. This is a clean install, not an upgrade.

server.properties has listeners on 9092.

Super Collaborator

Yes, this is why I suggested you check both kafka-broker/port and kafka-broker/listeners. Changing the default port is fine for now, but if you ever change the port again, you'll need to repeat this step. Instead, it's better to either set kafka-broker/port or change the alert to use kafka-broker/listeners.

I'm guessing that at one point this was an HDP 2.2 cluster (which used kafka-broker/port originally) and then it was upgraded to an HDP 2.3 cluster.

Rising Star

So change "uri":"{{kafka-broker/port}}" to "uri":"{{kafka-broker/listeners}}" I assume?

FYI - Just checked on an untouched 2.4 sandbox, and the alert is looking at kafka-broker/port.
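For the uri change suggested above, the edit to the fetched definition is a one-field patch before the PUT. A minimal sketch, assuming the same definition JSON shown earlier (the function name is made up; the PUT itself would follow the same curl workflow as the accepted solution):

```python
import json


def set_alert_uri(definition, config_key):
    """Return a copy of an Ambari alert definition whose source.uri
    templates the given config key, e.g. "kafka-broker/listeners"."""
    updated = json.loads(json.dumps(definition))  # deep copy via round-trip
    updated["AlertDefinition"]["source"]["uri"] = "{{%s}}" % config_key
    return updated
```

Applying `set_alert_uri(defn, "kafka-broker/listeners")` and PUTting the result would make the alert track the listeners property instead of the legacy port key, so a future port change wouldn't require editing the definition again.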