Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2030 | 04-27-2020 03:48 AM
| 4023 | 04-26-2020 06:18 PM
| 3249 | 04-26-2020 06:05 PM
| 2601 | 04-13-2020 08:53 PM
| 3866 | 03-31-2020 02:10 AM
12-11-2019
03:22 PM
1 Kudo
@pauljoshiva I was reading the following doc: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cm_631_known_issues.html It says: "Many Cloudera Manager wizards, including Installation wizards and Add Service/Role wizards, cannot be completed when using Microsoft Internet Explorer version 11.x. To work around the issue, use another supported browser." Can you try a different browser to see if it works?
12-11-2019
03:07 PM
@TJSully Similarly, for the ZooKeeper process you can search for the "zookeeper.log.dir" property in the output:

# ps -ef | grep zookeeper | grep zookeeper.log.dir --color
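If the full java command line is hard to scan, one optional trick (just a convenience, not required) is to print one argument per line before grepping; the [z] in the pattern keeps grep from matching its own process:

# ps -ef | grep '[z]ookeeper' | tr ' ' '\n' | grep 'zookeeper.log.dir'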
12-11-2019
03:02 PM
@TJSully On the host where the Kafka Broker is running, can you please try running the following command to see the value of the "kafka.logs.dir" property:

# ps -ef | grep kafka

Example output in my case:

kafka 21831 1 41 22:58 ? 00:00:03 /usr/jdk64/jdk1.8.0_112/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Xloggc:/var/log/kafka/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/var/log/kafka -Dlog4j.configuration=file:/usr/hdp/2.6.5.0-292/kafka/bin/../config/log4j.properties .................. kafka.Kafka /usr/hdp/2.6.5.0-292/kafka/config/server.properties

Notice the value of the "kafka.logs.dir" property; it might be a different value in your case.
12-10-2019
01:06 AM
@eswarloges In one of your previous updates you mentioned that "the hiveserver2 jdbc url is configured in the spark config." However, it looks like the error you are getting is because the mentioned properties are not found in the spark2-defaults config on your classpath. So can you please make sure that your CLASSPATH points to the correct spark-defaults, with the following properties added as described in the "Required properties" section of the following doc: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/integrating-hive/content/hive_configure_a_spark_hive_connection.html

You must add several Spark properties through spark2-defaults in Ambari to use the Hive Warehouse Connector for accessing data in Hive. Alternatively, configuration can be provided for each job using --conf (see the sketch below). Set the values of these properties as follows:

- spark.sql.hive.hiveserver2.jdbc.url: The URL for HiveServer2 Interactive. In Ambari, copy the value from Services > Hive > Summary > HIVESERVER2 INTERACTIVE JDBC URL.
- spark.datasource.hive.warehouse.metastoreUri: The URI for the metastore. Copy the value from hive.metastore.uris: in Hive, at the hive> prompt, enter set hive.metastore.uris and copy the output. For example, thrift://mycluster-1.com:9083.
- spark.datasource.hive.warehouse.load.staging.dir: The HDFS temp directory for batch writes to Hive, /tmp for example.
- spark.hadoop.hive.llap.daemon.service.hosts: The application name for the LLAP service. Copy the value from Advanced hive-interactive-site > hive.llap.daemon.service.hosts.
- spark.hadoop.hive.zookeeper.quorum: The ZooKeeper hosts used by LLAP. Copy the value from Advanced hive-site > hive.zookeeper.quorum.
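If you prefer to pass these per job instead of through spark2-defaults, a minimal sketch could look like this (all values below are hypothetical placeholders; replace them with the values you copied from Ambari as described above):

# spark-shell --conf spark.sql.hive.hiveserver2.jdbc.url="jdbc:hive2://mycluster-1.com:10500/" \
  --conf spark.datasource.hive.warehouse.metastoreUri="thrift://mycluster-1.com:9083" \
  --conf spark.datasource.hive.warehouse.load.staging.dir="/tmp" \
  --conf spark.hadoop.hive.llap.daemon.service.hosts="@llap0" \
  --conf spark.hadoop.hive.zookeeper.quorum="mycluster-1.com:2181,mycluster-2.com:2181,mycluster-3.com:2181"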
12-08-2019
12:37 PM
2 Kudos
@mike_bronson7 1. You can get the list of Kafka Broker hosts (hostnames) using the following API call:

# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X GET http://$AMBARI_HOST:8080/api/v1/clusters/TestCluster/services/KAFKA/components/KAFKA_BROKER?fields=host_components/HostRoles/host_name

2. Once you know/decide the hostname (for example, 'kafkabroker5.example.com') on which you want to stop/start the Kafka Broker, you can try the following:

A. To stop the Kafka Broker on host 'kafkabroker5.example.com':

# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Stop Kafka Broker","operation_level":{"level":"HOST_COMPONENT","cluster_name":"TestCluster","host_name":"kafkabroker5.example.com","service_name":"KAFKA"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/TestCluster/hosts/kafkabroker5.example.com/host_components/KAFKA_BROKER

B. To start the Kafka Broker on host 'kafkabroker5.example.com':

# curl -iv -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Start Kafka Broker","operation_level":{"level":"HOST_COMPONENT","cluster_name":"TestCluster","host_name":"kafkabroker5.example.com","service_name":"KAFKA"}},"Body":{"HostRoles":{"state":"STARTED"}}}' http://$AMBARI_HOST:8080/api/v1/clusters/TestCluster/hosts/kafkabroker5.example.com/host_components/KAFKA_BROKER
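By the way, each PUT above returns a request id in its response body, so you can also poll the progress of the stop/start operation (123 below is a placeholder request id):

# curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_HOST:8080/api/v1/clusters/TestCluster/requests/123?fields=Requests/request_status,Requests/progress_percent"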
12-03-2019
12:41 PM
1 Kudo
@mike_bronson7 Yes, you are right: if the alert state is "OK", it usually means the service is running well. If it is WARNING/CRITICAL, then we need to look at the alert text and the alert host to find out why, and on which host, the alert is in that state. Basically, the Kafka "host" where the alert was triggered, the "state" of the alert (CRITICAL, OK, WARNING), and the alert "text" are usually the important parts of an alert and give us a good idea of what is happening. So you can capture just those fields using:

# curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_HOST:8080/api/v1/clusters/NewCluster/alerts?fields=Alert/host_name,Alert/state,Alert/text&Alert/service_name=KAFKA"

Example output:

# curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://newhwx1.example.com:8080/api/v1/clusters/$CLUSTER_NAME/alerts?fields=Alert/host_name,Alert/state,Alert/text&Alert/service_name=KAFKA"
{
"href" : "<a href="http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts?fields=Alert/host_name,Alert/host_name,Alert/state,Alert/text&Alert/service_name=KAFKA" target="_blank">http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts?fields=Alert/host_name,Alert/host_name,Alert/state,Alert/text&Alert/service_name=KAFKA</a>",
"items" : [
{
"href" : "<a href="http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts/704" target="_blank">http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts/704</a>",
"Alert" : {
"cluster_name" : "NewCluster",
"definition_id" : 401,
"definition_name" : "kafka_broker_process",
"host_name" : "newhwx3.example.com",
"id" : 704,
"service_name" : "KAFKA",
"state" : "OK",
"text" : "TCP OK - 0.000s response on port 6667"
}
},
{
"href" : "<a href="http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts/1201" target="_blank">http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts/1201</a>",
"Alert" : {
"cluster_name" : "NewCluster",
"definition_id" : 401,
"definition_name" : "kafka_broker_process",
"host_name" : "newhwx5.example.com",
"id" : 1201,
"service_name" : "KAFKA",
"state" : "CRITICAL",
"text" : "Connection failed: [Errno 111] Connection refused to newhwx5.example.com:6667"
}
}
]
}
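If you want one line per alert instead of the full JSON, something like this with jq (assuming jq is available on the host) should work:

# curl -s -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_HOST:8080/api/v1/clusters/NewCluster/alerts?fields=Alert/host_name,Alert/state,Alert/text&Alert/service_name=KAFKA" | jq -r '.items[].Alert | "\(.host_name) \(.state) \(.text)"'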
12-03-2019
12:02 PM
1 Kudo
@mike_bronson7 You can try something like this:

# curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER_NAME/alerts?fields=*&Alert/service_name=KAFKA"

Example output:

# curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts?fields=*&Alert/service_name=KAFKA"
{
"href" : "<a href="http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts?fields=*&Alert/service_name=KAFKA" target="_blank">http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts?fields=*&Alert/service_name=KAFKA</a>",
"items" : [
{
"href" : "<a href="http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts/704" target="_blank">http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts/704</a>",
"Alert" : {
"cluster_name" : "NewCluster",
"component_name" : "KAFKA_BROKER",
"definition_id" : 401,
"definition_name" : "kafka_broker_process",
"firmness" : "HARD",
"host_name" : "newhwx3.example.com",
"id" : 704,
"instance" : null,
"label" : "Kafka Broker Process",
"latest_timestamp" : 1575403190535,
"maintenance_state" : "OFF",
"occurrences" : 14,
"original_timestamp" : 1575402410385,
"repeat_tolerance" : 1,
"repeat_tolerance_remaining" : 0,
"scope" : "HOST",
"service_name" : "KAFKA",
"state" : "OK",
"text" : "TCP OK - 0.000s response on port 6667"
}
},
{
"href" : "<a href="http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts/1201" target="_blank">http://newhwx1.example.com:8080/api/v1/clusters/NewCluster/alerts/1201</a>",
"Alert" : {
"cluster_name" : "NewCluster",
"component_name" : "KAFKA_BROKER",
"definition_id" : 401,
"definition_name" : "kafka_broker_process",
"firmness" : "HARD",
"host_name" : "newhwx5.example.com",
"id" : 1201,
"instance" : null,
"label" : "Kafka Broker Process",
"latest_timestamp" : 1575403167289,
"maintenance_state" : "OFF",
"occurrences" : 12,
"original_timestamp" : 1575402507311,
"repeat_tolerance" : 1,
"repeat_tolerance_remaining" : 0,
"scope" : "HOST",
"service_name" : "KAFKA",
"state" : "CRITICAL",
"text" : "Connection failed: [Errno 111] Connection refused to newhwx5.example.com:6667"
}
}
]
}
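If you only want to see the problem alerts, you can add one more predicate on the alert state to the same call, for example:

# curl -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER_NAME/alerts?fields=*&Alert/service_name=KAFKA&Alert/state=CRITICAL"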
11-21-2019
07:30 PM
@jhoward 1. How did the "/etc/ambari-server/conf/ambari.properties" file get deleted?

2. Do you have a backup of this "ambari.properties" file to use?

3. Ambari stores all the cluster information inside its database. So if your database has not been dropped (lost), you can take a dummy "ambari.properties" of the same version from any of your other working clusters and just change the JDBC settings to point to your correct Ambari DB host/port, with the right DB credentials. The DB settings for Ambari can be found with the following command (on any working Ambari cluster), and then you can place those properties, with the correct database values, in your problematic Ambari:

# grep 'jdbc' /etc/ambari-server/conf/ambari.properties
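For reference, that grep typically returns properties along these lines (the values below are illustrative placeholders; yours will differ):

server.jdbc.database=postgres
server.jdbc.database_name=ambari
server.jdbc.hostname=dbhost.example.com
server.jdbc.port=5432
server.jdbc.user.name=ambari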
11-21-2019
06:41 PM
@asmarz Please try using "spark-submit" instead of "spark-shell", as mentioned in: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/running-spark-applications/content/running_sample_spark_2_x_applications.html

Example:

# /usr/hdp/current/spark2-client/bin/spark-submit --class org.apache.spark.examples.SparkPi \
--master yarn \
--num-executors 1 \
--driver-memory 512m \
--executor-memory 512m \
--executor-cores 1 \
--deploy-mode cluster \
/usr/hdp/current/spark2-client/examples/jars/spark-examples_2.11-2.3.2.3.1.4.0-315.jar 10
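Note that with --deploy-mode cluster the "Pi is roughly ..." result is written to the driver's YARN logs rather than to your console; you can retrieve it with the application id that spark-submit prints (shown below as a placeholder):

# yarn logs -applicationId <application_id> | grep "Pi is roughly"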
11-21-2019
06:00 PM
@vciampa Similarly, you can find the Ambari repos in the "Ambari Repositories" section of the docs. Example: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.4.0/bk_ambari-installation-ppc/content/ambari_repositories.html