Member since: 03-14-2016
Posts: 4721
Kudos Received: 1109
Solutions: 874
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1351 | 04-27-2020 03:48 AM
 | 2333 | 04-26-2020 06:18 PM
 | 1986 | 04-26-2020 06:05 PM
 | 1481 | 04-13-2020 08:53 PM
 | 1988 | 03-31-2020 02:10 AM
05-14-2020
03:06 PM
1 Kudo
@ansharma1 You can run the following query in the Ambari DB:
SELECT view_instance_id, resource_id, view_name, cluster_handle, cluster_type FROM viewinstance;
The above query will show that the view causing the problem might not be associated with any cluster_handle. (cluster_handle is basically the cluster_id, which you can see in the clusters table.) If the cluster_handle for a view is not updated correctly, then you might see this kind of message:
org.apache.ambari.server.view.IllegalClusterException: Failed to get cluster information associated with this view instance
If you want the same old view to keep working (instead of creating a new instance of that view), then you have to make sure the cluster_handle for that view instance is set correctly, like this:
1. Take an Ambari DB dump (a fresh dump for backup), as we are going to change the DB manually.
2. Stop ambari-server.
3. Run the following query in the Ambari DB. NOTE: The following is just a dummy query; the values for 'cluster_handle' and 'view_instance_id' may vary.
UPDATE viewinstance SET cluster_handle = 4 WHERE view_instance_id = 3;
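If you need to find the right cluster_handle value first, a quick lookup you can run in the same DB session; this is a minimal sketch assuming the standard Ambari schema (cluster_id/cluster_name columns in the clusters table):
SELECT cluster_id, cluster_name FROM clusters;
The cluster_id returned here is the value to use for cluster_handle in the UPDATE above.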
... View more
04-27-2020
03:50 AM
1 Kudo
@mike_bronson7 Similar thread: https://community.cloudera.com/t5/Support-Questions/set-Variable-in-ambari-rest-API/m-p/294856/highlight/false#M217470
... View more
04-27-2020
03:48 AM
1 Kudo
@mike_bronson7 You can achieve it in a similar way as described in the following thread: https://community.cloudera.com/t5/Support-Questions/AMBARI-how-to-set-value-in-json-REST-API/td-p/290385 Example:
AMBARI_FQDN=newhwx1.example.com
CLUSTER_NAME=NewCluster
DATANODES=newhwx1.example.com,newhwx2.example.com,newhwx3.example.com,newhwx5.example.com
# curl -s -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Stop DataNodes","operation_level":{"level":"SERVICE","cluster_name":"'"$CLUSTER_NAME"'"},"query":"HostRoles/component_name=DATANODE&HostRoles/host_name.in('$DATANODES')&HostRoles/maintenance_state=OFF"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' "http://$AMBARI_FQDN:8080/api/v1/clusters/$CLUSTER_NAME/host_components"
... View more
04-26-2020
06:18 PM
2 Kudos
@mike_bronson7 This link describes the command-line options to stop the various HDP components manually using the CLI (including HiveServer2 and the Hive Metastore): https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.5/bk_reference/content/stopping_hdp_services.html
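As a rough illustration of what the linked doc covers for HiveServer2 and the Hive Metastore, a generic sketch (these are not the exact commands from the documentation; they simply signal the hive-owned JVMs, so prefer the documented procedure on a production cluster):
# su - hive -c "ps -ef | grep -i 'hiveserver[2]' | awk '{print \$2}' | xargs -r kill"
# su - hive -c "ps -ef | grep 'HiveMetaStor[e]' | awk '{print \$2}' | xargs -r kill"
The bracketed character in each grep pattern keeps the grep from matching its own command line.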
... View more
04-26-2020
06:15 PM
1 Kudo
@mike_bronson7 You can try this.
1. Get the list of DataNode hosts. You can get it like the following:
# curl -s -u admin:admin -H "X-Requested-By: ambari" -X GET http://$AMBARI_FQDN:8080/api/v1/clusters/$CLUSTER_NAME/services/HDFS/components/DATANODE | grep 'host_name'
2. Then run the following kind of API call, passing the hostnames where the DataNodes are running. Suppose the DataNodes are running on 3 hosts named "dn1.example.com, dn2.example.com, dn3.example.com"; then you can do the following:
# curl -s -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Stop DataNodes","operation_level":{"level":"SERVICE","cluster_name":"$CLUSTER_NAME"},"query":"HostRoles/component_name=DATANODE&HostRoles/host_name.in(dn1.example.com,dn2.example.com,dn3.example.com)&HostRoles/maintenance_state=OFF"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' "http://$AMBARI_FQDN:8080/api/v1/clusters/$CLUSTER_NAME/host_components"
Please replace all $CLUSTER_NAME and $AMBARI_FQDN occurrences accordingly.
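If you prefer not to type the host list by hand, a small sketch that builds a comma-separated DataNode list from the first call, in the same grep/awk style used elsewhere in these posts (assumes $AMBARI_FQDN and $CLUSTER_NAME are set, and the default JSON formatting of the Ambari REST API):
# DATANODES=$(curl -s -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_FQDN:8080/api/v1/clusters/$CLUSTER_NAME/services/HDFS/components/DATANODE?fields=host_components/HostRoles/host_name" | grep '"host_name"' | awk -F '"' '{print $4}' | paste -sd, -)
# echo $DATANODES
The resulting $DATANODES value can then be dropped into the host_name.in(...) query shown above.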
... View more
04-26-2020
06:05 PM
1 Kudo
@mike_bronson7 If you want to get the HA status (Active/Standby) of the ResourceManagers, then you can make the following call:
# curl -s -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_FQDN:8080/api/v1/clusters/$CLUSTER_NAME/host_components?HostRoles/component_name=RESOURCEMANAGER&HostRoles/ha_state.in(ACTIVE,STANDBY)"
If you just want to know on which host the ResourceManager is in the Active state, then:
# curl -s -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_FQDN:8080/api/v1/clusters/$CLUSTER_NAME/host_components?HostRoles/component_name=RESOURCEMANAGER&HostRoles/ha_state=ACTIVE"
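To pull just the hostname out of that second response, a minimal sketch in the same grep/awk style used elsewhere in these posts (assumes the default JSON formatting of the Ambari REST API):
# curl -s -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_FQDN:8080/api/v1/clusters/$CLUSTER_NAME/host_components?HostRoles/component_name=RESOURCEMANAGER&HostRoles/ha_state=ACTIVE" | grep '"host_name"' | awk -F '"' '{print $4}'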
... View more
04-13-2020
08:53 PM
1 Kudo
@sarm What is your HDFS version? Is it Hadoop 2.8.0, 3.0.0-alpha1, or higher?
# hadoop version
Quick check on what the JAR contains:
# javap -cp /usr/hdp/3.1.0.0-78/hadoop/client/hadoop-hdfs-client.jar org.apache.hadoop.hdfs.web.resources.PutOpParam.Op | grep -i ALLOW
public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op ALLOWSNAPSHOT;
public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op DISALLOWSNAPSHOT;
For example, I am able to use the same WebHDFS API call without any issue, as follows:
# curl -i -X PUT "http://kerlatest1.example.com:50070/webhdfs/v1/tmp/aaaa_bbbb?op=DISALLOWSNAPSHOT&user.name=hdfs"
HTTP/1.1 200 OK
Date: Tue, 14 Apr 2020 03:45:24 GMT
Cache-Control: no-cache
Expires: Tue, 14 Apr 2020 03:45:24 GMT
Date: Tue, 14 Apr 2020 03:45:24 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs&t=simple&e=1586871924286&s=xxxxxxxx/yyyyyyyyy="; Path=/; HttpOnly
Content-Type: application/octet-stream
Content-Length: 0
Please refer to the following JIRA to verify that you are using a version of HDFS (2.8.0, 3.0.0-alpha1, or higher) where this option is available.
Reference:
https://issues.apache.org/jira/browse/HDFS-9057
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8.0+Release (look for HDFS-9057)
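For comparison, the corresponding call to re-enable snapshots on the same path (using the same example host and path as above):
# curl -i -X PUT "http://kerlatest1.example.com:50070/webhdfs/v1/tmp/aaaa_bbbb?op=ALLOWSNAPSHOT&user.name=hdfs"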
... View more
04-03-2020
06:57 PM
@samue The latest HDP version is HDP 3.1.5, which has many additional fixes and features:
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/release-notes/content/fixed_issues.html
Upgrade guide to HDP 3.1.5:
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/release-notes/content/upgrading_parent.html
Some behavioral changes in HDP 3.1.5:
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/release-notes/content/behavior_changes.html
... View more
03-31-2020
02:10 AM
1 Kudo
@mike_bronson7 A good explanation for some of the delay is given in JIRA AMBARI-20220: The ambari-server start and ambari-server restart commands are currently hard coded to wait a maximum of 50 seconds for the Jetty server to bind to port 8080 (or whatever the configured port is). Under normal circumstances, this value should be fine. However, since Jetty loads classes from views, the more views that are installed, the longer the total load time before Jetty binds to the server port.
There can be a few other causes as well; for example, slightly high system resource utilization while Ambari is restarting can also delay the opening of the Ambari API port. So you should try the following to fix this:
- Edit "/etc/ambari-server/conf/ambari.properties" and increase the following property value to 120 or 150 seconds: server.startup.web.timeout=120
- Then restart the ambari-server again.
# ambari-server restart
Reference:
1. https://issues.apache.org/jira/browse/AMBARI-20220
2. https://community.cloudera.com/t5/Support-Questions/change-the-port-for-ambari-server/m-p/214911#M176823
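If you want to apply that property change from the shell, a minimal sketch assuming the default ambari.properties location (it updates the value if the property exists, otherwise appends it):
# grep -q '^server.startup.web.timeout=' /etc/ambari-server/conf/ambari.properties && sed -i 's/^server.startup.web.timeout=.*/server.startup.web.timeout=120/' /etc/ambari-server/conf/ambari.properties || echo 'server.startup.web.timeout=120' >> /etc/ambari-server/conf/ambari.properties
# ambari-server restart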
... View more
03-12-2020
10:22 PM
@Aman075 Instead of doing "# cd forPig" and then checking the listing, can you try specifying the fully qualified path of the directory, because there may be multiple "forPig" directories (one on a relative path and another on an absolute path)?
# pwd
# ls -lart /forPig
If you still face any issue, then please enable DEBUG logging to see if there is anything wrong:
# export HADOOP_ROOT_LOGGER=DEBUG,console
# hdfs dfs -get -f /testing/pigData/drivers.csv /forPig/
... View more
03-12-2020
10:12 PM
@Aman075 Because the directory "/forPig" (or maybe its content) already exists in your local file system, the hdfs get command is not able to replace the local dir/file "/forPig" content with the HDFS file "/testing/pigData/drivers.csv". So if you want to overwrite the content inside the directory "/forPig", then you can use the "-f" (force) option:
# hdfs dfs -get -f /testing/pigData/drivers.csv /forPig/drivers.csv
(OR)
# hdfs dfs -get -f /testing/pigData/drivers.csv /forPig/
Also, in your case "/forData" seems to be a directory, hence you can either specify the filename there (as in the first command) or just the directory (as in the second).
... View more
03-11-2020
09:36 PM
@prakashpunj By default it uses SQLite as the database (no password). However, you can check the following files for more details on the Superset host:
# grep 'SUPERSET_DATABASE_PASSWORD' /etc/superset/conf/superset_config.py
# grep 'SUPERSET_DATABASE_PASSWORD' /var/lib/ambari-agent/data/command-*
Via the Ambari UI you can check: Ambari UI --> Superset --> Configs --> 'SUPERSET META DATA STORAGE CONFIG' (tab). Search for the "Superset Database password" section on that page and verify whether it is empty or a password is set.
... View more
03-11-2020
04:02 AM
2 Kudos
@Gaurang1 Good to know that after enabling port forwarding for port 7180 you are able to access http://localhost:7180 properly and it is working fine. Regarding your SSH issue, I think you should map port 22 to something else, because the laptop where you are running VirtualBox might also be using the default port 22. If you still face an SSH port access issue while using VirtualBox, it can be discussed in a separate thread, as the original issue which you posted in this thread is resolved. If your original question is answered, please make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
... View more
03-10-2020
07:27 PM
@Gaurang1 As you are using VirtualBox to run CM, can you please check whether you have configured port forwarding in VirtualBox so that port 7180, which is listening inside the VM, is reachable from outside of VirtualBox? Reference (from a Google search):
https://www.simplified.guide/virtualbox/port-forwarding
https://www.howtogeek.com/122641/how-to-forward-ports-to-a-virtual-machine-and-use-it-as-a-server/
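If you prefer the command line over the VirtualBox GUI, a minimal sketch using VBoxManage; the VM name "CM-VM" and rule name "cm-ui" are placeholders, and this assumes the VM's network adapter is in NAT mode:
# VBoxManage modifyvm "CM-VM" --natpf1 "cm-ui,tcp,,7180,,7180"
The above works while the VM is powered off; for a running VM the equivalent is:
# VBoxManage controlvm "CM-VM" natpf1 "cm-ui,tcp,,7180,,7180"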
... View more
03-09-2020
05:13 PM
1 Kudo
@Ham Starting with the Ambari 2.7.5 release, access to Ambari repositories requires authentication. To access the binaries, you must first have the required authentication credentials (username and password). Authentication credentials for new customers and partners are provided in an email sent from Cloudera to registered support contacts. Existing users can file a non-technical case within the support portal (https://my.cloudera.com) to obtain credentials.
Reference:
https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-installation/content/access_ambari_paywall.html
https://www.cloudera.com/contact-sales.html
... View more
03-04-2020
03:28 PM
@san_t_o In addition to my previous comment: which path do you see when you run the following on the failing NodeManager node?
# source /etc/hadoop/conf/yarn-env.sh
# echo $JAVA_LIBRARY_PATH
:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
(OR)
# echo $HADOOP_OPTS
-Dyarn.id.str= -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
Is that path writable for the yarn user? And does that user belong to the correct group?
# id yarn
... View more
03-04-2020
03:21 PM
@san_t_o Can you please check a few things:
1). Please verify what value is set for the "" property in the NodeManager options (if it starts even for a few seconds):
# ps -ef | grep NodeManager
2). If the above does not start due to the "" error, then please check the permissions set for this directory. Example:
# ls -ld /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
drwxrwxrwt. 8 hdfs hadoop 4096 Feb 25 07:23 /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
# ls -ld /var/lib/ambari-agent/tmp/
drwxrwxrwt. 12 ambari hadoop 4096 Mar 4 01:48 /var/lib/ambari-agent/tmp/
The reason we want to check the permissions on the "/var/lib/ambari-agent/tmp/" and "/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir" directories is that the "" is usually set to this directory, so the yarn user should have proper read/write access to the directory listed here. Example:
# grep 'JAVA_LIBRARY_PATH' /etc/hadoop/conf/yarn-env.sh
export JAVA_LIBRARY_PATH="${JAVA_LIBRARY_PATH}:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir"
3). Also, please check whether you see any ""-related files here. Ideally those should be owned by the "yarn" user, like "yarn:hadoop" (hadoop being the group). This directory and its content should be writable by the yarn user. Example:
# ls -lart /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni*
-rwxr-xr-x. 1 yarn hadoop 752803 Dec 2 06:33 /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni-64-1-2529926063314066012.8
Possible cause: if by mistake you have ever restarted the YARN NodeManager/ResourceManager as the "root" user, then the ownership of those directories/files might have changed and they might no longer be writable by the yarn user. So please check whether the directory permissions allow writes.
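If the ownership did get flipped by a root restart, a minimal sketch of a possible cleanup (this assumes yarn:hadoop is the expected ownership, as in the listing above; verify against your environment before running):
# chown -R yarn:hadoop /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
# ls -lart /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir/libleveldbjni*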
... View more
03-02-2020
03:04 AM
@stryjz Most of the API calls remain the same or similar between Ambari 2.7 and previous releases. Ambari 2.7 has a cool new feature: it is integrated with Swagger, so you can explore all the REST APIs. Steps to use Swagger:
1. Log in to Ambari.
2. Hit this URL: http://{ambari-host}:8080/api-docs
For more information please refer to: https://community.cloudera.com/t5/Community-Articles/How-To-Use-Swagger-with-Ambari-Explore-Ambari-REST/ta-p/248692
If your question is answered, please make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
... View more
03-01-2020
05:26 PM
@alepiedra Are you really connecting to the MySQL DB provided as part of the Sandbox, or is it some other MySQL? Can you try specifying the hostname as well in the mysql command to see how it goes? Something like this:
# mysql -u root -h sandbox.hortonworks.com
... View more
03-01-2020
04:50 PM
2 Kudos
@Daria Ideally, a 4-node cluster set up locally with VirtualBox is good enough to test the various features of an HDFS/YARN HA-enabled cluster. However, if you want to quickly test some features on a single host machine, then you can also have a look at the HDP Sandbox: https://www.cloudera.com/downloads/hortonworks-sandbox/hdp.html
... View more
02-28-2020
01:32 AM
@shyamshaw Hive LLAP uses Hive 2.x, so please check whether you have enabled LLAP interactive query in order to get the statistics for the partitioned table. Also, can you please verify whether "Interactive Mode" is enabled for the Hive View?
Ambari UI --> Manage Ambari --> Views --> Hive --> Hive View 2.0
Find the "Settings" section and verify whether "Interactive Mode" is set to "true". If not, try setting it to "true", then hard-refresh the browser and try again.
... View more
02-27-2020
07:45 PM
1 Kudo
@Sud To reset a forgotten password back to 'admin':
https://community.cloudera.com/t5/Community-Articles/Ambari-2-7-0-How-to-Reset-Ambari-Admin-Password-from/ta-p/248891
Once you know the old admin credential (or have already reset it to 'admin'):
https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/administering-ambari/content/amb_change_the_admin_password.html
If your question is answered, please make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs up button.
... View more
02-27-2020
07:40 PM
1 Kudo
@mike_bronson7 In addition to my previous comment: Ambari provides an option to "Rolling Restart Kafka Brokers" (one by one). In the Ambari UI, click Ambari UI --> Kafka --> Service Actions (drop-down) --> "Restart Kafka Brokers"; it then shows the rolling restart settings. You can decide how much time one full Kafka broker restart takes in your environment, and after how much time Ambari should schedule the next broker restart. It can also be achieved using an API call like the following:
# curl -iskH "X-Requested-By: ambari" -u admin:admin -X POST -d '[{"RequestSchedule":{"batch":[{"requests":[{"order_id":1,"type":"POST","uri":"/clusters/NewCluster/requests","RequestBodyInfo":{"RequestInfo":{"context":"_PARSE_.ROLLING-RESTART.KAFKA_BROKER.1.3","command":"RESTART"},"Requests/resource_filters":[{"service_name":"KAFKA","component_name":"KAFKA_BROKER","hosts":"testnode2.example.com"}]}},{"order_id":2,"type":"POST","uri":"/clusters/NewCluster/requests","RequestBodyInfo":{"RequestInfo":{"context":"_PARSE_.ROLLING-RESTART.KAFKA_BROKER.2.3","command":"RESTART"},"Requests/resource_filters":[{"service_name":"KAFKA","component_name":"KAFKA_BROKER","hosts":"testnode3.example.com"}]}},{"order_id":3,"type":"POST","uri":"/clusters/NewCluster/requests","RequestBodyInfo":{"RequestInfo":{"context":"_PARSE_.ROLLING-RESTART.KAFKA_BROKER.3.3","command":"RESTART"},"Requests/resource_filters":[{"service_name":"KAFKA","component_name":"KAFKA_BROKER","hosts":"testnode5.example.com"}]}}]},{"batch_settings":{"batch_separation_in_seconds":"121","task_failure_tolerance":1}}]}}]' http://testnode1.example.com:8080/api/v1/clusters/NewCluster/request_schedules
... View more
02-27-2020
07:34 PM
@mike_bronson7 Looks like you have asked a very similar query on the other thread : https://community.cloudera.com/t5/Support-Questions/amari-rest-API-how-to-stop-service-on-specific-host/m-p/290651
... View more
02-27-2020
04:34 PM
2 Kudos
@mike_bronson7 The error says: hostname=kafka01. Host not found. So please check whether the hostname is correct (I mean the fully qualified hostname). Please compare it with the hostnames listed in the following API call response. Just open this URL in the browser to see which hostnames Ambari is expecting:
http://ambari_server_hostname:8080/api/v1/clusters/$CLUSTER_NAME/hosts/
... View more
02-27-2020
03:41 PM
3 Kudos
@mike_bronson7 Using API calls:
Get the list of hostnames where the Kafka brokers are running. Example:
# curl -iskH "X-Requested-By: ambari" -X GET -u admin:admin http://testnode1.example.com:8080/api/v1/clusters/NewCluster/services/KAFKA/components/KAFKA_BROKER?fields=host_components/HostRoles/hostname | grep host_name | awk -F ":" '{print $2}' | sed -e 's|["'\'']||g'
testnode2.example.com
testnode3.example.com
testnode5.example.com
The API call to start the Kafka broker on node "testnode2.example.com":
# curl -iskH "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Start Kafka Broker","operation_level":{"level":"HOST_COMPONENT","cluster_name":"NewCluster","host_name":"testnode2.example.com","service_name":"KAFKA"}},"Body":{"HostRoles":{"state":"STARTED"}}}' http://testnode1.example.com:8080/api/v1/clusters/NewCluster/hosts/testnode2.example.com/host_components/KAFKA_BROKER
The API call to stop the Kafka broker on node "testnode2.example.com":
# curl -iskH "X-Requested-By: ambari" -u admin:admin -X PUT -d '{"RequestInfo":{"context":"Stop Kafka Broker","operation_level":{"level":"HOST_COMPONENT","cluster_name":"NewCluster","host_name":"testnode2.example.com","service_name":"KAFKA"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' http://testnode1.example.com:8080/api/v1/clusters/NewCluster/hosts/testnode2.example.com/host_components/KAFKA_BROKER
... View more
02-27-2020
03:46 AM
@cc1 There can be various reasons, for example:
- Low disk space or low memory
- No JVM present
- Insufficient resources available on the NameNode host
- Corrupted fsimage
- NameNode port already in use
- Corrupted edits log file
- Insufficient permissions for the user starting the NameNode
- Incorrect configurations
Hence it is best to first check the NameNode logs to find out what kind of error it is showing, and then troubleshoot accordingly.
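For a quick look at the most recent errors, a minimal sketch assuming the typical HDP log location and naming (adjust the path and filename pattern for your install):
# tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | grep -iE 'error|exception|fatal'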
... View more
02-27-2020
03:43 AM
@Sud Ambari will not allow any user creation without valid Ambari credentials. However, I have responded to a very similar query of yours in the other thread which you opened here: https://community.cloudera.com/t5/Support-Questions/Create-new-Ambari-user-without-Ambari-admin-credential/m-p/290601 Please take a look at that thread to reset the password of any Ambari local user to 'admin'; later it can be changed to any desired password.
... View more
02-27-2020
03:40 AM
@Sud You can try resetting the Ambari 'admin' user's password to the default 'admin' by running the following queries in the Ambari DB. Then you should be able to log in to the Ambari UI as user "admin" with password "admin", and change the password to your desired one afterwards.
1). Stop Ambari Server:
# ambari-server stop
2). Take a backup of the Ambari DB (just for a safe copy).
3). Log in to your database using the psql/mysql/sqlplus utility, connect to the Ambari DB, and then run the query below that matches your Ambari version.
If you are using Ambari 2.6 or a previous Ambari version, run the following SQL query in your Ambari DB:
UPDATE users SET user_password='538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00' WHERE user_name='admin';
For Ambari 2.7.x, run the following query in the Ambari DB:
UPDATE user_authentication SET authentication_key='538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00' WHERE user_id IN (SELECT user_id FROM users WHERE user_name='admin');
4). Restart Ambari Server and then log in to the Ambari UI with the admin/admin credentials:
# ambari-server start
Using the above approach you can reset any user's password to 'admin' and then change it later from the Ambari UI.
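For the common case of the embedded PostgreSQL database, a minimal sketch of running the Ambari 2.7.x query non-interactively; the "ambari" database and user names are assumptions based on the default embedded setup (you will typically be prompted for that DB user's password, and MySQL/Oracle setups will differ):
# psql -U ambari -d ambari -c "UPDATE user_authentication SET authentication_key='538916f8943ec225d97a9a86a2c6ec0818c1cd400e09e03b660fdaaec4af29ddbb6f2b1033b81b00' WHERE user_id IN (SELECT user_id FROM users WHERE user_name='admin');"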
... View more
02-24-2020
10:04 PM
1 Kudo
@mike_bronson7 Thank you for sharing the screenshot; it is very clear now. Please replace the following:
"$CLUSTER_NAME" with "'"$CLUSTER_NAME"'"
"$service" with "'"$service"'"
"_PARSE_.STOP.$service" with "'"_PARSE_.STOP.$service"'"
In general, wrap any $ABCD value like '"$ABCD"', so the overall change is "$ABCD" ----> "'"$ABCD"'" (this lets the shell expand the variable inside the single-quoted JSON payload).
Stop the Kafka service:
# curl -iLv -u "admin:admin" -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"'"_PARSE_.STOP.$service"'","operation_level":{"level":"SERVICE","cluster_name":"'"$CLUSTER_NAME"'","service_name":"'"$service"'"}},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://$HOST:8080/api/v1/clusters/$CLUSTER_NAME/services/$service
Start the Kafka service:
# curl -iLv -u "admin:admin" -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"'"_PARSE_.START.$service"'","operation_level":{"level":"SERVICE","cluster_name":"'"$CLUSTER_NAME"'","service_name":"'"$service"'"}},"Body":{"ServiceInfo":{"state":"STARTED"}}}' http://$HOST:8080/api/v1/clusters/$CLUSTER_NAME/services/$service
... View more