Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2018 | 04-27-2020 03:48 AM
 | 3991 | 04-26-2020 06:18 PM
 | 3228 | 04-26-2020 06:05 PM
 | 2581 | 04-13-2020 08:53 PM
 | 3836 | 03-31-2020 02:10 AM
05-14-2020
03:06 PM
1 Kudo
@ansharma1 You can run the following query in the Ambari DB:

```sql
SELECT view_instance_id, resource_id, view_name, cluster_handle, cluster_type FROM viewinstance;
```

The query above will show whether the view that is causing the problem is associated with a cluster_handle. (cluster_handle is basically the cluster_id, which you can see in the clusters table.) If the cluster_handle for a view is not correctly set, you may see this kind of message:

org.apache.ambari.server.view.IllegalClusterException: Failed to get cluster information associated with this view instance

If you want the same old view to work fine (instead of creating a new instance of that view), then you will have to make sure the cluster_handle for that view instance is set correctly. Like this:

1. Take an Ambari DB dump (a fresh backup), as we are going to change the DB manually.
2. Stop ambari-server.
3. Run the following query in the Ambari DB. NOTE: the following is just a dummy query; the values for 'cluster_handle' and 'view_instance_id' in that query may vary.

```sql
UPDATE viewinstance SET cluster_handle = 4 WHERE view_instance_id = 3;
```
04-27-2020
03:48 AM
1 Kudo
@mike_bronson7 You can achieve it in a way similar to the one described in the following thread: https://community.cloudera.com/t5/Support-Questions/AMBARI-how-to-set-value-in-json-REST-API/td-p/290385

Example:

```shell
AMBARI_FQDN=newhwx1.example.com
CLUSTER_NAME=NewCluster
DATANODES=newhwx1.example.com,newhwx2.example.com,newhwx3.example.com,newhwx5.example.com
# curl -s -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Stop DataNodes","operation_level":{"level":"SERVICE","cluster_name":"'"$CLUSTER_NAME"'"},"query":"HostRoles/component_name=DATANODE&HostRoles/host_name.in('$DATANODES')&HostRoles/maintenance_state=OFF"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' "http://$AMBARI_FQDN:8080/api/v1/clusters/$CLUSTER_NAME/host_components"
```
04-26-2020
06:18 PM
2 Kudos
@mike_bronson7 This link describes the command-line options to stop various HDP components manually using the CLI (including HS2 and the Hive Metastore): https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.5/bk_reference/content/stopping_hdp_services.html
04-26-2020
06:05 PM
1 Kudo
@mike_bronson7 If you want to get the HA status (Active/Standby) for the ResourceManagers, you can make the following call:

```shell
# curl -s -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_FQDN:8080/api/v1/clusters/$CLUSTER_NAME/host_components?HostRoles/component_name=RESOURCEMANAGER&HostRoles/ha_state.in(ACTIVE,STANDBY)"
```

If you just want to know on which host the ResourceManager is in the Active state, then:

```shell
# curl -s -u admin:admin -H "X-Requested-By: ambari" -X GET "http://$AMBARI_FQDN:8080/api/v1/clusters/$CLUSTER_NAME/host_components?HostRoles/component_name=RESOURCEMANAGER&HostRoles/ha_state=ACTIVE"
```
04-13-2020
08:53 PM
1 Kudo
@sarm What is your HDFS version? Is it Hadoop 2.8.0, 3.0.0-alpha1, or higher?

```shell
# hadoop version
```

A quick check on what the JAR contains:

```shell
# javap -cp /usr/hdp/3.1.0.0-78/hadoop/client/hadoop-hdfs-client.jar org.apache.hadoop.hdfs.web.resources.PutOpParam.Op | grep -i ALLOW
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op ALLOWSNAPSHOT;
  public static final org.apache.hadoop.hdfs.web.resources.PutOpParam$Op DISALLOWSNAPSHOT;
```

For example, I am able to use the same WebHDFS API call without any issue, as follows:

```shell
# curl -i -X PUT "http://kerlatest1.example.com:50070/webhdfs/v1/tmp/aaaa_bbbb?op=DISALLOWSNAPSHOT&user.name=hdfs"
HTTP/1.1 200 OK
Date: Tue, 14 Apr 2020 03:45:24 GMT
Cache-Control: no-cache
Expires: Tue, 14 Apr 2020 03:45:24 GMT
Date: Tue, 14 Apr 2020 03:45:24 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: hadoop.auth="u=hdfs&p=hdfs&t=simple&e=1586871924286&s=xxxxxxxx/yyyyyyyyy="; Path=/; HttpOnly
Content-Type: application/octet-stream
Content-Length: 0
```

Please refer to the following JIRA to verify that you are using a version of HDFS (2.8.0, 3.0.0-alpha1, or higher) where this option is available.

Reference:
https://issues.apache.org/jira/browse/HDFS-9057
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8.0+Release (look for HDFS-9057)
03-31-2020
02:10 AM
1 Kudo
@mike_bronson7 A good explanation for some delay is given in JIRA AMBARI-20220:

"The ambari-server start and ambari-server restart commands are currently hard coded to wait a maximum of 50 seconds for the Jetty server to bind to port 8080 (or whatever the configured port is). Under normal circumstances, this value should be fine. However, since Jetty loads classes from views, the more views that are installed, the longer the total load time before Jetty binds to the server port."

There can be a few other reasons as well: slightly high system resource utilisation while Ambari is restarting can also cause a small delay in opening the Ambari API port. So you should try the following to fix this:

- Edit "/etc/ambari-server/conf/ambari.properties" and increase the following property value to 120 or 150 seconds:

server.startup.web.timeout=120

- Then restart the ambari-server again:

# ambari-server restart

Reference:
1. https://issues.apache.org/jira/browse/AMBARI-20220
2. https://community.cloudera.com/t5/Support-Questions/change-the-port-for-ambari-server/m-p/214911#M176823
03-11-2020
09:36 PM
@prakashpunj By default it uses SQLite as the database (no password). However, you can try checking the following files for more details on the Superset host:

```shell
# grep 'SUPERSET_DATABASE_PASSWORD' /etc/superset/conf/superset_config.py
# grep 'SUPERSET_DATABASE_PASSWORD' /var/lib/ambari-agent/data/command-*
```

Via the Ambari UI you can check: Ambari UI --> Superset --> Configs --> 'SUPERSET META DATA STORAGE CONFIG' (tab). Search for the "Superset Database password" section on that page and verify whether it is empty or a password is set.
03-11-2020
04:02 AM
2 Kudos
@Gaurang1 Good to know that after enabling port forwarding for port 7180 you are able to access http://localhost:7180 properly and it is working fine. Regarding your SSH issue, I think you should map port 22 to something else, because the laptop where you are running VirtualBox might also be using the default port 22. If you still face an SSH port access issue while using VirtualBox, it can be discussed in a separate thread, as the original issue you posted about in this thread is resolved. If your original question is answered, please make sure to mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
03-10-2020
07:27 PM
@Gaurang1 As you are using VirtualBox to run CM, can you please check whether you have configured port forwarding in VirtualBox, so that port 7180 running inside the VM is reachable from outside VirtualBox?

Reference (from a Google search):
https://www.simplified.guide/virtualbox/port-forwarding
https://www.howtogeek.com/122641/how-to-forward-ports-to-a-virtual-machine-and-use-it-as-a-server/
03-04-2020
03:28 PM
@san_t_o In addition to my previous comment: which path do you see when you run the following on the failing NodeManager node?

```shell
# source /etc/hadoop/conf/yarn-env.sh
# echo $JAVA_LIBRARY_PATH
:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
```

(OR)

```shell
# echo $HADOOP_OPTS
-Dyarn.id.str= -Dyarn.policy.file=hadoop-policy.xml -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
```

Is that path writable for the yarn user? And does that user belong to the correct group?

```shell
# id yarn
```