Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2024 | 04-27-2020 03:48 AM
 | 4012 | 04-26-2020 06:18 PM
 | 3231 | 04-26-2020 06:05 PM
 | 2594 | 04-13-2020 08:53 PM
 | 3847 | 03-31-2020 02:10 AM
12-22-2016
11:54 AM
1 Kudo
@Ye Jun Do you see the view-related JARs present in the following directory of your Ambari server, and do they have proper read permissions? /var/lib/ambari-server/resources/views/ Ambari also extracts those view JARs and deploys them inside the following directory, so can you please check whether you are able to list the "work" directory: # ls -l /var/lib/ambari-server/resources/views/work/
- The user who is running Ambari should have read access to the view JARs. - Do you see view instance initialization messages in your ambari-server.log like the following? Example: 20 Dec 2016 10:36:48,201 INFO [main] ViewRegistry:1811 - Auto creating instance of view TEZ for cluster qnhadoop.
20 Dec 2016 10:36:48,205 INFO [main] ViewRegistry:1689 - View deployed: TEZ{0.7.0.2.5.0.0-22}.
20 Dec 2016 10:36:48,211 INFO [main] ViewRegistry:1656 - Reading view archive /var/lib/ambari-server/resources/views/ambari-admin-2.4.1.0.22.jar.
20 Dec 2016 10:36:48,223 INFO [main] ViewRegistry:1689 - View deployed: ADMIN_VIEW{2.4.1.0}.
20 Dec 2016 10:36:48,228 INFO [main] ViewRegistry:1656 - Reading view archive /var/lib/ambari-server/resources/views/capacity-scheduler-2.4.1.0.22.jar.
20 Dec 2016 10:36:48,257 INFO [main] ViewRegistry:1747 - setting up logging for view CAPACITY-SCHEDULER{1.0.0} as per property file view.log4j.properties
20 Dec 2016 10:36:48,281 INFO [main] ViewRegistry:1811 - Auto creating instance of view CAPACITY-SCHEDULER for cluster qnhadoop.
20 Dec 2016 10:36:48,281 INFO [main] ViewRegistry:1689 - View deployed: CAPACITY-SCHEDULER{1.0.0}.
20 Dec 2016 10:36:48,288 INFO [main] ViewRegistry:1656 - Reading view archive /var/lib/ambari-server/resources/views/zeppelin-view-2.4.1.0.22.jar.
20 Dec 2016 10:36:48,325 INFO [main] ViewRegistry:1689 - View deployed: ZEPPELIN{1.0.0}.
20 Dec 2016 10:36:48,330 INFO [main] ViewRegistry:1656 - Reading view archive /var/lib/ambari-server/resources/views/wfmanager-2.4.1.0.22.jar.
20 Dec 2016 10:36:48,368 INFO [main] ViewRegistry:1689 - View deployed: Workflow Manager{1.0.0}.
20 Dec 2016 10:36:48,378 INFO [main] ViewRegistry:1656 - Reading view archive /var/lib/ambari-server/resources/views/hueambarimigration-2.4.1.0.22.jar.
20 Dec 2016 10:36:48,412 INFO [main] ViewRegistry:1747 - setting up logging for view HUETOAMBARI_MIGRATION{1.0.0} as per property file view.log4j.properties
20 Dec 2016 10:36:48,420 INFO [main] ViewRegistry:1689 - View deployed: HUETOAMBARI_MIGRATION{1.0.0}.
20 Dec 2016 10:36:48,427 INFO [main] ViewRegistry:1656 - Reading view archive /var/lib/ambari-server/resources/views/pig-2.4.1.0.22.jar.
20 Dec 2016 10:36:48,481 INFO [main] ViewRegistry:1747 - setting up logging for view PIG{1.0.0} as per property file view.log4j.properties
20 Dec 2016 10:36:48,543 INFO [main] ViewRegistry:1689 - View deployed: PIG{1.0.0}.
20 Dec 2016 10:36:48,552 INFO [main] ViewRegistry:1656 - Reading view archive /var/lib/ambari-server/resources/views/files-2.4.1.0.22.jar.
20 Dec 2016 10:36:48,590 INFO [main] ViewRegistry:1747 - setting up logging for view FILES{1.0.0} as per property file view.log4j.properties
20 Dec 2016 10:36:48,620 INFO [main] ViewRegistry:1811 - Auto creating instance of view FILES for cluster qnhadoop.
20 Dec 2016 10:36:48,620 INFO [main] ViewRegistry:1689 - View deployed: FILES{1.0.0}.
20 Dec 2016 10:36:48,626 INFO [main] ViewRegistry:1656 - Reading view archive /var/lib/ambari-server/resources/views/storm-view-2.4.1.0.22.jar.
20 Dec 2016 10:36:48,629 INFO [main] ViewRegistry:1747 - setting up logging for view Storm_Monitoring{0.1.0} as per property file view.log4j.properties
20 Dec 2016 10:36:48,634 INFO [main] ViewRegistry:1689 - View deployed: Storm_Monitoring{0.1.0}.
20 Dec 2016 10:36:48,639 INFO [main] ViewRegistry:1656 - Reading view archive /var/lib/ambari-server/resources/views/hive-jdbc-2.4.1.0.22.jar.
20 Dec 2016 10:36:48,710 INFO [main] ViewRegistry:1747 - setting up logging for view HIVE{1.5.0} as per property file view.log4j.properties
20 Dec 2016 10:36:48,832 INFO [main] ViewRegistry:1811 - Auto creating instance of view HIVE for cluster qnhadoop.
20 Dec 2016 10:36:48,832 INFO [main] ViewRegistry:1689 - View deployed: HIVE{1.5.0}.
20 Dec 2016 10:36:48,837 INFO [main] ViewRegistry:1656 - Reading view archive /var/lib/ambari-server/resources/views/slider-2.4.1.0.22.jar.
20 Dec 2016 10:36:48,879 INFO [main] ViewRegistry:1747 - setting up logging for view SLIDER{2.0.0} as per property file view.log4j.properties
20 Dec 2016 10:36:48,892 INFO [main] ViewRegistry:1689 - View deployed: SLIDER{2.0.0}.
20 Dec 2016 10:36:48,897 INFO [main] ViewRegistry:1656 - Reading view archive /var/lib/ambari-server/resources/views/hive-2.4.1.0.22.jar.
20 Dec 2016 10:36:48,959 INFO [main] ViewRegistry:1747 - setting up logging for view HIVE{1.0.0} as per property file view.log4j.properties
20 Dec 2016 10:36:49,064 INFO [main] ViewRegistry:1689 - View deployed: HIVE{1.0.0}.
.
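As a quick check (a minimal sketch, assuming the default Ambari installation paths and that ambari-server runs as "root" or a dedicated "ambari" user; adjust the user and file name as needed), you can verify that the view JARs are present and readable:
# List the deployed view archives and the extracted "work" directory
ls -l /var/lib/ambari-server/resources/views/*.jar
ls -ld /var/lib/ambari-server/resources/views/work/
# Confirm the user running ambari-server can read one of the view JARs
sudo -u ambari test -r /var/lib/ambari-server/resources/views/files-2.4.1.0.22.jar && echo "readable" || echo "NOT readable"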
12-22-2016
11:48 AM
3 Kudos
- If we want to run a service check for a particular service using the Ambari APIs, then the most important thing we need to know is the request payload. For every service, the payload of the request can be something in the following format: {
"RequestInfo":
{
"context":"HDFS Service Check",
"command":"HDFS_SERVICE_CHECK"
},
"Requests/resource_filters":[
{
"service_name":"HDFS"
}
]
} . If we do not know the exact payload for a given service, we can simply get it by using the browser's debugger/developer tools while triggering the service check from the Ambari UI; it should show up in the "Form Data" section. - Now, once we know the payload that we can POST to Ambari for the service check execution, we can simply run the command as follows: Syntax: curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d @/PATH/TO/hdfs_service_check_payload.txt http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/requests . Example: $ curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d @/Users/jsensharma/Cases/Articles/Service_Checks_Using_APIs/hdfs_service_check_payload.txt http://erie1.example.com:8080/api/v1/clusters/ErieCluster/requests
{
"href" : "http://erie1.example.com:8080/api/v1/clusters/ErieCluster/requests/424",
"Requests" : {
"id" : 424,
"status" : "Accepted"
}
} . Here the file "hdfs_service_check_payload.txt" contains the payload mentioned above. .
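As a follow-up (not part of the original steps, just a common next check), the "href" returned above can be queried to track the progress of the service check, for example:
$ curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://erie1.example.com:8080/api/v1/clusters/ErieCluster/requests/424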
12-22-2016
08:43 AM
@Christian van den Heever 1. Can you please check what cluster name you get when you hit the following URL? http://AMBARI_HOST:8080/api/v1/clusters/ 2. Also, which version of Ambari are you using? 3. Have you followed the basic settings mentioned in the following link for the Files view: https://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-views/content/configuring_your_cluster_for_files_view.html 4. Are you facing this issue with most of the other views as well? 5. Have you tried instantiating a new instance of the Files view and then checking? .
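For point 1, a minimal example of calling that URL from the command line (assuming the default admin credentials) would be:
curl -u admin:admin -H "X-Requested-By: ambari" -X GET http://AMBARI_HOST:8080/api/v1/clusters/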
12-22-2016
07:21 AM
@priyanshu bindal In your "krb5.conf", how have you defined the ticket expiration? I can see it working as follows in /etc/krb5.conf: [libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = EXAMPLE.COM
ticket_lifetime = 30m
- See, here I am setting [ticket_lifetime = 30m] (30 minutes) and I can see the following: [root@kjss1 ~]# kdestroy
[root@kjss1 ~]# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-JoyCluster@EXAMPLE.COM
[root@kjss1 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-JoyCluster@EXAMPLE.COM
Valid starting Expires Service principal
12/22/16 07:18:12 12/22/16 07:48:12 krbtgt/EXAMPLE.COM@EXAMPLE.COM
renew until 12/22/16 07:18:12 . Similarly, for 30 seconds I set [ticket_lifetime = 30s] in /etc/krb5.conf: [root@kjss1 ~]# kdestroy
[root@kjss1 ~]# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-JoyCluster@EXAMPLE.COM
[root@kjss1 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-JoyCluster@EXAMPLE.COM
Valid starting Expires Service principal
12/22/16 07:22:12 12/22/16 07:22:42 krbtgt/EXAMPLE.COM@EXAMPLE.COM
renew until 12/22/16 07:22:12 - For the supported duration formats, see: http://web.mit.edu/Kerberos/krb5-1.12/doc/basic/date_format.html#duration .
12-22-2016
05:23 AM
@priyanshu bindal Can you please check whether your Java program is pointing to the correct krb5.conf? Normally in a Linux environment its location is "/etc/krb5.conf". However, we can locate it as described in "Locating the krb5.conf Configuration File": https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html Usually we set the path for this file using the Java property "-Djava.security.krb5.conf"
- Also, we can debug what's going on using the "-Dsun.security.krb5.debug=true" Java option. .
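For illustration, a minimal sketch of passing those options (here "MyKerberosClient" is a hypothetical class name, not from the original question):
java -Djava.security.krb5.conf=/etc/krb5.conf -Dsun.security.krb5.debug=true -cp . MyKerberosClient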
12-21-2016
07:44 PM
@Wael Horchani As you mentioned, you have a single-host cluster, so I guess the NameNode will also be running on the same host, and in that case port "50070" should be open. The netstat output shows that no such port is open, which means your NameNode is down. Please bring it up. Also, please check the NameNode log to see whether it ever started successfully and why it did not open the port.
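A minimal sketch of what to look at (the log path below assumes the usual HDP default layout and may differ on your installation):
# Check whether anything is listening on the NameNode HTTP port
netstat -tnlpa | grep 50070
# Inspect the NameNode log for startup errors (the file name includes the host name)
less /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log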
12-21-2016
04:24 PM
2 Kudos
@Dmitry Otblesk It looks related to an already reported issue: https://issues.apache.org/jira/browse/AMBARI-19264 Please use the workaround of manually creating the mentioned directory and assigning the permissions, until the fix is released. mkdir -p /var/run/zeppelin
chown -R zeppelin:zeppelin /var/run/zeppelin .
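After applying the workaround, you can verify the ownership, for example:
ls -ld /var/run/zeppelin
# Expected: the directory should be owned by zeppelin:zeppelin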
12-21-2016
03:39 PM
@Ritesh jain Is it happening with all browsers? Is that hostname accessible and is the port open?
12-21-2016
03:32 PM
@Ritesh jain Can you please share a screenshot of the blank page so that we can see the exact URL that you are hitting?
12-21-2016
03:07 PM
@Wael Horchani Once the History Server starts, it should write to the mentioned log location. However, it should at least have written the .out file there. Anyway, now we know that the cause is "Failed connect to vds002.databridge.tn:50070; Connection refused". Can you please make sure that the "vds002.databridge.tn:50070" host and port are accessible from the History Server host? Are you able to run the following from the History Server (for remote testing):
telnet vds002.databridge.tn 50070 However, as you mentioned that the cluster is on a single host, can you please share the output of the following commands:
1) FQDN
hostname -f 2) Is port "50070" open or not?
netstat -tnlpa | grep 50070 Also, can you please share the value of the property "dfs.namenode.http-address" from "Custom hdfs-site"? You can get that value from Ambari. Please check that the value of the mentioned property uses the correct FQDN .
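A quick way to check that property from the command line on the cluster host (assuming the HDFS client configuration is deployed there) is:
hdfs getconf -confKey dfs.namenode.http-address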