Member since: 06-29-2015
Posts: 47
Kudos Received: 8
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 482 | 02-08-2021 08:52 AM
 | 747 | 03-16-2017 04:52 PM
05-18-2021
08:00 AM
I was able to find the number of mappers used by distcp with the command below:
MAPPERS=`yarn container -list $app | grep 'Total number of containers' | awk -F: '{print $2}'`
The next step is to look only at distcp jobs that copy from/to HDFS (and not to S3). What's the best way to do that?
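For illustration only, here is a rough sketch of one way to do this (the RM address, the jq filter, and the assumption that distcp application names contain "distcp" are placeholders; a Kerberized/SPNEGO-protected RM would also need curl --negotiate -u :). It sums the running containers of RUNNING MapReduce applications via the ResourceManager REST API:

#!/bin/bash
# Sum running containers of RUNNING MapReduce apps whose name contains "distcp".
# Note: runningContainers also counts the MR ApplicationMaster container.
RM="resourcemanager.example.com:8088"   # placeholder
TOTAL=$(curl -s "http://${RM}/ws/v1/cluster/apps?states=RUNNING&applicationTypes=MAPREDUCE" \
  | jq '[.apps.app[]? | select(.name | test("distcp"; "i")) | .runningContainers] | add // 0')
echo "Containers used by running distcp jobs: ${TOTAL}"

Filtering out S3 copies would still require inspecting each job's configuration (e.g., for s3a:// or s3n:// URIs); this sketch does not do that.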
05-06-2021
02:08 PM
distcp - How to determine the number of mappers used by distcp job(s) at the cluster level? Sometimes we run into network bandwidth issues caused by distcp job(s) running too many mappers, or by too many concurrent distcp jobs. Our plan is to trigger a DataDog alert when the total number of mappers used by distcp jobs (at the cluster level) reaches a defined threshold (e.g., 100). We are open to exploring the "-bandwidth" option. We have many users submitting jobs from different edge nodes, so we don't want to use the "ps" command at the server level. Please help us resolve the issue. Thanks in advance.
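For illustration, the "-bandwidth" option mentioned above throttles each map task's throughput, so combined with "-m" it bounds a single distcp's aggregate bandwidth (paths and numbers are placeholders, not actual jobs):

# at most 20 mappers, each limited to ~50 MB/s, i.e. roughly 1 GB/s aggregate
hadoop distcp -m 20 -bandwidth 50 hdfs://src-nn:8020/data hdfs://dst-nn:8020/data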
Labels:
- Apache Hadoop
- HDFS
02-08-2021
08:52 AM
Sorry, I forgot to post earlier. I was able to fix my own issue: I had to restart Ranger and Solr. One of the Solr instances failed on restart, but I was able to see the Audit tab and other settings in Ranger again. Thanks for looking into it.
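In case it helps others, a quick way to confirm the Solr side is serving audits again (the collection name "ranger_audits" and port 8886 are the usual Ambari Infra defaults; adjust as needed, and add --negotiate -u : if Solr is Kerberized):

curl -s "http://solr-host.example.com:8886/solr/ranger_audits/select?q=*:*&rows=0&wt=json"
# a non-zero "numFound" in the response means audit events are reaching Solr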
02-05-2021
12:49 PM
The "Audit" tab is missing from Ranger UI. It was working fine before. We have not made any changes to the cluster. I'm unable to find any article regarding this issue. I'm hoping go them some help from the community. Thanks. The audit logs are working coming to HDFS just fine. [hdfs@mynode~]$ hdfs dfs -ls /ranger/audit/
Found 7 items
drwx------ - hdfs hdfs 0 2021-02-05 00:00 /ranger/audit/hdfs
drwx------ - hive hive 0 2019-05-22 15:18 /ranger/audit/hive2
drwx------ - hive hive 0 2021-02-05 00:00 /ranger/audit/hiveServer2
drwxr-x--- - kms kms 0 2021-02-05 00:00 /ranger/audit/kms
drwx------ - knox knox 0 2017-01-19 15:48 /ranger/audit/knox
drwxr-x--- - nifi nifi 0 2021-02-04 18:00 /ranger/audit/nifi
drwx------ - yarn yarn 0 2021-02-05 00:00 /ranger/audit/yarn
Component versions:
HDP: 2.6.5.0
Ranger: 0.7.0
Ambari Infra: 0.1.0
Ranger Audit Config:
Labels:
- Apache Ranger
10-28-2019
12:19 PM
I'm getting the same error. This is the response I received from Cloudera support: "Only the dfs commands such as ls/put/mv etc works on wasb using the wasb connector. Admin commands such as dfsadmin as well fsck works only with native hadoop/hdfs implementation"
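To make that support note concrete, the split looks roughly like this (container and account names are placeholders):

hdfs dfs -ls wasb://mycontainer@myaccount.blob.core.windows.net/data   # data-plane dfs commands work through the wasb connector
hdfs fsck /data                                                        # fsck/dfsadmin run only against the native HDFS namenode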
04-26-2019
02:51 PM
Do you have the latest recommendations? Most of our Hadoop processing is on Hive/Tez and Spark.
10-12-2017
07:02 PM
@Sandeep More I logged in and ran kinit, so I have a valid ticket and am able to run other hdfs commands.
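For completeness, the check looks roughly like this (principal and path are placeholders):

kinit myuser@MYDOMAIN.COM
klist              # confirm a valid, non-expired TGT
hdfs dfs -ls /tmp  # plain HDFS access works with this ticket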
10-10-2017
08:55 PM
@Sandeep More I'm getting the below error on HDP 2.6.1; please see if you can help me. I'm able to browse WebHDFS without Knox.
curl -k -i --negotiate -u : https://myhost.mydomain:8443/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS
HTTP/1.1 401 Authentication required
Date: Tue, 10 Oct 2017 20:41:13 GMT
WWW-Authenticate: Negotiate
Set-Cookie: hadoop.auth=; Path=gateway/default; Domain=mydomain.com; Secure; HttpOnly
Content-Type: text/html; charset=ISO-8859-1
Cache-Control: must-revalidate,no-cache,no-store
Content-Length: 320
Server: Jetty(9.2.15.v20160210)
HTTP/1.1 403 org.apache.hadoop.security.authentication.client.AuthenticationException
Date: Tue, 10 Oct 2017 20:41:14 GMT
Set-Cookie: hadoop.auth=; Path=gateway/default; Domain=mydomain.com; Secure; HttpOnly
Content-Type: text/html; charset=ISO-8859-1
Cache-Control: must-revalidate,no-cache,no-store
Content-Length: 314
Server: Jetty(9.2.15.v20160210)
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 403 Forbidden</title>
</head>
<body><h2>HTTP ERROR 403</h2>
<p>Problem accessing /gateway/default/webhdfs/v1/tmp. Reason:
<pre> Forbidden</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
<provider>
<role>authentication</role>
<name>HadoopAuth</name>
<enabled>true</enabled>
<param>
<name>config.prefix</name>
<value>hadoop.auth.config</value>
</param>
<param>
<name>hadoop.auth.config.signature.secret</name>
<value>/etc/security/http_secret</value>
</param>
<param>
<name>hadoop.auth.config.type</name>
<value>kerberos</value>
</param>
<param>
<name>hadoop.auth.config.simple.anonymous.allowed</name>
<value>false</value>
</param>
<param>
<name>hadoop.auth.config.token.validity</name>
<value>1800</value>
</param>
<param>
<name>hadoop.auth.config.cookie.domain</name>
<value>mydomain.com</value>
</param>
<param>
<name>hadoop.auth.config.cookie.path</name>
<value>gateway/default</value>
</param>
<param>
<name>hadoop.auth.config.kerberos.principal</name>
<value>HTTP/_HOST@TEST.COM</value>
</param>
<param>
<name>hadoop.auth.config.kerberos.keytab</name>
<value>/etc/security/keytabs/spnego.service.keytab</value>
</param>
<param>
<name>hadoop.auth.config.kerberos.name.rules</name>
<value>DEFAULT</value>
</param>
</provider>
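One debugging step (not a fix): repeat the request verbosely with a cookie jar so the Negotiate round trip and the hadoop.auth cookie issued by Knox are visible:

curl -k -v --negotiate -u : -c /tmp/knox.cookies \
  "https://myhost.mydomain:8443/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS"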
07-11-2017
04:37 PM
Thanks for the quick response. I might have to go with opt #2 as the process might be running on a single or multiple nodes but not all. I will keep you posted.
07-11-2017
02:54 PM
Please help me configure a custom script-based alert for a non-Hadoop process running on one or more nodes. I've already referred to these articles:
https://github.com/apache/ambari/blob/branch-2.1/ambari-server/docs/api/v1/alert-definitions.md
https://cwiki.apache.org/confluence/display/AMBARI/Customizing+the+Alert+Template
Script: XYZ-process-chk.sh
#!/bin/sh
PROCESS_CHK=`code`
if [ code ]
then
echo "process is NOT rounning"
fi
JSON file:
{
"AlertDefinition" : {
"cluster_name" : "NAME",
"alert.hasHostName" : "host1", "host2", # I may need this property
"component_name" : "XYZ",
"description" : null,
"enabled" : true,
"help_url" : null,
"ignore_host" : false,
"interval" : 1,
"label" : "XYZ Process",
"name" : "XYZ_process",
"repeat_tolerance" : 1,
"repeat_tolerance_enabled" : false,
"scope" : "HOST",
"service_name" : "XYZ",
"source" : {
"path" : "/var/lib/ambari-server/resources/host_scripts/XYZ-process-chk.sh",
"type" : "SCRIPT"
}
  }
}
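For reference, once the JSON is valid it can be registered per the alert-definitions doc linked above; a sketch (cluster name, credentials, host, and file name are placeholders):

curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  -d @alert-definition.json \
  "http://ambari-server.example.com:8080/api/v1/clusters/NAME/alert_definitions"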
Labels:
- Labels:
-
Apache Ambari
04-13-2017
06:01 PM
I'm facing the same issue: YARN:
Memory allocated for all YARN containers on a node = 16G
Minimum Container Size (Memory) = 2G
Maximum Container Size (VCores) = 3
Hive:
% of cluster capacity = 40%
Memory per daemon = 8192
Number of LLAP Daemons = 1
Memory per daemon = 8192
In-Memory Cache per Daemon = 2048
Maximum CPUs per Daemon = 3
I do see this error message on the RM UI:
Diagnostics: Unstable Application Instance : - failed with component LLAP failed 'recently' 6 times (4 in startup); threshold is 5 - last failure: Failure container_e29_1492031103210_0001_01_000007 on host host1.fqdn (0): http://host1.fqdn:19888/jobhistory/logs/host1.fqdn:45454/container_e29_1492031103210_0001_01_000007/ctx/hive
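A possible next step: pull the failed container's log that the diagnostics message points at (the IDs and node address below are taken from that message; older yarn logs versions require -nodeAddress together with -containerId):

yarn logs -applicationId application_1492031103210_0001 \
  -containerId container_e29_1492031103210_0001_01_000007 \
  -nodeAddress host1.fqdn:45454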
04-10-2017
08:51 PM
@Neeraj Sabharwal Please see if you can help me.
04-07-2017
07:50 PM
I'm using Kerberos 1.10.3.
krb5-server-1.10.3-42.el6.x86_64
krb5-libs-1.10.3-42.el6.x86_64
krb5-workstation-1.10.3-42.el6.x86_64
04-07-2017
06:35 PM
1 Kudo
I've set up Apache Drill 1.10.0 on RHEL 6.7, HDP 2.5.3, with Kerberos enabled. I'm getting the error below while running "drill-conf" as user "drill", which is configured in the "drill-override.conf" file and has a valid keytab/ticket.
drill@host:/opt/drill/bin> drill-conf
Error: Failure in connecting to Drill: org.apache.drill.exec.rpc.NonTransientRpcException: javax.security.sasl.SaslException: Authentication failed: Server requires authentication using [kerberos, plain]. Insufficient credentials? [Caused by javax.security.sasl.SaslException: Server requires authentication using [kerberos, plain]. Insufficient credentials?] (state=,code=0)
java.sql.SQLException: Failure in connecting to Drill: org.apache.drill.exec.rpc.NonTransientRpcException: javax.security.sasl.SaslException: Authentication failed: Server requires authentication using [kerberos, plain]. Insufficient credentials? [Caused by javax.security.sasl.SaslException: Server requires authentication using [kerberos, plain]. Insufficient credentials?]
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:166)
at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:72)
at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:69)
at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:143)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
at sqlline.Commands.connect(Commands.java:1083)
at sqlline.Commands.connect(Commands.java:1015)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:742)
at sqlline.SqlLine.initArgs(SqlLine.java:528)
at sqlline.SqlLine.begin(SqlLine.java:596)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.exec.rpc.NonTransientRpcException: javax.security.sasl.SaslException: Authentication failed: Server requires authentication using [kerberos, plain]. Insufficient credentials? [Caused by javax.security.sasl.SaslException: Server requires authentication using [kerberos, plain]. Insufficient credentials?]
at org.apache.drill.exec.rpc.user.UserClient.connect(UserClient.java:157)
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:432)
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:379)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:157)
... 18 more
Caused by: javax.security.sasl.SaslException: Authentication failed: Server requires authentication using [kerberos, plain]. Insufficient credentials? [Caused by javax.security.sasl.SaslException: Server requires authentication using [kerberos, plain]. Insufficient credentials?]
at org.apache.drill.exec.rpc.user.UserClient$3.mapException(UserClient.java:204)
at org.apache.drill.exec.rpc.user.UserClient$3.mapException(UserClient.java:197)
at com.google.common.util.concurrent.AbstractCheckedFuture.checkedGet(AbstractCheckedFuture.java:85)
at org.apache.drill.exec.rpc.user.UserClient.connect(UserClient.java:155)
... 21 more
Caused by: javax.security.sasl.SaslException: Server requires authentication using [kerberos, plain]. Insufficient credentials?
at org.apache.drill.exec.rpc.user.UserClient.getAuthenticatorFactory(UserClient.java:285)
at org.apache.drill.exec.rpc.user.UserClient.authenticate(UserClient.java:216)
... 22 more
apache drill 1.10.0
"this isn't your grandfather's sql"
Error from "sqlline.log": 2017-04-07 14:18:22,564 [main] WARN o.a.drill.exec.metrics.DrillMetrics - Removing old metric since name matched newly registered metric. Metric name: drill.allocator.root.used
2017-04-07 14:18:22,565 [main] WARN o.a.drill.exec.metrics.DrillMetrics - Removing old metric since name matched newly registered metric. Metric name: drill.allocator.root.peak
2017-04-07 14:18:22,615 [main] INFO o.a.drill.common.config.DrillConfig - Configuration and plugin file(s) identified in 50ms.
Base Configuration:
- jar:file:/opt/drill/jars/drill-common-1.10.0.jar!/drill-default.conf
Intermediate Configuration and Plugin files, in order of precedence:
- jar:file:/opt/drill/jars/drill-java-exec-1.10.0.jar!/drill-module.conf
- jar:file:/opt/drill/jars/drill-common-1.10.0.jar!/drill-module.conf
- jar:file:/opt/drill/jars/drill-storage-hive-core-1.10.0.jar!/drill-module.conf
- jar:file:/opt/drill/jars/drill-memory-base-1.10.0.jar!/drill-module.conf
- jar:file:/opt/drill/jars/drill-mongo-storage-1.10.0.jar!/drill-module.conf
- jar:file:/opt/drill/jars/drill-jdbc-storage-1.10.0.jar!/drill-module.conf
- jar:file:/opt/drill/jars/drill-logical-1.10.0.jar!/drill-module.conf
- jar:file:/opt/drill/jars/drill-storage-hbase-1.10.0.jar!/drill-module.conf
- jar:file:/opt/drill/jars/drill-gis-1.10.0.jar!/drill-module.conf
- jar:file:/opt/drill/jars/drill-kudu-storage-1.10.0.jar!/drill-module.conf
- jar:file:/opt/drill/jars/drill-hive-exec-shaded-1.10.0.jar!/drill-module.conf
Override File: file:/opt/drill/conf/drill-override.conf
2017-04-07 14:18:22,625 [main] WARN o.a.drill.exec.metrics.DrillMetrics - Removing old metric since name matched newly registered metric. Metric name: drill.allocator.root.used
2017-04-07 14:18:22,625 [main] WARN o.a.drill.exec.metrics.DrillMetrics - Removing old metric since name matched newly registered metric. Metric name: drill.allocator.root.peak
2017-04-07 14:18:22,671 [main] ERROR o.a.drill.exec.client.DrillClient - Connection to host1.com:31010 failed with error javax.security.sasl.SaslException: Authentication failed: Server requires authentication using [kerberos, plain]. Insufficient credentials? [Caused by javax.security.sasl.SaslException: Server requires authentication using [kerberos, plain]. Insufficient credentials?]. Not retrying anymore
The Drill web console is working with PAM authentication; I was able to log in with the local user "drill". Here is my "drill-override.conf" file:
drill.exec: {
cluster-id: "drillbits1",
zk.connect: "host1:2181,host2:2181,host3:2181",
security: {
user.auth.enabled: true,
user.auth.impl: "pam",
user.auth.pam_profiles: [ "sudo", "login" ],
packages += "org.apache.drill.exec.rpc.user.security",
auth.mechanisms: ["KERBEROS","PLAIN"],
auth.principal: "drill/labhdp@LAB.COM",
auth.keytab: "/opt/drill/.keytab/drill.keytab"
}
}
Please help me resolve the issue.
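One thing worth trying (a sketch, not a confirmed fix): pass the authentication mechanism and the Drillbit service principal explicitly on the connection URL, using the values already present in drill-override.conf above:

/opt/drill/bin/sqlline -u "jdbc:drill:zk=host1:2181,host2:2181,host3:2181/drill/drillbits1;auth=kerberos;principal=drill/labhdp@LAB.COM"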
04-05-2017
03:12 PM
@Constantin Stanca I thought the proper way to do maintenance on a data node is to decommission it, so that the following happens:
- DataNode - safely replicates the HDFS data to other DNs
- NodeManager - stops accepting new job requests
- RegionServer - turns on drain mode
In an urgent situation, I could agree with your suggestion. However, please advise on the right approach in a scenario where you have the luxury to choose the maintenance window.
03-20-2017
07:14 PM
1 Kudo
I'm doing a Rolling Upgrade from HDP 2.4.3 to 2.5.3.0. Ambari version: 2.4.1.0. Kerberos enabled: yes. During the upgrade process I'm seeing the below message. Can someone suggest which tests I should be running?
Labels:
03-16-2017
04:52 PM
1 Kudo
Add the following into “custom logsearch-properties”:
logsearch.roles.allowed = AMBARI.ADMINISTRATOR, CLUSTER.ADMINISTRATOR, CLUSTER.USER
03-16-2017
04:48 PM
1 Kudo
Users from "Cluster User" role are not able to log into "Log Search" UI Ambari version: 2.4.2.0
Labels:
- Apache Ambari
03-14-2017
01:52 PM
@Artem Ervits Sorry, it's a typo. I see Ambari and Grafana both have the same version, but Solr still has 2.4.1. I updated Ambari to 2.4.2 a couple of days ago.
ambari-agent-2.4.2.0-136.x86_64
ambari-metrics-monitor-2.4.2.0-136.x86_64
ambari-infra-solr-2.4.1.0-22.x86_64
ambari-server-2.4.2.0-136.x86_64
ambari-metrics-grafana-2.4.2.0-136.x86_64
ambari-metrics-hadoop-sink-2.4.2.0-136.x86_64
ambari-metrics-collector-2.4.2.0-136.x86_64
ambari-infra-solr-client-2.4.1.0-22.x86_64
03-13-2017
09:05 PM
@Aravindan Vijayan I'm facing the same issue. All other dashboards are showing metrics data. I do not see any metrics errors in either the hivemetastore or hiveserver2 logs. All database properties are filled in and Hive is working fine. Ambari version: 2.4.2.0. HDP Stack: 2.4.3.0. Grafana version: 2.4.1.0-22. Datasource: Ambari Metrics
02-15-2017
09:33 PM
Yes, it seems to be related. I'll keep an eye on it. Actually, I've just opened a case with Hortonworks too.
02-10-2017
09:35 PM
@Aravindan Vijayan, please help...
02-10-2017
08:23 PM
1 Kudo
Every time I try to access Grafana and/or browse through it, I see the below error in "grafana.log":
2017/02/10 15:14:14 [middleware.go:145 initContextWithBasicAuth()] [E] Invalid Basic Auth Header: Invalid basic auth header
2017/02/10 15:14:14 [I] Completed 10.128.211.106 admin "GET /api/datasources/proxy/1/ws/v1/timeline/metrics HTTP/1.1" 401 Unauthorized 39 bytes in 43us
If I make "auth.basc=false" I'm getting below error: Connecting (POST) to myserver.com:3000/api/datasources
Http response: 403 Forbidden
Http data: {"message":"Permission denied"}
Ambari Metrics Grafana data source creation failed.
It's a new install on HDP stack 2.4.3.0 with Kerberos and SPNEGO enabled. I had a similar issue with HDP 2.3.4, but the suggested workaround is not working with 2.4.3.0: https://community.hortonworks.com/questions/59160/grafana-initcontextwithbasicauth-e-invalid-basic-a.html
Labels:
10-13-2016
06:03 PM
1 Kudo
Hi, I'm running a distcp copy from one cluster to another and getting the below error in the destination NameNode log file.
2016-10-13 12:48:09,260 WARN blockmanagement.BlockPlacementPolicy (BlockPlacementPolicyDefault.java:chooseTarget(385)) - Failed to place enough replicas, still in need of 1 to reach 2 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
In the destination cluster, I'm able to write to HDFS. Cluster health is green. I also see these .distcp.tmp* files in the destination cluster:
hdfs dfs -ls /mstr/f_opt_wifi_ap_mth_chc/
Found 24 items
-rw-r----- 3 hdfs hdfs 0 2016-10-13 11:00 /mstr/f_opt_wifi_ap_mth_chc/.distcp.tmp.attempt_1476280811282_0259_m_000000_0
-rw-r----- 3 hdfs hdfs 0 2016-10-13 11:00 /mstr/f_opt_wifi_ap_mth_chc/.distcp.tmp.attempt_1476280811282_0259_m_000001_0
-rw-r----- 3 hdfs hdfs 0 2016-10-13 11:00 /mstr/f_opt_wifi_ap_mth_chc/.distcp.tmp.attempt_1476280811282_0259_m_000002_0
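A possible next check (not a fix): confirm how much DISK-type storage the destination DataNodes actually report as available, since the warning says all required storage types are unavailable:

hdfs dfsadmin -report | grep -E 'Name:|DFS Remaining'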
Labels:
- Apache Hadoop
10-11-2016
01:43 PM
@Aravindan Vijayan That worked. Thanks for your help.
10-06-2016
10:40 PM
@Aravindan Vijayan Yes, actually our Ambari server is kerberized too.
hadoop.http.filter.initializers = org.apache.hadoop.security.AuthenticationFilterInitializer
hadoop.http.authentication.type = kerberos
10-06-2016
06:52 PM
@Aravindan Vijayan Sorry, I forgot to mention: I did restart the Ambari server and the agents on all nodes. Just now, I've restarted all Metrics Collectors and I'm still facing the same error.
root@host:ambari-agent> grep -A3 'XmlConfig("core-site.xml"' /var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
XmlConfig("core-site.xml",
conf_dir=params.ams_collector_conf_dir,
configurations=params.config['configurations']['core-site'],
configuration_attributes=params.config['configuration_attributes']['core-site'],
--
XmlConfig("core-site.xml",
conf_dir=params.hbase_conf_dir,
configurations=params.config['configurations']['core-site'],
configuration_attributes=params.config['configuration_attributes']['core-site'],
--
XmlConfig("core-site.xml",
conf_dir=params.ams_collector_conf_dir,
configurations=truncated_core_site,
configuration_attributes=params.config['configuration_attributes']['core-site'],
--
XmlConfig("core-site.xml",
conf_dir=params.hbase_conf_dir,
configurations=truncated_core_site,
configuration_attributes=params.config['configuration_attributes']['core-site'],
10-05-2016
03:24 PM
@Aravindan Vijayan Thanks for your response. Based on the JIRA, the fix is to replace the "ams.py" script. I've replaced the script in /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ and restarted Ambari Metrics; however, I'm still facing the same issue.
ambari-metrics-monitor> grep -A3 'XmlConfig("core-site.xml"' /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/ams.py
XmlConfig("core-site.xml",
conf_dir=params.ams_collector_conf_dir,
configurations=params.config['configurations']['core-site'],
configuration_attributes=params.config['configuration_attributes']['core-site'],
--
XmlConfig("core-site.xml",
conf_dir=params.hbase_conf_dir,
configurations=params.config['configurations']['core-site'],
configuration_attributes=params.config['configuration_attributes']['core-site'],
--
XmlConfig("core-site.xml",
conf_dir=params.ams_collector_conf_dir,
configurations=truncated_core_site,
configuration_attributes=params.config['configuration_attributes']['core-site'],
--
XmlConfig("core-site.xml",
conf_dir=params.hbase_conf_dir,
configurations=truncated_core_site,
configuration_attributes=params.config['configuration_attributes']['core-site'],
I've also noticed the below error in the ambari-metrics-monitor.out file.
2016-09-21 21:35:18,778 [INFO] emitter.py:91 - server: http://ip:6188/ws/v1/timeline/metrics
2016-09-21 21:35:18,780 [WARNING] emitter.py:74 - Error sending metrics to server. HTTP Error 401: Authentication required
2016-09-21 21:35:18,780 [WARNING] emitter.py:80 - Retrying after 5 ...
https://issues.apache.org/jira/browse/AMBARI-14384
Our Metrics Collector server is part of the cluster.
10-03-2016
03:05 PM
Yes, SPNEGO is enabled. I do not see any useful logs with "Debug" logging.
root@myhost:conf> grep ';level' ams-grafana.ini
# ;level = Info
;level = Debug
09-29-2016
07:33 PM
I've installed Grafana through Ambari 2.4.1.0 on HDP 2.3.4. The datasource is "AMBARI_METRICS". When I try to validate the datasource, I'm getting the below error:
2016/09/28 17:05:56 [middleware.go:145 initContextWithBasicAuth()] [E] Invalid Basic Auth Header: Invalid basic auth header
2016/09/28 17:05:56 [I] Completed 10.128.211.106 admin "GET /api/datasources/proxy/1/ws/v1/timeline/metrics/metadata HTTP/1.1" 401 Unauthorized 39 bytes in 102us
Datasource info: Url = http://hostip:6188, Access = proxy, HTTP Auth = unchecked, With Credentials = unchecked
If I change auth.basic=false, the Grafana service fails to start. We have Kerberos installed in our cluster.
Labels: