Member since: 09-29-2015
Posts: 186
Kudos Received: 63
Solutions: 12
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3304 | 08-11-2017 05:27 PM
 | 2235 | 06-27-2017 10:58 PM
 | 2333 | 04-09-2017 09:43 PM
 | 3310 | 04-01-2017 02:04 AM
 | 4514 | 03-13-2017 06:35 PM
04-01-2017
02:04 AM
1 Kudo
@Karthik Shivanna Set up the Ambari server for Kerberos: https://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_Ambari_Security_Guide/content/_optional_set_up_kerberos_for_ambari_server.html And make sure all the Kerberos settings for the Tez view are in place: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.1/bk_ambari_views_guide/content/section_kerberos_setup_tez_view.html
03-23-2017
09:25 PM
2 Kudos
This applies to HDP 2.5 only. If you are seeing the same error on HDP 2.6, something else has likely failed before this stage; please check the full log.
After enabling Hive LLAP, it fails to start with:
ERROR impl.LlapZookeeperRegistryImpl: Unable to start curator PathChildrenCache. Exception: {}
org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /llap-sasl/user-hive
at org.apache.zookeeper.KeeperException.create(KeeperException.java:121) ~[zookeeper-3.4.6.2.5.0.0-1245.jar:3.4.6-1245--1]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[zookeeper-3.4.6.2.5.0.0-1245.jar:3.4.6-1245--1]
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783) ~[zookeeper-3.4.6.2.5.0.0-1245.jar:3.4.6-1245--1]
at org.apache.curator.utils.ZKPaths.mkdirs(ZKPaths.java:232) ~[curator-client-2.7.1.jar:?]
at org.apache.curator.utils.EnsurePath$InitialHelper$1.call(EnsurePath.java:148) ~[curator-client-2.7.1.jar:?]
Steps to fix:
1. /usr/hdp/current/zookeeper-server/bin/zkCli.sh -server `hostname`
2. create /llap-sasl "" sasl:hive:cdrwa,world:anyone:r
3. create /llap-sasl/user-hive "" sasl:hive:cdrwa,world:anyone:r
4. create /llap-sasl/user-hive/llap0 "" sasl:hive:cdrwa,world:anyone:r
5. create /llap-sasl/user-hive/llap0/workers "" sasl:hive:cdrwa,world:anyone:r
Note: If Kerberos is enabled:
1. su to the zookeeper user
2. kinit as the hive principal
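The steps above can also be run in a single non-interactive zkCli session; a sketch (the znode paths and SASL principal are taken from the steps above; with Kerberos enabled, su to the zookeeper user and kinit as the hive principal first):

```shell
# Sketch: apply the ACL fix in one zkCli session (paths/ACLs from the steps above).
# Requires a running ZooKeeper; with Kerberos, run after su/kinit as described above.
/usr/hdp/current/zookeeper-server/bin/zkCli.sh -server "$(hostname)" <<'EOF'
create /llap-sasl "" sasl:hive:cdrwa,world:anyone:r
create /llap-sasl/user-hive "" sasl:hive:cdrwa,world:anyone:r
create /llap-sasl/user-hive/llap0 "" sasl:hive:cdrwa,world:anyone:r
create /llap-sasl/user-hive/llap0/workers "" sasl:hive:cdrwa,world:anyone:r
getAcl /llap-sasl/user-hive/llap0/workers
EOF
```

The final getAcl is just a sanity check that the new node carries the sasl:hive ACL.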
03-13-2017
06:35 PM
2 Kudos
In zookeeper-env.sh, add the -Dzookeeper.skipACL=yes flag to the server JVM flags:
export SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Dzookeeper.skipACL=yes"
03-04-2017
01:38 AM
@swagle
Yes I can see other HBase metrics. http://hdp242-s1.openstacklocal:3000/dashboard/db/hbase-performance I do not see "regionserver.Server.ScanTime" in the metadata. ambarimetrics-metadata.txt
02-28-2017
10:13 PM
1 Kudo
Ambari: 2.2.2, HDP: 2.4.2. Why do only these graphs show "No datapoints"?
Labels:
- Apache HBase
12-23-2016
09:49 PM
1 Kudo
PROBLEM: In the Grafana UI, the following panels show only "Problem! java.lang.Exception: Invalid number of functions specified."

Under HBase - Tables:
1. NUM FLUSHES
2. NUM WRITE REQUESTS
3. NUM READ REQUESTS

Under HBase - Users:
1. Num Get Requests
2. Num Scan Next Requests

grafana.log shows:
[I] Completed X.X.X.X - "GET /ws/v1/timeline/metrics HTTP/1.1" 400 Bad Request 144 bytes in 7653us
[I] Completed X.X.X.X - "GET /ws/v1/timeline/metrics HTTP/1.1" 400 Bad Request 144 bytes in 3316us
[I] Completed X.X.X.X - "GET /ws/v1/timeline/metrics HTTP/1.1" 400 Bad Request 144 bytes in 1734us
RESOLUTION:
1. Log in as the Grafana admin.
2. Set transform=none in the affected panels.
12-23-2016
02:39 AM
SYMPTOMS:
Sometimes the Hive CLI hangs and gives no response. At the same time, /var/log/hive/hivemetastore.log reports timeouts:
ERROR [<hostname>-47]: txn.TxnHandler (TxnHandler.java:getDbConn(984)) - There is a problem with a connection from the pool, retrying(rc=9): Timed out waiting for a free available connection.
(SQLState=08001, ErrorCode=0)
java.sql.SQLException: Timed out waiting for a free available connection.
ROOT CAUSE:
The Hive Metastore service is waiting for database connections to free up. At the time of the error, not enough concurrent connections were available.
RESOLUTION: (Note: this is for a MySQL database only.) To fix this problem, increase the maximum number of database connections:
1. Open /etc/my.cnf in a text editor: vi /etc/my.cnf
2. Under the [mysqld] section, add: max_connections = 250
3. Save the file and restart the mysqld service: service mysqld restart
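The edit in step 2 can also be scripted; a minimal sketch against a throwaway copy of the file (the path /tmp/my.cnf.demo and its sample contents are made up for illustration, not a real server config):

```shell
# Sketch: insert max_connections = 250 under [mysqld] if it is not already set.
# /tmp/my.cnf.demo and its contents are illustrative stand-ins for /etc/my.cnf.
CNF=/tmp/my.cnf.demo
printf '[mysqld]\ndatadir=/var/lib/mysql\n' > "$CNF"
grep -q '^max_connections' "$CNF" || sed -i '/^\[mysqld\]/a max_connections = 250' "$CNF"
cat "$CNF"
```

The grep guard makes the edit idempotent, so re-running it will not duplicate the setting.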
12-23-2016
02:32 AM
HDP Stack Version: 2.4.0
SYMPTOMS:
WARN Error while fetching metadata [{TopicMetadata for topic <topic-name> -> No partition metadata for topic <topic-name> due to kafka.common.TopicAuthorizationException}] for topic <topic-name>: class kafka.common.TopicAuthorizationException (kafka.producer.BrokerPartitionInfo)
....
ERROR Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: <topic-name> (kafka.producer.async.DefaultEventHandler)
ROOT CAUSE:
At the moment, user/group-based access cannot be used to authorize Kafka access over a non-secure channel, because the client's identity cannot be asserted over such a channel. In a non-secure environment, authorization is IP-based.
Reference: https://cwiki.apache.org/confluence/display/RANGER/Kafka+Plugin
See: Authorizing Kafka access over non-authenticated channel via Ranger
RESOLUTION:
The policy does not work unless IP addresses are configured in it; these are the addresses of the producer and consumer hosts.
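One way to check whether the IP-based policy is in effect is to produce a message from a host whose IP is listed in the policy and compare with a host that is not; a sketch using the stock console producer (the HDP install path and port 6667 are assumptions for a default HDP broker, and <broker-host>/<topic-name> are placeholders to fill in):

```shell
# Sketch: produce to the topic from a host whose IP is listed in the Ranger policy.
# <broker-host> and <topic-name> are placeholders; port 6667 is the HDP default.
echo 'test message' | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
  --broker-list <broker-host>:6667 --topic <topic-name>
```

From a non-listed host, the same command should reproduce the TopicAuthorizationException shown above.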
12-23-2016
02:14 AM
1 Kudo
Run the Hive shell with debug logging so it prints a detailed error:
hive -hiveconf hive.log.file=hivecli_tez.log -hiveconf hive.log.dir=/tmp/hivecli -hiveconf hive.execution.engine=tez -hiveconf hive.root.logger=DEBUG,DRFA
SYMPTOMS:
DEBUG [main]: amazonaws.request (AmazonHttpClient.java:handleErrorResponse(1152)) - Received error response: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: null; Status Code: 403; Error Code: 403 Forbidden; Request ID: 85BA6566D33A519B), S3 Extended Request ID: 228pqAjcCjTHo+ExpZ+86INHAhkIeE+DQoicPLkan8GDaraxsklIuHwK3f+QmjtIBzw/z5OSWaM=
WARN [main]: avro.AvroSerDe (AvroSerDe.java:determineSchemaOrReturnErrorSchema(169)) - Encountered AvroSerdeException determining schema. Returning signal schema to indicate problem org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Unable to read schema from given path: s3a://<file-path>
WORKAROUND:
01. cd /tmp
02. mkdir joda-backup
03. wget http://central.maven.org/maven2/joda-time/joda-time/2.8.1/joda-time-2.8.1.jar
04. mv /usr/hdp/current/hive/lib/joda-time-2.5.jar /tmp/joda-backup
05. mv /usr/hdp/current/hive2/lib/joda-time-2.5.jar /tmp/joda-backup
06. cp /tmp/joda-time-2.8.1.jar /usr/hdp/2.5.0.0-1245/hive/lib/
07. cp /tmp/joda-time-2.8.1.jar /usr/hdp/2.5.0.0-1245/hive2/lib/
08. unzip joda-time-2.8.1.jar -d /tmp/unzip-joda
09. cd /tmp/unzip-joda
Take a backup of hive-exec-*.jar
10. jar -uf /usr/hdp/current/hive/lib/hive-exec-1.2.1000.2.5.0.0-1245.jar ./org
11. jar -uf /usr/hdp/current/hive2/lib/hive-exec-2.1.0.2.5.0.0-1245.jar ./org
Permissions on this jar should be:
-rw-r--r--. 1 root root joda-time-2.8.1.jar
12-23-2016
02:03 AM
SYMPTOMS: The Ambari server log shows:
WARN [qtp-client-41858] ObjectGraphWalker:209 - The configured limit of 1,000 object references was reached while attempting to calculate the size of the object graph. Severe performance degradation could occur if the sizing operation continues. This can be avoided by setting the CacheManager or Cache <sizeOfPolicy> element's maxDepthExceededBehavior to "abort" or adding stop points with @IgnoreSizeOf annotations. If performance degradation is NOT an issue at the configured limit, raise the limit value using the CacheManager or Cache <sizeOfPolicy> element's maxDepth attribute. For more information, see the Ehcache configuration documentation.
WORKAROUND:
Disable the cache by setting the property below in /etc/ambari-server/conf/ambari.properties:
server.timeline.metrics.cache.disabled = true
ROOT CAUSE: See AMBARI-13517. Disabling the cache does not have an adverse effect. It was introduced in Ambari 2.1.2 as a caching layer that provides sliding-window behavior for metric requests to Ambari.