Member since 04-13-2016
422 Posts
150 Kudos Received
55 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1395 | 05-23-2018 05:29 AM
| 4172 | 05-08-2018 03:06 AM
| 1224 | 02-09-2018 02:22 AM
| 2198 | 01-24-2018 08:37 PM
| 5182 | 01-24-2018 05:43 PM
01-16-2018
08:34 PM
@Dmitro Vasilenko Did you get a chance to check these URLs?
https://discuss.pivotal.io/hc/en-us/articles/217537028-How-to-update-thresholds-for-the-Ambari-alert-Ambari-Agent-Disk-Usage-
https://community.hortonworks.com/articles/27763/how-to-change-ambari-alert-threshold-values-for-di.html
01-09-2018
04:49 AM
@Sudheer Velagapudi Try setting hive.server2.logging.operation.level=EXECUTION; Below are the other values for the same parameter:
NONE: Ignore any logging.
EXECUTION: Log completion of tasks.
PERFORMANCE: Execution + Performance logs.
VERBOSE: All logs.
Hope this helps you.
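As a quick illustration, the property can also be changed just for the current session from the command line; a minimal sketch, assuming beeline and an illustrative JDBC URL (it can equally be set cluster-wide in hive-site.xml through Ambari):
# Set operation logging to VERBOSE for this session only (JDBC URL is a placeholder)
beeline -u "jdbc:hive2://hiveserver.example.com:10000" -e "set hive.server2.logging.operation.level=VERBOSE; select current_date;"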
01-09-2018
04:35 AM
@prarthana basgod As the official HBase book states: "You may need to find a sweet spot between a low number of RPCs and the memory used on the client and server. Setting the scanner caching higher will improve scanning performance most of the time, but setting it too high can have adverse effects as well: each call to next() will take longer as more data is fetched and needs to be transported to the client, and once you exceed the maximum heap the client process has available, it may terminate with an OutOfMemoryException. When the time taken to transfer the rows to the client, or to process the data on the client, exceeds the configured scanner lease threshold, you will end up receiving a lease expired error, in the form of a ScannerTimeoutException being thrown."
So rather than just avoiding the exception through configuration, it is better to lower the scanner caching on your map side, so that your mappers can process the required load within the pre-specified time interval. You can also increase the lease period (value in milliseconds; 300000 = 5 minutes):
<property>
  <name>hbase.regionserver.lease.period</name>
  <value>300000</value>
</property>
Hope this helps you.
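For reference, a minimal sketch of the client-side knob the advice above refers to, using the standard hbase.client.scanner.caching property (the value 50 is purely illustrative; per job, the same thing can be done in code with Scan.setCaching()):
<!-- Fetch fewer rows per RPC so each next() call on the mapper side finishes well within the lease period -->
<property>
  <name>hbase.client.scanner.caching</name>
  <value>50</value>
</property>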
01-06-2018
05:42 AM
1 Kudo
@Carol Elliott You can work around this by creating a Ranger policy on _dummy_database. Ranger doesn't really check that database, so just grant full access on _dummy_database to all users. I guess you are hitting the same HIVE-11498 bug which I have already experienced. Hope this helps you.
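For context, HIVE-11498 is typically hit by queries that have no FROM clause: Hive rewrites them internally against _dummy_database._dummy_table, which is why a blanket policy on _dummy_database unblocks them. A hedged way to reproduce the failure (JDBC URL is a placeholder):
# A FROM-less query like this is rewritten to read from _dummy_database._dummy_table
beeline -u "jdbc:hive2://hiveserver.example.com:10000" -e "select 1;"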
11-04-2017
09:08 PM
@Jay Patel You can see them in the NameNode UI under 'Datanode Volume Failures'. The URL for your NameNode will look like this: http://<Active Namenode FQDN>:50070/dfshealth.html#tab-datanode-volume-failures Hope this helps you.
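If you prefer the command line, the same information that backs that UI tab can be pulled from the NameNode JMX endpoint; a minimal sketch, assuming the default HTTP port (the per-DataNode details, including volume failure counts, are reported under the LiveNodes field in recent releases):
# Dump the NameNodeInfo bean and inspect the LiveNodes entry for each DataNode
curl -s 'http://<Active Namenode FQDN>:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'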
11-03-2017
03:37 AM
@vishwa Right now I don't think we can configure multiple Hive LLAP instances, since LLAP holds cached data.
10-20-2017
01:21 AM
@Dhiraj I hope you are just changing the ports here and that the service RPC address is already enabled on the cluster. If that's the case: "Changing the service RPC port settings requires a restart of the NameNodes, DataNodes and ZooKeeper Failover Controllers to take full effect. If you have a NameNode HA setup, you can restart the NameNodes one at a time, followed by a rolling restart of the remaining components, to avoid cluster downtime." Hope this helps you.
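For reference, a hedged example of what the setting looks like in hdfs-site.xml for an HA pair; the nameservice ID, NameNode ID, host and port below are purely illustrative:
<!-- Dedicated service RPC address for NameNode nn1 of nameservice "mycluster" (illustrative values) -->
<property>
  <name>dfs.namenode.servicerpc-address.mycluster.nn1</name>
  <value>nn1.example.com:8040</value>
</property>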
10-19-2017
06:37 PM
1 Kudo
@Dhiraj
Yes, it will be the same even for a Kerberos environment, and you need to have a valid hdfs keytab (and ticket) before running the command. Execute the following command on NN1:
hdfs zkfc -formatZK
This command creates a znode in ZooKeeper; the automatic failover system uses this znode for its data storage. Try this in lower environments before trying it in prod. What is the reason you need to format the znode?
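A minimal sketch of the Kerberos part, assuming the usual HDP keytab location and an illustrative principal name:
# Authenticate as hdfs from its keytab, then reformat the ZKFC znode on NN1
kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs@EXAMPLE.COM
hdfs zkfc -formatZK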
10-06-2017
05:20 AM
@Winnie Philip Check the umask setting on your Linux machines. You can also remove everything under /appl/hadoop/yarn/local/usercache/* on each node where a NodeManager runs, so that the cache is recreated with the new permissions. Hope this helps you.
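A hedged sketch of both steps on a NodeManager node, using the path from your question (stop the NodeManager before clearing the cache so it is rebuilt cleanly on restart):
# Check the effective umask for the service user
umask
# Remove the stale user cache so it is recreated with the new permissions
rm -rf /appl/hadoop/yarn/local/usercache/*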
09-30-2017
01:22 AM
@frank policano May I know which version of HDP you are using? This is a known Balancer bug, tracked as HDFS-6621, and the fix was officially released as part of Apache Hadoop 2.6.0. Since this is a bug in the Balancer itself, it is possible to run an updated version of the Balancer without upgrading your cluster.
DataNodes limit the number of threads used for balancing so as not to eat up all the resources of the cluster/datanode; this is what causes the WARN statement you're seeing. By default the number of threads is 5, and this was not configurable prior to Apache Hadoop 2.5.0. HDFS-6595 added the property dfs.datanode.balance.max.concurrent.moves to allow you to control the number of threads used for balancing. Since this is a datanode-side property, using it will require an upgrade to your cluster.
https://stackoverflow.com/questions/25222633/hadoop-balancer-command-warn-messages-threads-quota-is-exceeded
This article may also help in resolving the balancer issue by running it from the command line: https://community.hortonworks.com/questions/19694/help-with-exception-from-hdfs-balancer.html
Hope this helps you.
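For completeness, a hedged example of a balancer run once the fix is in place; the threshold and move count are illustrative, and dfs.datanode.balance.max.concurrent.moves only takes full effect when the DataNodes themselves are configured (and restarted) with a matching value:
# Run the balancer with a 10% threshold, requesting more concurrent moves per DataNode
hdfs balancer -D dfs.datanode.balance.max.concurrent.moves=10 -threshold 10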