Member since: 01-21-2016 · Posts: 290 · Kudos Received: 76 · Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3209 | 01-09-2017 11:00 AM |
| | 1283 | 12-15-2016 09:48 AM |
| | 5551 | 02-03-2016 07:00 AM |
03-17-2017
09:59 AM
1 Kudo
@ARUN The whole purpose of the "balancer" utility is to help balance blocks across the DataNodes in the cluster, so it should do the job if there is no major issue at the cluster level. It is usually recommended to run the balancer periodically, during times when cluster load is expected to be lower than usual. Please also refer to the following article, which explains the importance of the balancer and the performance improvements it can bring: https://community.hortonworks.com/articles/43615/hdfs-balancer-1-100x-performance-improvement.html You can run the HDFS balancer inside a maintenance window as well as outside one. There are a few things you should keep in mind while running the balancer, as mentioned in: https://community.hortonworks.com/articles/43849/hdfs-balancer-2-configurations-cli-options.html
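As a minimal sketch (assuming a standard HDP install where the `hdfs` CLI is on the PATH), the balancer can be started from any cluster node like this:

```shell
# Run the HDFS balancer as the hdfs superuser; -threshold is the allowed
# deviation (in percent) of each DataNode's utilization from the cluster
# average. 10 is the default; lower values balance more aggressively but
# move more data.
sudo -u hdfs hdfs balancer -threshold 10
```

The articles linked above cover further CLI options (bandwidth, included/excluded nodes) for tuning a long-running balance.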
01-09-2017
11:00 AM
Hi all, luckily I was able to find the reason and fix the issue. I just followed the steps in this URL: http://lecluster.delaurent.com/kill-zombie-dead-regionservers/ Hope it helps
01-06-2017
12:39 PM
Thanks @Jay SenSharma. It really helps.
12-22-2016
06:16 AM
One quick question @Rajkumar Singh: can this property be set in the Advanced yarn-site section in Ambari? Old:
<property>
<name>yarn.resourcemanager.state-store.max-completed-applications</name>
<value>${yarn.resourcemanager.max-completed-applications}</value>
</property>
New:
<property>
<name>yarn.resourcemanager.state-store.max-completed-applications</name>
<value>1000</value>
</property>
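For reference, here is a sketch of setting the same property through Ambari's bundled configs.sh helper script instead of the UI (the Ambari host `ambari.example.com`, cluster name `mycluster`, and admin credentials are placeholders; adjust to your environment). The Advanced yarn-site section in the UI edits the same yarn-site config type:

```shell
# Set the property in the yarn-site config type via Ambari's REST helper
# script; a ResourceManager restart is still required for it to take effect.
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
  set ambari.example.com mycluster yarn-site \
  "yarn.resourcemanager.state-store.max-completed-applications" "1000"
```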
12-22-2016
03:00 AM
@ARUN As @Xi Wang responded and suggested, her response is appropriate. Please also see this doc link (assuming you use Ambari 2.1.1.0; change to the proper link if you use an earlier version): http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.1.0/bk_Ambari_Users_Guide/content/_adding_hosts_to_a_cluster.html Also, keep in mind that only new data will use the new DataNodes unless you execute the HDFS rebalance command, after which the data is distributed across all DataNodes. The default threshold is 10, but you can change it to your desired threshold. You may want to execute it during off-peak hours.
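After the new hosts are added, a quick way to confirm they registered and to trigger the rebalance (a sketch; run as the `hdfs` user on any cluster node):

```shell
# List all live DataNodes with their used/remaining capacity --
# the newly added nodes should appear here as live
sudo -u hdfs hdfs dfsadmin -report

# Rebalance existing blocks onto the new DataNodes; -threshold 10 is
# the default percentage deviation from the cluster-average utilization
sudo -u hdfs hdfs balancer -threshold 10
```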
01-05-2017
06:04 AM
@ARUN The instruction here is to disable (exclude) HBase per-region metrics to avoid data flooding. That can be done by explicitly adding the following lines to the end of the file: *.source.filter.class=org.apache.hadoop.metrics2.filter.GlobFilter
hbase.*.source.filter.exclude=*Regions*
12-15-2016
09:48 AM
Luckily I found out the cause. The property I was trying to set is phoenix.schema.isNamespaceMappingEnabled. The property name starts with phoenix., which is also the prefix for the Phoenix interpreter, so it was not being set properly on the interpreter side. The right property to set is phoenix.phoenix.schema.isNamespaceMappingEnabled, so the prefix gets parsed correctly. I found this out from the logs: JDBCInterpreter.java[open]:142) - key: phoenix, value: schema.isNamespaceMappingEnabled (before the change) JDBCInterpreter.java[open]:142) - key: phoenix, value: phoenix.schema.isNamespaceMappingEnabled (after the change) Hope this helps
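The interpreter's key parsing can be mimicked with a plain shell prefix split — this is only an illustration of why the extra phoenix. prefix is needed, not Zeppelin's actual code:

```shell
# Zeppelin's JDBC interpreter splits a property name on the first '.'
# and treats the left part as the interpreter prefix.
key="phoenix.phoenix.schema.isNamespaceMappingEnabled"
prefix="${key%%.*}"   # text before the first dot
value="${key#*.}"     # text after the first dot
echo "key: $prefix, value: $value"
# prints: key: phoenix, value: phoenix.schema.isNamespaceMappingEnabled
```

With only a single phoenix. prefix, the remaining value would be schema.isNamespaceMappingEnabled, which Phoenix does not recognize — matching the "before the change" log line above.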
12-07-2016
01:09 PM
1 Kudo
@ARUN I do not see any option on the Ambari side to do that. However, you should be able to set the ACLs via HBase itself as described in: http://hbase.apache.org/0.94/book/hbase.accesscontrol.configuration.html The "hbase:acl" table defines Access Control Lists, which help limit users' privileges on HBase tables. You must set ACLs for all users who will be responsible for create/update/delete operations in HBase, since by default everyone can access other users' tables. HBase ACLs support the following privileges: a) Read b) Write c) Create tables d) Administrator Example: 1. Start the HBase shell on the HBase Master host: hbase shell 2. Set ACLs using the HBase shell: grant '$USER', '$permissions' Ranger: You can also create Ranger policies by issuing grant/revoke commands via the hbase shell, as described on page 25 of the following slides: http://www.slideshare.net/Hadoop_Summit/securing-hadoop-with-apache-ranger
11-11-2016
01:06 PM
3 Kudos
@ARUN You can redirect console output to a file --> grep the application ID from that output file --> use the yarn command to get the job information.
#Run job
[hdfs@prodnode1 ~]$ /usr/hdp/current/hadoop-client/bin/hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples-2.7.1.2.4.2.0-258.jar pi 10 10 1>/tmp/op 2>/tmp/op &
#Grep Application ID
[hdfs@prodnode1 ~]$ grep 'Submitted application' /tmp/op |rev|cut -d' ' -f1|rev
application_1478509018160_0003
[hdfs@prodnode1 ~]$
#Get status
[hdfs@prodnode1 ~]$ yarn application -status application_1478509018160_0003
16/11/11 13:06:07 INFO impl.TimelineClientImpl: Timeline service address: http://prodnode3.openstacklocal:8188/ws/v1/timeline/
16/11/11 13:06:07 INFO client.RMProxy: Connecting to ResourceManager at prodnode3.openstacklocal/172.26.74.211:8050
Application Report :
Application-Id : application_1478509018160_0003
Application-Name : QuasiMonteCarlo
Application-Type : MAPREDUCE
User : hdfs
Queue : default
Start-Time : 1478869426329
Finish-Time : 1478869463505
Progress : 100%
State : FINISHED
Final-State : SUCCEEDED
Tracking-URL : http://prodnode3.openstacklocal:19888/jobhistory/job/job_1478509018160_0003
RPC Port : 42357
AM Host : prodnode1.openstacklocal
Aggregate Resource Allocation : 129970 MB-seconds, 228 vcore-seconds
Log Aggregation Status : SUCCEEDED
Diagnostics :
[hdfs@prodnode1 ~]$
Hope this information helps! 🙂
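The grep | rev | cut pipeline above just extracts the last whitespace-separated token from the "Submitted application" log line. A self-contained illustration (the log line here is a made-up sample of what the YARN client writes to /tmp/op):

```shell
# Sample client log line of the kind captured in /tmp/op
line="16/11/11 12:59:00 INFO impl.YarnClientImpl: Submitted application application_1478509018160_0003"

# rev|cut -d' ' -f1|rev keeps the last space-separated field of the line
echo "$line" | grep 'Submitted application' | rev | cut -d' ' -f1 | rev
# prints: application_1478509018160_0003
```

Reversing the line, taking the first field, and reversing back is a common trick to grab the last field without knowing how many fields precede it.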
11-10-2016
10:56 AM
1 Kudo
No. During the start of the MR job you may see a message like: mapreduce.MultiHfileOutputFormat: Configuring 20 reduce partitions to match current region count That is exactly the number of reducers that will be created. How many of them run in parallel depends on the MR engine configuration.