Member since: 09-12-2016
Posts: 39
Kudos Received: 45
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1362 | 09-20-2016 12:17 PM |
|  | 12694 | 09-19-2016 11:18 AM |
|  | 892 | 09-15-2016 09:54 AM |
|  | 1772 | 09-15-2016 07:39 AM |
09-28-2016
07:24 AM
1 Kudo
@Muthyalapaa, You can follow this link for tuning YARN: http://crazyadmins.com/tag/tuning-yarn-to-get-maximum-performance/
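As an illustration only (not taken from the linked guide), here is a hedged sketch of raising two common YARN memory settings with Ambari's configs.sh; ADMIN_PASSWORD, CLUSTER_NAME, and the 24576 MB value are placeholders you would replace with numbers derived from the guide:

```bash
# Hypothetical values for a worker node with ~32 GB of RAM; adjust to your hardware.
# Total memory YARN may hand out to containers on each NodeManager:
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD \
  -port 8080 set localhost CLUSTER_NAME yarn-site \
  yarn.nodemanager.resource.memory-mb 24576

# Largest single container YARN will allocate:
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD \
  -port 8080 set localhost CLUSTER_NAME yarn-site \
  yarn.scheduler.maximum-allocation-mb 24576
```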
09-23-2016
10:13 AM
1 Kudo
@Jasper, You can use either:
docker run -v hadoop:/hadoop --memory="8g" --name sandbox --hostname "sandbox.hortonworks.com" --privileged -d \
or:
docker run -v hadoop:/hadoop -m 8g --name sandbox --hostname "sandbox.hortonworks.com" --privileged -d \
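If you want to confirm the limit took effect, one way (a sketch, assuming the container is named sandbox as above) is:

```bash
# Prints the configured memory limit in bytes; 8589934592 corresponds to 8g.
docker inspect --format '{{.HostConfig.Memory}}' sandbox
```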
09-23-2016
10:11 AM
1 Kudo
@Akhil, OpenJDK 64-Bit Server VM, Java 1.7.0_101 is used in HDP-2.5.
09-20-2016
12:58 PM
You might have added multiple repositories in the Advanced Repository Options of the "Select Stack" step (step 4). There you should add only the single repository for the stack you are going to install through Ambari.
09-20-2016
12:55 PM
Are you using SLES 12 for the HDP-2.5 installation, or a different OS?
09-20-2016
12:40 PM
2 Kudos
@Dheeraj, We can't run a snapshot in multiple iterations, but we can use CopyTable and copy data from one time range to another. See http://hbase.apache.org/0.94/book/ops_mgt.html#copytable CopyTable is a utility that can copy part of or all of a table, either to the same cluster or to another cluster. The usage is as follows:
$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable [--starttime=X] [--endtime=Y] [--new.name=NEW] [--peer.adr=ADR] tablename
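For instance, a hedged sketch of a time-bounded copy; the table names and the epoch-millisecond timestamps are placeholders, not values from the question:

```bash
# Copy only the cells written between the two timestamps into a second
# table on the same cluster.
$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
    --starttime=1474329600000 \
    --endtime=1474416000000 \
    --new.name=usertable_backup \
    usertable
```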
09-20-2016
12:17 PM
2 Kudos
@Balkrishna, The problem is with special characters in the KAFKA service metrics file. This file is used as part of the stack_advisor calculations for the AMS split points.

The following grep for non-ASCII characters reveals the problem:

grep --color='auto' -P -n "[\x80-\xFF]" /var/lib/ambari-server/resources/common-services/AMBARI_METRICS/0.1.0/package/files/service-metrics/KAFKA.txt

It will show you the non-ASCII characters present in the file, for example:

43:��kafka.network.RequestMetrics.RequestsPerSec.request.OffsetFetch.count��
45:��kafka.network.RequestMetrics.RequestsPerSec.request.OffsetCommit.count
47:kafka.network.RequestMetrics.RequestsPerSec.request.LeaderAndIsr.1MinuteRate��

Use /var/lib/ambari-server/resources/scripts/configs.sh to get and modify values on the Ambari server:

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD -port 8080 get localhost CLUSTER_NAME ams-site

Check the values of "timeline.metrics.cluster.aggregate.splitpoints" and "timeline.metrics.host.aggregate.splitpoints" and look for special non-ASCII characters, for example:

"dfs.datanode.ReplaceBlockOpAvgTime,kafka.network.RequestMetrics.RequestsPerSec.request.JoinGroup.1MinuteRate ,master.Master.ProcessCallTime_num_ops,regionserver.Server.blockCacheEvictionCount"

Here there is a space after 1MinuteRate, which will show up as a special character in the browser through an API call such as:

http://AMBARI_SERVER_HOSTS:8080/api/v1/clusters/CLUSTER_NAME/configurations?type=ams-site

Take the latest tag from the bottom of that page and open it in the browser; you will see the special character as:

"timeline.metrics.cluster.aggregate.splitpoints":"dfs.datanode.ReplaceBlockOpAvgTime,kafka.network.RequestMetrics.RequestsPerSec.request.JoinGroup.1MinuteRate
,master.Master.ProcessCallTime_num_ops,regionserver.Server.blockCacheEvictionCount"

To resolve the issue, set the property again without the stray characters:

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD -port 8080 set localhost CLUSTER_NAME ams-site timeline.metrics.cluster.aggregate.splitpoints dfs.datanode.ReplaceBlockOpAvgTime,kafka.network.RequestMetrics.RequestsPerSec.request.JoinGroup.1MinuteRate,master.Master.ProcessCallTime_num_ops,regionserver.Server.blockCacheEvictionCount

To change the second property, use:

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD -port 8080 get localhost CLUSTER_NAME ams-site
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p ADMIN_PASSWORD -port 8080 set localhost CLUSTER_NAME ams-site timeline.metrics.host.aggregate.splitpoints EventTakeSuccessCount,cpu_idle,dfs.FSNamesystem.ExcessBlocks,dfs.datanode.ReadBlockOpNumOps,disk_total,jvm.JvmMetrics.LogError,kafka.controller.ControllerStats.LeaderElectionRateAndTimeMs.99percentile,kafka.network.RequestMetrics.RequestsPerSec.request.FetchFollower.5MinuteRate,kafka.network.RequestMetrics.RequestsPerSec.request.UpdateMetadata.1MinuteRate,kafka.server.BrokerTopicMetrics.FailedFetchRequestsPerSec.meanRate,master.AssignmentManger.ritCount,master.FileSystem.MetaHlogSplitTime_95th_percentile,mem_shared,proc_total,regionserver.Server.Append_median,regionserver.Server.Replay_95th_percentile,regionserver.Server.totalRequestCount,rpcdetailed.rpcdetailed.GetBlockLocationsAvgTime,write_bps

### Restart the Service
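One way to restart the Ambari Metrics service from the command line is sketched below; it assumes Ambari listens on port 8080, and AMBARI_SERVER_HOST, CLUSTER_NAME, and ADMIN_PASSWORD are placeholders. Restarting from the Ambari UI works just as well.

```bash
# Stop AMBARI_METRICS (state INSTALLED means "stopped")...
curl -u admin:ADMIN_PASSWORD -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop AMS"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/AMBARI_METRICS

# ...then start it again.
curl -u admin:ADMIN_PASSWORD -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Start AMS"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/AMBARI_METRICS
```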
09-20-2016
11:46 AM
Then snapshot is the best among them.
09-20-2016
10:31 AM
2 Kudos
I am getting an error in HBase while starting it in HDP. I am using HDP-2.3.2.
2016-09-17 04:47:58,238 FATAL [master:hb-qa:60000] master.HMaster: Master server abort: loaded coprocessors are: []
2016-09-17 04:47:58,239 FATAL [master:hb-qa:60000] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.hbase.TableExistsException: hbase:namespace
at org.apache.hadoop.hbase.master.handler.CreateTableHandler.prepare(CreateTableHandler.java:133)
at org.apache.hadoop.hbase.master.TableNamespaceManager.createNamespaceTable(TableNamespaceManager.java:232)
at org.apache.hadoop.hbase.master.TableNamespaceManager.start(TableNamespaceManager.java:86)
at org.apache.hadoop.hbase.master.HMaster.initNamespace(HMaster.java:1046)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:925)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:605)
at java.lang.Thread.run(Thread.java:745)
Can someone help me with this?
Labels:
- Apache HBase
09-20-2016
10:03 AM
Are you copying data within the same cluster or to a different cluster?
09-20-2016
10:02 AM
This is the simplest and best method for this. But you can use the HTable API for backup as well (for example, a custom Java application). As is always the case with Hadoop, you can write your own custom application that utilizes the public API and queries the table directly. You can do this through MapReduce jobs in order to take advantage of that framework's distributed batch processing, or through any other means of your own design. However, this approach requires a deep understanding of Hadoop development, of all the APIs, and of the performance implications of using them in your production cluster.
09-20-2016
09:31 AM
4 Kudos
@Dheeraj, An HBase snapshot is the best method for backup and disaster recovery, for example:
snapshot 'sourceTable', 'sourceTable-snapshot'
clone_snapshot 'sourceTable-snapshot', 'newTable'
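If you ever need to roll the original table itself back to the snapshot (rather than cloning it into a new table), a sketch of the recovery, using the same hypothetical table names as above, is:

```bash
# The table must be disabled before restore_snapshot can roll it back.
hbase shell <<'EOF'
disable 'sourceTable'
restore_snapshot 'sourceTable-snapshot'
enable 'sourceTable'
EOF
```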
09-20-2016
07:26 AM
2 Kudos
@muthyalapaa, You can try increasing the value of "mapreduce.map.memory.mb", though I am not certain it will solve your problem.
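For example, a hedged sketch of overriding it for a single job; the jar, main class, paths, and the 4096 MB value are placeholders, and it assumes the job driver uses ToolRunner so the -D options are picked up:

```bash
# Raise per-map-task container memory to 4 GB and keep the JVM heap at ~80% of it.
hadoop jar my-job.jar com.example.MyJob \
  -D mapreduce.map.memory.mb=4096 \
  -D mapreduce.map.java.opts=-Xmx3276m \
  /input/path /output/path
```

The same property can also be set cluster-wide in mapred-site through Ambari.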
09-19-2016
01:04 PM
2 Kudos
@AMIT, Before using any of these methods, please take a backup of the destination cluster's table using the snapshot method, so that your data on the DESTINATION cluster will not be lost. On the destination cluster:
=> hbase shell
=> snapshot "DEST_TABLE_NAME", "SNAPSHOT_DEST_TABLE_NAME"
This keeps your data safe on the destination cluster. Afterwards, if needed, you can revert the table as:
=> hbase shell
=> disable "DEST_TABLE_NAME"
=> restore_snapshot "SNAPSHOT_DEST_TABLE_NAME"
09-19-2016
11:18 AM
5 Kudos
@ARUN, Both methods, "CopyTable" and "Import/Export of a table", are good for this, but they will degrade RegionServer performance while copying. I would prefer the "Snapshot" method for backup and recovery. Note: the snapshot method will only work if both clusters run the same version of HBase (I have tried it). If the HBase versions of the two clusters are different, you can use the CopyTable method instead.

Snapshot method: go to the hbase shell and take a snapshot of the table:
=> hbase shell
=> snapshot "SOURCE_TABLE_NAME", "SNAPSHOT_TABLE_NAME"
Then export that snapshot to the other cluster:
=> bin/hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot SNAPSHOT_TABLE_NAME -copy-to hdfs://DESTINATION_CLUSTER_ACTIVE_NAMENODE_ADDRESS:8020/hbase -mappers 16
After this, restore the table on the DESTINATION cluster:
=> hbase shell
=> disable "DEST_TABLENAME"
=> restore_snapshot "SNAPSHOT_TABLE_NAME"
Done, your table will be copied.
09-15-2016
10:07 AM
1 Kudo
Add these configuration properties to the command as well:
-D mapreduce.output.fileoutputformat.compress=true
-D mapreduce.output.fileoutputformat.compress.type=BLOCK
-D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec
09-15-2016
09:54 AM
2 Kudos
@Arkaprova, Please add --compress --compression-codec org.apache.hadoop.io.compress.SnappyCodec to the command; you will then get the result in the proper format.
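A hedged example of where those flags go in a Sqoop import; the JDBC URL, credentials, table, and target directory are placeholders, not values from the original question:

```bash
sqoop import \
  --connect jdbc:mysql://db.example.com:3306/sales \
  --username sqoop_user -P \
  --table orders \
  --target-dir /user/sqoop/orders_snappy \
  --compress \
  --compression-codec org.apache.hadoop.io.compress.SnappyCodec
```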
09-15-2016
09:48 AM
If this solved your question, please accept the answer; that will close this issue.
09-15-2016
08:08 AM
Install the slave components on the slave node as well.
09-15-2016
08:08 AM
Install all master components on the master machine, which should have more than 10 GB of memory and 50 GB of disk space. On the slave node, add more disk space depending on your usage requirements; 4 GB of memory will be sufficient there.
09-15-2016
07:57 AM
Add more disk space to the slave node and more memory to the master node. Which services do you want to install on HDP?
09-15-2016
07:39 AM
4 Kudos
Hi, please follow this document for the HDP requirements: https://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.0/bk_Installing_HDP_AMB/content/_memory_requirements.html
09-14-2016
08:19 AM
2 Kudos
Looks like a credentials issue. Check that the credentials you entered are correct.