Member since: 10-28-2015
Posts: 61
Kudos Received: 10
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1690 | 09-25-2017 11:22 PM
 | 6119 | 09-22-2017 08:04 PM
 | 5435 | 02-03-2017 09:28 PM
 | 3867 | 05-10-2016 05:04 AM
 | 1101 | 05-04-2016 08:22 PM
04-22-2016
06:13 AM
I haven't seen the actual source code of major compaction, but for all practical purposes I have not seen any HBase client able to perform a transaction during a major compaction.
04-21-2016
06:26 PM
Add "&" at the end of the hive-metastore start command. It will keep your metastore process running in the background even if you close the terminal.
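A minimal sketch of the idea, using sleep as a stand-in for the actual metastore start command (the hive invocation and log path below are assumptions; adapt them to your installation):

```shell
# 'sleep 60' stands in for: hive --service metastore
# nohup plus '&' detaches the process so it survives the terminal closing.
nohup sleep 60 > /tmp/metastore.log 2>&1 &
pid=$!
# kill -0 only checks that the process exists; it sends no signal.
if kill -0 "$pid" 2>/dev/null; then
  echo "running in background with PID $pid"
fi
```

With the real service, the equivalent would be `nohup hive --service metastore > metastore.log 2>&1 &`.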
04-20-2016
10:33 PM
@K. Karray Kill any stale ambari-agent processes on the affected nodes (ps -ef | grep ambari-agent). Restart the ambari-agent manually (sudo systemctl start ambari-agent). If the issue persists, share the ambari-agent logs.
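The steps above as one sequence (the pkill shortcut and the log path are assumptions; the log path is the usual Ambari default):

```shell
ps -ef | grep '[a]mbari-agent'      # find stale agent processes; [a] avoids matching grep itself
sudo pkill -f ambari-agent          # kill any stale agents
sudo systemctl start ambari-agent   # restart the agent
# If it still fails, inspect the logs (assumed default location):
sudo tail -n 50 /var/log/ambari-agent/ambari-agent.log
```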
04-20-2016
10:21 PM
2 Kudos
If an HBase table is undergoing major compaction, clients may encounter very low read/write throughput. Eventually, clients may face connection timeouts until the major compaction is over.
In the case of a minor compaction, the table remains available for reads and writes.
For more details, refer to this link
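One way to control when the heavy compaction happens is to trigger it manually from the HBase shell during a low-traffic window (the table name 't1' is just an example):

```shell
# major_compact requests a major compaction for one table;
# plain 'compact' would request a minor compaction instead.
hbase shell <<'EOF'
major_compact 't1'
EOF
```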
04-19-2016
12:07 AM
1 Kudo
Consider the following points when deciding the number of salt buckets:
- Number of region servers available
- Expected write throughput
- The HBase key itself: if it is random enough not to cause hotspots, then I would suggest pre-splitting without salting, to get better scans
Increasing salt buckets to a high number may result in slower scans (depending on table size and the scan).
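For reference, salting is set at table-creation time in Phoenix via the SALT_BUCKETS table property. A sketch (the sqlline path, ZooKeeper quorum, and table/column names are placeholders):

```shell
# Run the DDL through Phoenix's sqlline client.
sqlline.py zk1:2181 <<'SQL'
CREATE TABLE IF NOT EXISTS events (
    k VARCHAR PRIMARY KEY,
    v VARCHAR
) SALT_BUCKETS = 8;
SQL
```

A common starting point is to keep SALT_BUCKETS at or near the number of region servers.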
04-18-2016
11:11 PM
@nejm hadjmbarek Remove the space inside the connection string, and add the ZooKeeper znode to it: jdbc:phoenix:<quorum>:<port>:[zk_rootNode]. If this doesn't work, please paste the whole stack trace.
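A quick sanity check of the URL shape (the hosts, port, and znode below are placeholders; the point is that the string must contain no spaces):

```shell
url="jdbc:phoenix:zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase-unsecure"
case "$url" in
  *" "*) echo "bad: URL contains a space" ;;
  *)     echo "ok: $url" ;;
esac
```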
04-18-2016
10:47 PM
@Divya Gehlot Are you specifying start and stop keys in your scans? An open-ended scan that doesn't specify start and stop keys usually ends up as a complete table scan and hence becomes slow. As @Randy Gelhausen mentioned, an optimal rowkey design will help you specify start and stop keys.
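In the HBase shell, a bounded scan looks like this (the table name and row keys are examples):

```shell
# STARTROW is inclusive, STOPROW is exclusive, so this scans
# only the key range [user100, user200) instead of the whole table.
hbase shell <<'EOF'
scan 'mytable', {STARTROW => 'user100', STOPROW => 'user200'}
EOF
```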
04-18-2016
06:03 PM
For the old API, set mapred.reduce.tasks=N; for the new API, set mapreduce.job.reduces=N.
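Both properties can also be passed on the command line when the job's driver uses ToolRunner/GenericOptionsParser (the jar name, class name, and paths are examples):

```shell
# New-API property; for the old API use -D mapred.reduce.tasks=10 instead.
hadoop jar my-job.jar com.example.WordCount \
    -D mapreduce.job.reduces=10 \
    input/ output/
```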
12-15-2015
11:02 PM
Have you tried setting mapreduce.jobtracker.maxtasks.perjob for your Pig application?
Alternatively, you can use node labels to run your Pig job on specific nodes.
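In a Pig script, that property can be set inline with Pig's SET command (the script body and the value 100 are examples; only the property name comes from the post):

```shell
# Write a minimal Pig script that caps the task count, then run it.
cat > cap_tasks.pig <<'PIG'
SET mapreduce.jobtracker.maxtasks.perjob 100;
A = LOAD 'input' AS (line:chararray);
STORE A INTO 'output';
PIG
pig cap_tasks.pig
```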