Member since: 03-22-2017
Posts: 11
Kudos Received: 1
Solutions: 0
05-13-2019
06:20 PM
Please refer to this, and just change the failover provider to the one that ships with hadoop-yarn-common.jar: https://community.hortonworks.com/content/supportkb/178800/errorclass-orgapachehadoopyarnclientrequesthedging.html A minimal sketch of the change is below.
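As a minimal sketch (assuming ConfiguredRMFailoverProxyProvider is the class your hadoop-yarn-common.jar actually provides; verify against your jar before applying), the change typically goes into yarn-site.xml on the client side:

<!-- yarn-site.xml on the client: point the YARN failover provider at a class that exists on the classpath -->
<property>
  <name>yarn.client.failover-proxy-provider</name>
  <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
</property>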
04-11-2019
09:11 PM
This looks like a data skew issue, meaning your group-by key is skewed, resulting in unbalanced data between partitions. Inspect your key distribution; if the skew is real, you need to change the key or add a salt to the group-by so the data can be evenly distributed (a sketch of salting follows).
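A rough sketch of the salting idea in Spark SQL (the DataFrame df, its columns "key" and "value", and the salt factor are all made up for illustration): aggregate on (key, salt) first so hot keys spread across partitions, then merge the partial results per key.

import org.apache.spark.sql.functions._

// df is a hypothetical DataFrame with columns "key" and "value".
val saltBuckets = 16 // arbitrary; pick based on how hot the skewed keys are

val partial = df
  .withColumn("salt", (rand() * saltBuckets).cast("int")) // spread each key over N buckets
  .groupBy(col("key"), col("salt"))
  .agg(sum("value").as("partial_sum")) // first-stage aggregation per (key, salt)

val result = partial
  .groupBy(col("key"))
  .agg(sum("partial_sum").as("total")) // merge the partial results back per key

This two-stage pattern works for associative aggregations such as sums and counts; other aggregations need the second stage adapted accordingly.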
04-08-2019
04:30 PM
Nice post, we tried this and it works like a charm! We have shared the way we did it here: https://github.com/linehrr/hbase-multi-hosting Hopefully this can help others who are trying to achieve something similar.
01-10-2018
06:20 PM
Remember that when you do: val sc = new SparkContext()
you are opening a new context/session instead of altering the old one; the existing context is immutable. A short sketch is below.
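A minimal sketch of the consequence (the app name and property are just example values): configuration has to be supplied when the context is created, because an already-running context won't pick up changes.

import org.apache.spark.{SparkConf, SparkContext}

// Build the configuration up front; an existing context cannot be altered.
val conf = new SparkConf()
  .setAppName("example")              // example value
  .set("spark.executor.memory", "4g") // example property, for illustration only

val sc = new SparkContext(conf) // a brand-new context created from this conf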
01-10-2018
06:18 PM
That's probably because your own Spark session completed, and only then can you see it in the history server. The default Spark session from Zeppelin is a long-lived session that runs until you kill it.
01-10-2018
04:16 PM
I solved the same problem, and the root cause is what @Pradeep Bhadani said. The Hive shell needs access to whichever YARN container is running the Hive session process, and that container could be running anywhere in the cluster (on any node with a NodeManager), so make sure you have access to all nodes. Also check that the Hive shell client box has DNS resolution for all hostnames, because the container's node is returned as a hostname, not an IP (a quick check is sketched below).
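A quick way to verify resolution from the client box (the hostname is a placeholder; repeat for each NodeManager host in your cluster):

# nodemanager01.example.com is a placeholder hostname
getent hosts nodemanager01.example.com   # should print the IP the client resolves it to
nslookup nodemanager01.example.com       # same check, going through DNS directly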
10-29-2017
04:56 PM
tmpfs is backed by RAM anyway, so if you already need to swap out to a swap partition, you won't have any RAM left to spill to tmpfs either. tmpfs is for when you have plenty of RAM available and need to cache something fast and ephemeral: it lets you mount some amount of RAM as a filesystem, so you can use it as if it were a regular FS mount (see the example below).
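For illustration (the size and mount point are arbitrary examples):

# mount 4 GB of RAM as a filesystem at /mnt/ramdisk (both values are just examples)
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=4g tmpfs /mnt/ramdisk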
09-20-2017
06:50 AM
Restarting the Ambari agents solved mine:
service ambari-agent restart
Sometimes you have to wait a while for the Ambari server to get the update from the agent.
05-29-2017
03:55 AM
If you are using the HBase shell for scanning, you can try:
scan '<table>', CACHE => 1000
CACHE tells the scanner to fetch that many rows from the region server per call before returning, which can save lots of RPC calls.