Member since: 06-14-2017
Posts: 11
Kudos Received: 1
Solutions: 0
07-07-2017
04:05 AM
Hi Jais, what is the state of the YARN application launched for the Hive query? Is it ACCEPTED, RUNNING, FINISHED, FAILED, or KILLED? If it is stuck in the ACCEPTED state, then this is a YARN configuration issue (the scheduler never granted the resources to start it). For the other states, you will need to check the logs in order to understand what is happening: those of each task, and those of the HiveServer2 instance that handled the query.
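The diagnostic above can be sketched as a small decision helper (illustrative only; `next_step` is a hypothetical name, and the state strings follow YARN's `YarnApplicationState` values):

```python
# Hypothetical helper summarizing the advice above: map a YARN application
# state (values from the YarnApplicationState enum) to the next diagnostic step.
def next_step(state: str) -> str:
    state = state.upper()
    if state == "ACCEPTED":
        # Stuck in ACCEPTED: YARN never granted containers, so look at the
        # scheduler/queue configuration rather than the logs.
        return "check YARN scheduler and queue configuration"
    if state in ("RUNNING", "FINISHED", "FAILED", "KILLED"):
        # The application actually started: the explanation is in the logs.
        return "check the task logs and the HiveServer2 log"
    return "unknown state"

print(next_step("accepted"))  # → check YARN scheduler and queue configuration
```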
07-07-2017
01:46 AM
And if your Sqoop command launches a MapReduce job, then you also need to add the IPs of all the DataNodes (that is, every host running a YARN NodeManager).
07-07-2017
01:43 AM
OK, so you are experiencing a GC issue. First, I would investigate the client side for any unclosed connections (or anything similar) that could explain the leak. Only then would I investigate the server side. We have not run into that issue, but we are not using the same CDH version, and our HBase configuration has more memory available. Good luck!
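The client-side "leak" described above is typically a connection that is opened but never closed. A minimal sketch of the two patterns (not the real HBase client API; `Connection` here is a stand-in class, with a counter standing in for server-side memory):

```python
# Illustrative only: a resource that must be closed, and a leak-free usage
# pattern via a context manager (the Python analogue of try-with-resources).
class Connection:
    open_count = 0  # class-level counter standing in for server-side memory

    def __init__(self):
        Connection.open_count += 1

    def close(self):
        Connection.open_count -= 1

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()  # always runs, even if the body raises

# Leaky: nothing ever closes the connection, so open_count keeps growing.
def leaky_query():
    conn = Connection()
    return "result"

# Safe: the 'with' block guarantees close() on every exit path.
def safe_query():
    with Connection():
        return "result"

for _ in range(3):
    leaky_query()
print(Connection.open_count)  # → 3 (three leaked connections remain open)

safe_query()
print(Connection.open_count)  # → 3 (the safe version cleaned up after itself)
```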
07-07-2017
01:27 AM
HOST and PORT should target an active HiveServer2 role. The default port is 10000. As for the host, that depends on your cluster layout.
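For illustration, here is where HOST and PORT slot into a HiveServer2 JDBC connection string (the hostname below is made up; 10000 is the default HiveServer2 port, and `hive2_jdbc_url` is just a hypothetical helper name):

```python
# Sketch of a HiveServer2 JDBC URL: jdbc:hive2://<host>:<port>/<database>
def hive2_jdbc_url(host: str, port: int = 10000, database: str = "default") -> str:
    return f"jdbc:hive2://{host}:{port}/{database}"

# Example with an assumed hostname:
print(hive2_jdbc_url("hs2-node.example.com"))
# → jdbc:hive2://hs2-node.example.com:10000/default
```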
07-07-2017
01:21 AM
Not sure what to say. The graph does not show any issue on its own. It is expected that memory usage increases over time and decreases only when the garbage collector runs (when needed, and as configured). There would be an issue only if the memory growth leads to an OutOfMemoryError, or if the GC consumes too much CPU. Are you experiencing either of those?
06-19-2017
04:39 AM
For Impala there is a parameter that limits the amount of memory each daemon can use: Impala > Impala Daemon Default Group > Resource Management > mem_limit. If you set 10 GB here, then each Impala daemon will use up to 10 GB of memory. This memory is not part of the memory allocated to YARN, so you should account for it when sizing the nodes.
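To make the sizing concern concrete, a back-of-the-envelope sketch (all figures are assumptions, including the hypothetical `os_overhead_gb` reserve): since mem_limit sits outside YARN, it must be subtracted from each node's RAM before allocating memory to the NodeManager.

```python
# Assumed figures for illustration: RAM left for YARN on a node after
# reserving Impala's mem_limit and some OS overhead.
def yarn_budget_gb(node_ram_gb: float, impala_mem_limit_gb: float,
                   os_overhead_gb: float = 4.0) -> float:
    """RAM remaining for YARN containers after Impala and OS reserves."""
    return node_ram_gb - impala_mem_limit_gb - os_overhead_gb

# Example: a 64 GB node with mem_limit = 10 GB as in the post.
print(yarn_budget_gb(64, 10))  # → 50.0
```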