Member since: 04-13-2017
Posts: 46
Kudos Received: 4
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 19279 | 01-11-2019 06:26 AM |
| | 11247 | 11-13-2017 11:31 AM |
| | 83833 | 11-13-2017 11:27 AM |
05-06-2019 08:14 AM
Worked for me too. Thank you.
03-15-2018 10:23 AM
Thank you, @Lars Volker. Definitely learned something new there!
02-05-2018 10:07 PM
Hi, referring to the last step: do you encounter the Permission denied error when doing the scp?

sudo scp user@cluster:/etc/hadoop/conf/* /etc/hadoop/conf

I managed to copy all the files inside /conf except for container-executor.cfg, which produces the message below in the terminal:

scp: /etc/hadoop/conf/container-executor.cfg: Permission denied
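In case it helps, here is a possible workaround I have not verified against that thread (the remote-side sudo and the /tmp staging path are my own assumptions): stage a world-readable copy of the root-owned file on the cluster host first, then scp that copy.

```
# Assumption: container-executor.cfg is root-owned with restrictive permissions
# and the remote user has sudo rights. Stage a readable copy, then pull it over.
ssh -t user@cluster "sudo cp /etc/hadoop/conf/container-executor.cfg /tmp/ && sudo chmod 644 /tmp/container-executor.cfg"
sudo scp user@cluster:/tmp/container-executor.cfg /etc/hadoop/conf/
```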
12-29-2017 09:49 AM
Sorry for the late reply; glad you got it working and that it turned out to be the right solution. Cheers!
11-13-2017 11:31 AM
I continued the resolution of this issue in another thread specific to the error:

ls: Operation category READ is not supported in state standby

The solution is marked on that thread, but the quick summary is that I needed to add the Failover Controller role to a node in my cluster, enable Automatic Failover, and then restart the cluster for it all to kick in.
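For anyone hitting the same error, a quick way to confirm which NameNode is active is the haadmin command; a minimal sketch, assuming the service IDs are nn1 and nn2 (check dfs.ha.namenodes.&lt;nameservice&gt; in hdfs-site.xml for your actual IDs):

```
# Print the HA state (active/standby) of each NameNode.
# nn1/nn2 are placeholder service IDs; substitute the ones from your hdfs-site.xml.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```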
11-13-2017 11:27 AM
As noted in the previous reply, I did not have any nodes with the Failover Controller role. Importantly, I also had not enabled Automatic Failover despite running in an HA configuration. I went ahead and added the Failover Controller role to both NameNodes, the good one and the bad one. After that, I attempted to enable Automatic Failover using the link shown in the screenshot from this post; to do that, however, I first needed to start ZooKeeper. At that point, if I recall correctly, the other NameNode was still not active, but once I restarted the entire cluster the automatic failover kicked in, making the other NameNode the active one and leaving the bad NameNode in a stopped state.
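I did all of this through the Cloudera Manager wizard, so the sketch below is only a rough command-line equivalent of those steps, not what I actually ran; the nn1/nn2 service IDs are assumptions.

```
# Rough CLI equivalent of enabling automatic failover (done via the CM wizard in my case):
# 1. With ZooKeeper up, initialize the HA state znode (run once from a NameNode host)
hdfs zkfc -formatZK
# 2. Start the ZKFC (Failover Controller) daemon on each NameNode host;
#    the exact start command depends on your Hadoop version/distribution
# 3. Verify which NameNode is now active (nn1/nn2 are placeholder service IDs)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```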
04-18-2017 05:38 PM
On the Impala dev team we do plenty of testing on machines with 16-32GB of RAM (e.g. my development machine has 32GB), so Impala definitely works with that amount of memory. It's just that at that size it's not hard to run into capacity problems once you have a reasonable number of concurrent queries, larger data sizes, or more complex queries. It sounds like the smaller memory instances may work well for your workload.
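If memory pressure does show up, one knob worth knowing about is the per-query memory limit; a minimal sketch (the 2g value and the table name are illustrative assumptions, not recommendations):

```
# Cap a single query's memory so several concurrent queries fit on a 16-32GB node.
# my_table is a hypothetical table; 2g is an illustrative value to tune for your workload.
impala-shell -q "SET MEM_LIMIT=2g; SELECT count(*) FROM my_table"
```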