Member since
02-04-2016
132
Posts
52
Kudos Received
7
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6229 | 07-25-2018 10:53 AM |
| | 1763 | 07-25-2018 05:15 AM |
| | 1828 | 10-03-2017 12:08 PM |
| | 3096 | 04-04-2017 05:36 AM |
| | 3335 | 11-29-2016 05:40 PM |
08-12-2018
06:15 AM
Hi @Bhushan Kandalkar It seems as if you crossed a vCore limit, not a memory limit. In YARN Queue Manager, which resource calculator are you using, Default or Dominant? If Dominant, switch to Default. In Ambari >>> Yarn >>> Configs >>> Settings, make sure CPU Scheduling & CPU Isolation are off (unless you have switched them on intentionally and are using cgroups accordingly). Also in Ambari >>> Yarn >>> Configs >>> Settings, make sure you have a sufficient "Number of virtual cores" & "Maximum Container Size" (it should be the number of cores you have in your datanodes minus 4-6 cores for the OS).
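As a rough sketch, the yarn-site.xml properties behind those Ambari settings look like the following. The values are illustrative assumptions only (e.g. a 16-core datanode keeping 4 cores for the OS), not a recommendation for any specific cluster:

```xml
<!-- Illustrative values: size these to your own datanodes. -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>12</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>12</value>
</property>
```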
08-06-2018
10:13 AM
Hi @Patrick Picard I don't know if it is still relevant for you, but as far as I know it is possible to kill LLAP queries from HDP 2.6.3 and above, using the following: 1. Identify the query which needs to be killed.
2. Using beeline, connect to LLAP. 3. Execute the command below in beeline: kill query "<hive query ID>" The command would be, for example: kill query "hive_20180104093525_f90a6496-42fc-46bb-8e4a-8edc638b193d" I haven't tried it myself, but it should do the trick.
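For reference, the beeline step could look something like this non-interactive sketch. The JDBC URL, host, and port (10500 is a common HiveServer2 Interactive default) are assumptions, and I haven't verified this against a live LLAP instance:

```
# Hypothetical one-liner; replace <llap-host> and the query ID with your own.
beeline -u "jdbc:hive2://<llap-host>:10500/" \
  -e 'kill query "hive_20180104093525_f90a6496-42fc-46bb-8e4a-8edc638b193d"'
```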
07-26-2018
05:07 AM
Please see the following:
https://community.hortonworks.com/content/supportkb/196409/error-error-unable-to-run-the-custom-hook-script-w.html
https://community.hortonworks.com/questions/161830/error-unable-to-run-the-custom-hook-script.html
07-25-2018
10:53 AM
1 Kudo
If anyone stumbles upon this error: the solution is increasing the maximum heap size of the DataNode. This error can occur if there are pauses in the JVM's garbage collection.
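In Ambari this is the "DataNode maximum Java heap size" setting (HDFS > Configs); under the hood it ends up in hadoop-env as part of HADOOP_DATANODE_OPTS. A minimal sketch, assuming a 4 GB heap is appropriate for your node (the value is purely illustrative):

```shell
# Illustrative only: raise the DataNode max heap to 4 GB.
# Manage this through Ambari on an Ambari-managed cluster rather than
# editing hadoop-env.sh by hand.
export HADOOP_DATANODE_OPTS="-Xmx4096m ${HADOOP_DATANODE_OPTS}"
```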
07-25-2018
05:37 AM
If the host's state shows "Decommissioning", it means blocks are being replicated to other nodes. Be patient, it takes time. When finished, the state will change to "Decommissioned".
07-25-2018
05:15 AM
If this is an Ambari-managed cluster, then decommissioning is done using the Ambari UI. Go to the Hosts page >>> find and click the FQDN of the host on which the DataNode component resides >>> using the Actions control, click Selected Hosts > DataNodes > Decommission. Do the same to decommission the NodeManager. Decommissioning might take anywhere from minutes to hours, depending on how much data is replicated to the other nodes. You can still keep other services running on that node.
07-24-2018
09:31 AM
Hi @Abhinav Phutela Thank you for taking the time to respond. The issue at hand is not produced by manual operations, so I have no control over opening or closing connections. It seems to be an issue with this specific datanode under normal workload. I do not receive this Ambari alert in other clusters or on other hosts, just this one. It is definitely an error, because it is logged as such in /var/log/hadoop/hdfs/hadoop-hdfs-datanode-<dn_name>-drp.log on that specific datanode: "ERROR DefaultPromise.rejectedExecution (Slf4JLogger.java:error(181)) - Failed to submit a listener notification task. Event loop shut down?" The node resides in the same rack, using the same switch, as other nodes which don't have this issue. Could this be an issue with a faulty NIC, maybe? Adi
07-24-2018
06:45 AM
Hello We have a testing HDP 2.6 cluster, and we receive an Ambari alert regarding the connection to a specific DataNode web UI (port 50075) several times a day (between 20 and 50 times), even when the node is idle or under minimal workload. The alert is:
DataNode Web UI
Connection failed to http://<DN_NAME>:50075
([Errno 104] Connection reset by peer)
I restarted the DataNode service, and even the entire host, but the problem remains. The only clue regarding this issue appears in the datanode's log, in perfect correlation with the alert (in /var/log/hadoop/hdfs/hadoop-hdfs-datanode-<dn_name>-drp.log): "ERROR DefaultPromise.rejectedExecution (Slf4JLogger.java:error(181)) - Failed to submit a listener notification task. Event loop shut down?" I googled, of course, but haven't found any relevant info about this error. The /var/log/messages of the server is error-free. Any ideas what can cause this intermittent behavior? Thanks in advance Adi
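One way to confirm the correlation is to extract the timestamps of the error from the DataNode log and line them up against the Ambari alert times. A minimal sketch; the sample log written to /tmp below is a fabricated stand-in for the real hadoop-hdfs-datanode-<dn_name>-drp.log, and the timestamp layout is an assumption about the log format:

```shell
# Build a tiny sample log (stand-in for the real DataNode log file).
cat > /tmp/dn-sample.log <<'EOF'
2018-07-24 06:01:12 INFO DataNode: heartbeat ok
2018-07-24 06:02:40 ERROR DefaultPromise.rejectedExecution (Slf4JLogger.java:error(181)) - Failed to submit a listener notification task. Event loop shut down?
2018-07-24 06:05:03 ERROR DefaultPromise.rejectedExecution (Slf4JLogger.java:error(181)) - Failed to submit a listener notification task. Event loop shut down?
EOF

# Pull the date and time of each occurrence, to compare with alert times.
grep 'Failed to submit a listener notification task' /tmp/dn-sample.log \
  | cut -d' ' -f1,2
```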
Labels:
- Apache Ambari
- Apache Hadoop
07-10-2018
06:33 AM
Hello After installing & configuring Atlas, I've successfully imported all Hive entities into Atlas using import-hive.sh, and they all now appear in Atlas. Maybe I missed something trivial, but what happens with new Hive entities? How exactly should new Hive entities such as DBs & tables sync to Atlas? Thanks in advance Adi
Labels:
- Apache Atlas
- Apache Hive
07-08-2018
11:31 AM
@Ilia K What maintenance is needed? You can use user mapping in order to set which user will use which queue.
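For illustration, such a mapping lives in capacity-scheduler.xml under yarn.scheduler.capacity.queue-mappings. The user, group, and queue names below are hypothetical examples, not taken from any real cluster:

```xml
<property>
  <name>yarn.scheduler.capacity.queue-mappings</name>
  <!-- u:<user>:<queue> maps a user, g:<group>:<queue> maps a group. -->
  <value>u:alice:analytics,g:etl-users:etl</value>
</property>
```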