Member since: 02-29-2016
Posts: 37
Kudos Received: 48
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 27554 | 12-22-2016 12:55 PM
| 1022 | 12-22-2016 10:49 AM
12-27-2016
03:01 PM
1 Kudo
Issue: When you try to run queries from Tableau, SQuirreL SQL, etc., connecting to HiveServer2 via the JDBC/ODBC driver in a Knox environment, you might intermittently come across the above error. The error tends to surface when the query has a large number of columns in the select statement. Interestingly, the same query works fine from the Hive CLI and Beeline.
Resolution: This is a bug in Knox - https://issues.apache.org/jira/browse/KNOX-492
Log into Ambari > Knox > Config and add the below under the "Advanced Topology" section:
<service>
<role>HIVE</role>
<url>http://localhost:10001/cliservice</url>
<param>
<name>replayBufferSize</name>
<value>16</value>
</param>
</service>
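After saving the topology change and restarting Knox, you can re-test the JDBC path from the command line. Below is a minimal sketch, assuming the default topology named "default", the Knox gateway listening on port 8443 over SSL, and placeholder credentials and truststore paths:
beeline -u "jdbc:hive2://<knox-host>:8443/;ssl=true;sslTrustStore=/path/to/gateway-truststore.jks;trustStorePassword=<truststore-password>;transportMode=http;httpPath=gateway/default/hive" -n <user> -p <password> -e "select * from <some_wide_table> limit 10;"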
12-27-2016
02:32 PM
1 Kudo
When you apply an aggregate function to a column whose data type is "void", you get the above error. How do you end up with "void" as a data type? When you create a table from another table (CTAS) and hardcode NULL as column values in the base table, those columns get the data type "void". This is a bug: https://issues.apache.org/jira/browse/HIVE-11217. Consider the following example.
hive> create table test1 as select null as col1, null as col2;
Query ID = hive_20161227141826_d746f7f8-f867-4376-8b58-2e03e63c9763
Total jobs = 1
Launching Job 1 out of 1
Status: Running (Executing on YARN cluster with App id application_1481936603759_0026)
--------------------------------------------------------------------------------
VERTICES STATUS TOTAL COMPLETED RUNNING PENDING FAILED KILLED
--------------------------------------------------------------------------------
Map 1 .......... SUCCEEDED 1 1 0 0 0 0
--------------------------------------------------------------------------------
VERTICES: 01/01 [==========================>>] 100% ELAPSED TIME: 6.96 s
--------------------------------------------------------------------------------
Moving data to: hdfs://mrt1.openstacklocal:8020/apps/hive/warehouse/test1
Table default.test1 stats: [numFiles=1, numRows=1, totalSize=6, rawDataSize=5]
OK
Time taken: 8.553 seconds
hive> desc test1;
OK
col1 void
col2 void
Time taken: 0.469 seconds, Fetched: 2 row(s)
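With the table above in place, an aggregate over one of the void columns is what triggers the error (a minimal sketch; the exact error text varies with the Hive version):
hive -e "select max(col1) from test1;"   # fails because col1 has data type void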
Resolution: The fix for HIVE-11217 simply disallows the table from being created in the first place if any of its columns would be void. The fix is only available in the Hive 2 line (HSI, Hive Server Interactive) in HDP 2.5.
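Until you pick up a build with the fix, a simple way to avoid void columns in a CTAS (a hedged sketch; the table name test2 and the chosen types are just illustrative) is to cast the hard-coded NULLs to concrete types:
hive -e "create table test2 as select cast(null as string) as col1, cast(null as int) as col2;"
hive -e "describe test2;"   # col1 string, col2 int - aggregates on these columns no longer hit the error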
12-25-2016
11:03 AM
4 Kudos
Reference: a) GitHub b) AMBARI-13205
1) Install SNMP on a sandbox or test environment:
yum install net-snmp net-snmp-utils net-snmp-libs -y
2) Modify /etc/snmp/snmptrapd.conf to include "disableAuthorization yes":
# Example configuration file for snmptrapd
#
# No traps are handled by default, you must edit this file!
#
# authCommunity log,execute,net public
# traphandle SNMPv2-MIB::coldStart /usr/bin/bin/my_great_script cold
disableAuthorization yes
3) Copy the APACHE-AMBARI-MIB.txt file from the GitHub or JIRA reference above:
a) cp APACHE-AMBARI-MIB.txt /usr/share/snmp/mibs
b) chmod 777 /usr/share/snmp/mibs/APACHE-AMBARI-MIB.txt
4) Copy the snmp_mib_script.sh script from the GitHub or JIRA reference above and set the appropriate permission to execute it:
cp snmp_mib_script.sh /tmp/snmp_mib_script.sh
5) Run the below command:
nohup snmptrapd -m ALL -A -n -Lf /tmp/traps.log &
6) Invoke a test trap to ensure that snmptrapd is logging appropriately to /tmp/traps.log:
snmptrap -v 2c -c public localhost '' APACHE-AMBARI-MIB::apacheAmbariAlert alertDefinitionName s "definitionName" alertDefinitionHash s "definitionHash" alertName s "name" alertText s "text" alertState i 0 alertHost s "host" alertService s "service" alertComponent s "component"
7) Check that the alert appears in the trap file /tmp/traps.log (a quick sanity check for the MIB and the trap log is also sketched right after this list).
8) Kill the snmptrapd process from step 5 and set up the alert notification on the Ambari UI.
9) Below is the notification I set up on Ambari.
10) Run the below command and invoke a test trap again:
nohup snmptrapd -m ALL -A -n -Lf /tmp/traps.log &
snmptrap -v 2c -c public localhost '' APACHE-AMBARI-MIB::apacheAmbariAlert alertDefinitionName s "definitionName" alertDefinitionHash s "definitionHash" alertName s "name" alertText s "text" alertState i 0 alertHost s "host" alertService s "service" alertComponent s "component"
11) Test the alert by manually stopping a component. In this example the YARN NodeManager and ambari-agent were stopped, and the alert showed up in both /var/log/ambari-server/ambari-alerts.log and /tmp/traps.log.
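A quick sanity check before relying on the traps (a hedged sketch, assuming the MIB file was copied into /usr/share/snmp/mibs as in step 3):
# Verify that net-snmp can resolve the Ambari MIB object (prints the numeric OID)
snmptranslate -m ALL -IR -On apacheAmbariAlert
# Watch incoming traps while you send test traps or stop components
tail -f /tmp/traps.log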
I have attached a screenshot with the SNMP alert on the left side and the Ambari alert on the right side. Please note that the alert might sometimes take a while to show up. Hope this helps! Thanks
12-25-2016
07:33 AM
Ambari Server Performance - This alert appears when the below API call does not respond within the threshold limit configured in the alert definition. The likely reason is that the content of the API response has simply grown bigger over the releases.
curl: http://<host>:<port>/api/v1/clusters/<clustername>
Ambari alert sample:
Alert 1: (Time: 01 December 2016 19:42)
CRITICAL Ambari Server Performance
Performance Overview: Database Access (Request By Status): 1ms (OK) Database Access (Task Status Aggregation): 1ms (OK) REST API (Cluster): 16,454ms (CRITICAL)
Cluster: <clustername>
Alert 2: (Time: 02 December 2016 10:40)
Ambari Server Performance
Performance Overview: Database Access (Request By Status): 1ms (OK) Database Access (Task Status Aggregation): 1ms (OK) REST API (Cluster): 8,544ms (CRITICAL)
Cluster: <clustername>
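Before tuning the alert, it can help to measure how long the call actually takes on your cluster (a hedged sketch; substitute your Ambari host, port and admin credentials):
curl -o /dev/null -s -w "%{time_total}s\n" -u admin:admin "http://<host>:<port>/api/v1/clusters/<clustername>"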
Possible fix:
Increase the threshold limit configured in the alert definition or disable the alert. In future releases of Ambari, this alert will be modified to ask for less information.
Also, please refer to this link for more details on Ambari Alerts. Thanks!
12-23-2016
03:06 PM
2 Kudos
1. Configure the FTP client (below is a screenshot from Cyberduck).
2. Refer to the link below for the HDP stack version: http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.1.0/bk_ambari-installation/content/hdp_stack_repositories.html For example, I'm interested in this repo: http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.0.0/hdp.repo
3. Navigate to the above path manually, right-click the respective hdp.repo and select "HTTPS URL". In the screen print below I have shown the same, and you'll download the tar file over HTTPS (a command-line alternative is sketched below).
Hope this helps! Thanks
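If you prefer skipping the FTP client entirely, here is a minimal command-line sketch, assuming the same repo path referenced above is also reachable over HTTPS:
wget https://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.0.0/hdp.repo
# The tar file sits under the same directory tree; substitute its actual name:
# wget https://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.0.0/<tarball-name>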
12-23-2016
12:00 PM
@Sankar T Please read Wangda's blog if you are planning to set up YARN node labels - https://wangda.live/2016/04/16/suggestions-about-how-to-better-use-yarn-node-label/
12-22-2016
04:15 PM
1 Kudo
Labels: Apache Hive
12-22-2016
12:55 PM
3 Kudos
@Yukti Agrawal There is a chance that your job is waiting for resources to be released by other jobs running on the cluster. It's worth watching the ResourceManager UI after you submit the query, up until the application state changes to "RUNNING" - that is where most of the time is being spent.
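A quick way to confirm that from the command line (a hedged sketch; the application id is a placeholder):
# Applications that are ACCEPTED but not yet RUNNING are typically waiting on resources
yarn application -list -appStates ACCEPTED
# Check the state and progress of a specific application
yarn application -status <application_id>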
12-22-2016
11:56 AM
@Huahua Wei Refer - https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_hive-performance-tuning/content/ch_hive_llap.html
12-22-2016
10:49 AM
5 Kudos
@Phylis Ruchitric I think you are hitting this bug - https://issues.apache.org/jira/browse/AMBARI-13831 Workaround: Add the KERBEROS service via the API as below and then try installing the SmartSense service via Ambari 2.2. 1. Use curl to add the KERBEROS service. Run the below command from the Ambari server:
curl -H "X-Requested-By:ambari" -u admin:admin -i -X POST http://<ambari-server-host>:8080/api/v1/clusters/<cluster-name>/services/KERBEROS
You'll notice a Kerberos service included in the left-side pane of the Ambari dashboard page.
2. Install SmartSense following the Add Service wizard in Ambari.
You'll notice the installation progressing and completing all the way to the end.
3. Once the SmartSense installation is complete, run the below command to delete the KERBEROS service:
curl -H "X-Requested-By:ambari" -u admin:admin -i -X DELETE http://<ambari-server-host>:8080/api/v1/clusters/<cluster-name>/services/KERBEROS