Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2826 | 04-27-2020 03:48 AM |
| | 5499 | 04-26-2020 06:18 PM |
| | 4679 | 04-26-2020 06:05 PM |
| | 3709 | 04-13-2020 08:53 PM |
| | 5617 | 03-31-2020 02:10 AM |
06-26-2019
01:31 PM
@Krishna Srinivas Can you please share the exact SQL queries you executed in the Ambari DB? After deleting the HBase service from the Ambari DB, when you restarted Ambari you did not see the HBase service (which looks normal: you deleted it from the Ambari DB, so you do not see it). But if you have preserved the HBase configs, then you can "Add Service" HBase from the Ambari UI on the same host and apply the same configs via the UI. Isn't that working for you?
06-21-2019
04:02 AM
1 Kudo
@Alampally Vinith

1. Can you please run the following command on the host to check whether, by any chance, the "hive" user account is locked?

# chage -l hive

Example output:

# chage -l hive
Last password change : Jun 21, 2019
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7

2. Are you able to switch to that user?

# su - hive

If you see that the account is locked due to inactivity, increase the number of days of inactivity allowed after a password has expired before the account is locked, by setting the INACTIVE option. Passing -1 as the INACTIVE value removes the account's inactivity limit, so the user can go through the password-change process at any point in the future:

# chage -I 30 hive

Please read more about this: INACTIVE=-1 disables the feature entirely, meaning the user can change the password anytime after it expires. You can then set the user inactivity according to your requirement. If you are running the Hive Metastore as some other user, use that user name instead of "hive" in the above commands.

Also, please check what the default INACTIVE value is set to on that host, and share the output of the below command as well:

# cat /etc/default/useradd

Example:

# cat /etc/default/useradd
# useradd defaults file
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes
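The INACTIVE check above can be sketched in a few lines of Python. This is a hypothetical helper for illustration only — the parsing logic and sample content are assumptions, not part of any Hive or Ambari tooling:

```python
# Sketch: pull the INACTIVE setting out of /etc/default/useradd content.
# INACTIVE=-1 means the inactivity lock is disabled for new accounts.

def parse_useradd_defaults(text):
    """Return a dict of KEY=VALUE settings, skipping comments and blanks."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        settings[key] = value
    return settings

# Sample content mirroring the example output above:
sample = """\
# useradd defaults file
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes
"""

defaults = parse_useradd_defaults(sample)
print(defaults["INACTIVE"])  # -> -1
```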
06-20-2019
01:28 PM
1 Kudo
@Gulshan Agivetova At least one problem I see in your config, which is causing the following error:

Executable command awk ended in an error: awk: fatal: cannot open file `print $0}" for reading (No such file or directory)

This is because your "Command Arguments" contain semicolons, and in "ExecuteStreamCommand" the "Argument Delimiter" is also set to ";" (the default delimiter). Maybe you can try changing the "Argument Delimiter" to something else, then check whether you still get the same error.
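To illustrate the splitting behavior, here is a small Python sketch. The argument string is made up; it only mimics how a delimiter of ";" cuts an awk program that itself contains semicolons:

```python
# An illustrative "Command Arguments" value whose awk program contains ";":
arguments = '-F,;{print $1; print $0}'

# ExecuteStreamCommand splits the string on the Argument Delimiter BEFORE
# invoking the command. With the default ";", the awk program is cut in
# half, so awk treats the trailing fragment as a file name:
print(arguments.split(";"))

# With a delimiter that never appears in the script (say "|"), it survives:
arguments_alt = '-F,|{print $1; print $0}'
print(arguments_alt.split("|"))
```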
06-18-2019
02:30 AM
1 Kudo
@Sandeep Gunda One approach may be to use the "EvaluateJsonPath" processor as follows to get the total number of results (the result is basically an array here), so we can store the size of the array in a new attribute:

resultCount = $.result.length()

Later you can read the resultCount attribute in some other processor as ${resultCount}.
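What $.result.length() computes can be sketched in Python; the payload below is made up for illustration:

```python
import json

# A hypothetical flowfile payload whose "result" field is an array:
payload = '{"result": [{"id": 1}, {"id": 2}, {"id": 3}]}'

# EvaluateJsonPath with resultCount = $.result.length() extracts the size
# of that array, which here is:
result_count = len(json.loads(payload)["result"])
print(result_count)  # -> 3
```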
06-17-2019
01:01 PM
@Abhishek Rawat If you want to use Ubuntu 16, then please try the Ambari 2.7 + HDP Search 4.0 combination.
06-17-2019
12:58 PM
1 Kudo
@Abhishek Rawat I looked at the Support Matrix for Ambari 2.6 and I can see that it does not support Ubuntu 16. The same goes for HDP Search 3.0 (which is compatible with Ambari 2.6): it does not list Ubuntu 16 as a supported OS. So I guess the package dependencies cannot be resolved if they are not present. https://supportmatrix.hortonworks.com/ In the above link, please click on the Ambari 2.6 version and then scroll down to see the supported HDP Search version along with the tested and certified OSes.
06-16-2019
11:02 PM
1 Kudo
@Michael Bronson I cannot think of any specific item to check at this point. But as long as you are able to run your Hive queries without any issue and the Hive service checks are also passing, I think we should be good.
06-16-2019
10:54 PM
1 Kudo
@Michael Bronson Cleaning up the Hive scratch directory manually may not be a safe option in a multi-user environment (where multiple users might be executing Hive queries concurrently), since it could accidentally remove a scratch directory that is still in use.
06-16-2019
10:51 PM
1 Kudo
@Michael Bronson "hive.scratchdir.lock": When true, Hive holds a lock file in the scratch directory. If a Hive process dies and accidentally leaves a dangling scratchdir behind, the cleardanglingscratchdir tool will remove it. When false, no lock file is created, and therefore the cleardanglingscratchdir tool cannot remove any dangling scratch directories.

Regarding your query "second is it safe to delete the folder - /tmp/hive/hive" >>> I do not think that we should do it on our own, as the whole purpose of the following JIRA was to introduce a tool like "cleardanglingscratchdir" to safely remove the scratch contents: https://issues.apache.org/jira/browse/HIVE-13429
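As a toy model of that lock-file idea, here is a Python sketch. The directory and lock-file names are made up, and the real tool checks whether the lock is still held (e.g. via an HDFS lease), not merely whether the file exists:

```python
import os
import tempfile

# Each live Hive session holds a lock file in its scratch directory; a
# directory without one is a candidate dangling scratchdir.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "session_a"))
os.makedirs(os.path.join(root, "session_b"))

# Only session_a still holds its (illustratively named) lock file:
open(os.path.join(root, "session_a", "inuse.lck"), "w").close()

dangling = [
    d for d in sorted(os.listdir(root))
    if not os.path.exists(os.path.join(root, d, "inuse.lck"))
]
print(dangling)  # -> ['session_b']
```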
06-16-2019
10:31 PM
1 Kudo
@Michael Bronson As per the Apache Hive docs, there seem to be some parameters and tools available to deal with such an issue. Although I have not personally tested those tools, it looks like they were introduced long back to deal with a similar issue as part of https://issues.apache.org/jira/browse/HIVE-13429 For example, I see that the Hive config "hive.exec.scratchdir" points to the "/tmp/hive" dir. Can you please check and let us know what value is set for the parameter "hive.scratchdir.lock"? (If not set, the default value is "false".) Additionally, you might want to look at the "hive.server2.clear.dangling.scratchdir" and "hive.start.cleanup.scratchdir" parameters of the HiveServer2 config. Please refer to link [1] to learn more about those parameters. There is a tool "cleardanglingscratchdir" mentioned in link [2]; you may want to read more about it.

# hive --service cleardanglingscratchdir [-r] [-v] [-s scratchdir]
-r dry-run mode, which produces a list on console
-v verbose mode, which prints extra debugging information
-s if you are using a non-standard scratch directory

[1] https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.scratchdir.lock
[2] https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2#SettingUpHiveServer2-ClearDanglingScratchDirTool