Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Views | Posted
---|---
2448 | 04-27-2020 03:48 AM
4885 | 04-26-2020 06:18 PM
3976 | 04-26-2020 06:05 PM
3220 | 04-13-2020 08:53 PM
4926 | 03-31-2020 02:10 AM
06-26-2019
01:31 PM
@Krishna Srinivas Can you please share the exact SQL queries you executed in the Ambari DB? After deleting the HBase service from the Ambari DB and restarting Ambari, you no longer see the HBase service (that is expected, since you deleted it from the Ambari DB). But if you have preserved the HBase configs, then you can "Add Service" HBase from the Ambari UI on the same host and then apply the same configs via the UI. Is that not working for you?
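As a general note, instead of deleting services directly from the Ambari DB, the Ambari REST API can remove a service more cleanly. A hedged sketch (the host, port, cluster name, and admin:admin credentials below are placeholders, not values from this thread):

```shell
# Stop the HBase service first by setting its desired state to INSTALLED
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"RequestInfo":{"context":"Stop HBase"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  "http://<ambari-host>:8080/api/v1/clusters/<cluster-name>/services/HBASE"

# Then delete the service definition from Ambari
curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE \
  "http://<ambari-host>:8080/api/v1/clusters/<cluster-name>/services/HBASE"
```

This avoids leaving the Ambari DB in an inconsistent state, and the service can afterwards be re-added cleanly via the "Add Service" wizard.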
06-21-2019
04:02 AM
1 Kudo
@Alampally Vinith 1. Can you please run the following command on the Hive Metastore host to see if by any chance the "hive" user account is locked? # chage -l hive Example Output: # chage -l hive
Last password change : Jun 21, 2019
Password expires : never
Password inactive : never
Account expires : never
Minimum number of days between password change : 0
Maximum number of days between password change : 99999
Number of days of warning before password expires : 7

2. Are you able to switch to that user? # su - hive

If you see that the account is locked due to inactivity, then increase the number of days of inactivity after a password expires before the account is locked, by setting the INACTIVE option. Passing -1 as the INACTIVE value removes the account's inactivity period, so the user can go through the password-change process at any time in the future. # chage -I 30 hive

Note: INACTIVE=-1 disables this feature, meaning the user can change the password anytime after it expires; you can then set the inactivity according to your requirement. If you are running the Hive Metastore as some other user, then use that user name instead of "hive" in the above commands.

Please also check what the default INACTIVE value is set to on that host. Please share the output of the below command as well. # cat /etc/default/useradd Example: # cat /etc/default/useradd
# useradd defaults file
GROUP=100
HOME=/home
INACTIVE=-1
EXPIRE=
SHELL=/bin/bash
SKEL=/etc/skel
CREATE_MAIL_SPOOL=yes
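To make step 1 easier to script, the `chage -l` output can be parsed with awk. A minimal self-contained sketch: it uses a captured sample line instead of a live account so the parsing pattern can be tried anywhere; on the real host, replace the sample with the output of `chage -l hive`.

```shell
# Sample line as produced by `chage -l`; on a real host use: chage -l hive
sample="Password inactive                                       : never"

# Extract the value after the colon and trim surrounding whitespace
inactive=$(printf '%s\n' "$sample" | awk -F':' '{gsub(/^[ \t]+|[ \t]+$/, "", $2); print $2}')

if [ "$inactive" = "never" ]; then
  echo "account never goes inactive"
else
  echo "account goes inactive on: $inactive"
fi
```

The same pattern works for the "Account expires" and "Password expires" lines by changing the sample input.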
06-20-2019
01:28 PM
1 Kudo
@Gulshan Agivetova At least one problem I see in your config is causing the following error: Executable command awk ended in an error: awk: fatal: cannot open file `print $0}" for reading (No such file or directory). This is because your "Command Arguments" contain a semicolon, and in "ExecuteStreamCommand" the "Argument Delimiter" is also set to ";" (which is the default delimiter). Maybe you can try changing the "Argument Delimiter" to something else and then check whether you still get the same error.
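To illustrate the clash: the awk program itself contains a ";", so when ExecuteStreamCommand also uses ";" as the Argument Delimiter, the program text is split into two separate arguments before awk ever sees it, and awk treats the second piece as a filename. A small self-contained sketch of that splitting (the awk program below is just an example, not the exact one from your flow):

```shell
# Example awk program containing a semicolon (not the exact one from the flow)
cmd_args='{total+=1; print $0}'

# Simulate NiFi splitting the Command Arguments string on the ";" delimiter
old_ifs=$IFS
IFS=';'
set -- $cmd_args
IFS=$old_ifs

arg1=$1
arg2=$2
echo "arg1=$arg1"   # the truncated awk program: {total+=1
echo "arg2=$arg2"   # awk treats this piece as a filename:  print $0}
```

Changing the delimiter to a character that cannot appear in the awk program (e.g. "|") keeps the program intact as a single argument.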
06-20-2019
07:23 AM
@John If you see that your "/spark2-history" log folder has many old orphan files, then these are probably files left over from spark-driver failures/crashes, etc. You can check the following Spark config parameters. spark.history.fs.cleaner.enabled=true
spark.history.fs.cleaner.interval
spark.history.fs.cleaner.maxAge NOTE: However, some issues were reported for older versions of Spark where spark.history.fs.cleaner required improvements. With the https://issues.apache.org/jira/browse/SPARK-8617 fix, Spark 2.2 should function properly. Also please check whether the ownership of the files inside "/spark2-history" is set correctly; if not, set it according to your setup (the -R flag applies the change recursively): # hdfs dfs -chown -R spark:hadoop /spark2-history
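For reference, these cleaner settings usually live in spark-defaults.conf (in Ambari, under the spark2-defaults config section). A sketch with example values; the interval and maxAge values below are illustrative (they match the Spark defaults), not recommendations for your cluster:

```
# spark-defaults.conf (example values; tune retention to your needs)
spark.history.fs.cleaner.enabled=true
spark.history.fs.cleaner.interval=1d
spark.history.fs.cleaner.maxAge=7d
```

With these settings the History Server checks once a day and deletes event logs older than seven days.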
06-19-2019
10:40 PM
@Nani Bigdata Earlier you mentioned that you could not find port 10001 open, hence you were getting the "(Connection refused) (state=08S01,code=0)" error. # netstat -tnlpa | grep `cat /var/run/hive/hive-server.pid` Now, if you are currently able to see that port 10001 is listening, then you can try to log in using the default "hive" / "hive" credentials to see if it works. Ideally, any valid user who has access to the Hive DB should be able to connect via beeline. If you still face any error, then please share the complete stack trace. Also, if your current issue is different from the originally reported "(Connection refused) (state=08S01,code=0)" issue, then please open a separate HCC thread and mark this HCC thread as answered by clicking the "Accept" button on the correct/helpful answer.
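For reference, a typical beeline connection attempt against HiveServer2 on port 10001 looks like the following (the hostname is a placeholder; note that port 10001 often indicates HTTP transport mode, in which case the JDBC URL needs the extra transportMode/httpPath parameters):

```shell
# Binary transport mode (placeholder host)
beeline -u "jdbc:hive2://<hiveserver2-host>:10001/default" -n hive -p hive

# If hive.server2.transport.mode=http, the URL needs extra parameters:
beeline -u "jdbc:hive2://<hiveserver2-host>:10001/default;transportMode=http;httpPath=cliservice" -n hive -p hive
```

Check the hive.server2.transport.mode and hive.server2.thrift.http.port settings in your hive-site.xml to pick the right form.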
06-19-2019
09:38 PM
@Nani Bigdata There are a few other (better) options to find out the Active/Standby ResourceManagers. 1. Using the command line, as follows. Please replace "rm1" and "rm2" according to your "yarn.resourcemanager.ha.rm-ids" config. # su - yarn -c "yarn rmadmin -getServiceState rm1"
standby
[root@newhwx2 ~]# su - yarn -c "yarn rmadmin -getServiceState rm2"
active

2. Using Ambari API calls: # curl -s -u admin:admin -H "X-Requested-By: ambari" -X GET "http://<ambari-host>:<ambari-port>/api/v1/clusters/<cluster-name>/host_components?HostRoles/component_name=RESOURCEMANAGER&HostRoles/ha_state=ACTIVE" | grep -B1 -i "host_name" # curl -s -u admin:admin -H "X-Requested-By: ambari" -X GET "http://<ambari-host>:<ambari-port>/api/v1/clusters/<cluster-name>/host_components?HostRoles/component_name=RESOURCEMANAGER&HostRoles/ha_state=STANDBY" | grep -B1 -i "host_name"
06-18-2019
02:30 AM
1 Kudo
@Sandeep Gunda One approach may be to use the "EvaluateJsonPath" processor as follows to get the total number of results (the result is basically an array here), so we can try something like the following to store the size of the array in a new attribute: resultCount = $.result.length() Example: Then later you can read the resultCount attribute in some other processor as follows: ${resultCount}
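To see what `$.result.length()` evaluates to, consider a sample flowfile payload where "result" is an array. A crude self-contained sketch that counts the array elements the way the JsonPath expression would (grep-based counting is purely for illustration; it assumes each element carries exactly one "id" key):

```shell
# Sample payload: "result" is an array of three objects
json='{"result":[{"id":1},{"id":2},{"id":3}]}'

# $.result.length() would evaluate to 3; crudely count elements via their "id" keys
count=$(printf '%s' "$json" | grep -o '"id"' | wc -l)
count=$((count))   # strip any whitespace padding wc may add

echo "resultCount=$count"
```

In the flow itself EvaluateJsonPath does this for you; the attribute resultCount would be set to 3 for this payload.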
06-18-2019
12:11 AM
@Nani Bigdata Is there any specific reason you are looking for that particular "ActiveStandbyElector" path? Is your ResourceManager HA not functioning properly? In any case, you can find the "ActiveStandbyElector" path as follows. Example: 1). Connect to your ZooKeeper quorum using "zkCli.sh": # /usr/hdp/current/zookeeper-client/bin/zkCli.sh -server newhwx1.example.com,newhwx2.example.com,newhwx3.example.com:2181 2). List the contents of "/yarn-leader-election/yarn-cluster": [zk: newhwx1.example.com,newhwx2.example.com,newhwx3.example.com:2181(CONNECTED) 2] ls /yarn-leader-election/yarn-cluster
[ActiveBreadCrumb, ActiveStandbyElectorLock] 3). Get the "ActiveStandbyElectorLock" znode to see which ResourceManager currently holds the lock: [zk: newhwx1.example.com,newhwx2.example.com,newhwx3.example.com:2181(CONNECTED) 3] get /yarn-leader-election/yarn-cluster/ActiveStandbyElectorLock
yarn-cluster rm2
cZxid = 0x9000aee67
ctime = Mon Jun 17 03:24:12 UTC 2019
mZxid = 0x9000aee67
mtime = Mon Jun 17 03:24:12 UTC 2019
pZxid = 0x9000aee67
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x26ab8da9f808066
dataLength = 19
numChildren = 0
06-17-2019
01:01 PM
@Abhishek Rawat If you want to use Ubuntu 16, then please try the Ambari 2.7 + HDP Search 4.0 combination.
06-17-2019
12:58 PM
1 Kudo
@Abhishek Rawat I looked at the Support Matrix for Ambari 2.6 and I can see that it does not support Ubuntu 16. The same goes for HDP Search 3.0 (which is compatible with Ambari 2.6): it does not list Ubuntu 16 as a supported OS. So I guess package dependencies cannot be resolved if they are not present. https://supportmatrix.hortonworks.com/ In the above link, please click on the Ambari 2.6 version and then scroll down to see the supported HDP Search version along with the tested and certified OSes.