Member since: 04-16-2019
Posts: 373
Kudos Received: 7
Solutions: 4

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 24016 | 10-16-2018 11:27 AM |
 | 8064 | 09-29-2018 06:59 AM |
 | 1234 | 07-17-2018 08:44 AM |
 | 6861 | 04-18-2018 08:59 AM |
09-03-2018
07:22 AM
This looks like a problem with /usr/bin/hdp-select itself: on another cluster the same command lists the HDP services, but on this one it fails with the error below:

File "/usr/bin/hdp-select", line 205
print "ERROR: Invalid package - " + name
^
SyntaxError: Missing parentheses in call to 'print'
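This SyntaxError means the hdp-select script, which is Python 2 code (print as a statement), is being run by a Python 3 interpreter, where print is a function. A quick way to confirm, as a sketch assuming a python2 binary is still installed on the host, is to check which interpreter the script resolves to and then run it under Python 2 explicitly:

```
# Which interpreter does the script ask for, and what does "python" resolve to here?
head -1 /usr/bin/hdp-select
python --version

# Run the script under Python 2 explicitly; if this lists the packages,
# the default "python" on this host points at Python 3
python2 /usr/bin/hdp-select status
```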
09-03-2018
07:01 AM
I have upgraded Ambari from 2.5.1.0 to 2.6.2.2, but I am not able to start a few services (Zeppelin, HBase, etc.). I get the error below when I restart the services:

2018-09-03 08:44:39,459 - Could not determine stack version for component zeppelin-server by calling '/usr/bin/hdp-select status zeppelin-server > /tmp/tmp9iDTwc'. Return Code: 1, Output: .
2018-09-03 08:44:39,497 - The 'zeppelin-server' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.5.0.0-1245). This is the version that will be reported.
2018-09-03 08:44:39,642 - Could not determine stack version for component zeppelin-server by calling '/usr/bin/hdp-select status zeppelin-server > /tmp/tmpoOZ1_0'. Return Code: 1, Output: .
2018-09-03 08:44:39,681 - The 'zeppelin-server' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.5.0.0-1245). This is the version that will be reported.
2018-09-03 08:44:40,949 - Getting jmx metrics from NN failed. URL: http://<namenodehost>:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem

However, the symlink and files for zeppelin-server do exist:

ls -l /usr/hdp/current | grep -i zeppelin-server
lrwxrwxrwx 1 root root 30 Sep 3 08:44 zeppelin-server -> /usr/hdp/2.5.0.0-1245/zeppelin
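Since the symlink still points at 2.5.0.0-1245, it may help to run the same check the Ambari agent runs and see what hdp-select itself reports. A minimal sketch, reusing the component name from the log above (the versions subcommand assumes a reasonably recent hdp-select build):

```
# The exact command the agent runs, plus its exit code
/usr/bin/hdp-select status zeppelin-server
echo "exit code: $?"

# Versions hdp-select knows about, and the current symlink for zeppelin
/usr/bin/hdp-select versions
ls -l /usr/hdp/current | grep -i zeppelin
```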
Labels:
- Apache Ambari
07-31-2018
06:55 AM
@Jay Kumar SenSharma Hi Jay, if a group is being synced to Ambari from LDAP, will the directory be created for all of the members in that group?
07-30-2018
02:14 PM
Is there a REST API available to fetch the YARN memory and CPU usage for the cluster?
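The ResourceManager exposes cluster-wide memory and vCore figures through its REST API. A minimal sketch, assuming the ResourceManager web port is the default 8088 and <rm-host> is a placeholder for your ResourceManager host:

```
# Cluster-wide scheduler metrics: allocatedMB/availableMB/totalMB,
# allocatedVirtualCores/availableVirtualCores/totalVirtualCores, apps, nodes
curl -s "http://<rm-host>:8088/ws/v1/cluster/metrics"

# Per-application usage (allocatedMB, allocatedVCores) for running applications
curl -s "http://<rm-host>:8088/ws/v1/cluster/apps?states=RUNNING"
```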
Labels:
- Apache Ambari
- Apache YARN
07-24-2018
07:21 AM
How can I check which user killed an application from the YARN CLI?
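One place this usually shows up is the application's diagnostics, and the ResourceManager also audit-logs kill requests. A sketch of both checks; <application_id> is a placeholder, and the ResourceManager log path depends on your installation:

```
# The Diagnostics field of a killed application normally records who killed it
yarn application -status <application_id>

# The ResourceManager log contains audit entries (USER=..., OPERATION=Kill Application Request)
grep "<application_id>" /var/log/hadoop-yarn/yarn/*resourcemanager*.log | grep -i kill
```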
Labels:
- Apache YARN
07-22-2018
06:37 AM
I am not able to connect with beeline as the user tom, but I am able to connect as hive.

As hive:

!connect jdbc:hive2://noescape.c.test:10000/default;principal=hive/noescape.c.test@EXAMPLE.COM

Here I am connected as hive, but when I try as tom it does not connect:

!connect jdbc:hive2://noescape.c.test:10000/default;principal=tom@EXAMPLE.COM
Kerberos principal should have 3 parts: tom@EXAMPLE.COM

I tried a different way as well, but no luck: as root I did kinit tom, then started beeline and ran !connect jdbc:hive2://noescape.c.test:10000/default. It prompted for a username and password; I entered tom and the same password I used for kinit tom, but that way I get the error below:

Could not open client transport with JDBC Uri: jdbc:hive2://<host>:10000/default;: Peer indicated failure: Unsupported mechanism type PLAIN (state=08S01,code=0)
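For a Kerberized HiveServer2, the principal= part of the JDBC URL has to be the HiveServer2 service principal (hive/<hs2-host>@REALM), not the principal of the user who is connecting; the connecting user is picked up from the Kerberos ticket cache created by kinit. Leaving principal= out makes beeline fall back to username/password (PLAIN), which a Kerberos-only server rejects, matching the last error above. A minimal sketch, reusing the host and realm from the post:

```
# Get a ticket for the end user
kinit tom

# Keep the HiveServer2 service principal in the URL; the session runs as "tom"
beeline -u "jdbc:hive2://noescape.c.test:10000/default;principal=hive/noescape.c.test@EXAMPLE.COM"
```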
Labels:
- Apache Hive
07-17-2018
08:44 AM
I have solved the issue; the resource path was not correct. If the path is incorrect, the admin can still see the policy, but the delegate admin cannot.
07-17-2018
05:59 AM
I have created a Ranger HDFS policy for a user and made him delegate admin, but the policy is not visible to him.
Labels:
- Apache Hadoop
- Apache Ranger
07-15-2018
02:28 PM
I have installed a single-node Hadoop cluster on a machine with 150 GB of disk space, but the HDFS disk space is very small. The value of dfs.datanode.data.dir is /hadoop/hdfs/data, for which the space allocated is 2.1 GB:

du -sh /hadoop/hdfs/data
2.1G /hadoop/hdfs/data

I want to increase the size of this folder, or the HDFS space. My second question is: how did only 2.1 GB get allocated by default for HDFS?
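du -sh reports how much data is currently stored under the directory rather than a fixed allocation; HDFS capacity comes from the filesystem that hosts dfs.datanode.data.dir, minus dfs.datanode.du.reserved. A sketch of how to see where the capacity figure is coming from (standard Hadoop and Linux commands):

```
# Configured capacity, DFS used and DFS remaining as the NameNode reports them
hdfs dfsadmin -report

# Which partition /hadoop/hdfs/data lives on, and how large that partition is
df -h /hadoop/hdfs/data
```

If /hadoop/hdfs/data sits on a small partition rather than the 150 GB disk, pointing dfs.datanode.data.dir at a directory on the larger filesystem (or adding one) is the usual way to grow HDFS capacity.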
Labels:
- Apache Hadoop
07-14-2018
05:15 PM
I am installing a single-node cluster, but I am getting Permission denied (publickey,gssapi-keyex,gssapi-with-mic). However, I have set

PasswordAuthentication yes
PermitRootLogin yes

in /etc/ssh/sshd_config and restarted sshd (service sshd restart). I have done the steps below as root:

selinux=disabled
service iptables stop
service ip6tables stop
chkconfig iptables off
chkconfig ip6tables off
service ntpd start
chkconfig ntpd on
ssh-keygen -t rsa
cd .ssh
cat id_rsa >> authorized_keys
chmod 700 ~/.ssh

When I try to connect:

[root@instance-5 .ssh]# ssh host
root@104.196.221.168's password:

It still prompts for the root password, the same one it required earlier on this same host. Thanks, Anurag
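The Permission denied message followed by a password prompt suggests the publickey method is being rejected. In the steps above, the private key (id_rsa) was appended to authorized_keys; sshd expects the public key (id_rsa.pub), and it is also strict about file permissions. A minimal sketch of the usual root-to-root passwordless setup (OpenSSH defaults):

```
# Generate a key pair if one does not already exist
ssh-keygen -t rsa

# Append the PUBLIC key, id_rsa.pub (not id_rsa), to authorized_keys
cd ~/.ssh
cat id_rsa.pub >> authorized_keys

# sshd refuses keys when these permissions are too open
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

# Should now log in without a password prompt
ssh root@localhost
```

If it still prompts, ssh -v root@<host> shows which keys are offered, and /var/log/secure (on RHEL/CentOS) usually records why sshd rejected them.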