Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
07-04-2017
10:19 AM
@Jay SenSharma No, I do not have a Kerberized environment. What steps should be taken in that case? The tez-site.xml has the following entry:

<property>
  <name>tez.am.view-acls</name>
  <value>*</value>
</property>
06-08-2017
07:32 AM
Thanks Jay
06-07-2017
02:43 PM
@Rohit Sharma
As per the release notes of Ambari 2.5, the Zeppelin View has been removed from Ambari, so you cannot use it; you will have to use the Zeppelin UI instead. If you are on Ambari 2.4, you can try using a lower version of the JDK to avoid the issue you reported, because the Zeppelin View does not work with JDK 1.8.0_91+: https://issues.apache.org/jira/browse/AMBARI-18918
06-04-2017
06:53 AM
Hi @Chandan Kumar, can you elaborate on what you mean by "loading file"? What exactly are you doing?
06-01-2017
03:59 AM
1 Kudo
@John Cleveland
Good to know that your issue is resolved. A few details on hadoop.proxyuser.<USER>.hosts:

From the File View perspective: here we need to replace <USER> with the username that is actually running the Ambari Server (or the standalone Ambari View server hosting the File View). So if you are running the Ambari Server as the "root" user, then you will need to set the property for the "root" user, i.e. "hadoop.proxyuser.root.hosts". The value of this property can be a comma-separated list of addresses where the Ambari Server (or standalone view server) is running, because the view server actually sends the requests to Hadoop, so Hadoop needs to allow access from the host where the File View is running. Setting it to * means the File View (standalone Ambari View server) can be installed on any host. (In a Kerberized environment we need to replace <USER> with the Ambari Server Kerberos principal name.)

From the generic Hadoop perspective: in general, a proxy user is configured using the property "hadoop.proxyuser.$superuser.hosts" along with either or both of "hadoop.proxyuser.$superuser.groups" and "hadoop.proxyuser.$superuser.users". By specifying these in core-site.xml, a superuser named "super" can connect only from host1 and host2 to impersonate a user belonging to group1 and group2.

The following document explains it with examples: https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/Superusers.html
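As a sketch of the core-site.xml entries described above (the property names follow the linked Hadoop Superusers documentation; "super", the host names, and the group names are placeholders to substitute for your own environment):

```xml
<!-- Allow the superuser "super" to impersonate users only from these hosts -->
<property>
  <name>hadoop.proxyuser.super.hosts</name>
  <value>host1,host2</value>
</property>
<!-- Restrict impersonation to users belonging to these groups -->
<property>
  <name>hadoop.proxyuser.super.groups</name>
  <value>group1,group2</value>
</property>
```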
11-24-2017
08:04 PM
1 Kudo
The issue was that reverse DNS was not correctly configured, and adding all the hosts to /etc/hosts made it work. I still wonder why this was working before the stop, wait one week, start of the cluster VMs. A cache that was hiding the problem?
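For reference, the workaround of adding the cluster hosts to /etc/hosts takes this form (the IP addresses and hostnames below are hypothetical; list the fully qualified name before the short name so reverse lookups return the FQDN):

```
10.0.0.11  node1.example.com  node1
10.0.0.12  node2.example.com  node2
10.0.0.13  node3.example.com  node3
```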
06-12-2017
01:19 AM
Thanks @Jay SenSharma and @Paul Bere. You saved the day for me! Much appreciated!
05-28-2017
07:20 AM
@Jonathan Turner
If this helped you reach the solution, then please mark this thread as "Answered" (by clicking on the "Accept" link). This helps many HCC users quickly find the correct answer.
05-28-2017
03:37 AM
@Sadegh, you can also try to SSH to localhost, e.g.: ssh root@localhost -p 2222