Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Views | Posted |
|---|---|
| 867 | 06-04-2025 11:36 PM |
| 1440 | 03-23-2025 05:23 AM |
| 720 | 03-17-2025 10:18 AM |
| 2592 | 03-05-2025 01:34 PM |
| 1718 | 03-03-2025 01:09 PM |
04-10-2018
01:24 PM
@Mustafa Kemal MAYUK I assume you ran the Kerberos wizard through Ambari; if so, the corresponding keytabs have already been generated, so no further action is needed there. The Zeppelin daemon needs a Kerberos principal and keytab to run in a Kerberized cluster. Have a look at the %spark interpreter settings: properties like spark.yarn.keytab and spark.yarn.principal should already be filled in. All of the login configuration is in shiro.ini; you can map local users there, restart Zeppelin, and those users will be able to log in to the Zeppelin UI. These are the default users:

```
[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...)
# check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
admin = admin, admin
user1 = user1, role1, role2
user2 = user2, role3
user3 = user3, role2
# Added user John/John
John = John, role1, role2
```

But your Spark queries won't necessarily run after logging in as one of these users; for Spark queries to run, the user also needs to exist as a local user on the Linux box. These are just default logins, which you can change yourself. For simple setups you can add more username/password entries in plain text in the [users] section; in the example above I added John = John, role1, role2 and could then log on to the Zeppelin UI as John/John.
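If you want to double-check the Zeppelin principal and keytab that the wizard created, a quick look from the Zeppelin host is enough; the keytab path below is the usual HDP default and may differ in your cluster:

```bash
# List the principals stored in the Zeppelin service keytab
# (path is the typical HDP default - adjust for your cluster)
klist -kt /etc/security/keytabs/zeppelin.server.kerberos.keytab

# Optionally authenticate with the principal shown above to prove the keytab works, e.g.:
# kinit -kt /etc/security/keytabs/zeppelin.server.kerberos.keytab <principal-from-klist>
```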
04-10-2018
09:07 AM
@Mustafa Kemal MAYUK The answer is YES, but there are trade-offs. LDAP holds the authoritative information about accounts, such as what they're allowed to access (authorization) and the user's full name and uid; it provides centralized authentication, meaning you have to log in to every service separately, but if you change your password it changes everywhere. Kerberos is used to manage credentials securely (authentication) and provides single sign-on (SSO), meaning you log in once, get a ticket, and don't need to log in again to the other services. There is a trade-off: LDAP is less convenient but simpler; Kerberos is more convenient but more complex. Secure things are rarely both simple and convenient, so there is no single right answer: if you need SSO, use Kerberos; otherwise use LDAP.
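To make the SSO point concrete, here is a minimal sketch of the Kerberos flow on a client (the user and realm are placeholders):

```bash
# Authenticate once and obtain a ticket-granting ticket (TGT)
kinit mustafa@EXAMPLE.COM

# Inspect the cached ticket
klist

# Subsequent calls to Kerberized services reuse the TGT - no further password prompts
hdfs dfs -ls /user/mustafa
```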
04-10-2018
07:58 AM
@Dinesh Jadhav The error "Server not found in Kerberos database" usually occurs when the KDC cannot find an entry for the service principal requested when connecting to the service (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER). Can you share the modified files below?
- krb5.conf
- kdc.conf
- kadm5.acl

What values do you have for these parameters?
- oozie.service.HadoopAccessorService.kerberos.enabled
- local.realm
- oozie.service.HadoopAccessorService.keytab.file
- oozie.service.HadoopAccessorService.kerberos.principal
- oozie.authentication.type
- oozie.authentication.kerberos.principal
- oozie.authentication.kerberos.name.rules

Oozie uses a JAAS configuration for its Kerberos login; can you share that as well?
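To narrow this down, a few quick checks on the KDC and Oozie hosts usually help; the keytab path below is the typical HDP default and the grep patterns are just examples:

```bash
# On the KDC host: confirm the service principals the client asks for actually exist
kadmin.local -q "listprincs" | grep -iE 'oozie|HTTP'

# On the Oozie host: confirm the keytab contains the expected principal
# (path is the typical HDP default - adjust for your cluster)
klist -kt /etc/security/keytabs/oozie.service.keytab

# "Server not found in Kerberos database" is frequently a DNS/hostname mismatch:
# the FQDN in the principal must match what the client resolves
hostname -f
```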
04-10-2018
12:16 AM
@Nikhil Vemula Please check here
04-09-2018
11:56 PM
@Nikhil Vemula Impossible! Did you choose N.Virginia? Can I upload a video to your private email?
04-09-2018
09:52 PM
@Nikhil Vemula You will first need to select the AMI; then, on the page that pops up, use the "Filter by" dropdown: click on "Current generation" and select "All generations", and you will see your m3.2xlarge with SSD (so it could be expensive). See the attached screenshot.
04-09-2018
09:30 PM
@Liana Napalkova Are the other components running? Have you increased the memory as discussed earlier? If so, can you try starting the components manually (just copy and paste)?

NameNode:

```
# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
```

DataNode:

```
# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh start datanode"
```

Could you also check the value of dfs.namenode.http-address?
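To verify the NameNode is actually reachable on the address configured in dfs.namenode.http-address, something like the sketch below works (namenode-host is a placeholder; 8020 and 50070 are the HDP defaults):

```bash
# Check whether anything is listening on the NameNode RPC and HTTP ports
ss -tlnp | grep -E ':8020|:50070'

# Probe the NameNode web UI on the configured host:port
curl -s -o /dev/null -w '%{http_code}\n' http://namenode-host:50070/
```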
04-09-2018
09:24 PM
1 Kudo
@Anurag Mishra

fs.defaultFS

fs.defaultFS makes HDFS a file-system abstraction over the cluster, so that its root is not the same as the local system's. You need to set this value in order to use the distributed file system. fs.defaultFS in core-site.xml gives the DataNodes the address of the NameNode; a DataNode looks here for the NameNode address and contacts it over RPC. Without fs.defaultFS set, the command hdfs dfs -ls / would simply list the local root filesystem, as below:

```
$ hdfs dfs -ls /
Warning: fs.defaultFS is not set when running "ls" command.
Found 21 items
dr-xr-xr-x   - root root   4096 2017-05-16 20:03 /boot
drwxr-xr-x   - root root   3040 2017-06-07 18:31 /dev
drwxr-xr-x   - root root   8192 2017-06-10 07:22 /etc
drwxr-xr-x   - root root     56 2017-06-10 07:22 /home
................
.............
drwxr-xr-x   - root root    167 2017-06-07 19:43 /usr
drwxr-xr-x   - root root   4096 2017-06-07 19:46 /var
```

dfs.namenode.http-address

This is the address of the NameNode web UI, set in the hdfs-site.xml configuration file, e.g.:

```
<property>
  <name>dfs.namenode.http-address</name>
  <value>node1.texas.us:50070</value>
  <final>true</final>
</property>
```

The NameNode HTTP server address is controlled by the configuration property dfs.namenode.http-address in hdfs-site.xml. Typically this specifies a hostname or IP address that maps to a single network interface, as above, but you can tell the server to bind to all network interfaces by setting dfs.namenode.http-bind-host to 0.0.0.0 (the wildcard address, matching all interfaces). This is the base port the NameNode web UI listens on. It is often convenient to make the NameNode HTTP server listen on all interfaces by setting it to 0.0.0.0; this requires a restart of the NameNode.

Hope that clarifies the difference for you.
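A quick way to see both values on a working cluster is to read them back from the client configuration; the hostnames in the comments are just examples:

```bash
# Print the two properties as the client sees them
hdfs getconf -confKey fs.defaultFS               # e.g. hdfs://node1.texas.us:8020
hdfs getconf -confKey dfs.namenode.http-address  # e.g. node1.texas.us:50070

# With fs.defaultFS set, an unqualified path resolves against HDFS, not the local disk
hdfs dfs -ls /

# A fully qualified URI always works, regardless of fs.defaultFS
hdfs dfs -ls hdfs://node1.texas.us:8020/
```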
04-09-2018
04:38 PM
@Nikhil Vemula Yes, that's true, you can only pick N.Virginia, N.California, and Oregon; it's odd that the images were not deployed to your region. Someone at Hortonworks should be alerted about this.
04-09-2018
04:21 PM
@Liana Napalkova Can you try starting the components manually?

```
# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
# su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-datanode/../hadoop/sbin/hadoop-daemon.sh start datanode"
```

Validate that the ports are not blocked by the firewall:

```
# iptables -nvL
```

If you don't see TCP ports 8020 and 50070, add them following this syntax:

```
# iptables -I INPUT 5 -p tcp --dport 50070 -j ACCEPT
```

Could you also restart the cluster? This looks like a bizarre case. Please revert.
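If the ports do turn out to be blocked, a hedged sketch for opening both and keeping the rules after a reboot (the save command is RHEL/CentOS 6 style; other distributions differ):

```bash
# Open the NameNode RPC (8020) and web UI (50070) ports
iptables -I INPUT 5 -p tcp --dport 8020 -j ACCEPT
iptables -I INPUT 5 -p tcp --dport 50070 -j ACCEPT

# Confirm the rules are in place
iptables -nvL | grep -E '8020|50070'

# Persist across reboots (RHEL/CentOS 6; use your distro's mechanism otherwise)
service iptables save
```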