Member since: 11-07-2019
Posts: 71
Kudos Received: 1
Solutions: 0
08-31-2021
01:52 PM
Hi, do you have Apache Ranger installed? If yes, check that the right policies are added under the YARN service and that the Ranger usersync service is configured and is syncing your AD users and groups. Best Regards
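As a quick, hedged check (assuming a standard HDP layout where the usersync configuration lives under /etc/ranger/usersync/conf; adjust the path for your install), you can confirm that usersync is running and pointed at AD:

# check the usersync process is up
ps -ef | grep -i usersync | grep -v grep

# confirm the sync source and LDAP/AD settings in ranger-ugsync-site.xml
grep -A1 -E "ranger.usersync.source.impl.class|ranger.usersync.ldap.url|ranger.usersync.ldap.binddn" /etc/ranger/usersync/conf/ranger-ugsync-site.xml

If the source class is still the Unix builder instead of the LDAP/AD builder, the AD users and groups will never appear in Ranger, and any YARN policies referencing them will not apply.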
02-04-2020
04:02 PM
@asmarz,
One of our members posted a reply on how to add users in the thread where you posted a similar question later the same day.
As this is an older thread which was previously marked 'Solved', you would have a better chance of receiving a resolution by starting a new thread. This will also provide the opportunity for you to provide details specific to your environment about what you did in an attempt to add the relevant user accounts that could aid others in providing a more relevant, accurate answer to your question.
02-04-2020
03:07 PM
@asmarz,
As this is an older thread which was previously marked 'Solved', you would have a better chance of receiving a resolution by starting a new thread. This will also provide the opportunity to share details specific to your environment (for example, what happened once you added the affected user accounts with the "normal" add-user command on all nodes) that could help others provide a more relevant, accurate answer to your question.
02-03-2020
11:42 PM
1. Set the value of the dfs.webhdfs.enabled property in hdfs-site.xml to true:

   <property>
     <name>dfs.webhdfs.enabled</name>
     <value>true</value>
   </property>

2. Create an HTTP service user principal:

   kadmin: addprinc -randkey HTTP/$<Fully_Qualified_Domain_Name>@$<Realm_Name>.COM

   where:
   Fully_Qualified_Domain_Name: host where the NameNode is deployed.
   Realm_Name: name of your Kerberos realm.

3. Create a keytab file for the HTTP principal:

   kadmin: xst -norandkey -k /etc/security/spnego.service.keytab HTTP/$<Fully_Qualified_Domain_Name>

4. Verify that the keytab file and the principal are associated with the correct service:

   klist -k -t /etc/security/spnego.service.keytab

5. Add the dfs.web.authentication.kerberos.principal and dfs.web.authentication.kerberos.keytab properties to hdfs-site.xml:

   <property>
     <name>dfs.web.authentication.kerberos.principal</name>
     <value>HTTP/$<Fully_Qualified_Domain_Name>@$<Realm_Name>.COM</value>
   </property>
   <property>
     <name>dfs.web.authentication.kerberos.keytab</name>
     <value>/etc/security/spnego.service.keytab</value>
   </property>

6. Restart the NameNode and the DataNodes.
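Once the services are back up, a quick way to confirm SPNEGO-secured WebHDFS is working is a negotiate-authenticated curl call (a hedged example; <namenode_host> is a placeholder, and the NameNode HTTP port is typically 9870 on Hadoop 3.x or 50070 on older releases):

   kinit your_user@<Realm_Name>.COM
   curl -i --negotiate -u : "http://<namenode_host>:9870/webhdfs/v1/tmp?op=LISTSTATUS"

A 200 response with a JSON file listing indicates the HTTP principal and keytab are being picked up correctly.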
01-10-2020
06:04 PM
1 Kudo
@asmarz Just to validate your situation, I have spun up a single-node cluster, Tokyo (IP 192.168.0.67), and installed an edge node, Busia (IP 192.168.0.66). I will demonstrate the Spark client setup on the edge node and invoke spark-shell. First I have to configure passwordless SSH to my edge node.

Passwordless setup (on the edge node)

[root@busia ~]# mkdir .ssh
[root@busia ~]# chmod 600 .ssh/
[root@busia ~]# cd .ssh
[root@busia .ssh]# ll
total 0

Networking not set up: the master is unreachable from the edge node

[root@busia .ssh]# ping 198.168.0.67
PING 198.168.0.67 (198.168.0.67) 56(84) bytes of data.
From 198.168.0.67 icmp_seq=1 Destination Host Unreachable
From 198.168.0.67 icmp_seq=3 Destination Host Unreachable

On the master

The master hosts a single-node HDP 3.1.0 cluster; I will deploy the clients to the edge node from here.

[root@tokyo ~]# cd .ssh/
[root@tokyo .ssh]# ll
total 16
-rw------- 1 root root  396 Jan  4  2019 authorized_keys
-rw------- 1 root root 1675 Jan  4  2019 id_rsa
-rw-r--r-- 1 root root  396 Jan  4  2019 id_rsa.pub
-rw-r--r-- 1 root root  185 Jan  4  2019 known_hosts

Networking not set up: the edge node is still unreachable from the master Tokyo

[root@tokyo .ssh]# ping 198.168.0.66
PING 198.168.0.66 (198.168.0.66) 56(84) bytes of data.
From 198.168.0.66 icmp_seq=1 Destination Host Unreachable
From 198.168.0.66 icmp_seq=2 Destination Host Unreachable

Copied the id_rsa.pub key to the edge node

[root@tokyo ~]# cat .ssh/id_rsa.pub | ssh root@192.168.0.215 'cat >> .ssh/authorized_keys'
The authenticity of host '192.168.0.215 (192.168.0.215)' can't be established.
ECDSA key fingerprint is SHA256:ZhnKxkn+R3qvc+aF+Xl5S4Yp45B60mPIaPpu4f65bAM.
ECDSA key fingerprint is MD5:73:b3:5a:b4:e7:06:eb:50:6b:8a:1f:0f:d1:07:55:cf.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.215' (ECDSA) to the list of known hosts.
root@192.168.0.215's password:

Validation that passwordless SSH works

[root@tokyo ~]# ssh root@192.168.0.215
Last login: Fri Jan 10 22:36:01 2020 from 192.168.0.178
[root@busia ~]# hostname -f
busia.xxxxxx.xxx

On the single-node cluster

[root@tokyo ~]# useradd asmarz
[root@tokyo ~]# su - asmarz

On the master, as user asmarz, I can access spark-shell and execute any Spark code.

Next, add the edge node to the cluster and install the clients on it. The installed client components on the edge node can be seen in the CLI. I chose to install all the clients on the edge node just for the demo. Since I had already installed the Hive client on the edge node without any special setup, I can now launch Hive HQL against the master Tokyo from the edge node. After installing the Spark client on the edge node, I can also launch spark-shell from the edge node and run any Spark code. This demonstrates that you can create any user on the edge node and he/she can run Hive HQL, Spark SQL or Pig scripts. You will notice I didn't update the HDFS, YARN, MapReduce or Hive configurations; Ambari did that automatically during the installation by copying the correct conf files over to the edge node. The asmarz user on the edge node can also access HDFS.

Now, as user asmarz, I have launched a spark-submit job from the edge node. The launch is successful on the master Tokyo (see the ResourceManager URL), and that can be confirmed in the RM UI. This walkthrough validates that any user on the edge node can launch a job in the cluster, which poses a security problem in production, hence my earlier hint about Kerberos.
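For reference, the checks I ran from the edge node boil down to something like the following (a minimal sketch; the hostnames and user are from my demo environment, and the commands assume a standard HDP client install on the path):

[root@busia ~]# su - asmarz
[asmarz@busia ~]$ hdfs dfs -ls /user                # HDFS access from the edge node
[asmarz@busia ~]$ hive -e "show databases;"         # Hive HQL against the cluster
[asmarz@busia ~]$ spark-shell --master yarn         # interactive Spark session on YARN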
Having said that, you will notice I didn't do any special configuration after the client installation, because Ambari distributes the correct configuration for all the components, and it does that for every new component it installs; that's why Ambari is a management tool. If this walkthrough answers your question, please accept the answer and close the thread. Happy hadooping!
11-21-2019
06:41 PM
@asmarz Please try using "spark-submit" instead of "spark-shell", as described in: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/running-spark-applications/content/running_sample_spark_2_x_applications.html

Example:

# /usr/hdp/current/spark2-client/bin/spark-submit --class org.apache.spark.examples.SparkPi \
--master yarn \
--num-executors 1 \
--driver-memory 512m \
--executor-memory 512m \
--executor-cores 1 \
--deploy-mode cluster \
/usr/hdp/current/spark2-client/examples/jars/spark-examples_2.11-2.3.2.3.1.4.0-315.jar 10
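Because --deploy-mode cluster runs the driver inside YARN, the Pi result will not print to your terminal. Assuming log aggregation is enabled, you can pull the driver output using the application ID that spark-submit reports (a hedged example; <application_id> is a placeholder):

# yarn logs -applicationId <application_id> | grep "Pi is roughly"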
11-20-2019
11:42 AM
@asmarz The relevant error is:

'/usr/bin/hdp-select set oozie-client 3.1.4.0-315' returned 1. symlink target /usr/hdp/current/oozie-client for oozie already exists and it is not a symlink.

Possible cause: the symlink is broken, or it exists and points to a different location or version, so removing the symlink and reinstalling the client should resolve the issue.

Validate
Can you validate that the link exists?

Move the symlink:
mv /usr/hdp/current/oozie-client /usr/hdp/current/oozie-client_back

Recreate the symlink:
ln -s /usr/hdp/3.1.0.0-78/oozie /usr/hdp/current/oozie-client

The version in my example above is 3.1.0.0-78; it should match your exact version (3.1.4.0-315 in your case).
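Before moving anything, a quick way to see what the path currently is (a hedged sketch; on a healthy node /usr/hdp/current/oozie-client should be a symlink into /usr/hdp/<version>/oozie, not a real directory):

ls -ld /usr/hdp/current/oozie-client
hdp-select status oozie-client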
11-13-2019
09:00 AM
Could it be the noexec option?