Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
08-22-2017
08:59 PM
@John Wright I notice in your error log that it's the SSL setup on the Ambari server causing the problem.
INFO 2017-08-22 13:33:55,016 NetUtil.py:67 - Connecting to https://F.Q.D.N:8440/ca
ERROR 2017-08-22 13:33:55,093 NetUtil.py:93 - [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
ERROR 2017-08-22 13:33:55,093 NetUtil.py:94 - SSLError: Failed to connect. Please check openssl library versions. Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2017-08-22 13:33:55,095 NetUtil.py:121 - Server at https://F.Q.D.N:8440 is not reachable, sleeping for 10 seconds...
See the attached RHEL Bugzilla. Can you temporarily work around it with one of the options below?

Option 1. Set up two-way SSL between the Ambari Server and the Ambari Agents.
On the Ambari Server host, open /etc/ambari-server/conf/ambari.properties with a text editor and add the following property:

security.server.two_way_ssl = true

Then restart the Ambari Server:

# ambari-server restart

The Agent certificates are downloaded automatically during Agent registration.

Option 2. Disable HTTPS.
If SSL is enabled, you will get the option to disable it when you run the tool below:
# ambari-server setup-security
Using python /usr/bin/python2.6
Security setup options...
===========================================================================
Choose one of the following options:
[1] Enable HTTPS for Ambari server.
[2] Encrypt passwords stored in ambari.properties file.
[3] Setup Ambari kerberos JAAS configuration.
[4] Setup truststore.
[5] Import certificate to truststore.
===========================================================================
After disabling HTTPS, the port defaults back to 8080. Let me know how it goes.
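Since the agent error points at the openssl library itself, it may also be worth confirming the openssl version on the failing agent host before changing anything on the server side. A minimal check, assuming a RHEL/CentOS host:

# rpm -qa | grep -i openssl
# openssl version

If the versions fall in the range described in the Bugzilla above, a plain yum update openssl on the agent hosts may be enough on its own.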
08-22-2017
08:10 AM
@sachin gupta Then change it to USER_1 and GROUP_1 and retest.
08-22-2017
05:25 AM
1 Kudo
@Amithesh Merugu It seems there is some confusion here: is F: a Linux directory? That designation looks like a Windows drive. If so, first copy the file with WinSCP/FileZilla to /tmp on the sandbox or Linux box.

As root, switch to the hdfs user:

# su - hdfs

Make sure the permissions are correct on /user/maria_dev and check the owner:

$ hdfs dfs -ls /user/maria_dev

The output should show maria_dev as the owner, e.g.

drwxr-xr-x - maria_dev hdfs 0 2017-08-08 23:55 /user/maria_dev

Now copy the file to the HDFS directory as the hdfs user:

$ hdfs dfs -copyFromLocal /tmp/sparktest.jar /user/maria_dev

The file should now be available in HDFS and you can list it:

$ hdfs dfs -ls /user/maria_dev

Now you can run your Spark job and watch its progress in the YARN UI (choose Running in the left pane): http://ambari_host:8088/cluster

Hope that helps
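One extra note on the submit step: a typical invocation against YARN looks roughly like the sketch below. The main class com.example.SparkTest is only a placeholder for your own class, and the jar path is the local copy under /tmp:

$ spark-submit --master yarn --deploy-mode client --class com.example.SparkTest /tmp/sparktest.jar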
08-21-2017
05:43 PM
@John Wright The easiest way to resolve this issue is to install ambari-agent on all the hosts, including the Ambari host:

# yum install -y ambari-agent

Then edit the agent configuration file:

# vi /etc/ambari-agent/conf/ambari-agent.ini

Look for the first entry below and make sure hostname is set to the FQDN of your Ambari server:

[server]
hostname={your_ambari_server_host}
url_port=8440
secured_url_port=8441

Then start the agents on all the servers:

# ambari-agent start

Now go back to your Ambari UI. In Target Hosts, add the FQDNs of all hosts in your cluster, including the Ambari server. In Host Registration Information, select "Perform manual registration on hosts and do not use SSH". Click Next and all hosts should register successfully. Voila!
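If you have many hosts, a quick loop run as root from the Ambari server saves some typing. This is just a rough sketch, assuming passwordless ssh as root and a hosts.txt file listing one agent FQDN per line (neither is part of the original steps), with ambari-server.example.com standing in for your real Ambari server FQDN:

for h in $(cat hosts.txt); do
  ssh root@$h "yum install -y ambari-agent &&
    sed -i 's/^hostname=.*/hostname=ambari-server.example.com/' /etc/ambari-agent/conf/ambari-agent.ini &&
    ambari-agent start"
done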
Have a look at this HCC thread too by Jay
08-21-2017
11:56 AM
@sachin gupta I have seen your attached kms-acls.xml. Have you changed the values? If so, can you copy and paste the specific entry below?

<property>
  <name>hadoop.kms.acl.DECRYPT_EEK</name>
  <value>*</value>
  <description>
    ACL for decryptEncryptedKey CryptoExtension operations.
  </description>
</property>
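A quick way to confirm the ACL behaves as expected is to read a file back out of an encryption zone as the end user, since that is the path that triggers the decryptEncryptedKey call against KMS. A rough sketch, where /secure_zone and the sample file are only placeholders for your own zone and data:

# hdfs crypto -listZones
$ hdfs dfs -cat /secure_zone/sample.txt

Run the first command as the HDFS superuser to see which zones and keys exist, and the second as the end user; if the DECRYPT_EEK ACL blocks that user, the cat fails with an authorization error from KMS rather than returning the file contents.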
08-21-2017
09:34 AM
@Kishore Kumar Good to know that your "Service hdfs check failed - From Ambari" issue was resolved. Can you then accept my answer and open a new thread for the SmartSense view issue? This keeps a thread from spanning many pages, and HCC members usually ignore very old threads. Rewarding answers also encourages members to respond and resolve issues.
08-20-2017
01:01 PM
Yes, open one for the RM/YARN UI issue; people usually ignore a thread that has been around for ages and grown too long.
08-20-2017
12:30 PM
@Anup Shirolkar Good progress! Can you paste the screenshot here? Now that the first problem is solved, I would advise you to accept my answer and open a new thread for the YARN UI issue; otherwise this thread will become too long to follow. Thanks
08-20-2017
11:58 AM
@Kishore Kumar I am happy you can smile and progress with your project! Can you accept my answer? It is also advisable to open a new thread for the SmartSense view. If you are using admin as the login for the SmartSense view, make sure you have done the following.

Add these two property settings in core-site.xml (you can find it in the Ambari HDFS config section):

hadoop.proxyuser.admin.hosts=*
hadoop.proxyuser.admin.groups=*

As root, switch to the hdfs user:

# su - hdfs

Create the admin user directory in HDFS:

$ hdfs dfs -mkdir /user/admin

Set the ownership on the admin user directory:

$ hdfs dfs -chown admin:hdfs /user/admin

Please revert.
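One more thing to keep in mind: the NameNode only picks up proxyuser changes after they are reloaded, so either restart HDFS from Ambari after saving the config, or refresh them in place as the hdfs user:

$ hdfs dfsadmin -refreshSuperUserGroupsConfiguration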
08-20-2017
09:51 AM
@Kishore Kumar Add these two property settings in core-site.xml (you can find it in the Ambari HDFS config section):

hadoop.proxyuser.hdp.hosts=*
hadoop.proxyuser.hdp.groups=*

As root, switch to the hdfs user:

# su - hdfs

Create the hdp user directory in HDFS:

$ hdfs dfs -mkdir /user/hdp

Set the ownership on the hdp user directory:

$ hdfs dfs -chown hdp:hdfs /user/hdp

For your information, HDFS is a distributed file system, so needless to say, once created the directory is accessible from all the cluster hosts as the hdp user!
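A quick sanity check once the directory exists is to write and read a small file as hdp from any node in the cluster; the file name below is just a throwaway example, and it assumes the hdp OS user exists on that node:

# su - hdp
$ echo "hello" | hdfs dfs -put - /user/hdp/ping.txt
$ hdfs dfs -cat /user/hdp/ping.txt
$ hdfs dfs -rm /user/hdp/ping.txt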