Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874
My Accepted Solutions
Views | Posted |
---|---|
2030 | 04-27-2020 03:48 AM |
4021 | 04-26-2020 06:18 PM |
3249 | 04-26-2020 06:05 PM |
2599 | 04-13-2020 08:53 PM |
3866 | 03-31-2020 02:10 AM |
12-30-2019
04:32 PM
@Koffi If your DataNodes are unevenly loaded, HDFS provides the "HDFS Balancer" utility, which rebalances blocks across the DataNodes in the cluster.
Via Ambari: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/managing-and-monitoring-ambari/content/amb_rebalance_hdfs_blocks.html
Further details: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/data-storage/content/balancer_commands.html
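For reference, a minimal sketch of running the balancer from the command line (run it as the "hdfs" superuser; the threshold of 10 is just an example value, meaning no DataNode's utilization should differ from the cluster average by more than 10 percent):
# su - hdfs
# hdfs balancer -threshold 10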
12-20-2019
09:08 PM
@Bindal
The "/user" directory permission and ownership is shows as "hdfs:hdfs:drwxr-xr-x" Which means it can be written only by superuser "hdfs". However in ambari you have logged in as "admin" user which is NOT superuser "hdfs". So you may have to change the ownership of this dir (Which is not recommended) so that 'admin' user can create a directory inside it ....
But if you want to perform the directory creation inside "/user" via File View only, then you can try this (an HDFS CLI alternative is sketched after the steps):
1. In Ambari UI Navigate to
"admin" → "Manage Ambari" → Users → "Add Users" Button
2. Create a user with name "hdfs" (you can choose your own desired password for this user and specify in the UI form)
3. In the "User Access" section, you can grant this user Cluster Admin or another access type based on what is needed. Similarly, you can make the user an Ambari Admin if required (this is up to your requirements).
4. Now give the View Access permission to this user "hdfs"
"admin" → "Manage Ambari" → Views → "AUTO_FILES_INSTANCE" (click on "edit" button)
5. At the end of the "AUTO_FILES_INSTANCE" view definition you will find a section named "Permissions"; please add the user "hdfs" there.
6. Now log in from a freshly opened browser as the "hdfs" user; you should then be able to create folders inside the "/user" directory using File View.
.
Later you can disable the "hdfs" user account anytime in the Ambari UI.
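As mentioned above, here is a quick sketch of the HDFS CLI alternative, run from any cluster node (the "/user/admin" path and the "admin:hadoop" ownership are example values; adjust them to your user and group):
# su - hdfs -c "hdfs dfs -mkdir /user/admin"
# su - hdfs -c "hdfs dfs -chown admin:hadoop /user/admin"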
12-19-2019
06:48 PM
1 Kudo
@ypc812164921 The following link talks about how to obtain the password. Authentication credentials for new customers and partners are provided in an email sent from Cloudera to registered support contacts. Existing users can file a non-technical case within the support portal (https://my.cloudera.com) to obtain credentials. https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-installation/content/access_ambari_paywall.html
12-12-2019
10:38 PM
@rvillanueva In addition to my previous comment, please also refer to: https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/running-spark-applications/content/setting_path_variables_for_python.html
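For reference, a short sketch of the kind of path variables that doc describes (the interpreter paths below are examples; point them at the Python you actually want Spark to use):
# export PYSPARK_PYTHON=/usr/bin/python2.7
# export PYSPARK_DRIVER_PYTHON=/usr/bin/python2.7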
12-12-2019
10:35 PM
@rvillanueva There seem to be a couple of issues here:

Issue-1. One issue seems to be related to Python3, because Python3 does not support print statements without parentheses. That is why you are getting this error:

File "/bin/hdp-select", line 255
print "ERROR: Invalid package - " + name
      ^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("ERROR: Invalid package - " + name)?

Please refer to the following threads for similar discussions:
https://community.cloudera.com/t5/Support-Questions/Spark-submit-error-with-Python3-on-Hortonworks-sandbox-VM/td-p/230117
https://community.cloudera.com/t5/Support-Questions/HDP3-0-livy-server-cannot-start/td-p/231126

Try using Python 2.7 (instead of Python 3), because the script "/bin/hdp-select" contains many "print" statements without parentheses, while Python3 expects all 'print' statements to use parentheses:
# grep 'print ' /bin/hdp-select
.
Issue-2. The following line indicates that somewhere in your code, "../venv/bin/activate", or the "sparksubmit.test.py" script an incorrect path has been set:

ls: cannot access /usr/hdp//hadoop/lib: No such file or directory

The correct path should be "/usr/hdp/current/hadoop/lib". NOTICE that "current" is missing in your case (in your environment it appears to be coming out blank, yielding "/usr/hdp//hadoop/lib").
.
Issue-3. The "ClassNotFoundException" errors are a side effect of the above point: because "current" is missing from "/usr/hdp/current/hadoop/lib", the correct lib directory is not found and the correct JARs are not getting included in the CLASSPATH:

Caused by: java.lang.ClassNotFoundException: com.sun.jersey.api.client.config.ClientConfig
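As a quick sanity check (assuming a standard HDP layout), you can verify on the node that the "current" symlinks and the expected lib directory actually exist:
# hdp-select status | grep hadoop
# ls -d /usr/hdp/current/hadoop*/lib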
12-12-2019
09:21 PM
1 Kudo
@Love-Nifi What is your NiFi version? From the NiFi 1.2.0 release onwards it should default to TLS v1.2, as per https://issues.apache.org/jira/browse/NIFI-3720

Snippet from the JIRA: Users/client connecting to NiFi through the UI or API now protected with TLS v1.2. TLSv1/1.1 are no longer supported.

https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version1.2.0

Snippet from the doc: Security - Users/client connecting to NiFi through the UI or API now protected with TLS v1.2 due to upgrade to Jetty version 9.4.2

So if you are using HDF, please check the NiFi version. For example, HDF 3.0 (NiFi 1.2.0) will allow TLS 1.2 for all incoming connections; other TLS versions will still be used for outgoing connections.
.
In general, one option to disable all TLS protocols except TLSv1.2 is to edit the "$JAVA_HOME/jre/lib/security/java.security" file (where JAVA_HOME is the one used by the NiFi process) and change the "jdk.tls.disabledAlgorithms" property value to something like the following, as mentioned in https://java.com/en/configure_crypto.html

Example:
jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, DH keySize < 768, SSLv2Hello, TLSv1, TLSv1.1
.
You can further validate your NiFi by using OpenSSL commands like the following to attempt to connect with different protocol options:
# openssl s_client -connect <NiFiHostname>:<port>
# openssl s_client -connect <NiFiHostname>:<port> -tls1_2
# openssl s_client -connect <NiFiHostname>:<port> -tls1
# openssl s_client -connect <NiFiHostname>:<port> -ssl3
.
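If you would rather not edit the global java.security file, one alternative sketch (the arg index "20" and the override file path below are hypothetical examples): the JDK allows overriding security properties per JVM via -Djava.security.properties, which can be added in NiFi's conf/bootstrap.conf:
# In conf/bootstrap.conf (pick any unused java.arg index):
# java.arg.20=-Djava.security.properties=/path/to/nifi-tls.override
The override file would then contain just the jdk.tls.disabledAlgorithms line shown above.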
12-11-2019
09:39 PM
@Amrutha The currently released Apache Knox version is 1.3.0: https://knox.apache.org/
.
However, you can try to build the desired 1.4.0 version by following the instructions in https://cwiki.apache.org/confluence/display/KNOX/Build+Process after cloning from https://github.com/apache/knox/tree/v1.4.0-branch
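For reference, a minimal sketch of the usual clone-and-build steps (generic git/Maven usage; treat the Build Process wiki above as the authoritative instructions):
# git clone -b v1.4.0-branch https://github.com/apache/knox.git
# cd knox
# mvn clean package -DskipTests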
12-11-2019
09:19 PM
@Amrutha Is it possible for you to try with the Knox 1.4.0 version once to see whether it works? As mentioned in the JIRA, the fix version is 1.4.0.
12-11-2019
09:13 PM
@Amrutha Which Knox version is it? We see the following error: Caused by: java.lang.NoSuchFieldError: DEFAULT_XML_TYPE_ATTRIBUTE
.
At a glance it looks very similar to an issue reported with Knox: https://issues.apache.org/jira/browse/KNOX-1987
12-11-2019
08:00 PM
@vendevu Unfortunately the classes inside the "org.apache.ambari.server.state.alert" package do not contain DEBUG messages, so you cannot see debug output for them. However, you can get the Alert Notification related DEBUG output. In the other community threads you referred to, the "Notification" DEBUG messages look like this:

DEBUG [alert-dispatch-5] EmailDispatcher:142 - Successfully dispatched email to [@.com]

So it seems Ambari is sending the email but it is getting stuck somewhere. If you want to see "Alert Notification" (like Email/SMTP notification) DEBUG output in your "ambari-server.log", please try this:

1). Edit the "/etc/ambari-server/conf/log4j.properties" file and add the following line near the end of the file:
log4j.logger.org.apache.ambari.server.notifications=DEBUG
2). Restart Ambari Server:
# ambari-server restart
3). Tail the ambari-server.log file:
# tail -f /var/log/ambari-server/ambari-server.log
4). As soon as a new alert notification (like an SMTP or SNMP Alert Notification) is triggered, we should see some logging in the above log file.
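Once the DEBUG logging is enabled, a quick way to filter just the dispatcher messages out of the log (standard grep; same log path as in step 3):
# grep -i 'Dispatcher' /var/log/ambari-server/ambari-server.log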