Member since: 06-09-2016
Posts: 529
Kudos Received: 129
Solutions: 104
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1732 | 09-11-2019 10:19 AM |
|  | 9322 | 11-26-2018 07:04 PM |
|  | 2484 | 11-14-2018 12:10 PM |
|  | 5313 | 11-14-2018 12:09 PM |
|  | 3141 | 11-12-2018 01:19 PM |
05-08-2018 12:33 PM
Is the environment kerberized? Have you configured SSSD to sync users at the OS level?
05-07-2018 09:19 PM
@ed day If this answer helped address your question, please take a moment to log in and click the "accept" link on the answer.
05-07-2018 07:57 PM
1 Kudo
@Pratik Shirbhate I think the answer to your issue is described here: https://stackoverflow.com/questions/15809414/could-not-find-function-getnamespace

Here is the gist of it: "The .getNamespace function is part of R 3.0.0. The warning message states that the package you installed was built for R 3.0, not 2.15. The package is trying to use the .getNamespace function, but does not find it as it is not part of R 2.15. You can either upgrade to R 3.0 (which seems to be a bit experimental right now) or install the R 2.xx version of the package."

*** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
05-07-2018 06:15 PM
1 Kudo
@Ankita Shukla While SSL and Kerberos address other aspects of security, such as wire encryption and authentication, impersonation solves a different problem. Impersonation means performing actions on behalf of the requesting user. Certain services such as Knox, Livy, or Hive (when doAs=true) need to impersonate end users when accessing resources like YARN and HDFS. Only explicitly authorized users are allowed to impersonate other users.

Impersonation in Hadoop is set up via the hadoop.proxyuser.* properties in core-site.xml, and only the users listed there may impersonate, optionally restricted to certain hosts and groups. A common example is Hive when configured to run as the end user instead of the hive user (hive.server2.enable.doAs=true); the Knox gateway and Livy are other good examples, and there are more.

Important aspects when using impersonation:

1) All access to underlying resources (like HDFS) is made as the end user instead of the hive user. This helps when you want to perform all authorization checks at the HDFS POSIX level.

2) Applications launched on YARN (if any) are launched as the end user instead of the hive/knox/livy user. This way you can use the Capacity Scheduler to map users to certain queues with different resource limits.

If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer. HTH
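For reference, a minimal core-site.xml sketch of the proxyuser properties (the service user `hive` and the host value below are illustrative assumptions, not taken from this thread; adjust them to your cluster):

```xml
<!-- Illustrative example: allow the hive service user to impersonate end users.
     The host and group values here are assumptions. -->
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>hiveserver2.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
```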
05-07-2018 04:57 PM
@ed day When you say HDP shows the Kafka service running OK, do you mean the Ambari UI shows the broker service running fine? Could you confirm on which host the broker service is running (in Ambari: Kafka > Summary tab > Broker link)? Then open a shell on that host and confirm the process is running with ps -ef | grep -i kafka. Maybe this helps you find exactly where the Kafka broker is started from, and where to find kafka-topics.sh. HTH
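As a quick sketch for locating the script (assuming a standard HDP layout; the install path below is an assumption):

```bash
# On HDP, kafka-topics.sh normally ships with the broker installation
ls /usr/hdp/current/kafka-broker/bin/kafka-topics.sh

# If it is not there, search the filesystem for it
find / -name 'kafka-topics.sh' 2>/dev/null
```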
05-04-2018 05:15 PM
@Liana Napalkova You should set the correct ownership for the /user/centos directory: hdfs dfs -chown centos:centos /user/centos If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
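Note that only the HDFS superuser can change ownership, so you will likely need to run the command as that user (assuming the default superuser account, hdfs):

```bash
# Run the chown as the hdfs superuser; a regular user cannot
# change ownership of HDFS paths
sudo -u hdfs hdfs dfs -chown centos:centos /user/centos
```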
05-04-2018 05:10 PM
1 Kudo
@Liana Napalkova I advise against changing ownership of the HDFS /user directory. You should instead set the correct ownership for the /user/centos directory only: hdfs dfs -chown centos:centos /user/centos HTH
05-04-2018 12:54 PM
2 Kudos
@Prakhar Agrawal AFAIK you need to resolve the problem manually; as of now there is no automatic resolution for a difference in the flow.xml.gz file.
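One common manual approach, sketched below under the assumption that the cluster's copy of the flow is the one you want to keep (the conf path is an assumption; check nifi.flow.configuration.file in nifi.properties on your install):

```bash
# On the node that fails to join because its flow differs from the cluster's:
systemctl stop nifi                                                  # or bin/nifi.sh stop
mv /var/lib/nifi/conf/flow.xml.gz /var/lib/nifi/conf/flow.xml.gz.bak # keep a backup
systemctl start nifi                                                 # node inherits the cluster flow on startup
```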
05-04-2018 12:48 PM
@Liana Napalkova The graph.jar will be automatically copied to HDFS and distributed by the Spark client. You only need to point to the location of graph.jar on the local file system. For example:

spark-submit --master yarn --deploy-mode cluster --driver-memory 6g --executor-memory 4g --executor-cores 2 --class org.tests.GraphProcessor /path/to/graph.jar

HTH *** If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer.
05-03-2018 01:27 PM
@Arnaud Bohelay Please let me know if the above has helped answer your question. Thanks.