Member since: 06-09-2016
Posts: 529
Kudos Received: 129
Solutions: 104

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1388 | 09-11-2019 10:19 AM |
| | 8400 | 11-26-2018 07:04 PM |
| | 1976 | 11-14-2018 12:10 PM |
| | 4107 | 11-14-2018 12:09 PM |
| | 2682 | 11-12-2018 01:19 PM |
06-01-2018
09:02 AM
@Ravi Kumar Lanke This is valuable information that will help a lot of people! Maybe you should create a separate 'faq' just like the one above so everybody can see this 🙂 !
05-31-2018
02:00 PM
@Felix Alban that's good to know! I think it is important to know how to create virtual environments, as they will matter for keeping separate projects isolated. If you know how, could you please list the instructions to create a virtual environment for a project? Thank you.
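For what it's worth, a minimal sketch of one common way to do this, using Python's built-in venv module (the directory name below is just a placeholder):

```shell
# Create an isolated Python environment for the project
# ("myproject-env" is an arbitrary directory name)
python3 -m venv myproject-env

# Activate it; python and pip now resolve inside the environment
. myproject-env/bin/activate

# Verify the active interpreter lives in the new environment
python -c "import sys; print(sys.prefix)"

# Install project-specific packages with pip here, then leave when done
deactivate
```

Each project gets its own environment directory, so packages installed for one project cannot conflict with another's.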
06-21-2019
12:11 PM
Setting a Message Demarcator on the Kafka consumer could be one solution. Use Shift+Enter in the property editor to enter a newline as the demarcator, and try persisting into HDFS.
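For illustration, assuming this is NiFi's ConsumeKafka processor feeding PutHDFS (the broker, topic, group, and path values below are placeholders), the idea is roughly:

```
ConsumeKafka
  Kafka Brokers      : broker1:9092   (placeholder)
  Topic Name(s)      : my-topic       (placeholder)
  Group ID           : my-group       (placeholder)
  Message Demarcator : <newline>      (entered with Shift+Enter in the UI)

PutHDFS
  Directory          : /data/kafka    (placeholder HDFS path)
```

With a demarcator set, many Kafka messages are batched into a single FlowFile separated by newlines; without it, each message becomes its own FlowFile, which tends to produce lots of tiny HDFS files.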
05-18-2018
04:38 PM
@John Doe When you'd like to accept an answer, look for the "Accept" option on the answer and click it. HTH
05-08-2018
05:59 PM
@Khouloud Landari Do you see it stuck after those "WARN Service SparkUI could not bind to port 4041" messages? If that is the case, I think the problem may be that it is not able to start an application on YARN. What happens is that Spark2 pyspark launches a YARN application on your cluster, and I think this is what is probably failing. Try this command and let me know if it works: SPARK_MAJOR_VERSION=2 pyspark --master local --verbose I would also advise you to check the Resource Manager logs. RM logs can be found on the RM host under /var/log/hadoop-yarn. They will probably show what the problem is with YARN and why your zeppelin user is not able to start applications on the Hadoop cluster. HTH
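Condensed into commands to run on the cluster (the RM log filename pattern below is an assumption; it varies by install and hostname):

```
# 1) Bypass YARN entirely to confirm pyspark itself works
SPARK_MAJOR_VERSION=2 pyspark --master local --verbose

# 2) On the ResourceManager host, look for errors around the failed launch
grep -i zeppelin /var/log/hadoop-yarn/yarn-yarn-resourcemanager-*.log | tail -50
```

If step 1 succeeds but the default (YARN) mode hangs, that narrows the problem to YARN application submission rather than Spark itself.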
05-07-2018
06:15 PM
1 Kudo
@Ankita Shukla While SSL and Kerberos address other aspects of security, such as wire encryption and authentication, impersonation solves a different problem. Impersonation means performing actions on behalf of the requesting user. Certain services, such as Knox, Livy, or Hive (when doAs=true), need to impersonate end users when accessing resources like YARN and HDFS. Only valid users are allowed to impersonate other users. Impersonation in Hadoop is set up using the hadoop.proxyuser.* configuration in core-site.xml, and only the users listed in core-site.xml are allowed to impersonate, restricted to certain hosts and groups. A common example of impersonation is Hive when configured to run as the end user instead of the hive user (hive.server2.enable.doAs=true); the Knox gateway and Livy are other good examples, and there are more.

Important aspects when using impersonation are:

1) All access to underlying resources (like HDFS) is made as the end user instead of the hive user. This helps when you'd like to perform all authorization checks at the HDFS POSIX level.

2) Applications launched on YARN (if any) are launched as the end user instead of the hive/knox/livy user. This way you can use the Capacity Scheduler to map users to certain queues with different resource limits.

If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer. HTH
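As an illustration, a typical proxyuser entry in core-site.xml allowing the hive service user to impersonate might look like this (the hostname value is a placeholder; restrict hosts and groups as tightly as your setup allows):

```xml
<!-- Allow the hive service user to impersonate end users -->
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <!-- Hosts from which hive may impersonate; placeholder hostname -->
  <value>hiveserver2-host.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <!-- Groups whose members hive may impersonate; * means all groups -->
  <value>*</value>
</property>
```

Requests from any other user, or from hosts/groups outside these values, are rejected with an authorization error.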
05-04-2018
03:57 PM
@Prakhar Agrawal @Felix Albani is correct. There is no way to automatically have a node delete its flow.xml.gz in favor of the cluster's flow. If we allowed that, it could lead to unexpected data loss. Let's assume a node was taken out of the cluster to perform some side work and the user tries to rejoin it to the cluster: if it simply took the cluster's flow, any data queued in a connection that does not exist in the cluster's flow would be lost. It would be impossible for NiFi to know whether joining this node to this cluster was a mistake or intended, so NiFi simply informs you there is a mismatch and expects you to resolve the issue. - Also, I noticed you mentioned "NCM" (NiFi Cluster Manager). NiFi moved away from having an NCM starting with the Apache NiFi 1.x versions. Newer versions have a zero-master cluster where any connected node can be elected as the cluster's coordinator. - Thanks, Matt
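For reference, and only if you are certain the node's queued data is disposable, one common manual resolution is to let the node forget its local flow so it inherits the cluster's flow on restart. A rough sketch (paths are relative to the NiFi install directory and assume a default layout; adjust for your environment):

```
# On the mismatched node, after confirming its queued data can be discarded
bin/nifi.sh stop

# Back up (rather than delete) the node's local flow definition
mv conf/flow.xml.gz conf/flow.xml.gz.bak

# On restart the node has no local flow and adopts the cluster's flow
bin/nifi.sh start
```

Keeping the backup means you can still recover the node's old flow definition if the rejoin turns out to have been a mistake.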
05-08-2018
01:11 AM
Thanks, I checked that information.
05-31-2018
09:34 AM
@Felix Albani Hi Felix, you installed 3.6.4, but according to the documentation Spark2 can only support up to 3.4.x. Can you kindly explain how this works?