Member since: 01-25-2017 · Posts: 77 · Kudos Received: 6 · Solutions: 0
05-17-2019
01:41 PM
Hi, we have preemption enabled for all queues in our prod cluster, but we need to run a big Spark job without it being preempted. Can someone help me run a Spark job without preemption?
Labels:
- Apache Hadoop
- Apache Spark
- Apache YARN
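One option, assuming the cluster uses the Capacity Scheduler on a Hadoop version that supports a per-queue preemption opt-out, is to submit the big job to a dedicated queue with preemption disabled for that queue only. A sketch of the relevant `capacity-scheduler.xml` entry (the queue name `bigjobs` is hypothetical):

```xml
<!-- Hypothetical queue "bigjobs": preemption is disabled for this queue only,
     while the rest of the cluster keeps preemption enabled. -->
<property>
  <name>yarn.scheduler.capacity.root.bigjobs.disable_preemption</name>
  <value>true</value>
</property>
```

After refreshing the queues (`yarn rmadmin -refreshQueues`), the job would be submitted with `spark-submit --queue bigjobs ...`.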
08-06-2018
09:11 AM
Check whether the time is in sync between the Knox server and the Ambari server, and check whether the ntpd service is running on both machines.
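To make the comparison concrete, a minimal sketch that compares epoch seconds from the two hosts and flags a skew above a 30-second tolerance (the tolerance is an assumption; in practice you would capture the first timestamp on the Knox host and the second on the Ambari host, while here both are taken locally for illustration):

```shell
# Capture epoch seconds on each host (both taken locally here as a stand-in)
knox_epoch=$(date +%s)     # run on the Knox host
ambari_epoch=$(date +%s)   # run on the Ambari host

skew=$(( knox_epoch - ambari_epoch ))
skew=${skew#-}             # absolute value

if [ "$skew" -le 30 ]; then
  echo "clocks in sync (skew ${skew}s)"
else
  echo "clock skew too large: ${skew}s"
fi
```

If the skew is large, restarting ntpd (or forcing a sync) on the drifting host and retrying the Knox login is the usual next step.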
07-31-2018
02:17 AM
Hi, we need to create a data flow between two Kafka clusters (cluster1 and cluster2). We are using NiFi on cluster1 to send messages from cluster1 Kafka to cluster2 Kafka, and we want to put an external load balancer in front of the cluster2 Kafka brokers. However, this is not working properly: NiFi is unable to send data to cluster2 Kafka when we use the LB URL in the Kafka brokers section, but it works fine when we list the actual broker hosts. Can anyone help with this?
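This behaviour is expected from the Kafka protocol rather than a NiFi bug: clients use the bootstrap list only for the initial metadata request, and the brokers then return their `advertised.listeners` addresses, which the client connects to directly for each partition leader. A load balancer in front of the brokers is therefore bypassed after bootstrap, and a single LB address cannot stand in for individual partition leaders. A sketch of the broker-side setting that must point at directly reachable addresses (host names below are hypothetical):

```properties
# server.properties on each cluster2 broker -- every broker must advertise
# an address the NiFi nodes can reach directly (hypothetical host name)
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka-broker1.cluster2.example.com:9092
```

Using the LB URL only as the bootstrap address can work for the initial connection, but the advertised broker addresses still need to be routable from the NiFi nodes.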
07-11-2018
05:26 PM
@Sandeep Nemuri Thanks
07-11-2018
05:21 PM
@Sandeep Nemuri Thanks for providing the information. Is there a link or documentation page that mentions the same? I need to show it to the client.
07-11-2018
05:14 PM
We are using HDP 2.6.3 and Ambari 2.6.0.
07-11-2018
05:12 PM
Hi, can anyone let me know whether it is possible to configure SSL for the Spark1 History Server? Any help would be appreciated.
Labels:
- Apache Hadoop
- Apache Spark
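For reference, Spark exposes generic `spark.ssl.*` options in `spark-defaults.conf` that, as far as I know, cover the web UIs including the History Server in later Spark 1.x releases. A sketch, with hypothetical keystore paths and passwords:

```properties
# spark-defaults.conf -- paths and passwords below are hypothetical
spark.ssl.enabled              true
spark.ssl.keyStore             /etc/spark/conf/keystore.jks
spark.ssl.keyStorePassword     changeit
spark.ssl.trustStore           /etc/spark/conf/truststore.jks
spark.ssl.trustStorePassword   changeit
```

The History Server would need a restart after the change; whether your exact HDP Spark1 build honours these for the History Server UI is worth confirming against its documentation.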
03-27-2018
10:30 AM
Hi, when dynamic allocation is enabled, we frequently face issues while fetching shuffle blocks. The executors log errors like:

RetryingBlockFetcher: Retrying fetch (1/3) for 1 outstanding blocks after 5000 ms
RetryingBlockFetcher: Exception while beginning fetch of 1 outstanding blocks (after 1 retries)
java.io.IOException: Failed to connect to <host>:<some port>
Caused by: java.net.ConnectException: Connection refused: <host>:<some port>

We see these errors continuously in the executors when we run big Spark jobs. While they occur nothing is processed; after some time the errors disappear and processing resumes. This is impacting our job SLAs. Can anyone help with this?
Labels:
- Apache Hadoop
- Apache Spark
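A "Connection refused" to another executor's block-manager port during a big dynamic-allocation job is consistent with executors being released while their shuffle output is still needed: the fetcher retries against a process that no longer exists. The usual remedy is to serve shuffle blocks from the YARN external shuffle service so they survive executor removal. A sketch of the relevant settings (the idle timeout value is an illustrative assumption):

```properties
# spark-defaults.conf -- keep shuffle blocks available after executors exit
spark.shuffle.service.enabled                true
spark.dynamicAllocation.enabled              true
spark.dynamicAllocation.executorIdleTimeout  120s
```

This also requires the `spark_shuffle` aux-service to be configured on the YARN NodeManagers (`yarn.nodemanager.aux-services`), which HDP normally sets up when the shuffle service is enabled through Ambari.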
02-08-2018
01:23 PM
5 Kudos
What is the release date of HDP 3.0, and what are its features?
01-19-2018
07:28 AM
@Jay Kumar SenSharma Thanks, it worked well.