Member since: 06-07-2016
Posts: 923
Kudos Received: 322
Solutions: 115
My Accepted Solutions
Title | Views | Posted
---|---|---
| 3319 | 10-18-2017 10:19 PM
| 3673 | 10-18-2017 09:51 PM
| 13376 | 09-21-2017 01:35 PM
| 1367 | 08-04-2017 02:00 PM
| 1794 | 07-31-2017 03:02 PM
02-02-2017
10:07 AM
@Jay SenSharma: Can you please help me get the last accessed time for a directory as well? The above is not working for directories.
01-31-2017
04:55 PM
@Prasanna G Your PuTTY is an SSH client, not an HDFS client. Once you SSH into your sandbox, you can run hdfs commands, because the sandbox is where HDFS is installed, including its shell commands.
This is similar to the fact that you cannot run "ls /some/directory" from PuTTY before you SSH into the box.
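As a sketch of the distinction, assuming the HDP sandbox runs locally with SSH forwarded on port 2222 (the usual sandbox default, but your host and port may differ):

```shell
# Step 1: from PuTTY (or any terminal), SSH into the sandbox first.
ssh root@127.0.0.1 -p 2222

# Step 2: only once you are inside the sandbox shell does the hdfs
# client exist on your PATH, so now this works:
hdfs dfs -ls /user
```

Running `hdfs dfs -ls /user` on your laptop before the SSH step fails for the same reason `ls` of a sandbox-only directory would: the command (and the filesystem) live on the remote box, not in the SSH client.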
02-03-2017
04:29 PM
@Avijeet Dash I agree with you. It is much more reliable if, after your streaming job, your data lands in Kafka and is then written to HBase/HDFS. This decouples your streaming job from the writes. I wouldn't recommend using Flume; go with the combination of NiFi and Kafka.
02-02-2017
03:19 PM
1 Kudo
Latencies for each component exclude queue wait time and transfer latency between workers. 'Complete latency' means all the nodes in the tuple tree have been acked, so it reflects the slowest path through the tree. By the way, behind the scenes, 'complete latency' also includes the time the spout waits to handle the ack from the acker, so if your spout spends a long time in nextTuple it will heavily affect 'complete latency'. This is fixed in STORM-1742 and will be included in the next release (Storm 1.0.3 on the Apache side, but I'm not sure which HDP/HDF versions contain this fix; please let me know your HDP/HDF version). Hope this helps.
01-23-2017
06:37 AM
The only thing you can do is limit which IPs can access your cluster, basically by specifying security-group rules for inbound traffic (or outbound as well): http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#ec2-classic-security-groups
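A minimal sketch of such a rule with the AWS CLI, assuming your cluster sits behind the security group `sg-0123456789abcdef0` and that `203.0.113.0/24` is your trusted office range (both are hypothetical placeholders):

```shell
# Allow SSH (TCP port 22) into the cluster's security group
# only from the trusted CIDR range; all other inbound SSH is denied
# by the security group's default-deny behavior.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 203.0.113.0/24
```

You would repeat this per port your cluster exposes (e.g. Ambari's 8080) rather than opening `0.0.0.0/0`.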
01-09-2017
06:10 AM
@Hoang Le No, the Ambari UI will set it only for files you create in the future. It will not run the setrep command for you; that you will have to run from the shell as described above.
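For reference, the shell step mentioned above looks like this (the path `/data` and factor 3 are illustrative, not from the original thread):

```shell
# Change the replication factor of the existing files under /data to 3.
# -w makes the command wait until re-replication actually completes,
# which can take a while on large directories.
hdfs dfs -setrep -w 3 /data
```

This only rewrites replication for files that already exist; new files keep following the cluster-wide `dfs.replication` setting that Ambari manages.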
12-20-2016
05:34 PM
@jzhang Good call. I changed the Livy interpreter to yarn-cluster mode and was not able to reproduce the error in HDP 2.5.
01-23-2018
07:02 AM
Thanks. In the above solution, how will the external system (the syslog source) identify which NiFi node the messages should be sent to? Will it be the ZK URL or the Primary Node itself? If the latter, how can a Primary Node fail-over be made known to the external source?
12-14-2016
12:56 PM
Hi @mqureshi, here is more detail on my problem. I want to use Cloudera CDH 5.8 with Talend Open Studio for Big Data 6.3.0. For this, I must connect to Hadoop manually and fill in the different fields (picture 1). For fields 1, 3, and 5 I have no problem; I found their values in core-site.xml and mapred-site.xml. For fields 2 and 3, I could not find the values in yarn-site.xml, so I left the defaults. When I check the services, I get 2 error logs and I can't see them (picture 2). What do you think I should do here? Thank you.
12-13-2016
03:36 PM
@Junaid Rao No, this was not possible even when Windows was a supported platform, and HDP support for Windows has since been deprecated. We strongly suggest using Linux platforms.