Member since: 02-24-2016
Posts: 175
Kudos Received: 56
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1300 | 06-16-2017 10:40 AM
 | 11225 | 05-27-2016 04:06 PM
 | 1295 | 03-17-2016 01:29 PM
06-16-2017
10:40 AM
Well, this worked "as is" in the North Virginia region! Earlier I was using a different region.
04-26-2017
08:32 PM
Hi @William Gonzalez, I cleared the exam on 26/March/2017, but I have not received any communication from Hortonworks about the badge. After that I took and cleared HDPCA on 23/April; for HDPCA I received the digital badge, but not for HCA. I wrote 4 emails to certification at hortonwork dot com and got ticket numbers from Zendesk, but unfortunately I have received no response. Kindly help. Best regards.
02-07-2017
10:25 PM
Restart the Spark, Livy, and Zeppelin servers and their interpreters. It worked for me. Ram Baskaran
11-03-2016
12:02 PM
You are right, Arpit. It is a deeper issue, related to groups in LDAP.
12-16-2016
07:39 AM
@Smart Solutions Not sure of the answer to that, but if you're concerned about the tmp data being unencrypted or intercepted in transit, you may consider copying it over in its encrypted (raw) form. This also avoids the decryption/re-encryption overhead. The link below describes the different options for doing this: https://community.hortonworks.com/articles/51909/how-to-copy-encrypted-data-between-two-hdp-cluster.html
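As a sketch of the raw-copy approach from the linked article (the cluster and path names here are hypothetical placeholders; your encryption-zone paths will differ):

```shell
# Copy data between encryption zones without decrypting it, by addressing it
# through the /.reserved/raw namespace. -px preserves the extended attributes
# that carry the encryption metadata. Hypothetical cluster/path names.
hadoop distcp -px \
  hdfs://source-nn:8020/.reserved/raw/zones/data \
  hdfs://dest-nn:8020/.reserved/raw/zones/data
```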
11-15-2016
11:53 PM
The HDP Spark Component Guide (versions 2.5.0+) has been updated per Bikas's clarification: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_spark-component-guide/content/spark-encryption.html
10-11-2016
01:23 AM
4 Kudos
@Smart Solutions The two main options for replicating an HDFS structure are distcp and Falcon. The distcp command is not very feature-rich: you give it a path in HDFS and a destination cluster, and it copies everything to the same path on the destination. If the copy fails, you need to start it again. Falcon is another method for maintaining a replica of your HDFS structure; it offers more data-movement options and lets you manage the lifecycle of the data on both sides more effectively. If you're moving Hive table structures, there is some added complexity in making sure the tables are created on the DR side, but the actual files are moved the same way.

Since you excluded distcp as an option, I suggest looking at Falcon. Check this: http://hortonworks.com/hadoop-tutorial/mirroring-datasets-between-hadoop-clusters-with-apache-falcon/

If any response addressed your question, please vote and accept the best answer.
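The basic distcp behavior described above can be sketched as follows (cluster names and paths are hypothetical):

```shell
# Mirror a path to the same location on the destination cluster.
# -update copies only files that differ from the destination, so a
# failed run can simply be re-executed. Hypothetical cluster names.
hadoop distcp -update \
  hdfs://prod-nn:8020/data/warehouse \
  hdfs://dr-nn:8020/data/warehouse
```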
10-20-2017
01:46 PM
@Smart Solutions I am trying to implement a similar thing: connecting to Kafka (0.10) from a Java producer program outside the edge node. I tested my producer program on the edge node and it works, but it does not work outside the edge node, even though I have a valid Kerberos ticket there and have passed the jaas_conf file. Can you explain your approach, or share any example you used as a reference?
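For reference, a Kerberos-secured Kafka 0.10 client is typically pointed at a JAAS file via `-Djava.security.auth.login.config=/path/to/jaas.conf`. A minimal sketch of such a file (the keytab path and principal below are hypothetical placeholders):

```
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka-client.keytab"
  principal="client@EXAMPLE.COM"
  serviceName="kafka";
};
```

The producer properties also need `security.protocol=SASL_PLAINTEXT` (or `SASL_SSL`) and `sasl.kerberos.service.name=kafka`, and `bootstrap.servers` must name broker hosts that are resolvable and reachable from outside the edge node, which is a common reason the same program works on the edge node but not off it.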
08-23-2016
01:40 PM
1 Kudo
1. This can be controlled through configuration; please see http://spark.apache.org/docs/latest/configuration.html#memory-management
2. No, you cannot disable non-memory caching outright, but you can choose a MEMORY-only storage level to avoid spilling to disk when memory is full.
3. No, the data is not encrypted, and there is currently no way to encrypt spilled data.
4. It depends on the streaming source you choose. Kafka supports SSL or SASL encryption.
5. Same as #2.
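For point 1, the unified memory settings from the linked page can be tuned in `spark-defaults.conf`; a sketch (the values shown are illustrative, and defaults vary by Spark version):

```
# Fraction of JVM heap used for execution and storage (unified memory, Spark 1.6+).
spark.memory.fraction         0.6
# Portion of that region reserved for storage, immune to eviction by execution.
spark.memory.storageFraction  0.5
```

For point 2, selecting a memory-only level is done at persist time, e.g. `rdd.persist(StorageLevel.MEMORY_ONLY)`, which drops partitions that do not fit rather than spilling them to disk.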