Member since: 09-16-2015
Posts: 30
Kudos Received: 13
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3800 | 08-01-2017 06:24 AM |
| | 912 | 02-13-2017 02:33 AM |
| | 3561 | 02-13-2017 02:22 AM |
09-06-2018
08:33 AM
Hi @Vincent Jiang, how did you solve this issue? I am facing the same issue on my Ambari server. Could you post your config or solution here? Thanks.
02-13-2017
02:36 PM
Good point. For my Sandbox testing, I decided to just use the steps provided in http://stackoverflow.com/questions/40550011/zeppelin-how-to-restart-sparkcontext-in-zeppelin to stop the SparkContext when I need to do something outside of Zeppelin. Not ideal, but it works well enough for some multi-framework prototyping I'm doing.
01-14-2016
05:52 PM
Afraid not. The same keytab could be used if you had a local copy of it when you submitted work. Otherwise, when you submit a Spark job to the YARN cluster, it picks up your credentials, grabbing a Hive and HBase token if needed, and uses them for the duration of the job. Note that because those tokens expire after a day or two, you can't run long-lived applications that way. You will need a keytab, and Spark 1.5, which is the release where keytab-based Spark application support went in.
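For reference, a minimal sketch of a keytab-based submission on Spark 1.5+ in YARN mode; the principal, keytab path, class, and jar names below are placeholders:

```
# Placeholders throughout; --principal/--keytab let YARN log in again and
# refresh delegation tokens for long-running applications (Spark 1.5+).
spark-submit \
  --master yarn-cluster \
  --principal myuser@EXAMPLE.COM \
  --keytab /etc/security/keytabs/myuser.keytab \
  --class com.example.LongRunningApp \
  long-running-app.jar
```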
12-23-2015
03:03 PM
1 Kudo
A primary benefit of using Knox is that it insulates clients from needing to be aware of Kerberos. However, if the HDP cluster is configured with Kerberos, then Knox itself will need to be configured to interact with the cluster securely via Kerberos. The clients, however, will be unaffected.
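As a hedged illustration (the host name, topology, path, and credentials below are placeholders), a client can keep using plain HTTPS with basic authentication against Knox while Knox handles the Kerberos exchange with the cluster services:

```
# Placeholder host/topology/credentials; Knox authenticates the end user
# (e.g. against LDAP) and then talks Kerberos to the backend services.
curl -ku guest:guest-password \
  "https://knox-host.example.com:8443/gateway/default/webhdfs/v1/tmp?op=LISTSTATUS"
```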
01-30-2017
02:09 PM
@Amber Kulkarni, please open this as a new thread.
12-09-2015
03:00 PM
In order to simplify the firewall rules, I would create one edge host to use as a gateway, using SSH tunnels, iptables, or another network-forwarding package so that requests go out through that host's IP only. You can also approach your network team and get a NAT assigned to your hosts so they all appear to have the same IP when making outgoing requests.
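As a rough sketch of the SSH-tunnel option (host names and ports below are made up), clients would then only need firewall access to the edge host:

```
# All names/ports are placeholders. -L forwards a local port to an internal
# host via the edge node; -D opens a SOCKS proxy for more general forwarding.
ssh -L 8080:internal-node.cluster.local:8080 user@edge-host.example.com
ssh -D 1080 user@edge-host.example.com
```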
12-08-2015
07:16 AM
1 Kudo
Hi, yes, you are right: .hbck is for offline meta repair. I guess you know the hbck tool: to check whether your HBase cluster has corruptions, run hbck against it: $ ./bin/hbase hbck
At the end of the command's output it prints OK or tells you the number of INCONSISTENCIES present. As you said, after the deletion you no longer have the option to do an offline meta repair, but in my opinion I would still keep that file and delete some other stuff instead.
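For example (assuming an HBase 1.x-era hbck), a report-only run and a more verbose listing might look like:

```
# Report-only checks; -details prints per-table/per-region findings.
./bin/hbase hbck
./bin/hbase hbck -details
```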
09-21-2017
12:36 PM
Any solutions that work long term?
01-05-2018
11:52 AM
There are some parameters to manage the log cleaner:
- log.retention.check.interval.ms: the interval at which log segments are checked against the configured retention policies.
- log.retention.bytes: the maximum size a partition's log can grow to before old segments are deleted.
- log.retention.hours: how long a message is kept in a topic before it becomes eligible for deletion.
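As an illustrative sketch (the values and the config path are only examples, not recommendations; on an Ambari-managed cluster you would set these through Ambari instead of editing the file directly):

```
# Illustrative values; append to (or edit in) server.properties on each
# broker, then restart the broker for them to take effect.
cat >> /etc/kafka/conf/server.properties <<'EOF'
log.retention.check.interval.ms=300000
log.retention.hours=168
log.retention.bytes=1073741824
EOF
```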
10-07-2015
12:41 AM
1 Kudo
You can delete Hive tables by calling "drop table <tablename> purge;", which skips the trash. If this is for testing purposes, you can temporarily set fs.trash.interval to 0 and restart the NameNode. This globally disables trash collection on HDFS, so it should only be done during testing. On your last question about support for the TDE feature: it became available starting with HDP 2.3.
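For example, a PURGE drop from beeline might look like this (the JDBC URL and table name are placeholders):

```
# Placeholder URL/table; PURGE bypasses the HDFS trash for this drop only,
# so no temporary change to fs.trash.interval is needed.
beeline -u jdbc:hive2://hiveserver2.example.com:10000 \
  -e "DROP TABLE my_test_table PURGE;"
```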