Member since: 09-24-2015
Posts: 816
Kudos Received: 488
Solutions: 189
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2571 | 12-25-2018 10:42 PM |
| | 11830 | 10-09-2018 03:52 AM |
| | 4131 | 02-23-2018 11:46 PM |
| | 1796 | 09-02-2017 01:49 AM |
| | 2113 | 06-21-2017 12:06 AM |
03-30-2017
11:49 AM
Hi @Predrag Minovic and @ssathish, I tried with WebHCat and it works. I had already tried with the Distributed Shell application, but I didn't succeed; maybe I did something wrong. I think I will try with Oozie soon. Thanks for your help.
03-18-2017
08:22 AM
1 Kudo
It worked. part-m-00001 is not a separate table; it's just another file in your import directory. If you create an external table on /date_new7, Hive will see a single table with 3 rows. Ditto for MapReduce jobs taking /date_new7 as their input. If you end up with many small files, you can merge them into one (from time to time) by using, for example, hadoop-streaming (see this example) and setting "mapreduce.job.reduces=1".
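A minimal sketch of such a merge job; the streaming jar path below is a typical HDP default and the output directory is a placeholder, so adjust both to your environment:

```bash
# Merge the small files under /date_new7 into a single output file
# by forcing a single reducer; jar path and output dir are assumptions
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar \
  -D mapreduce.job.reduces=1 \
  -input /date_new7 \
  -output /date_new7_merged \
  -mapper cat \
  -reducer cat
```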
03-22-2017
08:24 AM
Hi @Juan Manuel Nieto, well done! I noticed AMBARI-18898 and suspected it was causing havoc on the command line, but didn't have time to try it. However, now, after fixing Solr, Ranger audit cannot connect to it, and Ambari is showing false "HTTP 500" alerts on both Infra Solr instances. Edit: I had missed "DEFAULT" in the name rules; I omitted it because I had tried with only one rule before. After adding DEFAULT, everything is back to normal!
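For anyone else hitting this: the name rules need a trailing DEFAULT entry. An illustrative shape, where the realm and the first rule are made-up placeholders rather than the actual config:

```
RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//
DEFAULT
```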
07-25-2017
11:46 PM
Hi All, did you find the fix? Thank you
03-04-2017
04:45 AM
1 Kudo
Check your livy.superusers setting under Spark -> Livy in Ambari; it should match your Zeppelin principal, in this case zeppelin-lake. Since you are using a custom Zeppelin service name, there may also be some bugs related to that. Also check the Zeppelin principal stored in zeppelin.server.kerberos.keytab. Many other issues are covered in this post and in this blog, but the Zeppelin service name and the principal in those posts is "zeppelin".
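A quick way to inspect the principal actually stored in the keytab; the path below is the common HDP default, so adjust it to whatever your zeppelin.server.kerberos.keytab points to:

```bash
# List the principals (with timestamps) stored in the Zeppelin server keytab
klist -kt /etc/security/keytabs/zeppelin.server.kerberos.keytab
```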
03-03-2017
04:38 PM
Hi @christophe menichetti, as @Predrag Monodic mentioned, you can use Blueprints for non-UI based installs. Unfortunately, the UI Wizard will not allow you to generate a Blueprint and Cluster Creation template after you have gone through all the screens. The simplest way to generate a Blueprint to start with is to try the following:

1. On a local VM cluster for testing (Vagrant, Docker, etc.), create a cluster that has the services, components, and configuration that you are interested in deploying in your production cluster.
2. Use the UI to deploy this local cluster, going through all the normal screens in the wizard.
3. Export the Blueprint from this running cluster. This REST call (sketched below) will generate a Blueprint based on the currently running cluster you set up in step 2.
4. Save this Blueprint and customize it as necessary.
5. Create a Cluster Creation Template that matches hostnames to the host groups from the exported Blueprint.

Please note that you may want to manually rename the host groups in the exported Blueprint, as they are generated using a "host_group_n" convention, which may not be useful for documenting your particular cluster. You can check out the following link on the Blueprints wiki to see how to make the REST call to export the Blueprint from a running cluster: https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-APIResourcesandSyntax Hope this helps!
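A minimal sketch of the export call from step 3; the Ambari host, port, credentials, and cluster name below are placeholders for your own values:

```bash
# Export a Blueprint from a running cluster via the Ambari REST API
curl -u admin:admin -H "X-Requested-By: ambari" \
  "http://ambari-host:8080/api/v1/clusters/MyCluster?format=blueprint" \
  -o exported-blueprint.json
```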
03-03-2017
11:18 AM
1 Kudo
"Client" means a set of binaries and libraries used to run commands and develop software for a particular Hadoop service. So, if you install the Hive client you can run beeline; if you install the HBase client you can run the HBase shell, and so on. Typically you install all clients on so-called Edge (or Gateway) nodes, used by end users to access cluster services. Some clients are also installed in the background (by Ambari) on master nodes for related services, like HDFS clients on NameNodes. Clients are passive components: unlike a DataNode or NodeManager, they don't run unless someone starts the related binaries.
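For illustration, the kind of commands each client makes available from an edge node; the Hive host below is an assumption:

```bash
hdfs dfs -ls /                                     # HDFS client
beeline -u "jdbc:hive2://hive-host:10000/default"  # Hive client; host is an assumption
hbase shell                                        # HBase client
```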
05-23-2017
11:10 AM
The docs say "MariaDB 10" but RHEL7 comes with "MariaDB 5".
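A quick way to confirm what your RHEL7 repos actually provide:

```bash
# Show the MariaDB server version available from the configured yum repos
yum info mariadb-server | grep -i version
```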
04-17-2017
07:00 PM
I faced this issue recently. It turned out that one of my datanodes was decommissioned (due to earlier maintenance). You might try checking the list of datanodes on the dfshealth page. Using default ports, for me it was: <<mynamenode>>:50070/dfshealth.html#tab-datanode That lists the datanodes and their status (active, decommissioned, space, etc.). Also note that recent versions of Ambari provide that link under the HDFS Summary "Quick Links" dropdown (it's called "NameNode UI").
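The same information is available from the command line if you prefer; run as the hdfs user (or any user with HDFS admin rights):

```bash
# Report live/dead/decommissioned datanodes without using the web UI
sudo -u hdfs hdfs dfsadmin -report
```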
09-07-2017
07:00 PM
Is this one-way trust encrypted between the KDC and AD?