Member since: 07-19-2018
Posts: 613
Kudos Received: 101
Solutions: 117
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5688 | 01-11-2021 05:54 AM |
| | 3812 | 01-11-2021 05:52 AM |
| | 9488 | 01-08-2021 05:23 AM |
| | 9288 | 01-04-2021 04:08 AM |
| | 38612 | 12-18-2020 05:42 AM |
08-12-2020 06:49 AM
1 Kudo
@Wilber My suggestion would be to not overwrite existing files, but to write to a staging location as an external table, then merge the staging data into the final internal Hive table with:

INSERT INTO final_table SELECT * FROM staging_table;
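A minimal sketch of that pattern, assuming the external staging table and the managed final table already exist (the table names and JDBC URL here are placeholders, not from the post):

```shell
# Sketch only: table names and the JDBC URL are placeholders.
# Land new files in the staging table's external location first,
# then append its rows into the managed final table.
cat > /tmp/merge_staging.sql <<'SQL'
INSERT INTO final_table SELECT * FROM staging_table;
-- Optionally clear the staging table afterwards:
-- TRUNCATE TABLE staging_table;
SQL

# Run it on a node that has beeline (placeholder connection string):
# beeline -u jdbc:hive2://hiveserver:10000 -f /tmp/merge_staging.sql
```

Because the final table is managed and staging is external, a failed load never touches the final data; you only merge once the staged files look good.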
08-07-2020 04:44 AM
@jloormoreira A couple of suggestions:

- If you are using the HDP sandbox, you should be able to spin up the VM and have Ambari and services installed out of the box; there is no need to run the cluster-install wizard. This is the HDP Sandbox: https://www.cloudera.com/downloads/hortonworks-sandbox/hdp.html?utm_source=mktg-tutorial
- If you are installing a cluster yourself, you need to back up and complete the documented steps to install ambari-server and ambari-agent and to set up ambari-server. Install as the root user, and do not configure SSL until the base install is complete.
- Complete host mapping for your FQDN (/etc/hosts) on your machine and in the VM, and make sure these are right. Make sure the VM hostname is right and persists across reboots.
- Next, in the VM, complete the passwordless-auth steps for the node by creating SSH keys (ssh-keygen) and adding the ~/.ssh/id_rsa.pub key to root's ~/.ssh/authorized_keys. Be sure to do the initial login (ssh root@yourfqdn) and answer "yes" the first time.
- Now you can go to Ambari to install the cluster. During cluster install, use id_rsa (not id_rsa.pub) to register hosts.

Once these basics are completed, the rest of the install should go fine. My last bit of advice: if the size of the VM you are using is limited, do not try to install all Ambari services. It takes a 16-32 GB instance to run the whole stack, and even that is almost too much for a single node. In the 8-16 GB range you will have issues trying to install everything, so I recommend installing only the basics (YARN, HDFS, Ambari Metrics) plus the other components you need.
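The passwordless-auth step above can be sketched as follows. A throwaway key path is used here as an example so nothing existing gets clobbered; on the real node you would use the default ~/.ssh/id_rsa and root's actual ~/.ssh/authorized_keys:

```shell
# Sketch of the passwordless-SSH setup described above.
# Example paths only; on the real node use ~/.ssh/id_rsa and ~/.ssh/authorized_keys.
KEY=/tmp/ambari_demo_rsa
ssh-keygen -t rsa -N "" -f "$KEY" -q            # generate a key pair with no passphrase

mkdir -p /tmp/demo_ssh
cat "$KEY.pub" >> /tmp/demo_ssh/authorized_keys  # authorize the public key
chmod 600 /tmp/demo_ssh/authorized_keys          # sshd refuses group/world-readable files

# The first login must accept the host key interactively:
# ssh root@yourfqdn        (answer "yes" once)
# In the Ambari cluster-install wizard, paste the PRIVATE key ($KEY), not $KEY.pub.
```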
06-29-2020 02:45 PM
Too cool man, great work!
06-29-2020 04:48 AM
@dewi A management pack is one or more custom Ambari services which can easily be added to many different Ambari versions via the management-pack install command. A custom service is something you would manually add to a single Ambari installation. So the management pack just makes the process easier and more applicable across multiple versions and stacks.
06-25-2020 05:18 AM
@RajeshLuckky If you follow the original post, you need the SSL key and cert in the JDBC string: "At the NiFi level make sure the cert file(s) are owned by the nifi user." For example, if you create the cert and key files in the folder /etc/nifi/ssl/ then you would execute:

chown -R nifi:nifi /etc/nifi/ssl/

This gives the files to the nifi user so NiFi can read them.
06-25-2020 05:13 AM
@abhinav_joshi It's related to permissions for the root user. First, you should avoid the root user; I also recommend using FQDNs, not IPs. It seems that the user has permissions on the localhost IP, but maybe not when connecting from the external IP. If you make a specific user, you need to grant them access to the database from a specific host. For example:

CREATE DATABASE hive;
CREATE USER 'hive'@'hdp3.cloudera.com' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hdp3.cloudera.com' WITH GRANT OPTION;
FLUSH PRIVILEGES;
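Those statements can be saved to a file and fed to the mysql client in one pass. The hostname and password below are the example values from the post; swap in your own FQDN and a real password:

```shell
# Write the grant statements (example host/password from the post) to a file.
cat > /tmp/hive_grants.sql <<'SQL'
CREATE DATABASE hive;
CREATE USER 'hive'@'hdp3.cloudera.com' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hdp3.cloudera.com' WITH GRANT OPTION;
FLUSH PRIVILEGES;
SQL

# Run it as a MySQL admin user on the database host:
# mysql -u root -p < /tmp/hive_grants.sql
```

The key point is the 'hive'@'hdp3.cloudera.com' part: MySQL matches accounts by user *and* connecting host, which is why a grant on localhost alone fails for external connections.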
06-25-2020 05:08 AM
@math23 No, you can also just copy the service directly into your Ambari. I suggest this route if you can't easily get a newer cluster. I tried to point you at how to install the custom service in the accepted solution. A management pack just makes that task a little easier, as well as being applicable to different stack versions.
06-24-2020 09:46 AM
@math23 I have confirmed your version of Ambari is not able to install management packs. The error you are getting indicates that "install-mpack" is not an allowed ambari-server command. You will need to use a newer version of Ambari/HDP. If you really have to install on the old version, you can also deliver a custom service to Ambari without a management pack (.tar.gz): unpack the contents and copy the Airflow service into the right Ambari version folder directly. I don't have the steps to do that for Airflow, but some other examples of how to get the version, and what the service folder contents look like, are included in these custom services:

https://github.com/EsharEditor/ambari-hue-service
https://github.com/abajwa-hw/ambari-flink-service
06-24-2020 07:30 AM
@math23 I will give it a go with that version. Where did you get the mpack archive? Just to make sure I get the same one. The one I used is from the GitHub repo I linked above:

ambari-server install-mpack --mpack=https://github.com/miho120/ambari-airflow-mpack/raw/master/airflow-service-mpack.tar.gz --verbose
ambari-server uninstall-mpack --mpack-name=airflow-ambari-mpack

I was just working in HDP3 and Ambari 2.7.4 and got the mpack installed without any issues. When adding the service, though, I got tons of issues because the mpack ships an old Airflow version, with related dependency errors. I am going to have to upgrade the services and make a new management pack to make it work on HDP3 with Airflow 1.10.10.
06-24-2020 06:55 AM
@bjornmy Ok, I understand. In this case, you have to convert all of the JSON object values to attributes and then back to JSON, not just the content. I was using only that one value as an example, and because in the original flow you were only sending content. I think it's best at this point to investigate the other two methods (JOLTTransform or UpdateRecord), as they should require less work handling all the other values.