Member since: 07-19-2018
613 Posts
101 Kudos Received
117 Solutions
My Accepted Solutions
| Views | Posted |
|---|---|
| 5688 | 01-11-2021 05:54 AM |
| 3812 | 01-11-2021 05:52 AM |
| 9488 | 01-08-2021 05:23 AM |
| 9289 | 01-04-2021 04:08 AM |
| 38620 | 12-18-2020 05:42 AM |
04-16-2020
06:49 AM
@Udhav You need to map an FQDN (fully qualified domain name) to your "localhost" via /etc/hosts. For example, I often use "hdp.cloudera.com":

```
$ cat /etc/hosts | grep 'cloudera.com'
1xx.xxx.xxx.xxx hdf.cloudera.com
1xx.xxx.xxx.xx  hdp.cloudera.com
```

Next, put the FQDN in the list of hosts during the Cluster Install Wizard, and be sure to complete the remaining required steps (SSH key, agent setup, etc.). When the Confirm Hosts modal fails, you can click the Failed link and open the modals to get to the full error.

The easiest way for me to spin up Ambari/Hadoop on my computer is Ambari Vagrant: https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide It provides an easy way to spin up 1-X nodes on my machine, and it handles all the SSH keys and host mappings. Using this, I can spin up Ambari on CentOS with just a few chained commands:

```
wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.7.0.0/ambari.repo \
  && yum --enablerepo=extras install epel-release -y \
  && yum install java java-devel ambari-server ambari-agent -y \
  && ambari-server setup -s \
  && ambari-server start \
  && ambari-agent start
```
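As a concrete sketch of the /etc/hosts step, the snippet below adds the mapping idempotently. It runs against a temp file for safety; the IP address is a made-up example, and on a real node you would edit /etc/hosts itself:

```shell
HOSTS_FILE=$(mktemp)            # stand-in for /etc/hosts in this sketch
FQDN="hdp.cloudera.com"         # example FQDN from the post
IP="192.168.56.101"             # assumed private IP; replace with your node's address

# Append the mapping only if it is not already present
grep -q "$FQDN" "$HOSTS_FILE" || echo "$IP $FQDN" >> "$HOSTS_FILE"

# Verify the entry exists before running the Cluster Install Wizard
grep "$FQDN" "$HOSTS_FILE"      # prints: 192.168.56.101 hdp.cloudera.com
rm -f "$HOSTS_FILE"
```

Running the append through `grep -q … ||` means you can re-run the script without stacking duplicate entries.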
04-16-2020
06:35 AM
@Damian_S Yes, I always use MySQL/MariaDB for the Hive metastore. If you still have the original data, can't you just move it to the new location? That should be part of your migration steps regardless of the backend you use for the metastore.
04-16-2020
05:50 AM
@Damian_S If you have re-created the cluster, did you migrate the original Hive metastore data? The metastore would have stored the details about the Hive tables in HDFS. If you no longer have the metastore data, you will need to recreate the Hive schemas/databases and re-execute the Hive table CREATE statements against the HDFS files. If the metastore is intact (not recreated, reset, etc.), then you may be experiencing permission issues against the files in HDFS.
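If the metastore data really is gone, re-registering the existing HDFS files might look like the hypothetical HiveQL below. The database name, table name, columns, delimiter, and path are all made-up examples; only the pattern (an EXTERNAL table pointed at the existing directory) is the point:

```sql
-- Hypothetical example: re-register files already sitting in HDFS as a Hive table.
CREATE DATABASE IF NOT EXISTS sales;

CREATE EXTERNAL TABLE IF NOT EXISTS sales.orders (
  order_id INT,
  amount   DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/sales/orders';   -- existing HDFS directory; EXTERNAL so a DROP won't delete the files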
04-16-2020
05:42 AM
@Cl0ck I am not sure if this helps, but in the past, when I have had NTP stability issues across a cluster, I point the master Ambari server at the internal NTP server I want. Then, on all the rest of the nodes, I use the master Ambari server's hostname/IP instead of the NTP servers. This lets the main machine get the internal clock time and share it with all the other nodes.
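In ntp.conf terms, that layout might look like the fragments below. The hostnames are placeholders for illustration, not values from the post:

```
# /etc/ntp.conf on the Ambari master (syncs with your internal clock source):
server ntp.internal.example.com iburst

# /etc/ntp.conf on every other cluster node (syncs with the Ambari master):
server ambari-master.example.com iburst
```

The result is a two-tier setup: one machine talks to the internal time source, and the rest of the cluster converges on that machine.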
04-16-2020
05:36 AM
@bhara I don't see an actual error message in there; I just notice the result output has rows = none. Does the user you are logging into HUE with have permission to see the database and tables? Can you share the Hive configuration lines from your hue.ini?
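For reference, the Hive-related hue.ini lines in question usually live in the `[beeswax]` section and look something like this. The host, port, and path here are assumptions; substitute your own cluster's values:

```ini
# Hypothetical hue.ini fragment for the Hive connection
[beeswax]
  hive_server_host=hiveserver2.example.com
  hive_server_port=10000
  hive_conf_dir=/etc/hive/conf
```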
04-16-2020
05:29 AM
@Soc You should go to /etc/yum.repos.d/ and clean up your repo files. Delete any that you do not need. Next execute:

```
yum clean all
```

Then try again.
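A sketch of that cleanup, run against a temp directory for safety (on a real host the directory would be /etc/yum.repos.d/, and the repo file names here are invented examples):

```shell
REPO_DIR=$(mktemp -d)                                      # stand-in for /etc/yum.repos.d/
touch "$REPO_DIR/ambari.repo" "$REPO_DIR/stale-HDP.repo"   # example repo files

# Delete the repo files you no longer need (here, the stale one)
rm -f "$REPO_DIR/stale-HDP.repo"

ls "$REPO_DIR"        # only the repos you want should remain
# On the real host, follow up with: yum clean all
rm -rf "$REPO_DIR"
```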
04-13-2020
06:16 AM
@ForrestGump No, out of the box NiFi should work with all Processors and Controller Services. The stock configuration of NiFi should work without any issues, and you should not see inconsistencies or stability problems unless you are exceeding the resources available on the NiFi node(s). If you are seeing specific issues with "Regex, prepend, and other items", each should give a very specific error. Sometimes the errors are not shown in the UI; you have to tail the NiFi app logs to see the full error information.
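Pulling the full error detail out of the app log might look like the sketch below. It uses a temp file with fabricated sample lines so it is self-contained; on a real install you would grep the actual log (commonly `$NIFI_HOME/logs/nifi-app.log`, though the path depends on your install):

```shell
LOG_FILE=$(mktemp)   # stand-in for $NIFI_HOME/logs/nifi-app.log
printf '%s\n' \
  '2020-04-13 06:00:01 INFO  flow started' \
  '2020-04-13 06:00:02 ERROR ReplaceText[id=example] failed to process' > "$LOG_FILE"

# Equivalent of: tail -f "$NIFI_HOME/logs/nifi-app.log" | grep ERROR
grep 'ERROR' "$LOG_FILE"
rm -f "$LOG_FILE"
```

The ERROR lines usually carry the processor id and the underlying exception, which is the detail the UI bulletin truncates.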
04-09-2020
11:52 AM
1 Kudo
@bhara You do not have to use LDAP. You can create the users in the HUE admin, using the first admin user you created. If you want to configure LDAP, please see the official documentation here: https://docs.gethue.com/administrator/configuration/server/#ldap You will need to make the LDAP hue.ini changes via Ambari in HUE -> Config -> Advanced -> Advanced Hue-Ini and restart HUE after each change.

In your error above, I notice two issues:

1. SSL configuration for HDFS. Your HUE truststore must have the SSL certs for the HDFS hosts:
   - https://gethue.com/configure-hue-with-https-ssl/ (bottom section)
   - https://docs.cloudera.com/documentation/enterprise/5-11-x/topics/cm_sg_ssl_hue.html (top section)
2. HDFS Configuration - Doc Here

The SSL example links above are not specific to your case (HDP) but still apply. I am also assuming you have HDFS secured. The links I shared for SSL outline the fundamentals required to put the right HDFS and SSL settings in hue.ini for secure access to HDFS. The HDFS Configuration link is the official gethue.com documentation for HDFS. Again, you will need to make the SSL hue.ini changes via Ambari in HUE -> Config -> Advanced -> Advanced Hue-Ini and restart HUE after each change.
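To make the two issues concrete, an SSL-enabled HDFS setup in hue.ini might look like the fragment below. The hostnames, ports, and truststore path are all assumptions for illustration; take the authoritative key names from the linked documentation:

```ini
# Hypothetical hue.ini fragment for secure HDFS access
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      fs_defaultfs=hdfs://namenode.example.com:8020
      webhdfs_url=https://namenode.example.com:50470/webhdfs/v1

[desktop]
  ssl_cacerts=/etc/hue/conf/truststore.pem   # must contain the HDFS host certs
```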
04-08-2020
01:26 PM
1 Kudo
@ForrestGump There must be some configuration difference then. I created a simple flow and was able to get the output below, using the exact processor I screenshotted above, on your "Before" data:

```
Year,Day,Hour,Minute,ID_for_SW_Plasma_spacecraft,Percent_of_interpolation,Timeshift,RMS_Timeshift,RMS_Min_var,Time_btwn_observation_sec,Field_magnutude_average_nT,BY_nT(GSM),BZ_nT_(GSM),RMS_SD_B_scalar_nT,RMS_SD_field_vector_nT,Speed_km/s,Alfven_mach_number,Magnetosonic_Mach_number,BSN_location_Xgse_Re
2019,1,0,0,51,100,2788,164,0.12,999999,5.11,2.00,2.73,0.08,1.03,451.0,9.8,6.5,13.15
2019,1,0,1,51,100,2810,159,0.12,37,5.10,2.33,2.58,0.11,1.04,451.3,9.8,6.5,13.10
2019,1,0,2,51,80,2852,109,0.09,18,4.86,2.37,2.56,0.12,0.56,454.7,10.3,6.7,13.07
2019,1,0,3,51,67,2951,66,0.06,-39,4.78,2.21,2.55,0.03,0.33,452.3,11.0,6.8,13.00
2019,1,0,4,51,100,3025,7,0.00,-13,4.80,2.17,2.37,0.03,0.14,451.4,11.2,6.8,13.00
2019,1,0,5,99,80,2973,111,0.09,111,4.94,2.68,2.39,0.13,0.55,99999.9,999.9,99.9,13.19
2019,1,0,6,51,67,3074,20,0.02,-40,4.88,2.54,2.01,0.02,0.28,451.0,9.8,6.5,13.27
2019,1,0,7,51,50,3114,9,0.00,19,4.82,2.37,2.93,0.02,0.14,451.0,9.9,6.5,13.29
2019,1,0,8,99,999,999999,999999,99.99,999999,9999.99,9999.99,9999.99,9999.99,9999.99,99999.9,999.9,99.9,9999.99
2019,1,0,9,99,100,3036,0,0.00,999999,5.16,3.34,2.44,0.00,0.00,99999.9,999.9,99.9,13.24
2019,1,0,10,99,100,3036,0,0.00,60,5.16,3.34,2.43,0.00,0.00,99999.9,999.9,99.9,13.24
```

I have dropped the template for you here: https://github.com/steven-dfheinz/NiFi-Templates/blob/master/Replace_Text_Demo.xml
04-08-2020
12:47 PM
1 Kudo
@bhara Change line 193 (shown below) and try to start again.

File: /var/lib/ambari-agent/cache/common-services/HUE/4.6.0/package/scripts/params.py

```
dfs_namenode_http_address = config['configurations']['hdfs-site']['dfs.namenode.http-address']
```

to

```
dfs_namenode_http_address = 'localhost'
```

That will give dfs_namenode_http_address a value and get past the error.