Member since: 07-19-2018
Posts: 613
Kudos Received: 100
Solutions: 117

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3153 | 01-11-2021 05:54 AM
 | 2251 | 01-11-2021 05:52 AM
 | 6011 | 01-08-2021 05:23 AM
 | 5580 | 01-04-2021 04:08 AM
 | 25845 | 12-18-2020 05:42 AM
04-22-2020
11:59 AM
Here is the main repo link for Ambari 2.7.4 (the last release with public repos before they moved behind a paywall): https://docs.cloudera.com/HDPDocuments/Ambari-2.7.4.0/bk_ambari-installation/content/ch_obtaining-public-repos.html Once you have the right 2.7.4 repos, you will be able to install on Ubuntu 18.
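For reference, the Ubuntu 18 list file that the docs page points to has roughly this shape (a sketch from memory; verify the exact URL against the docs page, since the repo location may have changed with the paywall move). It would be saved as /etc/apt/sources.list.d/ambari.list, followed by an apt-get update:

```
deb http://public-repo-1.hortonworks.com/ambari/ubuntu18/2.x/updates/2.7.4.0 Ambari main
```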
04-22-2020
11:42 AM
What repo did you add to your Ubuntu server? Share your shell history if you can. HDP 3 should have repos for Ubuntu 18: https://docs.cloudera.com/HDPDocuments/Ambari-2.7.5.0/bk_ambari-installation/content/hdp_315_repositories.html
04-22-2020
11:31 AM
@ruthika Here is a working Hive 3.0 (HDP 3, no SSL, no Kerberos) config for Hue 4.6.0:

[beeswax]
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=hdp.cloudera.com
# Binary thrift port for HiveServer2.
#hive_server_port=10000
# Http thrift port for HiveServer2.
#hive_server_http_port=10001
# Host where LLAP is running
## llap_server_host = localhost
# LLAP binary thrift port
## llap_server_port = 10500
# LLAP HTTP Thrift port
## llap_server_thrift_port = 10501
# Alternatively, use Service Discovery for LLAP (Hive Server Interactive) and/or Hiveserver2, this will override server and thrift port
# Whether to use Service Discovery for LLAP
## hive_discovery_llap = true
# is llap (hive server interactive) running in an HA configuration (more than 1)
# important as the zookeeper structure is different
## hive_discovery_llap_ha = false
# Shortcuts to finding LLAP znode Key
# Non-HA - hiveserver-interactive-site - hive.server2.zookeeper.namespace ex hive2 = /hive2
# HA-NonKerberized - (llap_app_name)_llap ex app name llap0 = /llap0_llap
# HA-Kerberized - (llap_app_name)_llap-sasl ex app name llap0 = /llap0_llap-sasl
## hive_discovery_llap_znode = /hiveserver2-hive2
# Whether to use Service Discovery for HiveServer2
hive_discovery_hs2 = true
# Hiveserver2 is hive-site hive.server2.zookeeper.namespace ex hiveserver2 = /hiveserver2
hive_discovery_hiveserver2_znode = /hiveserver2
# Applicable only for LLAP HA
# To keep the load on zookeeper to a minimum
# ---- we cache the LLAP activeEndpoint for the cache_timeout period
# ---- we cache the hiveserver2 endpoint for the length of session
# configurations to set the time between zookeeper checks
## cache_timeout = 60
# Host where Hive Metastore Server (HMS) is running.
# If Kerberos security is enabled, the fully-qualified domain name (FQDN) is required.
#hive_metastore_host=hdp.cloudera.com
# Configure the port the Hive Metastore Server runs on.
#hive_metastore_port=9083
# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/etc/hive/conf
# Timeout in seconds for thrift calls to Hive service
## server_conn_timeout=120
# Choose whether to use the old GetLog() thrift call from before Hive 0.14 to retrieve the logs.
# If false, use the FetchResults() thrift call from Hive 1.0 or more instead.
## use_get_log_api=false
# Limit the number of partitions that can be listed.
## list_partitions_limit=10000
# The maximum number of partitions that will be included in the SELECT * LIMIT sample query for partitioned tables.
## query_partitions_limit=10
# A limit to the number of rows that can be downloaded from a query before it is truncated.
# A value of -1 means there will be no limit.
## download_row_limit=100000
# A limit to the number of bytes that can be downloaded from a query before it is truncated.
# A value of -1 means there will be no limit.
## download_bytes_limit=-1
# Hue will try to close the Hive query when the user leaves the editor page.
# This will free all the query resources in HiveServer2, but also make its results inaccessible.
## close_queries=false
# Hue will use at most this many HiveServer2 sessions per user at a time.
# For Tez, increase the number to more if you need more than one query at the time, e.g. 2 or 3 (Tez has a maximum of 1 query by session).
## max_number_of_sessions=1
# Thrift version to use when communicating with HiveServer2.
# Version 11 comes with Hive 3.0. If issues, try 7.
thrift_version=11
# A comma-separated list of white-listed Hive configuration properties that users are authorized to set.
## config_whitelist=hive.map.aggr,hive.exec.compress.output,hive.exec.parallel,hive.execution.engine,mapreduce.job.queuename
# Override the default desktop username and password of the hue user used for authentications with other services.
# e.g. Used for LDAP/PAM pass-through authentication.
## auth_username=hive
## auth_password=hive
# Use SASL framework to establish connection to host.
use_sasl=true

Pay close attention to the values I have uncommented, especially the ZooKeeper discovery, Thrift version, and SASL settings. Regarding your second issue: please monitor /var/log/hue/error.log for any errors while operating the dashboard, and share those with us so we can help. Additionally, you may need to use the browser dev tools and report any client-side errors associated with the dashboard tools and widgets. You can also post and find help on the Hue Discourse.
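To see what the discovery settings resolve to, you can list the children of /hiveserver2 with zkCli.sh on a ZooKeeper host. HiveServer2 registers znodes whose names encode the server URI, and clients doing service discovery parse the host:port out of the znode name. A minimal sketch of that parsing (the sample znode string below is illustrative, not from a live cluster):

```shell
# HiveServer2 publishes znodes under /hiveserver2 named like this sample;
# service discovery extracts host:port from the znode name.
ZNODE="serverUri=hdp.cloudera.com:10000;version=3.1.0;sequence=0000000000"
SERVER_URI="${ZNODE%%;*}"            # keep everything before the first ';'
HOSTPORT="${SERVER_URI#serverUri=}"  # strip the 'serverUri=' prefix
echo "$HOSTPORT"                     # -> hdp.cloudera.com:10000
```

If nothing shows up under the znode path, HiveServer2 is not registering and Hue's discovery will fail regardless of the config above.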
04-22-2020
05:11 AM
@Udhav Yes. Once you complete the rest of the node preparation (install repos, ambari-server, ambari-agent), you would visit http://[fqdn]:8080 and begin the rest of the cluster install via Ambari.
04-22-2020
04:26 AM
@Udhav You need to set the hostname to the FQDN (not localhost), map the hostname in /etc/hosts, generate the ssh keys, add the public key to authorized_keys, and log in. That's it. Here is the full output of the above steps. Please note that the Vagrant VM I use already had ssh set up, but I did it again (you will notice 2 keys in authorized_keys).

[root@c7401 ~]# hostname
c7401.ambari.apache.org
[root@c7401 ~]# hostname -f
c7401.ambari.apache.org
[root@c7401 ~]# cat /etc/hosts | grep c7401
192.168.74.101 c7401.ambari.apache.org c7401
[root@c7401 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
[root@c7401 ~]# cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDbUBYPL9JcoThQ5HWjZLpaHPEd3UlQaPcxzs1p08+pmWc6Q8JT+SGJ4G9087sDMnO1HsutzaoRchlTVN8AN1p7bg9mSnUVGYBSfrmz9dNiYujBCmEnymEB2qn30u3YncFE8um+fQRafrkcMTGwTfclB1CU9mP4FZEIW0+fWBqHBnR3mmkaVUAUgVATLTKIw8dMngDTRG9zVS6HVEpMPQYXl6mK5Oq0XLq31AZpYb7Fia4plw2mK7wQLUxxc0dSa7NQ3bzvr+lO3LRzEaAVvk4ZlqXddX23Z8PpsX7ZhS8lnQn4sojH82+BndVUO9N8VzS3LxzSAjmAThEDC47eXQyV root@c7401.ambari.apache.org
[root@c7401 ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@c7401 ~]# cat ~/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDbUBYPL9JcoThQ5HWjZLpaHPEd3UlQaPcxzs1p08+pmWc6Q8JT+SGJ4G9087sDMnO1HsutzaoRchlTVN8AN1p7bg9mSnUVGYBSfrmz9dNiYujBCmEnymEB2qn30u3YncFE8um+fQRafrkcMTGwTfclB1CU9mP4FZEIW0+fWBqHBnR3mmkaVUAUgVATLTKIw8dMngDTRG9zVS6HVEpMPQYXl6mK5Oq0XLq31AZpYb7Fia4plw2mK7wQLUxxc0dSa7NQ3bzvr+lO3LRzEaAVvk4ZlqXddX23Z8PpsX7ZhS8lnQn4sojH82+BndVUO9N8VzS3LxzSAjmAThEDC47eXQyV root@c7401.ambari.apache.org
[root@c7401 ~]# ssh root@c7401.ambari.apache.org
The authenticity of host 'c7401.ambari.apache.org (192.168.74.101)' can't be established.
ECDSA key fingerprint is SHA256:mjGym7gkqWjPvW2JXhKjqWl4XC6wuhgNIukldSVtkFk.
ECDSA key fingerprint is MD5:b7:d4:73:92:03:69:ae:63:af:69:19:96:51:2b:bc:de.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'c7401.ambari.apache.org,192.168.74.101' (ECDSA) to the list of known hosts.
Last login: Wed Apr 22 11:21:57 2020
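The key-generation steps above can be condensed into a short script. This sketch works in a scratch directory so it is safe to try anywhere; on a real node you would target ~/.ssh as root instead:

```shell
# Safe sketch of the passwordless-ssh setup above, using a temp dir
# instead of ~/.ssh so it can be run without touching real config.
DIR="$(mktemp -d)"
ssh-keygen -t rsa -N "" -f "$DIR/id_rsa" -q       # generate keypair, no passphrase
cat "$DIR/id_rsa.pub" >> "$DIR/authorized_keys"   # authorize our own key
chmod 600 "$DIR/authorized_keys"                  # sshd requires strict permissions
grep -c "^ssh-rsa" "$DIR/authorized_keys"         # one authorized key
```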
04-16-2020
07:32 AM
@mayank_tripathi It's free, no issues there. Yes, I believe your summary is correct. If you are using the /trial folder, you can put the .cer file in there and then execute the truststore command. I like to keep all my files in the same place. Just make sure, when you are done, that the /trial folder has the right permissions so the nifi user can read the files, and that the files are copied to all NiFi nodes. If you skip the ownership fix or the copy-to-all-nodes step, the controller service will throw an error.
04-16-2020
06:49 AM
@Udhav You need to map an FQDN (fully qualified domain name) to your "localhost" via /etc/hosts. For example, I often use "hdp.cloudera.com":

cat /etc/hosts | grep 'cloudera.com'
1xx.xxx.xxx.xxx hdf.cloudera.com
1xx.xxx.xxx.xx hdp.cloudera.com

Next you put the FQDN in the list of hosts during the Cluster Install Wizard. Be sure to complete the next required steps for the ssh key, agent setup, etc. When the Confirm Hosts step fails, you can click the Failed link, open the modals, and get to the full error.

The easiest way for me to spin up Ambari/Hadoop on my computer is Ambari Vagrant: https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide This provides an easy way to spin up 1-X nodes on my computer, and it handles all the ssh keys and host mappings. Using this I can spin up Ambari on CentOS with just a few chained commands:

wget -O /etc/yum.repos.d/ambari.repo http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.7.0.0/ambari.repo && yum --enablerepo=extras install epel-release -y && yum install java java-devel ambari-server ambari-agent -y && ambari-server setup -s && ambari-server start && ambari-agent start
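A quick pre-flight check before running the wizard (a sketch; hdp.cloudera.com is just the example FQDN from above, substitute your own):

```shell
# Check whether the FQDN resolves (via /etc/hosts or DNS) before the
# Cluster Install Wizard tries to reach it.
FQDN="hdp.cloudera.com"
if getent hosts "$FQDN" >/dev/null; then
  RESULT="ok: $FQDN resolves"
else
  RESULT="missing: add $FQDN to /etc/hosts"
fi
echo "$RESULT"
```

If this prints the "missing" line, Confirm Hosts will fail for the same reason.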
04-16-2020
06:35 AM
@Damian_S Yes, I always use MySQL/MariaDB for the Hive metastore. If you have the original data, can you just move it to the new location? This should be part of your migration steps regardless of the backend for the metastore.
04-16-2020
05:50 AM
@Damian_S If you have re-created the cluster, did you migrate the original Hive metastore data? The metastore stores the details about the Hive tables in HDFS. If you no longer have this metastore data, you will need to recreate the Hive schemas/databases and re-execute the Hive table CREATE statements against the HDFS files. If the metastore is intact (not recreated, reset, etc.), then you may be experiencing permission issues against the files in HDFS.