Member since: 02-02-2021
Posts: 116
Kudos Received: 2
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 745 | 08-13-2021 09:44 AM
 | 3695 | 04-27-2021 04:23 PM
 | 1372 | 04-26-2021 10:47 AM
 | 923 | 03-29-2021 06:01 PM
 | 2750 | 03-17-2021 04:53 PM
10-07-2021
08:52 AM
Hi @pvishnu, thanks for the response. This is a new cluster, but Ranger was previously installed on the node, so I modified some of the data in the Ambari Postgres DB to make Ambari think that Ranger is already installed. Is there any documentation on what I should do to make sure that everything is synced up? Thanks,
09-23-2021
05:56 AM
Ambari version 2.6.1. I was not adding HBase; I was trying to add an mpack, so I tried to recreate the tar.gz file and install the mpack via the command line. I may have been doing things too fast and may have accidentally deleted something. I'm not sure, but I don't think I touched HBase.
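For reference, this is roughly the command-line sequence I was using to rebuild and install the mpack (the archive path and mpack directory name below are placeholders, not my actual ones):

# Rebuild the mpack archive from its source directory (placeholder paths).
tar -czf /tmp/my-mpack.tar.gz -C /opt/mpack-src my-mpack

# Install the management pack via the Ambari CLI, then restart the server
# so the new stack/service definitions are picked up.
ambari-server install-mpack --mpack=/tmp/my-mpack.tar.gz --verbose
ambari-server restart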
09-09-2021
09:07 AM
DistCp and import/export are not supported for ACID tables, so you need to follow the mechanism below. You have two approaches:

Approach 1
=============
1. Assuming that you have the ACID table in both the source and target clusters.
2. Create an external table in the source and target clusters.
3. Copy the data from the ACID table to the external table in the SOURCE cluster: INSERT INTO external SELECT * FROM acid.
4. Perform DistCp of the external table from the source to the target cluster.
5. Copy the data from the external table to the ACID table in the TARGET cluster: INSERT INTO acid SELECT * FROM external.

Approach 2
=========
Use DLM.

Reference: https://community.cloudera.com/t5/Support-Questions/HIVE-ACID-table-Not-enough-history-available-for-0-x-Oldest/td-p/204551
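To make Approach 1 concrete, here is a minimal sketch assuming a managed ACID table acid_tbl and an external staging table ext_tbl with the same schema already exist on both clusters (hostnames, database, paths, and table names are placeholders):

# Step 3 - on the SOURCE cluster, stage the ACID data into the external table.
beeline -u "jdbc:hive2://source-hs2:10000/db1" -e "INSERT INTO ext_tbl SELECT * FROM acid_tbl;"

# Step 4 - copy the external table's HDFS directory to the target cluster.
hadoop distcp hdfs://source-nn:8020/warehouse/ext_tbl hdfs://target-nn:8020/warehouse/ext_tbl

# Step 5 - on the TARGET cluster, load the staged data back into the ACID table.
beeline -u "jdbc:hive2://target-hs2:10000/db1" -e "INSERT INTO acid_tbl SELECT * FROM ext_tbl;"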
08-31-2021
01:15 PM
1 Kudo
Hi, With Hadoop 3 there is an intra-DataNode disk balancer as well as the DataNode balancer, which can help you distribute and balance the data across the nodes in your cluster. The recommended way is certainly to have all DataNodes with the same number of disks and the same disk sizes, but it is possible to have a different configuration for some DataNodes; you will just need to rebalance your DataNodes quite often, which takes compute and network resources. Another thing to consider when you have disks of different sizes is the DataNode volume choosing policy, which is set to round robin by default; you should consider choosing available space instead. I suggest you read this article from Cloudera as well: https://blog.cloudera.com/how-to-use-the-new-hdfs-intra-datanode-disk-balancer-in-apache-hadoop/ Best Regards
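As a rough illustration (the DataNode hostname is a placeholder, and dfs.disk.balancer.enabled must be set to true in hdfs-site.xml), the intra-DataNode disk balancer and the volume choosing policy look like this:

# Generate a balancing plan for one DataNode; the command prints the HDFS
# path of the plan file it wrote (typically under /system/diskbalancer/).
hdfs diskbalancer -plan dn1.example.com

# Execute the generated plan and check its progress.
hdfs diskbalancer -execute /system/diskbalancer/<plan-dir>/dn1.example.com.plan.json
hdfs diskbalancer -query dn1.example.com

# For disks of different sizes, switch the volume choosing policy in
# hdfs-site.xml from round robin to available space:
#   dfs.datanode.fsdataset.volume.choosing.policy =
#     org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy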
08-13-2021
09:44 AM
OK, never mind, it was a firewall issue. Everything is working now. Thanks,
08-12-2021
06:23 AM
Thanks, it worked.
08-09-2021
10:48 AM
2 Kudos
Hi @ryu, I recently copied the Hive tables from our production cluster to a non-production cluster by running DistCp on the Hive warehouse directory location from prod to non-prod. After running DistCp, we created the table schema on non-prod to match prod using 'CREATE TABLE'. If a table is partitioned, then please use 'ALTER TABLE' to add the partitions. We are also using Hive replication to copy the tables from our prod to the DR cluster. If this has helped you, then please mark the answer as the solution.
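A rough sketch of that flow, with placeholder hosts, database, table name, and schema:

# 1. DistCp the table's warehouse directory from prod to non-prod.
hadoop distcp \
  hdfs://prod-nn:8020/apps/hive/warehouse/db1.db/sales \
  hdfs://nonprod-nn:8020/apps/hive/warehouse/db1.db/sales

# 2. Recreate the same table schema on the non-prod cluster.
beeline -u "jdbc:hive2://nonprod-hs2:10000/db1" -e "CREATE TABLE sales (id INT, amount DOUBLE) PARTITIONED BY (dt STRING) STORED AS ORC;"

# 3. For a partitioned table, register the copied partitions, for example:
beeline -u "jdbc:hive2://nonprod-hs2:10000/db1" -e "ALTER TABLE sales ADD PARTITION (dt='2021-08-01');"
# (MSCK REPAIR TABLE sales; can also discover all copied partitions in one go.)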
07-23-2021
03:41 AM
Hello @ryu, As mentioned by @arunek95, we assume Phoenix is enabled for the cluster; if not, kindly enable Phoenix and try the command again. The logging indicates HDP v2.6.1.0 with Phoenix v4.7. The directory "/usr/lib/phoenix/" has the Phoenix client, and you mentioned the same directory has the Phoenix server JAR as well. Kindly verify that the permissions on the JAR are correct and confirm via "jar -tvf" on the Phoenix server JAR that the class "MetaDataEndpointImpl" is included in it. The error indicates that Phoenix is failing while creating the SYSTEM tables (upon the first connection to Phoenix). In our internal setup, we see the Phoenix server JAR is present in the HBase lib path as well, pointing to the Phoenix server JAR in the Phoenix lib path as a symlink: /usr/hdp/<Version>/hbase/lib/phoenix-server.jar -> /usr/hdp/<Version>/phoenix/phoenix-server.jar. Kindly ensure the Phoenix server JAR is present in the HBase lib directory as well. Additionally, review the HBase Master logs to check for the error message at the HBase level. - Smarak
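To make those checks concrete, something along these lines (the HDP version string and paths are placeholders; adjust them to wherever Phoenix is actually installed on the node):

# Placeholder for the installed HDP version string.
HDP_VERSION="<your-hdp-version>"

# 1. Check ownership and permissions on the Phoenix server JAR.
ls -l /usr/hdp/${HDP_VERSION}/phoenix/phoenix-server.jar

# 2. Confirm MetaDataEndpointImpl is packaged inside the JAR.
jar -tvf /usr/hdp/${HDP_VERSION}/phoenix/phoenix-server.jar | grep MetaDataEndpointImpl

# 3. Ensure the HBase lib directory also has (a symlink to) the server JAR.
ls -l /usr/hdp/${HDP_VERSION}/hbase/lib/phoenix-server.jar
# If it is missing, create the symlink and restart HBase:
# ln -s /usr/hdp/${HDP_VERSION}/phoenix/phoenix-server.jar /usr/hdp/${HDP_VERSION}/hbase/lib/phoenix-server.jar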
07-02-2021
11:19 AM
Hello @ryu, Refer to the link below for enabling PQS via Ambari: https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.5.3/bk_data-access/content/ch_using-phoenix.html#enabling-phoenix

From that link, to enable Phoenix with Ambari:
1. Open Ambari.
2. Select the Services tab > HBase > Configs tab.
3. Scroll down to the Phoenix SQL settings.
4. (Optional) Reset the Phoenix Query Timeout.
5. Click the Enable Phoenix slider button.

I would also recommend reviewing the doc below for installing Phoenix via the command line: https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/ch_install_phoenix_chapter.html
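Once Phoenix and PQS are enabled, a quick sanity check from the command line could look like this (the PQS hostname is a placeholder; 8765 is the usual Query Server default port):

# Connect to the Phoenix Query Server with the thin JDBC client.
/usr/hdp/current/phoenix-client/bin/sqlline-thin.py http://pqs-host.example.com:8765
# Then, inside sqlline, list the tables to confirm the connection works:
# !tables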
06-28-2021
11:31 AM
@Scharan Thanks for the response. So I added this in the metainfo.xml:

<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      ...
      <quickLinksConfigurations-dir>quicklinks</quickLinksConfigurations-dir>
      <quickLinksConfigurations>
        <quickLinksConfiguration>
          <fileName>quicklinks.json</fileName>
          <default>true</default>
        </quickLinksConfiguration>
      </quickLinksConfigurations>
    </service>
  </services>
</metainfo>

And this is the quicklinks.json file:

{
  "name": "default",
  "description": "default quick links configuration",
  "configuration": {
    "protocol": {
      "type": "https",
      "checks": [
        {
          "property": "dfs.http.policy",
          "desired": "HTTPS_ONLY",
          "site": "hdfs-site"
        }
      ]
    },
    "links": [
      {
        "name": "namenode_ui",
        "label": "NameNode UI",
        "url": "%@://%@:%@",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      },
      {
        "name": "namenode_logs",
        "label": "NameNode Logs",
        "url": "%@://%@:%@/logs",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      },
      {
        "name": "namenode_jmx",
        "label": "NameNode JMX",
        "url": "%@://%@:%@/jmx",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      },
      {
        "name": "Thread Stacks",
        "label": "Thread Stacks",
        "url": "%@://%@:%@/stacks",
        "requires_user_name": "false",
        "port": {
          "http_property": "dfs.namenode.http-address",
          "http_default_port": "50070",
          "https_property": "dfs.namenode.https-address",
          "https_default_port": "50470",
          "regex": "\\w*:(\\d+)",
          "site": "hdfs-site"
        }
      }
    ]
  }
}

I have restarted ambari-server but still do not see the quick links in the Ambari UI. Any help is much appreciated. Thanks,
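In case it helps narrow things down, this is roughly how I am laying out and checking the files on the Ambari server host (the stack, version, and service names are placeholders for wherever this service definition actually lives):

# The directory named in <quickLinksConfigurations-dir> must sit next to
# metainfo.xml inside the service definition on the Ambari server host.
ls /var/lib/ambari-server/resources/stacks/<STACK>/<VERSION>/services/<SERVICE>/
#   metainfo.xml   quicklinks/quicklinks.json   ...

# Validate the JSON and restart the server so the definition is re-read.
python -m json.tool /var/lib/ambari-server/resources/stacks/<STACK>/<VERSION>/services/<SERVICE>/quicklinks/quicklinks.json
ambari-server restart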