Member since: 09-24-2015
Posts: 178
Kudos Received: 113
Solutions: 28
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3399 | 05-25-2016 02:39 AM |
 | 3617 | 05-03-2016 01:27 PM |
 | 842 | 04-26-2016 07:59 PM |
 | 14451 | 03-24-2016 04:10 PM |
 | 2075 | 02-02-2016 11:50 PM |
06-02-2016
12:28 PM
A. Run Ambari Metrics in Distributed Mode rather than Embedded
If you are running with more than 3 nodes, I strongly suggest running in distributed mode and writing hbase.root.dir contents to HDFS directly, rather than to the local disk of a single node. This applies to already installed and running IOP clusters. In the Ambari Web UI, select the Ambari Metrics service, navigate to Configs, and update the following properties:
General > Metrics Service operation mode = distributed
Advanced ams-hbase-site > hbase.cluster.distributed = true
Advanced ams-hbase-site > hbase.root.dir = hdfs://namenode.fqdn.example.org:8020/amshbase
Then restart the Metrics Collector and the affected Metrics Monitors.
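If you prefer to make the change from the command line, something like the following should work with Ambari's bundled configs.sh script. This is a minimal sketch: the Ambari host, cluster name, credentials, and the exact property keys (timeline.metrics.service.operation.mode in ams-site, and the hbase.* keys in ams-hbase-site) are assumptions to verify against your Ambari/IOP version before running.
# Assumed script path, host, cluster name, and credentials; adjust for your environment.
cd /var/lib/ambari-server/resources/scripts
./configs.sh -u admin -p admin set ambari.example.org MyCluster ams-site "timeline.metrics.service.operation.mode" "distributed"
./configs.sh -u admin -p admin set ambari.example.org MyCluster ams-hbase-site "hbase.cluster.distributed" "true"
./configs.sh -u admin -p admin set ambari.example.org MyCluster ams-hbase-site "hbase.root.dir" "hdfs://namenode.fqdn.example.org:8020/amshbase"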
10-18-2017
08:29 PM
@Raj Sivanesan Here's what I did in a lab environment. I wound up with over 20k partitions on a table (d'oh) and was OK with blowing away the table/database. I can't confirm that this should be done on a production cluster, so use it with caution. Feedback is welcome.
Back up the Hive Metastore database first:
mysqldump -u root -p hivedb >> hivedb.bak
Then, in the Hive Metastore database:
--SELECT TBL_ID FROM TBLS WHERE TBL_NAME = 'myTable';
DELETE FROM PARTITION_KEY_VALS WHERE PART_ID IN (SELECT PART_ID FROM PARTITIONS WHERE TBL_ID = 54);
DELETE FROM PARTITION_PARAMS WHERE PART_ID IN (SELECT PART_ID FROM PARTITIONS WHERE TBL_ID = 54);
DELETE FROM PARTITIONS WHERE TBL_ID = 54;
Finally, in Hive:
DROP DATABASE IF EXISTS myDatabase CASCADE;
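As a sanity check, you can count the rows tied to the table before and after the deletes. A minimal sketch, assuming the same MySQL-backed metastore database (hivedb) and the TBL_ID of 54 used above:
# Should return a large count before the cleanup and 0 afterwards.
mysql -u root -p hivedb -e "SELECT COUNT(*) FROM PARTITIONS WHERE TBL_ID = 54;"
mysql -u root -p hivedb -e "SELECT COUNT(*) FROM PARTITION_KEY_VALS WHERE PART_ID IN (SELECT PART_ID FROM PARTITIONS WHERE TBL_ID = 54);"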
10-07-2015
06:32 PM
3 Kudos
Did the customer use the 'sync' option while mounting the share on the NFS client? Large file transfers (a few GB and larger) are slow without 'sync' and often stall completely. https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html#Mount_the_export_ 'sync' ensures that the client will not reorder writes. Reordered writes force the NFS gateway to buffer data, since HDFS only supports sequential writes (appends). You can also try increasing 'nfs.rtmax' and 'nfs.wtmax' in the NFS gateway configuration, as recommended in the same link. It looks like we are missing NFS Gateway documentation in the HDP docs. I'll make sure we get that updated.
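For reference, a mount along the lines of what that page recommends; the gateway hostname and mount point here are placeholders, so double-check the options against the linked documentation for your version:
# 'sync' prevents the client from reordering writes, which the HDFS-backed gateway cannot handle.
mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync nfsgateway.example.org:/ /mnt/hdfs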
10-07-2015
03:24 PM
1 Kudo
One way you can achieve the transformation of your CSV data to ORC is the following:
1. Register your CSV GZ data as a text table, something like: create table <tablename>_txt (...) location '...';
2. Create an equivalent ORC table: create table <tablename>_orc (...) stored as orc;
3. Populate the ORC table from the text table: insert overwrite table <tablename>_orc select * from <tablename>_txt;
I have used this in the past and it worked for me.
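Fleshed out with a concrete (hypothetical) schema, the three steps look roughly like this; the table names, columns, delimiter, and HDFS path are placeholders you would replace with your own, and Hive reads the gzipped text files transparently from the external table's location:
# Run the three steps in one go; adjust the schema, delimiter, and location first.
hive -e "
CREATE EXTERNAL TABLE sales_txt (id INT, amount DOUBLE, sale_date STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION '/data/sales_csv_gz';
CREATE TABLE sales_orc (id INT, amount DOUBLE, sale_date STRING)
  STORED AS ORC;
INSERT OVERWRITE TABLE sales_orc SELECT * FROM sales_txt;
"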
02-07-2016
08:03 PM
I also ran into this problem and it was painful to troubleshoot. Is there a JIRA to improve the error message?
07-17-2017
09:16 AM
When I delete one host, this error occurs. How do I solve it? Ambari 2.4.1.0.
10-07-2015
08:58 AM
Why are the dates in the log from 7/24/2014? Is this an old issue that hasn't been solved and you are reposting it, or is your clock incorrect? If your clock is incorrect, then you will have Kerberos issues, since time is a big factor in determining the validity of credentials. The clocks on the hosts need to be within 5 minutes of the host that contains the KDC, or else bad things will happen.
If this is an old issue and you are using HDP 2.1, then I assume you are using Ambari 1.6.x. In this version of Ambari, you must have set up Kerberos manually. Since there is a lot of room for error, you should go back and make sure you didn't miss a step or incorrectly create a keytab file. Unless you create the keytab file for a particular principal using kadmin.local, the password for the account will get regenerated. This causes issues if you create multiple keytab files for the same principal: the 2nd time you generate a keytab file, the 1st becomes obsolete; the 3rd time, the 2nd becomes obsolete, and so on.
Also, make sure all of the configs were set properly. An incorrect principal name or keytab file location will cause one or more services to fail to authenticate. Finally, check the permissions on the keytab files to make sure that the relevant service(s) can read them. If a service runs as the local hdfs user but the keytab file is only readable by root, then the service cannot read the keytab file and authentication will fail.
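A couple of quick checks from the shell; the keytab path and principal below are placeholders, and the -norandkey option requires running kadmin.local on the KDC host:
# Show which principals and key version numbers (KVNOs) the keytab actually contains.
klist -kt /etc/security/keytabs/nn.service.keytab
# Confirm the service user (e.g. hdfs) can read the file, not just root.
ls -l /etc/security/keytabs/nn.service.keytab
# When exporting a keytab, -norandkey keeps the existing key so previously created keytabs stay valid.
kadmin.local -q "ktadd -norandkey -k /etc/security/keytabs/nn.service.keytab nn/namenode.example.org@EXAMPLE.COM"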
09-30-2015
05:03 PM
1 Kudo
The problem found at this location was that the correct "ranger admin" password had not been set in the Ranger config tab in Ambari. So, when HDFS was restarted, it tried to create a new Ranger repository and failed because of the incorrect "ranger admin" password. Once the "ranger admin" password was updated correctly, the HDFS NameNode started with the Ranger authorizer and was able to audit all activities via the Ranger console.
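One way to confirm the admin credentials before restarting HDFS is to hit Ranger's public REST API directly. A minimal sketch; the host, port, and password here are placeholders, and the v2 public API path assumes a reasonably recent Ranger release:
# Returns HTTP 200 and the list of Ranger services/repositories if the credentials are accepted.
curl -s -o /dev/null -w "%{http_code}\n" -u admin:ranger_admin_password http://ranger.example.org:6080/service/public/v2/api/service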
09-25-2015
04:51 PM
1 Kudo
a) Are you able to search for hive_table? There is a bug in certain versions of the HDP Sandbox. If you use the latest one, the Hive hook for Atlas works.
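If you want to check outside the UI, you can query Atlas directly. A rough sketch, assuming the Sandbox's default Atlas endpoint on port 21000, default admin/admin credentials, and the older entities API used by those releases:
# Lists the GUIDs of registered hive_table entities; an empty list suggests the Hive hook isn't firing.
curl -s -u admin:admin "http://sandbox.hortonworks.com:21000/api/atlas/entities?type=hive_table"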