Member since: 01-19-2017
Posts: 3681
Kudos Received: 633
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1604 | 06-04-2025 11:36 PM |
|  | 2071 | 03-23-2025 05:23 AM |
|  | 983 | 03-17-2025 10:18 AM |
|  | 3733 | 03-05-2025 01:34 PM |
|  | 2567 | 03-03-2025 01:09 PM |
05-14-2019
07:40 PM
@Madhura Mhatre If you increase the size of the same ibdata disk mount, you don't need to update any metadata because the pointers in the metastore remain intact. Make sure you shut down all the databases on the mount point before increasing the size. Happy hadooping
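A minimal sketch of what the resize could look like, not from the original reply and assuming an LVM-backed ext4 volume (the volume group, logical volume, and mount point names are placeholders):

```bash
# Stop every database writing to the mount point first (MySQL shown as an example)
sudo systemctl stop mysqld

# Grow the logical volume by 50G, then grow the filesystem to fill it
sudo lvextend -L +50G /dev/vg_data/lv_mysql
sudo resize2fs /dev/vg_data/lv_mysql

# Confirm the new size before starting the database again
df -h /var/lib/mysql
sudo systemctl start mysqld
```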
05-14-2019
06:49 PM
1 Kudo
@Madhura Mhatre From the output you can clearly see that it is the Hive database that has grown. That said, @Jay Kumar SenSharma's solution would work if it were the Ambari database that was huge, where you can purge the history; that is not the case here. You cannot purge the Hive database without losing data, but you can rewrite the data into a more compact, ORC-compressed table, see below. The end result is a manageable table size on disk.

Option 1

```sql
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
create table if not exists t1 (a int, b int) partitioned by (c int); -- your original table
create table t1orc (a int, b int) partitioned by (c int) stored as ORC; -- your compressed table
insert into table t1orc partition(c) select a, b, c from t1;
```

Note that a plain CTAS has these restrictions, which is why the example above uses a separate CREATE plus INSERT ... SELECT:

- The target table cannot be a partitioned table.
- The target table cannot be an external table.
- The target table cannot be a list bucketing table.

Option 2

The other solution is to change the location and increase the size of the mount point. Be aware that the Hive metastore records the data path, so you will need to update the location as documented here. Hope that helps
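Not from the original reply (the referenced documentation is not reproduced here), but as a hedged sketch of what the Option 2 location change typically involves, with placeholder HDFS paths and JDBC URL:

```bash
# Move the table data onto the larger mount (example paths)
hdfs dfs -mv /warehouse/old/t1 /warehouse_big/t1

# Point the Hive metastore at the new location for that table
beeline -u jdbc:hive2://localhost:10000 \
  -e "ALTER TABLE t1 SET LOCATION 'hdfs:///warehouse_big/t1';"
```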
05-14-2019
11:01 AM
1 Kudo
@duong tuan anh There are a couple of issues with your Kafka configuration. You are trying to run a 3-node cluster but using the same log directory; that is why your broker goes down, it finds another process already writing its logs to /kafka-logs. Note carefully the differences I have set out below: edit server.properties as shown on the 3 nodes and ONLY then will your Kafka brokers start successfully. Make sure you kill any running broker process first so you don't get port conflicts!

Create 3 server.properties files and place them on the respective nodes:

```bash
cp config/server.properties config/server-1.properties
cp config/server.properties config/server-2.properties
```

On node1, config/server.properties:

```properties
broker.id=0
listeners=PLAINTEXT://node1:9092
log.dirs=/kafka-logs
```

On node2, config/server-1.properties:

```properties
broker.id=1
listeners=PLAINTEXT://node2:9093
log.dirs=/kafka-logs-1
```

On node3, config/server-2.properties:

```properties
broker.id=2
listeners=PLAINTEXT://node3:9094
log.dirs=/kafka-logs-2
```

The id of the broker must be set to a unique integer for each broker: in your case broker.id=0 on node1, broker.id=1 on node2, and broker.id=2 on node3. All three should point at the same ZooKeeper ensemble:

```properties
zookeeper.connect=am-bigdata-03.am.local:2181,am-bigdata-01.am.local:2181,am-bigdata-02.am.local:2181
```

The ZooKeeper ensemble should also have unique myid's:

```bash
# On node1
echo "1" > $..../kafka/zookeeper/data/myid
# On node2
echo "2" > $..../kafka/zookeeper/data/myid
# On node3
echo "3" > $..../kafka/zookeeper/data/myid
```

Now you can start your Kafka brokers and they should fire up. Reference: https://kafka.apache.org/quickstart#quickstart_multibroker Please revert
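For completeness (not in the original reply; paths assume a tarball-style Kafka install), starting a broker and verifying that all three registered could look like this:

```bash
# Start the broker on each node with that node's properties file
bin/kafka-server-start.sh -daemon config/server.properties

# From any node, check that brokers 0, 1 and 2 are registered in ZooKeeper
bin/zookeeper-shell.sh am-bigdata-01.am.local:2181 ls /brokers/ids
```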
05-14-2019
04:56 AM
@Shashank Naresh You should be able to do that under the network configuration for HDF: for Adapter 1, enable the Network Adapter and choose Attached to: NAT (the Name field stays blank); for Adapter 2, enable the Network Adapter, choose Attached to: Bridged Adapter, and in the Name drop-down pick the appropriate LAN or wireless driver for your case. Then repeat the same for HDP. Be aware that HDP and HDF consume a lot of RAM, so I hope you are running a 32 GB host machine. Good luck
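If you prefer the command line to the GUI, a rough VBoxManage equivalent would be the following; the VM name "HDF" and the host interface name "eth0" are placeholders for your own setup:

```bash
# Run with the VM powered off: adapter 1 on NAT, adapter 2 bridged to the host NIC
VBoxManage modifyvm "HDF" --nic1 nat
VBoxManage modifyvm "HDF" --nic2 bridged --bridgeadapter2 "eth0"

# Confirm the adapter settings before booting the VM (repeat for the HDP VM)
VBoxManage showvminfo "HDF" | grep -i "NIC"
```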
05-13-2019
07:04 PM
@Mazen Elshayeb There is something I don't understand: can you share how you created the KDC database? How come you have a principal "ambari_hdfs-050819@HADOOP.COM"? I suggest starting afresh, so delete/destroy the current KDC as the root user (or with sudo on Ubuntu, whichever is appropriate):

```bash
sudo kdb5_util -r HADOOP.COM destroy
```

Accept with a "Yes". Now create a new Kerberos database.

Completely remove Kerberos:

```bash
sudo apt purge -y krb5-kdc krb5-admin-server krb5-config krb5-locales krb5-user krb5.conf
sudo rm -rf /var/lib/krb5kdc
```

Do a fresh installation. First, get the FQDN of your KDC server; for this example:

```bash
hostname -f
test.hadoop.com
```

Use the above output for the later setup:

```bash
apt install krb5-kdc krb5-admin-server krb5-config
```

Proceed as follows at the prompts:

Kerberos Realm = HADOOP.COM
Kerberos server hostname = test.hadoop.com
Administrative server for Kerberos REALM = test.hadoop.com

Configure the krb5 admin server:

```bash
krb5_newrealm
```

Open /etc/krb5kdc/kadm5.acl; it should contain a line like this:

*/admin@HADOOP.COM *

The kdc.conf should be adjusted to look like this:

```ini
[kdcdefaults]
  kdc_ports = 88
  kdc_tcp_ports = 88

[realms]
  HADOOP.COM = {
    #master_key_type = aes256-cts
    acl_file = /var/kerberos/krb5kdc/kadm5.acl
    dict_file = /usr/share/dict/words
    admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
    supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  }
```

The krb5.conf should look like this. If you are on a multi-node cluster, this is the file you will copy to all other hosts; notice the entries under domain_realm:

```ini
[libdefaults]
  renew_lifetime = 7d
  forwardable = true
  default_realm = HADOOP.COM
  ticket_lifetime = 24h
  dns_lookup_realm = false
  dns_lookup_kdc = false
  default_ccache_name = /tmp/krb5cc_%{uid}
  #default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
  #default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5

[domain_realm]
  .hadoop.com = HADOOP.COM
  hadoop.com = HADOOP.COM

[logging]
  default = FILE:/var/log/krb5kdc.log
  admin_server = FILE:/var/log/kadmind.log
  kdc = FILE:/var/log/krb5kdc.log

[realms]
  HADOOP.COM = {
    admin_server = test.hadoop.com
    kdc = test.hadoop.com
  }
```

Restart the Kerberos KDC daemon and the Kerberos admin server:

```bash
for script in /etc/init.d/krb5*; do $script restart; done
```

Don't manually create any principal like "ambari_hdfs-050819@HADOOP.COM". Go to the Ambari Kerberos wizard; for the domains, notice the leading . (dot):

KDC host = test.hadoop.com
Realm Name = HADOOP.COM
Domains = .hadoop.com, hadoop.com
-----
kadmin host = test.hadoop.com
Admin principal = admin/admin@HADOOP.COM
Admin password = password set during the creation of the KDC database

Now from here just accept the defaults and the keytabs should generate successfully. I have attached files to guide you: Procedure to Kerberize HDP 3.1_Part1.pdf, Procedure to Kerberize HDP 3.1_Part2.pdf, Procedure to Kerberize HDP 3.1_Part3.pdf. Hope that helps, please revert if you have any questions
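One detail the reply assumes but does not show: the admin/admin principal the Ambari wizard authenticates with must exist in the new database. If it does not, a minimal sketch of creating and testing it (you will be prompted for a password of your choosing) is:

```bash
# Create the admin principal used by the Ambari Kerberos wizard
sudo kadmin.local -q "addprinc admin/admin@HADOOP.COM"

# Verify the KDC issues tickets for it before running the wizard
kinit admin/admin@HADOOP.COM
klist
```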
05-12-2019
07:05 AM
@Haijin Li I think there is a problem with your command: the - (hyphen) is missing. Can you copy and paste the below?

```bash
sudo su - hive
```

Please revert
05-10-2019
10:01 PM
@Adil BAKKOURI Hurrah, we are now there; that's the error I was expecting, so this is a case closed. Validate the hostname by running:

```bash
hostname -f
```

This should give you the FQDN. The error below is very simple: it's a privilege issue with the hive user and the database creation script you ran, you didn't give the correct privileges to the hive user.

"Access denied for user 'hive'@'master.rh.bigdata.cluster' to database 'hive'"

To resolve the above, please do the following (assumptions: root password = gr3atman, hive password = hive, hostname = master.rh.bigdata.cluster):

```sql
mysql -uroot -pgr3atman
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'master.rh.bigdata.cluster' IDENTIFIED BY 'hive';
mysql> GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'master.rh.bigdata.cluster';
mysql> FLUSH PRIVILEGES;
```

All of the above should succeed. Now your Hive should fire up. Bravo!!

************ If you found this answer addressed your question, please take a moment to log in and click the "accept" link on the answer. That would be a great help to Community users to find the solution quickly for these kinds of errors.
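As a quick follow-up check (not part of the original reply), you can confirm the grants took effect and that the hive user can actually connect to its schema:

```bash
# List what the hive user is now allowed to do
mysql -uroot -pgr3atman -e "SHOW GRANTS FOR 'hive'@'master.rh.bigdata.cluster';"

# Try the hive schema as the hive user itself
mysql -uhive -phive -h master.rh.bigdata.cluster hive -e "SELECT 1;"
```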
05-10-2019
05:28 PM
@Maurice Knopp Great news! If you are on CentOS in a kerberized environment, I am just wondering about the Kerberos side: have you tried regenerating the specific keytab? About your HQL scripts, you should realize there have been a lot of changes between HDP 2.3 with Hive 1.2.1 and HDP 3.1 with Hive 3.1.0.

About your MariaDB database and running the version below with HDP 3.1, giving "resource_management.core.exceptions.Fail: JDBC driver 'org.mariadb.jdbc.Driver' not supported.":

```
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 30
Server version: 5.5.60-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
```

Did you run the steps I mentioned in http://community.hortonworks.com/answers/245833/view.html? Please revert
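The linked answer is not reproduced here, but the usual remedy for the "JDBC driver ... not supported" failure is to register a supported connector with Ambari; a hedged sketch (the connector jar path is a placeholder) would be:

```bash
# Register the MySQL connector, which also works against MariaDB, so the stack
# scripts use a supported driver class instead of org.mariadb.jdbc.Driver
sudo ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar

# Restart Ambari so the change is picked up
sudo ambari-server restart
```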
05-10-2019
01:09 PM
@Adil BAKKOURI Most production HDP clusters run on Ubuntu, so I don't see why yours should fail. Ping me on LinkedIn! If your database is MySQL version 5.6, please can you check the database engine:

```sql
SELECT table_name, table_schema, engine FROM information_schema.tables;
```

For all HDP products it should show InnoDB, like below for hive:

| table_name | table_schema | engine |
|---|---|---|
| PARTITION_KEYS | hive | InnoDB |
| PARTITION_KEY_VALS | hive | InnoDB |
| PARTITION_PARAMS | hive | InnoDB |
| PART_COL_PRIVS | hive | InnoDB |
| PART_COL_STATS | hive | InnoDB |
| PART_PRIVS | hive | InnoDB |
| ROLES | hive | InnoDB |
| ROLE_MAP | hive | InnoDB |

Please revert
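Not in the original reply, but a more targeted form of that check, plus the usual fix if a metastore table is not on InnoDB, might look like this (adjust credentials; the table name is a placeholder):

```bash
# List hive schema tables that are NOT InnoDB; ideally this returns nothing
mysql -uroot -p -e "SELECT table_name, engine FROM information_schema.tables WHERE table_schema = 'hive' AND engine <> 'InnoDB';"

# Convert a single offending table
mysql -uroot -p hive -e "ALTER TABLE SOME_TABLE ENGINE=InnoDB;"
```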
05-10-2019
01:09 PM
@Adil BAKKOURI What's your OS and version? I have done hundreds of HDP installations and never hit this blockage!! What's the current error? Can you share the logs below? hivemetastore.log and hiveserver2.log. This should be a driver issue, please share any error.