Member since: 01-19-2017
Posts: 3627
Kudos Received: 608
Solutions: 361
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 230 | 08-02-2024 08:15 AM
 | 3436 | 04-06-2023 12:49 PM
 | 777 | 10-26-2022 12:35 PM
 | 1521 | 09-27-2022 12:49 PM
 | 1788 | 05-27-2022 12:02 AM
02-02-2016
09:20 AM
This should solve the error "Permission denied: Principal [name=hive, type=USER] does not have following privileges for operation GRANT_PRIVILEGE [[SELECT with grant, INSERT with grant, UPDATE with grant, DELETE with grant] on Object [type=TABLE_OR_VIEW, name=default.logs]]"
02-02-2016
09:18 AM
@Roberto Sancho Check out my syntax:

GRANT SELECT ON TABLE logs TO bigotes WITH GRANT OPTION;
02-02-2016
08:47 AM
@Roberto Sancho Predag's response grants all privileges, which is NOT advisable. I would think limiting the GRANT OPTION to SELECT only is a more secure approach.
02-02-2016
08:41 AM
1 Kudo
I think you need to run the statement below for the user bigotes; this will allow him to grant the privilege to others.

su - hive
hive
hive> GRANT SELECT ON TABLE logs TO bigotes WITH GRANT OPTION;
02-01-2016
09:37 PM
@Kibrom Gebrehiwot Grab Artem Ervits' answer.
02-01-2016
09:32 PM
2 Kudos
The ZooKeeper server continually saves znode snapshot files and, optionally, transactional logs in a data directory so that you can recover data. It's a good idea to back up the ZooKeeper data directory periodically. Although ZooKeeper is highly reliable because a persistent copy is replicated on each server, recovering from backups may be necessary after a catastrophic failure or user error. With the default configuration, the ZooKeeper server does not remove snapshots and log files, so they will accumulate over time. You will need to clean up this directory occasionally, taking your backup schedules and processes into account. To automate the cleanup, a zkCleanup.sh script is provided in the bin directory of the zookeeper base package. Modify this script as necessary for your situation; in general, you want to run it as a cron task based on your backup schedule. The data directory is specified by the dataDir parameter in the ZooKeeper configuration file, and the data log directory is specified by the dataLogDir parameter.
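As an alternative to cron'ing zkCleanup.sh, ZooKeeper 3.4 and later can purge old snapshots and transaction logs itself via the autopurge settings. A sketch of the relevant zoo.cfg entries (the paths and retention values below are examples only; adjust them to your deployment):

```
# zoo.cfg -- example values, not a recommendation
dataDir=/var/lib/zookeeper          # snapshot location (dataDir parameter)
dataLogDir=/var/lib/zookeeper/log   # transaction log location (dataLogDir parameter)
autopurge.snapRetainCount=3         # keep only the 3 most recent snapshots/logs
autopurge.purgeInterval=24          # run the purge task every 24 hours (0 disables it)
```

If autopurge is enabled, make sure its retention window still leaves enough history for your backup process to capture.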
02-01-2016
09:28 PM
1 Kudo
@Bharathkumar B First, comment out the modified IP address in /etc/hosts. Don't remove these 2 lines:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# 192.168.192.100 # Modified IP

Don't panic. If you can access the console as root, run:

ifconfig

This will display your new IP; use that to access Ambari or whatever UI you need, e.g. http://newIP:8080. The original IP address is not hard-coded in any config files!
02-01-2016
09:42 AM
Of course that's possible. This doc could help you: topology scripts are used by Hadoop to determine the rack location of nodes. See Topology configuration.
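For illustration, a minimal topology script could look like the sketch below. The IP ranges and rack names are hypothetical; you would point the net.topology.script.file.name property in core-site.xml at a script like this, and Hadoop then invokes it with one or more IPs/hostnames, expecting one rack path per line of output.

```shell
#!/usr/bin/env bash
# Hypothetical Hadoop rack topology script (example mapping only).
set -e

# Map an IP (or hostname) to a rack path; unknown nodes fall back
# to Hadoop's conventional /default-rack.
resolve_rack() {
  case "$1" in
    192.168.1.*) echo "/dc1/rack1" ;;
    192.168.2.*) echo "/dc1/rack2" ;;
    *)           echo "/default-rack" ;;
  esac
}

# Hadoop may pass several nodes in one invocation; answer each in order.
for node in "$@"; do
  resolve_rack "$node"
done
```

Real deployments often read the mapping from a data file instead of a hard-coded case statement, but the contract is the same: one rack path printed per input argument.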
01-21-2016
01:10 PM
Those blocks are already replicated, or will be reconstructed from the surviving nodes. To forcefully make the NameNode leave safe mode, run one of the following commands, depending on your Hadoop version:

bin/hadoop dfsadmin -safemode leave
hdfs dfsadmin -safemode leave
01-18-2016
09:01 AM
@emaxwel @Artem @neeraj Gentlemen, thanks for all your responses. It's unfortunate the bits can't be installed anywhere except /usr/hdp, and furthermore the administration of the various named users could have been simplified. I come from an Oracle Applications background; at most there are 2 users for the EBS application and database. I will reformat the 4 servers. @emaxwell, you have a very valid argument on the segregation of duties; I will try to incorporate that "security concern". I don't want some dark angel poking holes in my production cluster.