Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 371
02-02-2016
09:18 AM
@Roberto Sancho Check out my syntax:
GRANT SELECT ON TABLE logs TO bigotes WITH GRANT OPTION;
02-02-2016
08:47 AM
@Roberto Sancho Predag's response grants all privileges, which is not advisable. Limiting the GRANT OPTION to SELECT only is a more secure approach.
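A minimal sketch of the difference, using the table and user from this thread (syntax follows the post below; depending on your Hive authorization mode, you may need to write TO USER bigotes):

-- Broad grant: every privilege, and the right to re-grant them all
GRANT ALL ON TABLE logs TO bigotes WITH GRANT OPTION;
-- Narrower grant: SELECT only, still delegable to others
GRANT SELECT ON TABLE logs TO bigotes WITH GRANT OPTION;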
02-02-2016
08:41 AM
1 Kudo
I think you need to run the statement below for the user bigotes; this will allow him to grant the privileges to others.
su - hive
hive
hive> GRANT SELECT ON TABLE logs TO bigotes WITH GRANT OPTION;
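To confirm the grant took effect, a quick check (SHOW GRANT is available in Hive; the exact output format varies by version):

hive> SHOW GRANT USER bigotes ON TABLE logs;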
02-01-2016
09:37 PM
@Kibrom Gebrehiwot Grab Artem Ervits' answer.
02-01-2016
09:32 PM
2 Kudos
The ZooKeeper server continually saves znode snapshot files and, optionally, transaction logs in a data directory to enable you to recover data. It's a good idea to back up the ZooKeeper data directory periodically. Although ZooKeeper is highly reliable because a persistent copy is replicated on each server, recovering from backups may be necessary if a catastrophic failure or user error occurs.

With the default configuration, the ZooKeeper server does not remove the snapshots and log files, so they will accumulate over time. You will need to clean up this directory occasionally, taking your backup schedules and processes into account. To automate the cleanup, a zkCleanup.sh script is provided in the bin directory of the ZooKeeper base package. Modify this script as necessary for your situation. In general, you want to run it as a cron task based on your backup schedule.

The data directory is specified by the dataDir parameter in the ZooKeeper configuration file, and the data log directory is specified by the dataLogDir parameter.
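A sketch of such a cron entry (the install path is an assumption for illustration; check which arguments your version's zkCleanup.sh expects, as some versions also take the data directory explicitly):

# Purge old snapshots/transaction logs nightly at 03:00, keeping the 3 newest snapshots
0 3 * * * /usr/lib/zookeeper/bin/zkCleanup.sh -n 3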
02-01-2016
09:28 PM
1 Kudo
@Bharathkumar B First, comment out the modified IP address in /etc/hosts. Don't remove these two lines:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# 192.168.192.100 # Modified IP

Don't panic. If you can access the console as root, run ifconfig; this will display your new IP. Use that to access Ambari or whatever UI you need, e.g. http://newIP:8080. The original IP address is not hard-coded in any config files!
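For reference, either of these shows the current address (ip addr is the replacement on newer systems where ifconfig is no longer installed):

ifconfig
ip addr show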
02-01-2016
09:42 AM
Of course that's possible. This doc could help you: topology scripts are used by Hadoop to determine the rack location of nodes. See Topology configuration.
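A minimal sketch of such a script, wired in via the net.topology.script.file.name property in core-site.xml (the IP-to-rack mapping below is purely illustrative; adapt it to your network):

#!/bin/bash
# Hadoop passes one or more IPs/hostnames as arguments;
# print one rack path per argument, in order.
while [ $# -gt 0 ]; do
  case "$1" in
    192.168.1.*) echo "/rack1" ;;
    192.168.2.*) echo "/rack2" ;;
    *)           echo "/default-rack" ;;
  esac
  shift
done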
01-21-2016
01:10 PM
Those blocks are already replicated, or will be reconstructed from the surviving nodes. To force the NameNode to leave safe mode, run one of the following commands, depending on your Hadoop version:
bin/hadoop dfsadmin -safemode leave
hdfs dfsadmin -safemode leave
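Before forcing it, you can check whether the NameNode is still in safe mode with the same dfsadmin tool:

hdfs dfsadmin -safemode get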
01-18-2016
09:01 AM
@emaxwel @Artem @neeraj Gentlemen, thanks for all your responses. It's unfortunate the bits can't be installed anywhere except /usr/hdp, and furthermore, administration of the various named users could have been simplified. I come from an Oracle Applications background, where at most two users own the EBS application and database. I will reformat the 4 servers. @emaxwell, you have a very valid argument on the segregation of duties; I will try to incorporate that "security concern". I don't want some dark angel poking holes in my production cluster.
01-14-2016
03:31 PM
@Artem @neeraj Thanks, guys, for your responses. As you realize, HDP creates a number of users, which is a bit difficult to manage across the cluster.

1. I want to have only one user, e.g. tom, own all of hive, hdfs, pig, etc., since it's easy to ssh to any server and quickly be effective while avoiding su or sudo. Which file should I edit to achieve this?

2. I have done a lot of Linux installs, and the default HDP filesystem layout doesn't please me at all. I want to install HDP outside the /var, /usr, and /etc directories, so if anything goes wrong I can just delete all files in that partition and relaunch after some minor cleanup. My reasoning is that I would like to allocate a /u01 partition of about 300 GB HDD on each of the 4 servers in the cluster, so I end up with 1.2 TB after HDFS format for data.