Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 773 | 06-04-2025 11:36 PM |
| | 1353 | 03-23-2025 05:23 AM |
| | 669 | 03-17-2025 10:18 AM |
| | 2423 | 03-05-2025 01:34 PM |
| | 1581 | 03-03-2025 01:09 PM |
02-02-2016
11:29 AM
1 Kudo
Seems you have a DB connectivity problem. Can you copy and paste the complete output of the Ambari failed-to-start log? The connection was closed while Java was trying to read from it. This can be caused by:
- The PostgreSQL server being restarted
- The PostgreSQL backend you were connected to being terminated
- The PostgreSQL backend you were connected to crashing
- A dodgy network connection
- Badly behaved stateful firewalls
- Idle connections timing out of the connection-tracking tables of NAT firewalls/routers
... and probably more. Check the PostgreSQL server logs to see if there's anything informative there; also consider doing some network tracing with a tool like Wireshark.
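As a first diagnostic step, something like this sketch can surface abrupt disconnects in the server log (the log directory is an assumption for a typical RHEL/CentOS install; adjust for your layout):

```shell
# Scan the PostgreSQL server log for the disconnect patterns described above.
# /var/lib/pgsql/data/pg_log is an assumed default location -- adjust as needed.
LOG_DIR=/var/lib/pgsql/data/pg_log
grep -hE 'FATAL|terminating connection|server closed the connection' "$LOG_DIR"/postgresql-*.log | tail -n 20
```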
02-02-2016
09:40 AM
Ooops, it should be:
GRANT SELECT ON TABLE logs TO USER bigotes WITH GRANT OPTION;
02-02-2016
09:20 AM
This should solve the error "Permission denied: Principal [name=hive, type=USER] does not have following privileges for operation GRANT_PRIVILEGE [[SELECT with grant, INSERT with grant, UPDATE with grant, DELETE with grant] on Object [type=TABLE_OR_VIEW, name=default.logs]]".
02-02-2016
09:18 AM
@Roberto Sancho check out my syntax:
GRANT SELECT ON TABLE logs TO bigotes WITH GRANT OPTION;
02-02-2016
08:47 AM
@Roberto Sancho Predag's response grants all privileges, which is NOT advisable. I would think limiting the grant option to SELECT only is a more secure approach.
02-02-2016
08:41 AM
1 Kudo
I think you need to run the statement below for the user bigotes; this will allow him to grant the privileges to others.

> su - hive
> hive
hive> GRANT SELECT ON TABLE logs TO bigotes WITH GRANT OPTION;
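To double-check that the grant took effect, Hive's SHOW GRANT statement (SQL-standard-based authorization) can be run in the same session; the table and user names here are the ones from this thread:

```
hive> SHOW GRANT USER bigotes ON TABLE logs;
```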
02-01-2016
09:37 PM
@Kibrom Gebrehiwot Grab Artem Ervits answer
02-01-2016
09:32 PM
2 Kudos
The ZooKeeper server continually saves znode snapshot files and, optionally, transactional logs in a data directory to enable you to recover data. It's a good idea to back up the ZooKeeper data directory periodically. Although ZooKeeper is highly reliable because a persistent copy is replicated on each server, recovering from backups may be necessary if a catastrophic failure or user error occurs. With the default configuration, the ZooKeeper server does not remove the snapshots and log files, so they will accumulate over time. You will need to clean up this directory occasionally, taking your backup schedules and processes into account. To automate the cleanup, a zkCleanup.sh script is provided in the bin directory of the zookeeper base package. Modify this script as necessary for your situation. In general, you want to run this as a cron task based on your backup schedule. The data directory is specified by the dataDir parameter in the ZooKeeper configuration file, and the data log directory is specified by the dataLogDir parameter.
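Assuming the script lives in the zookeeper package's bin directory as described, a crontab fragment along these lines could schedule it (the install prefix, log path, retention count, and the -n flag are assumptions; check the usage header of zkCleanup.sh on your system before relying on this):

```
# Nightly at 03:00: purge old snapshots/txn logs, keeping the 5 most recent.
# /usr/lib/zookeeper is an assumed install prefix -- adjust for your distribution.
0 3 * * * /usr/lib/zookeeper/bin/zkCleanup.sh -n 5 >> /var/log/zk-cleanup.log 2>&1
```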
02-01-2016
09:28 PM
1 Kudo
@Bharathkumar B First comment out the modified IP address in /etc/hosts. Don't remove these 2 lines:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# 192.168.192.100 # Modified IP

Don't panic. If you can access the console as root, run ifconfig. This will display your new IP; use that to access Ambari or whatever UI you need: http://newIP:8080. The original IP address is not hard-coded in any config files!
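To find the box's new address from a root console, something like this works on most Linux systems (the 8080 port is the Ambari default mentioned above):

```shell
# List the machine's current IPv4 addresses (the DHCP-assigned one included).
ip -4 addr show | grep inet
# Older systems: ifconfig | grep inet
# Then browse to http://<new-IP>:8080 to reach the Ambari UI.
```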
02-01-2016
09:42 AM
Of course that's possible. This doc could help you: Topology configuration. Topology scripts are used by Hadoop to determine the rack location of nodes.
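A minimal sketch of such a topology script, written as a shell function for illustration (the subnets and rack names here are made up; in practice this logic lives in a standalone script referenced by net.topology.script.file.name in core-site.xml, which receives a list of IPs/hostnames as arguments and prints one rack path per argument):

```shell
# Hypothetical rack mapping: subnet -> rack path. Hadoop invokes the real
# script with host/IP arguments and reads one rack path per line of output.
rack_of() {
  case "$1" in
    192.168.1.*) echo "/rack1" ;;
    192.168.2.*) echo "/rack2" ;;
    *)           echo "/default-rack" ;;
  esac
}
rack_of 192.168.1.17   # prints /rack1
```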