Member since: 01-19-2017
Posts: 3651
Kudos Received: 623
Solutions: 364
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 172 | 12-22-2024 07:33 AM |
| | 109 | 12-18-2024 12:21 PM |
| | 428 | 12-17-2024 07:48 AM |
| | 298 | 08-02-2024 08:15 AM |
| | 3578 | 04-06-2023 12:49 PM |
01-05-2017
02:21 AM
Hi Prakash, I ran into the same problem. Did you ever solve it?
02-10-2016
09:37 PM
@Neeraj that doc talks about changing the master key for encryption. I needed to change the password for the DB, so I decrypted it, changed the password, and re-encrypted it.
02-03-2016
08:51 PM
@rbalam so, do you have mysql-connector-java on the Ambari server? If not, please copy it there.
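As a sketch of what that looks like in practice (the jar path below is an assumption for your system; adjust it to wherever your connector actually lives), you copy the driver onto the Ambari server host and then register it with `ambari-server setup`:

```
# copy the MySQL JDBC driver onto the Ambari server host (path is illustrative)
cp mysql-connector-java.jar /usr/share/java/mysql-connector-java.jar

# register the driver with Ambari so it can hand it to the services it installs
ambari-server setup --jdbc-db=mysql \
  --jdbc-driver=/usr/share/java/mysql-connector-java.jar
```

Registering the driver this way saves you from copying the jar by hand into each service's classpath later.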
05-11-2017
03:49 PM
Thanks, I had the same issue after the HDP 2.6 upgrade; the install silently changed the settings.
1. Connect to Ambari.
2. Go to the HDFS service > Advanced config > Custom core-site and change this:
hadoop.proxyuser.hive.groups = *
hadoop.proxyuser.hive.hosts = *
hadoop.proxyuser.hcat.groups = *
hadoop.proxyuser.hcat.hosts = *
This solved my issue as well.
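If you manage core-site.xml by hand rather than through the Ambari UI, the same four settings look roughly like this (a sketch using the property names from the steps above; `*` opens impersonation to all hosts and groups, so tighten the values in a hardened cluster):

```
<!-- core-site.xml: allow the hive and hcat service users to impersonate end users -->
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.hosts</name>
  <value>*</value>
</property>
```

Restart the affected services after changing these so the NameNode picks up the new proxyuser rules.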
03-22-2017
10:05 AM
Happy to recommend Attunity Replicate for DB2. When dealing with mainframe systems you need to deploy Attunity AIS onto the source server as well, but the footprint was minimal (after the complete load has happened, Replicate just reads the DB logs from that point). I have used it with SQL Server too (a piece of cake once we met the prerequisites on the source DB) and with IMS (a lot more work due to the inherent complexities of a hierarchical DB, e.g. logical pairs and variants, but we got it all working once we'd uncovered all the design 'features' inherent to the IMS DBs we connected to). It can write to HDFS or connect to Kafka, but I never got a chance to try those (we just wrote CSV files to the edge node) due to project constraints, alas.
02-01-2016
09:32 PM
2 Kudos
The ZooKeeper server continually saves znode snapshot files and, optionally, transaction logs in a data directory to enable you to recover data. It's a good idea to back up the ZooKeeper data directory periodically. Although ZooKeeper is highly reliable because a persistent copy is replicated on each server, recovering from backups may be necessary if a catastrophic failure or user error occurs. With the default configuration, the ZooKeeper server does not remove the snapshots and log files, so they will accumulate over time. You will need to clean up this directory occasionally, taking your backup schedules and processes into account. To automate the cleanup, a zkCleanup.sh script is provided in the bin directory of the zookeeper base package. Modify this script as necessary for your situation; in general, you want to run it as a cron task based on your backup schedule. The data directory is specified by the dataDir parameter in the ZooKeeper configuration file, and the data log directory is specified by the dataLogDir parameter.
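The cron-based cleanup described above could be wired up roughly like this (a sketch: the script path, schedule, and retention count are assumptions for your install; check the script's usage output for the exact arguments your packaging expects):

```
# crontab entry: run the ZooKeeper cleanup script nightly at 02:00,
# retaining the 5 most recent snapshots and their transaction logs
0 2 * * * /usr/lib/zookeeper/bin/zkCleanup.sh -n 5
```

ZooKeeper 3.4 and later can also purge old snapshots on their own via the autopurge.snapRetainCount and autopurge.purgeInterval settings in zoo.cfg, which removes the need for an external cron job.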
05-07-2017
08:15 AM
Thanks a lot, sir. You saved my day!
01-06-2017
07:57 AM
Hi Amit,
Can you please let us know the location where you added this hive-hcatalog-core.jar file? We are facing a similar issue now. PS: We are trying to run a Hive query from a shell script in Oozie via Hue. Regards, Ram
01-18-2016
09:01 AM
@emaxwel @Artem @neeraj Gentlemen, thanks for all your responses. It's unfortunate that the bits can't be installed anywhere except /usr/hdp, and furthermore the administration of the various named users could have been simplified. I come from an Oracle Applications background, where there are at most two users for the EBS application and database. I will reformat the 4 servers. @emaxwell you have a very valid argument on the segregation of duties; I will try to incorporate that "security concern". I don't want some dark angel poking holes in my production cluster.
02-05-2016
11:11 PM
@rich Thanks, Rich. Your solution worked; I accept this answer.