Member since: 01-19-2017
Posts: 3627
Kudos Received: 608
Solutions: 361
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 233 | 08-02-2024 08:15 AM |
|  | 3446 | 04-06-2023 12:49 PM |
|  | 778 | 10-26-2022 12:35 PM |
|  | 1527 | 09-27-2022 12:49 PM |
|  | 1797 | 05-27-2022 12:02 AM |
02-10-2016
09:37 PM
@Neeraj that doc describes changing the master key for encryption. I needed to change the password for the DB, so I decrypted the configuration, changed the password, and re-encrypted it.
02-03-2016
08:51 PM
@rbalam so do you have the mysql-connector-java JAR on the Ambari server? If not, please copy it there.
05-11-2017
03:49 PM
Thanks, I had the same issue after the HDP 2.6 upgrade; the install silently changed the settings.
1. Connect to Ambari.
2. Go to the HDFS service > advanced config > Custom core-site and set:
hadoop.proxyuser.hive.groups = *
hadoop.proxyuser.hive.hosts = *
hadoop.proxyuser.hcat.groups = *
hadoop.proxyuser.hcat.hosts = *
This solved my issue as well.
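For reference, Custom core-site entries like these take the usual Hadoop property form in core-site.xml; a sketch using the property names and wildcard values from the post (tighten the wildcards to specific groups and hosts in production):

```xml
<!-- Allow the hive and hcat service users to impersonate users
     from any group ("*") connecting from any host ("*"). -->
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hcat.hosts</name>
  <value>*</value>
</property>
```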
03-22-2017
10:05 AM
Happy to recommend Attunity Replicate for DB2. You do need to deploy Attunity AIS onto the source server as well when dealing with mainframe systems, but the footprint was minimal (after the complete load has happened, Replicate just reads the DB logs from that point). I have used it with SQL Server as well (a piece of cake once we met the prerequisites on the source DB) and with IMS (a lot more work due to the inherent complexities of a hierarchical DB, e.g. logical pairs and variants, but we got it all working once we'd uncovered all the design 'features' inherent to the IMS DBs we connected to). It can write to HDFS or connect to Kafka, but I never got a chance to try those (we just wrote CSV files to an edge node) due to project constraints, alas.
02-01-2016
09:32 PM
2 Kudos
The ZooKeeper server continually saves znode snapshot files and, optionally, transaction logs in a data directory so that you can recover data. It's a good idea to back up the ZooKeeper data directory periodically. Although ZooKeeper is highly reliable because a persistent copy is replicated on each server, recovering from backups may be necessary if a catastrophic failure or user error occurs. With the default configuration, the ZooKeeper server does not remove snapshots and log files, so they accumulate over time. You will need to clean up this directory occasionally, taking your backup schedules and processes into account. To automate the cleanup, a zkCleanup.sh script is provided in the bin directory of the zookeeper base package. Modify this script as necessary for your situation; in general, you want to run it as a cron task based on your backup schedule. The data directory is specified by the dataDir parameter in the ZooKeeper configuration file, and the data log directory by the dataLogDir parameter.
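As a concrete sketch, the cron task could be a crontab entry along these lines. The install path, data directory, and retention count here are assumptions, and the zkCleanup.sh argument form varies between ZooKeeper versions, so check the script shipped with your installation:

```shell
# Hypothetical crontab entry: nightly at 02:30 (after the backup window),
# keep the 3 newest snapshots and purge older snapshots and txn logs.
30 2 * * * /usr/lib/zookeeper/bin/zkCleanup.sh /var/lib/zookeeper/data -n 3
```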
05-07-2017
08:15 AM
Thanks a lot, sir. You saved my day!
01-06-2017
07:57 AM
Hi Amit,
Can you please let us know the location where you added this hive-hcatalog-core.jar file? We are facing a similar issue now. PS: We are trying to run a Hive query from a shell script in Oozie (Hue). Regards, Ram
01-18-2016
09:01 AM
@emaxwel @Artem @neeraj Gentlemen, thanks for all your responses. It's unfortunate that the bits can't be installed anywhere except /usr/hdp, and furthermore, administration of the various user names used could have been simplified. I come from an Oracle Applications background, where there are at most two users for the EBS application and database. I will reformat the 4 servers. @emaxwell, you make a very valid argument on the segregation of duties; I will try to incorporate that "security concern". I don't want some dark angel poking holes in my production cluster.
02-05-2016
11:11 PM
@rich Thanks, Rich. Your solution worked. I accept this answer.
12-31-2015
02:32 AM
Thanks to everyone for the suggestions. I wanted to give an update now that I am able to connect to the prompt; hopefully this will help others with environments similar to mine. My challenge was to get SSH working from Windows 10 (Surface Pro) to a VM created in Azure (classic). General guidelines: https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-use-ssh-key/
1. Installed GitHub with Git Shell.
2. Ran the command from Git Shell to generate the key file and .pem file: openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem
3. Installed PuTTYgen.
4. Converted the key to an RSA file so PuTTYgen can understand it: openssl rsa -in ./myPrivateKey.key -out myPrivateKey_rsa, then chmod 600 ./myPrivateKey_rsa
5. Loaded the private key in PuTTYgen, saved the private key as a .ppk file, then copied the public key from PuTTYgen.
6. I first attempted to apply the key to the existing Azure VM via Settings > Reset Password, changing to SSH Public Key. Although the key was accepted, the operation did not finish for a long time, a restart did not help, and I kept getting "Server key was refused".
7. So I created a new Marketplace Sandbox VM using Resource Manager (not classic) with the public key. Note down the user name!
8. Ran PuTTY, provided the IP address on port 22, supplied the .ppk file saved earlier from PuTTYgen on the SSH > Auth page, and entered the username from the Azure portal when prompted. Voila!
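Assuming OpenSSL is on the PATH (e.g. via Git Shell), the key-generation and conversion steps above can be sketched as follows. The -subj flag is an addition here so the certificate request runs non-interactively; the original commands prompt for certificate details:

```shell
# Generate a self-signed certificate and a 2048-bit RSA private key,
# valid for 365 days, with no passphrase on the key (-nodes).
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=example" -keyout myPrivateKey.key -out myCert.pem

# Rewrite the key in traditional RSA PEM form so PuTTYgen can import it,
# then restrict its permissions to the owner only.
openssl rsa -in ./myPrivateKey.key -out myPrivateKey_rsa
chmod 600 ./myPrivateKey_rsa
```

From there, PuTTYgen loads myPrivateKey_rsa, saves the .ppk file for PuTTY, and displays the public key to paste into the Azure portal.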