Member since: 02-01-2016
Posts: 71
Kudos Received: 36
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2923 | 06-27-2019 10:09 AM |
| | 1307 | 01-27-2017 05:22 AM |
| | 1467 | 01-06-2017 05:05 AM |
| | 2048 | 11-17-2016 05:37 AM |
| | 2636 | 03-03-2016 12:28 PM |
02-27-2022
11:00 PM
1 Kudo
This solution works for me on HDP 3.1.4 / Ambari 2.7. Thanks for sharing.
06-27-2019
10:09 AM
Followed the steps as suggested by @Jay Kumar SenSharma: backed up the HBase rootdir and added the service again in the Ambari UI. It's working absolutely fine now.
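For reference, the backup step looked roughly like this (a sketch; the rootdir path is an assumption and should match hbase.rootdir in your hbase-site.xml):

```bash
# Copy the HBase rootdir aside before deleting the service
# (/apps/hbase/data is the usual HDP default; verify against hbase.rootdir):
hdfs dfs -cp /apps/hbase/data /apps/hbase/data_backup
# Then delete the HBase service in Ambari and re-add it via Add Service.
```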
11-29-2018
04:59 PM
We got a similar error. However, we figured out the issue was with Linux server socket communication. Once we restarted the HBase RegionServer, the service started fine. The problem with this type of issue is that we lost data locality on that RegionServer due to restarting the service.
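For anyone hitting this: after such a restart, locality can usually be rebuilt by major-compacting the affected tables, since a major compaction rewrites each region's HFiles on its current RegionServer. A minimal sketch ('my_table' is a placeholder):

```bash
# Trigger a major compaction to rewrite HFiles locally and restore data locality:
echo "major_compact 'my_table'" | hbase shell
```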
11-07-2017
02:31 PM
@klksrinivas Did you find a solution for running a Spark2 job with Oozie?
01-27-2017
05:22 AM
Unfortunately, I don't know of any way to configure Sqoop incremental jobs from Oozie, so I had to write a shell script for the same:

```bash
#!/bin/bash
# Incr_jobs.sh: run a saved Sqoop job and mail the log on failure.
if [ $# -ne 1 ]; then
  echo "Not enough arguments to start the job. Syntax: Incr_jobs.sh <job-name>"
  exit 1
fi

sqoop job --exec "$1" > sqooplog.txt 2>&1

# The exit code alone isn't enough here; check the log for merge failures.
if grep -qi "Merge MapReduce job failed" sqooplog.txt; then
  echo "$1 job failed" | mailx -s "$1 job failed" -a sqooplog.txt <mailid>
else
  echo "$1 job completed successfully" | mailx -s "$1 job completed successfully" <mailid>
fi
```
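For example, assuming a saved Sqoop job named orders_incr (a made-up name), the wrapper is invoked with the job name as its only argument:

```bash
./Incr_jobs.sh orders_incr   # orders_incr is a made-up saved Sqoop job name
```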
12-20-2017
07:58 PM
Hi @Krishna Srinivas, I am facing the same issue, where I have to specify a composite primary key in --merge-key, but it gives me an error. Can you please explain how I can achieve the above-mentioned answer with an example? That would help me understand more clearly. Thanks in advance!
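A sketch of what I am trying (connection string, table, and column names are made up):

```bash
# Illustrative only; all names below are made up.
# Passing two columns to --merge-key fails, since it expects a single column:
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --query 'SELECT t.* FROM orders t WHERE $CONDITIONS' \
  --split-by id1 \
  --target-dir /user/sqoop/orders \
  --incremental lastmodified \
  --check-column updated_at \
  --merge-key id1,id2
```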
01-06-2017
07:17 AM
@Ed Berezitsky >> Small correction: if you use HCatalog, but your table is still in textfile format with a "|" field delimiter, you'll still have the same issue.

The output file field delimiters are only needed for HDFS imports. In the case of HCatalog imports, you specify the text file format properties as part of the storage stanza, and the Hive defaults are used otherwise. Essentially, the default storage format should be able to handle this. BTW, HCatalog import works with most storage formats, not just ORC.

@Krishna Srinivas You should be able to use the Hive table with Spark SQL as well, but maybe you have other requirements too. Glad to see that @Ed Berezitsky's solution worked for you.
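For illustration, a minimal HCatalog import with an explicit storage stanza (connection string and table names are assumptions; if the stanza is omitted, Sqoop falls back to its default storage format):

```bash
# Sketch of an HCatalog import; connection and names are assumptions.
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --table orders \
  --hcatalog-database default \
  --hcatalog-table orders \
  --create-hcatalog-table \
  --hcatalog-storage-stanza 'STORED AS ORC'
```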
11-17-2016
05:37 AM
Hi, the issue got resolved by completely removing ambari-server and ambari-agent, moving aside /var/lib/ambari-server and /var/lib/ambari-agent, reinstalling both, and importing the backup DBs of ambari and ambarirca.
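In outline, the recovery looked like this (a sketch; the package commands, dump file names, and embedded Postgres user names are assumptions from our setup):

```bash
# Sketch of the recovery; dump file names and DB users are assumptions.
ambari-server stop; ambari-agent stop
yum remove -y ambari-server ambari-agent
mv /var/lib/ambari-server /var/lib/ambari-server.bak
mv /var/lib/ambari-agent  /var/lib/ambari-agent.bak
yum install -y ambari-server ambari-agent
# Restore the backed-up databases into the embedded Postgres:
psql -U ambari -d ambari    -f ambari_backup.sql
psql -U mapred -d ambarirca -f ambarirca_backup.sql
ambari-server start; ambari-agent start
```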
11-15-2016
12:24 PM
Thank you @SBandaru. By unchecking "Skip group modifications during install", I am able to install and configure using LDAP.
07-14-2016
07:28 PM
@Krishna Srinivas Have you tried the Falcon mirroring feature? Instead of cluster-to-cluster replication, you can try replicating to different directories in the same cluster. http://hortonworks.com/hadoop-tutorial/mirroring-datasets-between-hadoop-clusters-with-apache-falcon/ https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_data_governance/content/section_mirroring_data_falcon.html https://falcon.apache.org/HDFSDR.html