Member since: 09-29-2015
Posts: 23
Kudos Received: 9
Solutions: 0
12-15-2017
05:46 PM
Tags: ambari-views, Hive, How-ToTutorial, oozie-hive, oozie-sharelib, Pig, solutions, wfm
08-24-2017
04:14 PM
1 Kudo
sqoop-validate.pdf
Tags: How-ToTutorial, solutions, Sqoop, validation
06-08-2017
06:58 PM
@Sarnath K Your cluster definition might be wrong. What's your HDP version? You can try it with HDP 2.5.3. Create the cluster definition using the Falcon UI and try again.
05-16-2017
07:24 PM
1 Kudo
@Niraj Parmar Follow this document to run it as a different user: https://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.1.1/bk_dataflow-security/content/secure-storm-ui.html
05-05-2017
07:16 PM
1 Kudo
@Sandeep Gade Did you follow this link? https://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.1.1/bk_dataflow-security/content/secure-storm-ui.html
04-26-2017
06:51 PM
Hi @Vishakha Agarwal Use the link below for Windows: https://community.hortonworks.com/articles/28537/user-authentication-from-windows-workstation-to-hd.html
04-19-2017
03:37 PM
@Vishal Gupta Ranger cannot authenticate users; it is for authorization.
To block users, delete the local users in Ranger.
04-01-2017
03:39 AM
Hi @Anishkumar Valsalam It's an SSLHandshake error. Verify the certificates; the root and intermediate certificates go into the truststore. Follow this link: https://community.hortonworks.com/articles/58009/hdf-20-enable-ssl-for-apache-nifi-from-ambari.html
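If helpful, here is a minimal sketch of importing the CA chain into the truststore with keytool; the file names, alias names, and password are placeholders, so substitute your own:
# import the root and intermediate certificates (example file names)
keytool -importcert -noprompt -alias root-ca -file root-ca.pem -keystore truststore.jks -storepass changeit
keytool -importcert -noprompt -alias intermediate-ca -file intermediate-ca.pem -keystore truststore.jks -storepass changeit
Then restart NiFi so it picks up the updated truststore.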
03-31-2017
09:13 PM
It's simple to change it with the Ambari UI. Otherwise, use the Ambari configs.sh script and restart all the affected services. It's smooth and easy.
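As a rough sketch (the script path and arguments can vary by Ambari version), configs.sh takes an action, the Ambari host, cluster name, config type, key, and value; the config type, key, and value below are only placeholders:
# set a single property; Ambari will create a new config version
/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set ambari-host.example.com MyCluster core-site "hadoop.proxyuser.hive.hosts" "*"
Then restart the affected services from the Ambari UI.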
03-24-2017
09:20 PM
It's a Tez issue. Update the parameters below in tez-site.xml and hive-site.xml; it might resolve your issue.
tez-site.xml:
<property>
  <name>tez.am.launch.cmd-opts</name>
  <value>-XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseParallelGC</value>
</property>
<property>
  <name>tez.task.launch.cmd-opts</name>
  <value>-XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseParallelGC</value>
</property>
hive-site.xml:
<property>
  <name>hive.tez.java.opts</name>
  <value>-server -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseParallelGC -XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps</value>
</property>
02-07-2017
10:25 PM
Restart the Spark, Livy, and Zeppelin servers and the interpreters. It worked for me. Ram Baskaran
01-25-2017
10:13 PM
Hi @Pardeep I had the same issue. The problem is the /etc/krb5.conf file. Just match it against the one in https://community.hortonworks.com/articles/59635/one-way-trust-mit-kdc-to-active-directory.html
12-05-2016
07:03 PM
2 Kudos
You need to create the HDFS directory /apps/falcon/extensions/hdfs-mirroring. Verify against this link: https://community.hortonworks.com/articles/55382/hive-disaster-recovery-using-falcon.html
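For example (assuming the falcon user should own the extension store; adjust the ownership to your setup):
# create the extension directory in HDFS and hand it to the falcon user
hdfs dfs -mkdir -p /apps/falcon/extensions/hdfs-mirroring
hdfs dfs -chown -R falcon:hadoop /apps/falcon/extensions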
12-05-2016
02:20 AM
1 Kudo
Remove the Falcon service and reinstall it with Ambari.
12-05-2016
02:19 AM
Falcon requires the Hive bootstrapping export method: hive -e "EXPORT TABLE TABLE_NAME TO 'hdfs://BACKUP_CLUSTER:8020/hiveimport/' FOR replication('bootstrapping')"
12-05-2016
02:11 AM
It will work when you use the correct Hive bootstrapping method. The correct way to replicate a Hive table is: hive -e "EXPORT TABLE TABLE_NAME TO 'hdfs://BACKUP_CLUSTER:8020/hiveimport/' FOR replication('bootstrapping')"
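On the backup cluster you would then load the exported data; a minimal sketch, assuming the same HDFS path used in the export above:
hive -e "IMPORT TABLE TABLE_NAME FROM 'hdfs://BACKUP_CLUSTER:8020/hiveimport/'"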
12-05-2016
02:05 AM
Use the Falcon CLI to kill any running process.
To find the status: falcon instance -type process -name NAME_OF_YOUR_PROCESS -status
To kill the process: falcon instance -type process -name NAME_OF_YOUR_PROCESS -kill
See https://falcon.apache.org/FalconCLI.html
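Note that instance operations are usually scoped with a -start (and optionally -end) nominal time; a sketch with placeholder times:
falcon instance -type process -name NAME_OF_YOUR_PROCESS -kill -start "2016-12-01T00:00Z" -end "2016-12-05T00:00Z"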
12-05-2016
01:47 AM
Uploading hdfs-replication-workflow.xml into HDFS will resolve this problem, i.e. create the HDFS directory /apps/data-mirroring/workflows and upload the workflow file there.
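For example (run as a user that can write under /apps, e.g. hdfs or falcon):
hdfs dfs -mkdir -p /apps/data-mirroring/workflows
hdfs dfs -put hdfs-replication-workflow.xml /apps/data-mirroring/workflows/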
10-21-2016
06:38 PM
Use this link to connect to Kerberized Hive over JDBC.
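As a quick sketch of what the connection looks like (the host, port, principal, and user below are placeholders; obtain a Kerberos ticket first):
kinit your_user@EXAMPLE.COM
beeline -u "jdbc:hive2://hs2-host.example.com:10000/default;principal=hive/_HOST@EXAMPLE.COM"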
08-16-2016
02:49 PM
2 Kudos
Yes, it's possible. Keep the same key on both KMS instances (prod and DR). I am using Falcon to copy the data from prod to DR with KMS encryption.
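A rough sketch of creating the same key against each Ranger KMS (the key name, size, and KMS URLs are placeholders):
hadoop key create mirror_key -size 256 -provider kms://http@prod-kms.example.com:9292/kms
hadoop key create mirror_key -size 256 -provider kms://http@dr-kms.example.com:9292/kms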
07-26-2016
01:44 PM
1 Kudo
Did you submit the cluster entity? Here is the step to do it:
$FALCON_HOME/bin/falcon entity -submit -type cluster -file /cluster/definition.xml
07-15-2016
06:33 PM
In order to distcp between two HDFS HA clusters (for example, A and B), modify the following in hdfs-site.xml for both clusters. For example, the nameservices for cluster A and B are HAA and HAB respectively.
- Add both nameservices on both clusters: dfs.nameservices = HAA,HAB
- Add the property dfs.internal.nameservices. In cluster A: dfs.internal.nameservices = HAA. In cluster B: dfs.internal.nameservices = HAB
- Add dfs.ha.namenodes.<nameservice>. In cluster A: dfs.ha.namenodes.HAB = nn1,nn2. In cluster B: dfs.ha.namenodes.HAA = nn1,nn2
- Add the property dfs.namenode.rpc-address.<nameservice>.<nn>. In cluster A: dfs.namenode.rpc-address.HAB.nn1 = <NN1_fqdn>:8020 and dfs.namenode.rpc-address.HAB.nn2 = <NN2_fqdn>:8020. In cluster B: dfs.namenode.rpc-address.HAA.nn1 = <NN1_fqdn>:8020 and dfs.namenode.rpc-address.HAA.nn2 = <NN2_fqdn>:8020
- Add the property dfs.client.failover.proxy.provider.<nameservice> (i.e. HAA or HAB). In cluster A: dfs.client.failover.proxy.provider.HAB = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider. In cluster B: dfs.client.failover.proxy.provider.HAA = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
- Restart the HDFS service.
Once complete, you will be able to run the distcp command using the nameservices; a sketch of such a command is below.
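For instance, once both nameservices resolve on the source cluster, a distcp between the two clusters (the paths are placeholders) looks like:
hadoop distcp hdfs://HAA/apps/source/data hdfs://HAB/apps/target/data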
02-22-2016
09:45 PM
Download both the Phoenix and HBase client JARs into the SQuirreL lib directory and try again.
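When registering the driver in SQuirreL, the class name and URL format below are the usual ones for Phoenix (the ZooKeeper quorum and znode are placeholders; a Kerberized cluster typically uses /hbase-secure):
Driver class: org.apache.phoenix.jdbc.PhoenixDriver
Example URL: jdbc:phoenix:zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase-unsecure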