Member since: 02-02-2016
Posts: 583
Kudos Received: 518
Solutions: 98

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4188 | 09-16-2016 11:56 AM |
| | 1748 | 09-13-2016 08:47 PM |
| | 6941 | 09-06-2016 11:00 AM |
| | 4170 | 08-05-2016 11:51 AM |
| | 6244 | 08-03-2016 02:58 PM |
05-24-2016
12:08 PM
@Farrukh Mahmood I still suspect there is a syntax issue with the Sqoop command. Could you share the latest command you ran, along with the error message?
05-24-2016
11:12 AM
@Farrukh Mahmood OK, then please check whether you have completed the prerequisites mentioned by @Dave Russell.
05-24-2016
10:52 AM
@Farrukh Mahmood Can you please try the syntax below?
sqoop import --direct --connect jdbc:netezza://nzserverip:5480/db1 --username user1 --password pass1 --table tab1 --target-dir /test/tab1 --delete-target-dir -m 8 --log-dir "/logdir"
05-23-2016
10:41 PM
@Smart Solutions Here is my env info. sudo -u spark ./sbin/start-thriftserver.sh --master yarn-client --executor-memory 512m --hiveconf hive.server2.thrift.port=10015
[root@ey spark-thriftserver]# lsof -i:10015
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 29812 spark 193u IPv6 246416 0t0 TCP *:10015 (LISTEN)
[root@ey spark-thriftserver]# lsof -i:10001
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 22905 hive 482u IPv4 139955 0t0 TCP *:scp-config (LISTEN)
Just wanted to check: do you have hive-site.xml or some other .xml file inside the /usr/hdp/current/spark-thriftserver/conf directory that has the hive.server2.thrift.port property configured?
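A quick way to run that check is to grep the conf directory for the property. This is a minimal sketch (the conf path is the one from the thread; the fallback message is mine):

```shell
# Search the Spark Thrift Server conf dir for any file that sets the
# HS2 thrift port; print a note if nothing overrides it.
conf_dir=/usr/hdp/current/spark-thriftserver/conf
grep -rl "hive.server2.thrift.port" "$conf_dir" 2>/dev/null \
  || echo "no hive.server2.thrift.port override found"
```

Any file it lists is a candidate for the port conflict.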
05-23-2016
09:31 PM
@Smart Solutions I'm not able to reproduce this issue on my cluster, but as a workaround you can move the HS2 HTTP port from 10001 to 10002 and let Spark occupy port 10001.
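For reference, the workaround would look roughly like the hive-site.xml fragment below, assuming HS2 is running in HTTP transport mode (port 10001 is the HTTP-mode default):

```xml
<!-- hive-site.xml fragment: move the HS2 HTTP port off 10001 -->
<property>
  <name>hive.server2.thrift.http.port</name>
  <value>10002</value>
</property>
```

On an Ambari-managed cluster this should be changed through the Hive config UI rather than by editing the file directly, and HS2 restarted afterwards.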
05-23-2016
01:31 PM
@chennuri gouri shankar Can you share the output of the commands below?
cat /var/log/ambari-metrics-collector/ambari-metrics-collector.pid
ps -aef | grep collector
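The two commands above can be combined into a single check: read the PID recorded in the pid file and verify that the process is actually alive. A minimal sketch (the pid-file path is the one from the thread; the messages are mine):

```shell
# Check whether the Ambari Metrics Collector pid file points at a live process.
pidfile=/var/log/ambari-metrics-collector/ambari-metrics-collector.pid
if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
  echo "collector running as PID $(cat "$pidfile")"
else
  echo "collector not running (or stale/missing pid file)"
fi
```

A stale pid file (file present but no matching process) is a common reason Ambari reports the collector as down.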
05-23-2016
01:22 PM
@karthik sai Hi Karthik, what I meant was: installing a Hortonworks Hadoop cluster, or a Sandbox VM with Flume, and running the same Flume example there would help us understand your issue. Here is the Sandbox download link: http://hortonworks.com/downloads/#sandbox
05-23-2016
01:09 PM
@karthik sai It looks like you are using the CDH distro, so I would recommend running the same test on an HDP cluster with Flume and letting us know if you still face any issue.
05-23-2016
11:46 AM
@Mon key Frankly, I don't know how it got removed from your cluster 🙂 but this property is set by default when you install the History Server. Also, please accept the answer if it helped you resolve the issue.
05-23-2016
10:25 AM
1 Kudo
@Mon key Hi, can you cross-check whether you have the property below in the MapReduce2 -> Configs section of the Ambari UI?
mapreduce.jobhistory.done-dir = /mr-history/done
If not, try adding it under the "Advanced mapred-site" section; the default value of this parameter is "/mr-history/done".
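For reference, the equivalent mapred-site.xml fragment would look roughly like this (property name and value taken from the post; on an Ambari-managed cluster, set it through the UI rather than editing the file by hand):

```xml
<!-- mapred-site.xml fragment: where the JobHistory Server keeps finished jobs -->
<property>
  <name>mapreduce.jobhistory.done-dir</name>
  <value>/mr-history/done</value>
</property>
```

After adding it, restart the MapReduce2 / JobHistory Server service so the change takes effect.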