Member since: 10-12-2016
Posts: 14
Kudos Received: 4
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3124 | 10-25-2016 04:44 PM |
| | 2802 | 10-21-2016 04:29 PM |
| | 1884 | 10-14-2016 07:44 PM |
01-13-2017
05:57 PM
I've had zero success trying to get NiFi installed via Ambari: when it throws up the recommended-settings screen, it won't let me proceed. I also tried the steps above to rm the files, but I consistently get:

cannot remove `/var/lib/ambari-server/resources/stacks/HDP/2.5/services/NIFI/package/scripts': Invalid argument

I seem to get the "Invalid argument" failure whenever I try to remove anything from this Docker package. Any guidance would be appreciated. Thanks,
10-25-2016
04:44 PM
@Timothy Spann @Vedant Jain ISSUE RESOLVED!!
It was indeed an issue with the Docker configuration. First, from the host (port 22), I went to /var/lib/docker/containers/ and then cd'd into the container's directory. From there, I opened config.v2.json and hostconfig.json and added an entry for port 10015. (There was an entry for port 10000, but not one for 10015.) Then I restarted the host (shutdown -r now).

After the restart, from the host (port 22), I could see that there was now a route to 10015:

[root@sandbox ~]# iptables -t nat -L -n | grep 10015
MASQUERADE tcp -- 172.17.0.2 172.17.0.2 tcp dpt:10015
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:10015 to:172.17.0.2:10015

I then logged into Ambari, started the Thrift Server, and was able to connect via Tableau and the Spark SQL ODBC driver to port 10015. To expose other ports that are not in the default Docker image, you'll probably have to follow the same process.
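For reference, the entries I added looked roughly like this (a sketch from memory, so treat the exact layout as approximate and back up both files before editing; the container's directory name under /var/lib/docker/containers/ will differ on your machine):

In hostconfig.json, under "PortBindings", mirroring the existing 10000 entry:

"10015/tcp": [{ "HostIp": "", "HostPort": "10015" }]

In config.v2.json, under "ExposedPorts":

"10015/tcp": {}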
10-25-2016
04:20 PM
I think I'm getting warmer....
When I go to sandbox port 22 (outside of the Docker container) and run iptables, I'm seeing an entry for port 10000, but there isn't one for 10015. Next step: I'll need to figure out how to add these entries to the iptables of the actual VM.

[root@sandbox ~]# iptables -t nat -L -n | grep 10000
MASQUERADE tcp -- 172.17.0.2 172.17.0.2 tcp dpt:10000
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:10000 to:172.17.0.2:10000
[root@sandbox ~]# iptables -t nat -L -n | grep 10015
[root@sandbox ~]#
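My working theory is that rules mirroring the existing 10000 entries would do it, something like the following (untested as of this post, and 172.17.0.2 is the container IP from my setup, so check yours with docker inspect):

[root@sandbox ~]# iptables -t nat -A DOCKER -p tcp --dport 10015 -j DNAT --to-destination 172.17.0.2:10015
[root@sandbox ~]# iptables -t nat -A POSTROUTING -p tcp -s 172.17.0.2 -d 172.17.0.2 --dport 10015 -j MASQUERADE

Anything added this way wouldn't survive a reboot, though.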
10-24-2016
09:19 PM
I'm using VMware instead of VirtualBox, but I did go into the Virtual Network Editor, selected NAT, set the Host Port to 10015, entered the VM's IP address, and set the virtual machine port to 10015. It still errors out when I try to telnet to the IP and port.
10-24-2016
03:40 PM
I'm on Tableau 9.3. It was working as expected with Sandbox HDP 2.4.
10-21-2016
10:04 PM
1 Kudo
I had zero issues connecting Tableau to Sandbox HDP 2.4 using the Simba Spark SQL ODBC driver, but I'm having issues with Sandbox HDP 2.5. I can connect to Hive on port 10000 with no problem, but trying to connect to Spark on 10015 gives a connection-failed message. Telnet to port 10000 works just fine, but telnet cannot open a connection to the server on port 10015. I ran:

[root@sandbox sbin]# netstat -nltup | grep 100

with the following results:

tcp 0 0 0.0.0.0:10020 0.0.0.0:* LISTEN 2533/java
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 1703/java
tcp 0 0 0.0.0.0:10033 0.0.0.0:* LISTEN 2533/java
tcp 0 0 :::10015 :::* LISTEN 4212/java

Process 4212 corresponds to the process that is running the Thrift Server. This is the last thing I see at the end of the /var/log/spark thrift-server log, with nothing after it:

16/10/21 19:55:53 INFO ThriftCLIService: Starting ThriftBinaryCLIService on port 10015 with 5...500 worker threads
Any help getting thrift-server up for Tableau would be greatly appreciated.
10-21-2016
04:29 PM
1 Kudo
Fixed the error by using an earlier version of the Kite API:

curl http://central.maven.org/maven2/org/kitesdk/kite-tools/0.17.0/kite-tools-0.17.0-binary.jar -o kite-dataset
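After the download, the usual Kite CLI setup applies before re-running the import (same working directory and input paths as in my earlier post):

chmod +x kite-dataset
./kite-dataset csv-import /home/hdfs/bin/ingest/Payor_1_Claims.txt Payor_1_Claims --delimiter '|'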
10-19-2016
07:49 PM
I'm not sure if it's the Docker implementation of HDP 2.5 on the sandbox or what the story is. I've got the most recent version of the Kite API installed.
Perms look good:

[root@sandbox ~]# hdfs dfs -ls /
Found 12 items
drwxrwxrwx - yarn hadoop 0 2016-10-17 19:51 /app-logs
drwxr-xr-x - hdfs hdfs 0 2016-09-13 11:01 /apps
drwxr-xr-x - yarn hadoop 0 2016-09-13 10:56 /ats
drwxr-xr-x - hdfs hdfs 0 2016-09-13 11:08 /demo
drwxr-xr-x - hdfs hdfs 0 2016-09-13 10:56 /hdp
drwxr-xr-x - mapred hdfs 0 2016-09-13 10:56 /mapred
drwxrwxrwx - mapred hadoop 0 2016-09-13 10:56 /mr-history
drwxr-xr-x - hdfs hdfs 0 2016-10-12 15:05 /ranger
drwxrwxrwx - spark hadoop 0 2016-10-19 19:46 /spark-history
drwxrwxrwx - spark hadoop 0 2016-09-13 11:20 /spark2-history
drwxrwxrwx - hdfs hdfs 0 2016-10-17 17:23 /tmp
drwxr-xr-x - hdfs hdfs 0 2016-10-12 15:18 /user
[root@sandbox ~]# hdfs dfs -ls /tmp
Found 13 items
-rwxrwxrwx 3 raj_ops hdfs 6676440 2016-10-17 17:23 /tmp/Payor_1_Claims.txt
-rwxrwxrwx 3 raj_ops hdfs 2803 2016-10-17 17:23 /tmp/Payor_1_Eligibility.txt
-rwxrwxrwx 3 raj_ops hdfs 21015 2016-10-17 17:23 /tmp/Payor_1_Glucose_Results.txt
-rwxrwxrwx 3 raj_ops hdfs 2317192 2016-10-17 17:22 /tmp/Payor_2_Additional_Dx_Codes.txt
-rwxrwxrwx 3 raj_ops hdfs 7866129 2016-10-17 17:23 /tmp/Payor_2_Claims.txt
-rwxrwxrwx 3 raj_ops hdfs 8626 2016-10-17 17:23 /tmp/Payor_2_Eligibility.txt
-rwxrwxrwx 3 raj_ops hdfs 22969 2016-10-17 17:23 /tmp/Payor_2_Glucose_Results.txt
-rwxrwxrwx 3 raj_ops hdfs 8474653 2016-10-17 17:23 /tmp/Payor_3_Claims.txt
-rwxrwxrwx 3 raj_ops hdfs 995712 2016-10-17 17:23 /tmp/Payor_3_Dx_Codes.txt
-rwxrwxrwx 3 raj_ops hdfs 88106 2016-10-17 17:23 /tmp/Payor_3_Eligibility.txt
-rwxrwxrwx 3 raj_ops hdfs 23125 2016-10-17 17:23 /tmp/Payor_3_Glucose_Results.txt
drwxrwxrwx - hdfs hdfs 0 2016-09-13 10:56 /tmp/entity-file-history
drwxrwxrwx - ambari-qa hdfs 0 2016-10-17 19:52 /tmp/hive
10-18-2016
04:40 PM
logs.zip: I think I zipped up the requested logs. Thanks for your help; I'm somewhat new to Hortonworks and trying to flesh out a POC. Here's the full text of my error message:

[hdfs@sandbox bin]$ ./kite-dataset csv-import /home/hdfs/bin/ingest/Payor_1_Claims.txt Payor_1_Claims --delimiter '|'
1 job failure(s) occurred:
org.kitesdk.tools.CopyTask: Kite(dataset:file:/tmp/b138551a-23e0-49ee-a51e-d9dd0773f1... ID=1 (1/1)(1): java.io.FileNotFoundException: File does not exist: hdfs://sandbox.hortonworks.com:8020/tmp/crunch-380116631/p1/REDUCE
at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1427)
at org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1419)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1419)
at org.apache.hadoop.fs.FileSystem.resolvePath(FileSystem.java:766)
at org.apache.hadoop.mapreduce.v2.util.MRApps.parseDistributedCacheArtifacts(MRApps.java:600)
at org.apache.hadoop.mapreduce.v2.util.MRApps.setupDistributedCache(MRApps.java:490)
at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:93)
at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)
at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob.submit(CrunchControlledJob.java:329)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.startReadyJobs(CrunchJobControl.java:204)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.pollJobStatusAndStartNewOnes(CrunchJobControl.java:238)
at org.apache.crunch.impl.mr.exec.MRExecutor.monitorLoop(MRExecutor.java:112)
at org.apache.crunch.impl.mr.exec.MRExecutor.access$000(MRExecutor.java:55)
at org.apache.crunch.impl.mr.exec.MRExecutor$1.run(MRExecutor.java:83)
at java.lang.Thread.run(Thread.java:745)
10-17-2016
09:10 PM
2 Kudos
I recently swapped sandboxes from HDP 2.4 to HDP 2.5, and I'm running into all sorts of issues with the KiteSDK. I created the directory /hdp/apps/2.5.0.0-1245/mapreduce/ and copied in mapreduce.tar.gz (commands below), which got me a little further, but now I'm running into an error I can't seem to overcome:

org.kitesdk.tools.CopyTask: Kite(dataset:file:/tmp/413a41a2-8813-4056-9433-3c5e073d80... ID=1 (1/1)(1): java.io.FileNotFoundException: File does not exist: hdfs://sandbox.hortonworks.com:8020/tmp/crunch-283520469/p1/REDUCE
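For reference, the commands for that first step were along these lines (the local location of mapreduce.tar.gz is from my sandbox and may differ on yours):

hdfs dfs -mkdir -p /hdp/apps/2.5.0.0-1245/mapreduce/
hdfs dfs -put /usr/hdp/2.5.0.0-1245/hadoop/mapreduce.tar.gz /hdp/apps/2.5.0.0-1245/mapreduce/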
Has anyone successfully gotten the Kite API to work on HDP 2.5? I can't figure out what I'm doing wrong here.
I'd be happy to go back to 2.4, but I can't seem to find a download for it.
Labels:
- Hortonworks Data Platform (HDP)