Member since: 02-19-2016
Posts: 40
Kudos Received: 12
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 2418 | 08-23-2016 08:52 PM |
08-16-2016
10:13 PM
[root@sandbox /]# hdfs dfs -ls /tmp/mahi_dev/Data/
ls: `/tmp/mahi_dev/Data/': No such file or directory
08-16-2016
09:23 PM
@Constantin Stanca, @mqureshi, I tried to run it from the local file system as below:
[hdfs@sandbox LogFiles]$ hadoop jar wordcount.jar WordCount count.txt /Out
The "Not a valid JAR" error is gone, but now I get an "Unsupported major.minor version 52.0" error. I compiled my program against hadoop-common-2.7.1.jar. Hadoop and Java versions are below:
Hadoop 2.7.1.2.3.2.0-2950
java version "1.7.0_91"
Please suggest.
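A likely cause of "Unsupported major.minor version 52.0" is that the class files were compiled with JDK 8 (class-file major version 52) while the cluster runs Java 7 (major version 51). A minimal sketch of checking this and rebuilding for a Java 7 target; the file names WordCount.java / WordCount.class come from the post, the rest is illustrative:

# Confirm which class-file version the jar was built for (52 = Java 8, 51 = Java 7)
javap -verbose WordCount.class | grep "major version"
# Rebuild targeting Java 7, compiling against the cluster's Hadoop classpath
javac -source 1.7 -target 1.7 -cp "$(hadoop classpath)" WordCount.java
jar cf wordcount.jar WordCount*.class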
08-16-2016
04:25 PM
Hi, I did the following (/tmp is under HDFS). Please suggest how to resolve the issue below.
[root@sandbox /]# pig
grunt> lines= LOAD 'hdfs://sandbox.hortonworks.com:8020/tmp/mahi_dev/Data/count.txt' AS (line:chararray);
grunt> dump lines;
2016-08-16 15:41:17,892 [JobControl] INFO org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:DefaultJobName got an error while submitting
org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Input path does not exist: hdfs://sandbox.hortonworks.com:8020/tmp/mahi_dev/Data/count.txt
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:279)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:194)
at java.lang.Thread.run(Thread.java:745)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://sandbox.hortonworks.com:8020/tmp/mahi_dev/Data/count.txt
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextInputFormat.listStatus(PigTextInputFormat.java:36)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:265)
... 18 more
2016-08-16 15:41:17,906 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1469722152838_0058
2016-08-16 15:41:17,906 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases lines
2016-08-16 15:41:17,906 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: lines[1,7],lines[-1,-1] C: R:
2016-08-16 15:41:17,919 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2016-08-16 15:41:22,943 [main] WARN org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2016-08-16 15:41:22,943 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1469722152838_0058 has failed! Stop running all dependent jobs
2016-08-16 15:41:22,943 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2016-08-16 15:41:23,123 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2016-08-16 15:41:23,124 [main] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
2016-08-16 15:41:23,292 [main] INFO org.apache.hadoop.mapred.ClientServiceDelegate - Could not get Job info from RM for job job_1469722152838_0058. Redirecting to job history server.
2016-08-16 15:41:24,297 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: sandbox.hortonworks.com/10.0.2.15:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2016-08-16 15:41:25,300 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: sandbox.hortonworks.com/10.0.2.15:10020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
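Since the job fails with "Input path does not exist", a hedged first step is to confirm the file really is at that HDFS location and to stage it there if it is not; the commands below assume a local copy of count.txt in the working directory and mirror the path used in the LOAD statement:

# Check whether the file the LOAD statement points at exists in HDFS
hdfs dfs -ls /tmp/mahi_dev/Data/count.txt
# If not, create the directory and upload the local copy, then re-run the Pig script
hdfs dfs -mkdir -p /tmp/mahi_dev/Data
hdfs dfs -put count.txt /tmp/mahi_dev/Data/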
Labels:
- Apache Pig
08-14-2016
07:39 PM
1 Kudo
Hi. While running the wordcount jar I am getting the error "Not a valid JAR".
[hdfs@sandbox /]$ hadoop fs -ls
Found 8 items
drwx------ - root hdfs 0 2016-08-12 00:21 .Trash
drwxr-xr-x - root hdfs 0 2016-08-01 21:31 .hiveJars
drwx------ - hdfs hdfs 0 2016-08-12 19:42 .staging
-rw-r--r-- 3 admin hdfs 1024 2016-08-14 19:24 count.txt
drwxr-xr-x - hdfs hdfs 0 2016-08-12 18:38 emp
-rw-r--r-- 3 admin hdfs 4096 2016-08-14 19:24 wordcount.jar
[hdfs@sandbox /]$ hadoop jar wordcount.jar WordCount count.txt wordcountoutput
WARNING: Use "yarn jar" to launch YARN applications.
Not a valid JAR: /wordcount.jar
I compiled my program against hadoop-common-2.3.0.jar. Hadoop and Java versions are below:
Hadoop 2.7.1.2.3.2.0-2950
java version "1.7.0_91"
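One plausible reading of the error: the listing above shows the HDFS home directory, but "hadoop jar" expects a jar on the local file system, so "wordcount.jar" is resolved against the local working directory "/" and not found. A hedged sketch (the /tmp path is illustrative) of pulling the jar out of HDFS and re-running:

# "hadoop jar" takes a local jar path; copy the jar out of HDFS first
hdfs dfs -get wordcount.jar /tmp/wordcount.jar
# count.txt can stay in HDFS, since the job input is resolved against the HDFS home directory
hadoop jar /tmp/wordcount.jar WordCount count.txt wordcountoutput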
Labels:
- Apache YARN
08-01-2016
09:11 PM
Hi, I am getting a permission issue while creating an external table. I am logged in as root.
hive> create external table users (
> user_id INT,
> age INT,
> gender STRING,
> occupation STRING,
> zip_code STRING
> )
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '|'
> STORED AS TEXTFILE
> LOCATION '/myid/userinfo';
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/":hdfs:hdfs:drwxr-xr-x
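The AccessControlException suggests Hive is trying to create the LOCATION directory under "/", which in HDFS is owned by hdfs and is not writable by root. A minimal sketch, assuming the hdfs superuser account is available on the sandbox, of pre-creating the location and handing it to the querying user before re-running the CREATE EXTERNAL TABLE:

# Pre-create the external table location as the hdfs superuser
sudo -u hdfs hdfs dfs -mkdir -p /myid/userinfo
# Give ownership to the user running the Hive query (root here)
sudo -u hdfs hdfs dfs -chown -R root:hdfs /myid/userinfo
hdfs dfs -ls /myid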
Labels:
- Apache Hadoop
- Apache Hive
- HDFS
- Security
05-13-2016
06:20 PM
Yes, I used Hue since it is mentioned in the instruction link below: http://hortonworks.com/hadoop-tutorial/how-to-install-and-configure-the-hortonworks-odbc-driver-on-windows-7/ Is there any way to check a detailed log that can give me more information about the error?
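On the HDP sandbox the server-side Hive logs are usually the most informative place to look; a hedged sketch, assuming the default HDP log locations, of tailing them while reproducing the ODBC connection attempt:

# Default HDP log locations; paths may differ on other installs
tail -f /var/log/hive/hiveserver2.log
tail -f /var/log/hive/hivemetastore.log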
05-13-2016
04:23 PM
@devers, please find attached dsn-setup.jpg. I tried with usernames hive and hue but got the same error.
05-13-2016
01:51 PM
@Jitendra, I tried the above link previously but got the error mentioned in my thread.
05-12-2016
07:45 PM
1 Kudo
Hi, while doing the DSN setup I am getting the error below. Please suggest.
FAILED!
[Hortonworks][Hardy] (34) Error from server: connect() failed: errno = 10060.
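Errno 10060 is a Windows connection timeout, so a reasonable first check is whether HiveServer2 is reachable from the client at all. A hedged sketch, assuming the default HiveServer2 port 10000 and the sandbox hostname used elsewhere in these posts (on a VirtualBox sandbox the port also has to be forwarded to the host):

# From the Windows client, check that the HiveServer2 port answers
telnet sandbox.hortonworks.com 10000
# On the sandbox itself, confirm HiveServer2 is listening
netstat -tlnp | grep 10000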
Labels:
- Apache Hive