Member since: 11-07-2017
Posts: 5
Kudos Received: 4
Solutions: 0
11-07-2017 07:32 AM
It's password_hash='ffa2eb4251b38e069e968890cb2bcdb6229982322f5ed2470bf96231fe4c39c8', password_salt=-4382599614486590865.
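For context, values like these are typically applied by updating the admin row in Cloudera Manager's backing database. A minimal sketch, assuming a MySQL-backed scm database with the standard USERS table (the database name and the 'admin' user name are assumptions, not from this post):

# Hypothetical admin password reset using the hash/salt above;
# the 'scm' database and 'admin' user name are assumptions.
mysql -u root -p scm -e "UPDATE USERS SET PASSWORD_HASH='ffa2eb4251b38e069e968890cb2bcdb6229982322f5ed2470bf96231fe4c39c8', PASSWORD_SALT=-4382599614486590865 WHERE USER_NAME='admin';"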
11-07-2017 07:13 AM
Hello rufusayeni, do you also know the default password hash and salt for CDH 4.5? Thanks and regards!
01-27-2017 02:39 PM
The only option for tojson is --pretty. When I compress an Avro file with avro-tools, I can read it back without specifying a codec.
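To make that second point concrete, a small sketch with avro-tools (file names here are placeholders): recodec rewrites a container file with a new codec, and because the codec is recorded in the container header, tojson then needs no codec flag at all.

# Rewrite an existing Avro container file with the snappy codec
java -jar avro-tools-1.8.1.jar recodec --codec snappy input.avro compressed.avro
# The codec is stored in the container header, so no codec option is needed
java -jar avro-tools-1.8.1.jar tojson --pretty compressed.avro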
01-27-2017 01:25 PM
2 Kudos
Hello everyone, I created a data flow that puts Avro files to HDFS. Everything works, except when I set compression to SNAPPY. With Snappy it creates a file with an .avro.snappy extension, but when I try to read it with avro-tools, I get the following exception:

java -jar avro-tools-1.8.1.jar tojson hour\=21/000000_0.avro.snappy
Exception in thread "main" java.io.IOException: Not a data file.
at org.apache.avro.file.DataFileStream.initialize(DataFileStream.java:105)
at org.apache.avro.file.DataFileStream.<init>(DataFileStream.java:84)
at org.apache.avro.tool.DataFileReadTool.run(DataFileReadTool.java:71)
at org.apache.avro.tool.Main.run(Main.java:87)
at org.apache.avro.tool.Main.main(Main.java:76)

Any idea what could be wrong? Thanks and regards, Christian
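One way to check what kind of file you actually have, as a hedged sketch: avro-tools' getmeta subcommand prints the container header, which on a genuine Avro container lists avro.schema and the internal avro.codec. If the whole file was Snappy-compressed after the fact, the Avro magic bytes are no longer at the start of the file, and getmeta fails the same way tojson does.

# Print the container header metadata; a valid snappy-coded container
# reports avro.codec = snappy, while a whole-file snappy stream fails
# with "Not a data file", just like tojson above.
java -jar avro-tools-1.8.1.jar getmeta hour\=21/000000_0.avro.snappy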
01-07-2016 03:25 PM
2 Kudos
I'm trying to insert data from a CSV file with the Kite SDK on the Hortonworks Sandbox 2.3.2. After fixing the missing ojdbc6.jar and the missing mapreduce.tar.gz, I run into the following error:

1 job failure(s) occurred:
org.kitesdk.tools.CopyTask: Kite(dataset:file:/tmp/f00edc6d-4862-40d0-ad48-d6973b3261... ID=1 (1/1)(1): java.io.FileNotFoundException: File does not exist: hdfs://sandbox.hortonworks.com:8020/tmp/crunch-615301065/p1/REDUCE
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1309)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
at org.apache.hadoop.fs.FileSystem.resolvePath(FileSystem.java:751)
at org.apache.hadoop.mapreduce.v2.util.MRApps.parseDistributedCacheArtifacts(MRApps.java:571)
at org.apache.hadoop.mapreduce.v2.util.MRApps.setupDistributedCache(MRApps.java:463)
at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:93)
at org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:163)
at org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob.submit(CrunchControlledJob.java:329)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.startReadyJobs(CrunchJobControl.java:204)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.pollJobStatusAndStartNewOnes(CrunchJobControl.java:238)
at org.apache.crunch.impl.mr.exec.MRExecutor.monitorLoop(MRExecutor.java:112)
at org.apache.crunch.impl.mr.exec.MRExecutor.access$000(MRExecutor.java:55)
at org.apache.crunch.impl.mr.exec.MRExecutor$1.run(MRExecutor.java:83)
at java.lang.Thread.run(Thread.java:745)

Any idea what's going wrong here or what I missed? Thanks in advance.
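For reference, the kind of import being attempted, as a minimal sketch with the kite-dataset CLI (the schema, dataset URI, and file names are hypothetical placeholders, not taken from the original run):

# Infer an Avro schema from the CSV header, create the dataset, then import.
# All names below are placeholders.
kite-dataset csv-schema sample.csv --class Sample -o sample.avsc
kite-dataset create dataset:hive:default/sample --schema sample.avsc
kite-dataset csv-import sample.csv dataset:hive:default/sample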