Member since: 11-18-2014
Posts: 196
Kudos Received: 17
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5747 | 03-16-2016 05:54 AM
 | 2311 | 02-05-2016 04:49 AM
 | 1536 | 01-08-2016 06:55 AM
 | 12512 | 09-29-2015 01:31 AM
 | 880 | 05-06-2015 01:50 AM
11-18-2016
01:49 AM
Hello! I'm trying to write a unit test in Spark that reads from a Hive table. As far as I saw, I have to extend QueryTest with TestHiveSingleton. However, I cannot manage to import QueryTest (which is supposed to be in org.apache.spark.sql)! I have already imported spark-sql_2.10. I also tried importing the assembly, with no results. Note: I am on CDH 5.8.2. Thank you!
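A hedged note on why the import fails: QueryTest is compiled into spark-sql's test sources, so the regular spark-sql jar does not contain it. A minimal sketch of pulling in the tests-classifier artifacts, assuming they are published for the Spark version in use (1.6.0 is an assumption, based on CDH 5.8 shipping Spark 1.6):
# QueryTest lives in the spark-sql *test* jar, not the main jar:
mvn dependency:get -Dartifact=org.apache.spark:spark-sql_2.10:1.6.0:jar:tests
# TestHiveSingleton lives in the spark-hive tests artifact:
mvn dependency:get -Dartifact=org.apache.spark:spark-hive_2.10:1.6.0:jar:tests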
09-26-2016
01:33 PM
2 Kudos
Another suggestion: give requirements such that a cluster is certifiable in production/development etc. Hope this helps 🙂
09-26-2016
01:29 PM
3 Kudos
About the conferences: I have to register every time. It would be nice to have access with the Cloudera account.
09-17-2016
09:37 AM
I have the same issue here, after an update from CDH 5.3...
09-15-2016
08:18 AM
Hello, I did the update by following the tutorial: https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ag_upgrade_cm5.html I had no problems until I started Cloudera Manager. In fact, I cannot start the Cloudera Management Services, and I cannot start any services in Cloudera Manager. When I try to start the Cloudera Management Services I get: Command aborted because of exception: Command timed out after 90 seconds. Also, the summary page shows: 7 hosts are reporting with NONE CDH version. I ran yum list installed | grep cloud on each machine, and on each of them I have:
cloudera-manager-agent.x86_64 5.8.1-1.cm581.p0.7.el6 @cloudera-manager
cloudera-manager-daemons.x86_64 5.8.1-1.cm581.p0.7.el6 @cloudera-manager
oracle-j2sdk1.7.x86_64 1.7.0+update67-1 @cloudera-director
so normally I have no conflict in the Cloudera versions. I haven't seen any post with this error after an update... Thank you
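A hedged pair of checks to run on one of the hosts reporting NONE (standard CM 5 agent paths assumed):
sudo service cloudera-scm-agent status   # did the agent come back up on the new version?
sudo tail -n 50 /var/log/cloudera-scm-agent/cloudera-scm-agent.log   # look for heartbeat/connection errors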
09-02-2016
02:51 AM
Hello, your problem is not linked to the Impala scheduler but to the shell action: Oozie cannot find your shell file.
1. What are the permissions on the file shell-impala-invalidate.sh? Does Oozie have access to it?
2. What are the permissions on the folder /data/4/yarn/nm/usercache/*******/appcache/application_1463053085953_30120/container_e49_1463053085953_30120_01_000002 (this folder is on one of your workers)?
Alina
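A minimal sketch of those checks, assuming the script sits next to the workflow in HDFS (the path below is hypothetical):
# The script must exist where the shell action's <file> element points,
# and be readable/executable by the submitting user:
hdfs dfs -ls /user/yourname/workflow/shell-impala-invalidate.sh
hdfs dfs -chmod 755 /user/yourname/workflow/shell-impala-invalidate.sh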
08-24-2016
01:25 AM
Hello, I have a small problem with the round function: when I add the number of decimals, it is not rounding at all. I saw that there was a problem with the rounding function in https://issues.apache.org/jira/browse/HIVE-4523, but I'm on Hive 0.13.1 (CDH 5.3.4), and normally it should be fixed in Hive 0.13.0... Thank you!
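A quick reproduction one can run to check the behaviour (a FROM-less SELECT needs Hive 0.13+):
# round(x, d) should keep d decimals; getting 3 back instead of 3.14
# reproduces the reported bug:
hive -e "SELECT round(3.14159, 2);"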
05-12-2016
05:14 AM
Hello, what is the easy/automated way of deploying your code from development to production with Cloudera? Thank you! Alina GHERMAN
05-10-2016
01:54 AM
Hello, I have a variable X that has several fields: some of type chararray (A, B, and C) and one of type bag (the field D). The bag is composed of tuples with two fields: {(name:chararray,value:chararray),(name:chararray,value:chararray),...}. I need to join this variable X with Y by the A field (from X), and with Z by each name field from the D bag (from X). The join with Y is a classic one; however, I can't figure out how to do the join with Z... Thank you!
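A hedged Pig sketch of one way to do the second join (relation and field names are placeholders, and X and Z are assumed to be LOADed earlier in the script): flatten the bag so each (name, value) tuple becomes its own row, then join on the flattened field.
pig <<'EOF'
-- X and Z come from earlier LOAD statements (omitted here)
X_flat = FOREACH X GENERATE A, FLATTEN(D) AS (name:chararray, value:chararray);
J = JOIN X_flat BY name, Z BY name;
EOF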
04-26-2016
05:24 AM
Hello, I created a Pig UDF for writing into HBase; however, when I run my job, the job finishes successfully but nothing is written into my HBase table, and in the logs I have 2 errors: java.io.IOException: Failed on local exception: java.io.EOFException;
Host Details : local host is: "ip"; destination host is: "ip":59856;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1415)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy29.getJobReport(Unknown Source)
at org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getJobReport(MRClientProtocolPBClientImpl.java:133)
at sun.reflect.GeneratedMethodAccessor288.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:320)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:419)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:553)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:183)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:582)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:580)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:580)
at org.apache.hadoop.mapred.JobClient.getTaskReports(JobClient.java:635)
at org.apache.hadoop.mapred.JobClient.getMapTaskReports(JobClient.java:629)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher.getStats(Launcher.java:150)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:468)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
at org.apache.pig.PigServer.execute(PigServer.java:1297)
at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:478)
at org.apache.pig.PigRunner.run(PigRunner.java:49)
at org.apache.oozie.action.hadoop.PigMain.runPigJob(PigMain.java:286)
at org.apache.oozie.action.hadoop.PigMain.run(PigMain.java:226)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:39)
at org.apache.oozie.action.hadoop.PigMain.main(PigMain.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:227)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1055)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:950)
Failed to load OpenSSL Cipher.
java.lang.UnsatisfiedLinkError: Cannot load libcrypto.so (libcrypto.so: cannot open shared object file: No such file or directory)!
at org.apache.hadoop.crypto.OpensslCipher.initIDs(Native Method)
at org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:89)
at org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.<init>(OpensslAesCtrCryptoCodec.java:50)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
at org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:67)
at org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:100)
at org.apache.hadoop.fs.Hdfs.<init>(Hdfs.java:91)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:129)
at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:157)
at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:242)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:334)
at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:331)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:331)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:448)
at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:470)
at org.apache.hadoop.mapred.YARNRunner.<init>(YARNRunner.java:137)
at org.apache.hadoop.mapred.YARNRunner.<init>(YARNRunner.java:121)
at org.apache.hadoop.mapred.YARNRunner.<init>(YARNRunner.java:111)
at org.apache.hadoop.mapred.YarnClientProtocolProvider.create(YarnClientProtocolProvider.java:34)
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:95)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:472)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:450)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:158)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
at org.apache.pig.PigServer.execute(PigServer.java:1297)
at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:478)
at org.apache.pig.PigRunner.run(PigRunner.java:49)
at org.apache.oozie.action.hadoop.PigMain.runPigJob(PigMain.java:286)
at org.apache.oozie.action.hadoop.PigMain.run(PigMain.java:226)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:39)
at org.apache.oozie.action.hadoop.PigMain.main(PigMain.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:227)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
2016-04-26 10:32:56,288 [main] DEBUG org.apache.hadoop.util.PerformanceAdvisory - Crypto codec org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec is not available.
2016-04-26 10:32:56,288 [main] DEBUG org.apache.hadoop.util.PerformanceAdvisory - Using crypto codec org.apache.hadoop.crypto.JceAesCtrCryptoCodec.
2016-04-26 10:32:56,290 [main] DEBUG org.apache.hadoop.mapreduce.Cluster - Picked org.apache.hadoop.mapred.YarnClientProtocolProvider as the ClientProtocolProvider
2016-04-26 10:32:56,321 [main] INFO org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2016-04-26 10:32:56,805 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated.
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:606)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:700)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1463)
at org.apache.hadoop.ipc.Client.call(Client.java:1382)
at org.apache.hadoop.ipc.Client.call(Client.java:1364)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy20.getApplicationReport(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationReport(ApplicationClientProtocolPBClientImpl.java:163)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy21.getApplicationReport(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplicationReport(YarnClientImpl.java:365)
at org.apache.hadoop.mapred.ResourceMgrDelegate.getApplicationReport(ResourceMgrDelegate.java:294)
at org.apache.hadoop.mapred.ClientServiceDelegate.getProxy(ClientServiceDelegate.java:152)
at org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:319)
at org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:419)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:553)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:183)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:582)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:580)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:580)
at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:598)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher.progressOfRunningJob(Launcher.java:274)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher.calculateProgress(Launcher.java:257)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:335)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
at org.apache.pig.PigServer.execute(PigServer.java:1297)
at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:478)
at org.apache.pig.PigRunner.run(PigRunner.java:49)
at org.apache.oozie.action.hadoop.PigMain.runPigJob(PigMain.java:286)
at org.apache.oozie.action.hadoop.PigMain.run(PigMain.java:226)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:39)
at org.apache.oozie.action.hadoop.PigMain.main(PigMain.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:227)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
2016-04-26 10:33:05,395 [main] DEBUG org.apache.hadoop.ipc.Client - IPC Client (1962449973) connection to ip/ip:8032 from agherman: closed
2016-04-26 10:33:05,395 [main] TRACE org.apache.hadoop.ipc.ProtobufRpcEngine - 1: Exception <- ip:8032: getApplicationReport {java.net.ConnectException: Call From ip/ip to ip:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused}
I therefore cannot manage to find the source of the problem... Thank you in advance!
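Separately from the EOFException, the second stack trace is self-describing: the native OpenSSL codec cannot load because libcrypto.so is missing, and the DEBUG lines show Hadoop falling back to the slower JCE codec. A hedged check on a worker node (RHEL/CentOS paths assumed):
# The loader wants the unversioned libcrypto.so symlink, which usually comes
# from the openssl-devel package on RHEL/CentOS:
ls -l /usr/lib64/libcrypto.so* || echo "libcrypto.so not found"
sudo yum install -y openssl-devel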
04-26-2016
01:47 AM
Hello, no I did not... I'll take all this into account and see if this and CDH 5.5 can get into our priorities. Thank you! Alina
04-17-2016
09:30 AM
Hello, I can't figure out why, but Hue got very, very slow. It happened step by step, but now it is really too slow. Can you point me to some configurations in order to speed it up? Also, each time I navigate in the browser I get an error, but the navigation still works... Thank you,
- Tags:
- hue
04-12-2016
08:17 AM
Thank you! Indeed, I recreated all the tables... Since I have the trash disabled, I had nothing in the trash... Still, this is a very complete reply. Thank you!
04-12-2016
08:12 AM
I'm not sure that I can change all the sources in order to post to all my Flume agents, but this is an interesting solution. Thank you!
04-02-2016
03:27 PM
Hello, thank you for your reply. In my case the Flume source is HTTP, and I wanted to know if there is a way to ensure that if the machine with the Flume source goes down, I can still receive the data (HA). However, I can only imagine a solution with two sources and a load-balancer machine in front of the two machines... and I was searching more for a solution within the Hadoop cluster (as it is done with YARN and HBase...). Thank you, Alina
03-29-2016
09:40 PM
Hello, I had a problem: my job failed because HBase could not find an existing table. I then ran: sudo -u hbase hbase hbck -repair and now all my tables are gone (besides one)!! I cannot see my old data in the HBase folder! Is there a way to recover all this? Please help! Thank you!
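A hedged first check before any further repair attempts: hbck -repair rewrites the metadata, but it should not normally delete HFiles, so the data may still be on HDFS (CDH 5 default paths assumed):
hdfs dfs -ls /hbase/data/default   # region directories of the "missing" tables
hdfs dfs -ls /hbase/archive        # HFiles that HBase moved aside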
03-29-2016
02:54 AM
03-16-2016
05:54 AM
What's strange is that it worked when I did cat file_with_all_split_commands | hbase shell ... Alina
03-16-2016
03:11 AM
Hello, I would like to change my HBase table and add some splits for keys that don't exist yet. I tried the following without success:
split 'table_name', 'key_that_does_not_exist_yet'
alter 'table_name', {SPLITS => ['my_new_split1','my_new_split2']}
However, I have the feeling that it is not working. When I check the splits, I see the same splits as before in the browser, and when I do hdfs dfs -ls /hbase/data/default/table_name I also see no additional folders... Note: I'm using CDH 5.3. Thank you!
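A hedged batch variant, consistent with the 03-16-2016 follow-up above where piping commands into hbase shell did work (names are placeholders):
echo "split 'table_name', 'key_that_does_not_exist_yet'" | hbase shell
hdfs dfs -ls /hbase/data/default/table_name   # a new region directory should appear once the split finishes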
- Tags:
- split_regions
03-14-2016
09:04 AM
Hello, I would like to set the following to true for all my Impala queries: set APPX_COUNT_DISTINCT=true; and I can't find any way to do it... I need this because I have some Impala queries that I send over the Impala JDBC driver and that I would like to optimize. Note: this topic is somewhat related to http://community.cloudera.com/t5/Interactive-Short-cycle-SQL/use-set-command-through-Impala-JDBC-Driver/td-p/37455 Thank you! Alina
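A hedged workaround to try: issue the SET as its own statement on the same connection/session before the real query (whether the JDBC driver passes SET through depends on the driver version, so verify with a query profile). The impala-shell equivalent, with placeholder names:
impala-shell -q "SET APPX_COUNT_DISTINCT=true; SELECT COUNT(DISTINCT some_col) FROM some_table;"
There is also an impalad startup flag, default_query_options, that can seed session defaults; whether it accepts this option on CDH 5.3 is an assumption to verify.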
- Tags:
- jdbc
- set_command
03-14-2016
07:32 AM
Hello, I have a really strange thing showing up in the Cloudera Manager statistics:
- I have no HBase logs newer than 6 hours;
- I have 60 JVM threads waiting on one of the workers, on a REGIONSERVER;
- the master also has 40 JVM threads waiting;
- I do not see jvm_waiting_threads for the other RegionServers (they are not showing up in the CM graph - SELECT jvm_waiting_threads).
Note: I have some trouble with HBase: every other day an HBase RegionServer goes down (not the same one each time). Thank you! Alina
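A hedged way to see what those threads are actually blocked on, directly on the affected RegionServer (the pgrep pattern is an assumption about how the process is named):
sudo -u hbase jstack $(pgrep -f HRegionServer | head -1) > /tmp/rs-stacks.txt
grep -c WAITING /tmp/rs-stacks.txt   # then read the full dump for the stuck stacks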
- Tags:
- waiting_threads
03-08-2016
11:10 PM
Hello, do you want to create a Pig or Hive/Impala UDF? Mainly, a UDF is some Java/Python code. For the Java code you have to upload the jar to HDFS; for the Python one, you have to add your code to HDFS. Please check this out for more information: http://gethue.com/hadoop-tutorial-hive-udf-in-1-minute/
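A hedged end-to-end example for the Hive/Java case (the jar path, function name, and class are placeholders; CREATE FUNCTION ... USING JAR needs Hive 0.13+):
hdfs dfs -put my-udfs.jar /user/hive/udfs/
hive -e "CREATE FUNCTION my_upper AS 'com.example.MyUpper' USING JAR 'hdfs:///user/hive/udfs/my-udfs.jar';"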
03-07-2016
07:36 AM
I think that I found a possible cause of the problem. At the hour indicated by the Flume logs, the Flume service was OK (nothing special); however, at that same hour I restarted HDFS in order to deploy a new configuration... However, I still have no solution for the lost files (other than not taking them into account)... Alina
03-07-2016
07:09 AM
Hello, I just had this happen one more time, and the source of the problem is:
2016-03-07 14:00:09,285 INFO org.apache.flume.sink.hdfs.BucketWriter: Creating hdfs://NameNode/path/to/file.json.tmp
2016-03-07 14:16:30,389 WARN org.apache.flume.sink.hdfs.BucketWriter: Caught IOException writing to HDFSWriter (Connection reset by peer). Closing file (hdfs://NameNode/path/to/file.json.tmp) and rethrowing exception.
2016-03-07 14:16:30,389 INFO org.apache.flume.sink.hdfs.BucketWriter: Closing hdfs://NameNode/path/to/file.json.tmp
2016-03-07 14:16:30,389 WARN org.apache.flume.sink.hdfs.BucketWriter: failed to close() HDFSWriter for file (hdfs://NameNode/path/to/file.json.tmp). Exception follows.
2016-03-07 14:16:30,390 INFO org.apache.flume.sink.hdfs.BucketWriter: Renaming hdfs://NameNode/path/to/file.json.tmp to hdfs://NameNode/path/to/file.json
I have this problem for all files for which I got the highlighted error (the "Caught IOException writing to HDFSWriter (Connection reset by peer)" warning above). Alina
03-07-2016
04:13 AM
Hello, in CDH 5.6 there are both Hive on Spark and Impala. How should we choose between these 2 services? Are there any benchmarks that compare them? Thank you! 🙂
- Tags:
- Hive on Spark
- impala
03-07-2016
01:42 AM
I searched for the root cause and I found this:
DatanodeRegistration(<ip>, datanodeUuid=5a1b56f4-34ac-48da-bfd1-5b8107c26705, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=cluster18;nsid=1840079577;c=0):
Got exception while serving BP-1623273649-ip-1419337015794: blk_1076874382_3137087 to /<ip>:49919
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/<ip>:50010 remote=/<ip>:49919]
at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:172)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:220)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:547)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:716)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:487)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:111)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:69)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
at java.lang.Thread.run(Thread.java:745)
In fact, the RegionServer went down for 4 minutes, and then the job failed with NotServingRegionException (the exception that I posted in my first post). Is the real solution to increase dfs.datanode.socket.write.timeout, as posted in http://blog.csdn.net/odailidong/article/details/46433205 and https://issues.apache.org/jira/browse/HDFS-693? In fact, what is even stranger is that the error shows SocketTimeoutException: 480000 millis, while in my actual configuration, in HDFS -> Service Monitor Client Config Overrides, I have:
<property><name>dfs.socket.timeout</name><value>3000</value></property>
<property><name>dfs.datanode.socket.write.timeout</name><value>3000</value></property>
<property><name>ipc.client.connect.max.retries</name><value>1</value></property>
<property><name>fs.permissions.umask-mode</name><value>000</value></property>
Also, in the HDFS-693 JIRA, Uma Maheswara Rao G said that "In our observation this issue came in long run with huge no of blocks in Data Node". In my case we have between 56334 and 80512 blocks per DataNode. Is this considered huge? Thank you!
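One hedged sanity check: the overrides above are Service Monitor client overrides, so they do not necessarily apply to the DataNodes or to the HBase clients that hit the 480000 ms timeout. To see what a given client actually resolves, run this with that service's configuration on the classpath:
hdfs getconf -confKey dfs.datanode.socket.write.timeout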
03-06-2016
03:29 AM
Of course, here is the JIRA that I created: https://issues.apache.org/jira/browse/HIVE-13215 I'm not really sure if I chose the best tags and labels 😄 Alina
- Tags:
- JIRA
03-06-2016
03:12 AM
Hello, the row counter was not completing successfully. I managed to make it work only after I manually ran a major_compact on this table and balanced the regions over the RegionServers. However, balancing the tables took about 20-30 minutes. In Cloudera Manager I have the configuration set to run a major_compact every 2 days. How can I be sure that I won't have this situation again? Thank you!
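For reference, the manual steps can be scripted and scheduled in the meantime (the table name is a placeholder):
echo "major_compact 'table_name'" | hbase shell
echo "balancer" | hbase shell   # asks the master to rebalance regions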
02-25-2016
07:20 AM
Hello, I have a workflow that I would like to export. However, I get an error while exporting it in Hue: [25/Feb/2016 06:16:43 -0800] base ERROR Internal Server Error: /oozie/export_workflow/180
Traceback (most recent call last):
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/core/handlers/base.py", line 111, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/apps/oozie/src/oozie/decorators.py", line 52, in decorate
return view_func(request, *args, **kwargs)
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/apps/oozie/src/oozie/views/editor.py", line 240, in export_workflow
zip_file = workflow.compress(mapping=dict([(param['name'], param['value']) for param in workflow.find_all_parameters()]))
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/apps/oozie/src/oozie/models.py", line 581, in compress
xml = self.to_xml(mapping=mapping)
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/apps/oozie/src/oozie/models.py", line 561, in to_xml
xml = re.sub(re.compile('\s*\n+', re.MULTILINE), '\n', django_mako.render_to_string(tmpl, {'workflow': self, 'mapping': mapping}))
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/desktop/core/src/desktop/lib/django_mako.py", line 109, in render_to_string_normal
result = template.render(**data_dict)
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/template.py", line 443, in render
return runtime._render(self, self.callable_, args, data)
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py", line 786, in _render
**_kwargs_for_callable(callable_, data))
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py", line 818, in _render_context
_exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py", line 844, in _exec_template
callable_(context, *args, **kwargs)
File "/tmp/tmpRHee5F/oozie/editor/gen/workflow.xml.mako.py", line 105, in render_body
__M_writer( node.to_xml(mapping) )
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/apps/oozie/src/oozie/models.py", line 712, in to_xml
return django_mako.render_to_string(node.get_template_name(), data)
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/desktop/core/src/desktop/lib/django_mako.py", line 109, in render_to_string_normal
result = template.render(**data_dict)
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/template.py", line 443, in render
return runtime._render(self, self.callable_, args, data)
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py", line 786, in _render
**_kwargs_for_callable(callable_, data))
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py", line 818, in _render_context
_exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py", line 844, in _exec_template
callable_(context, *args, **kwargs)
File "/tmp/tmpRHee5F/oozie/editor/gen/workflow-decision.xml.mako.py", line 40, in render_body
__M_writer(escape(unicode( node.get_oozie_child('default') )))
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/apps/oozie/src/oozie/models.py", line 731, in get_oozie_child
child = self.get_link(name).child.get_full_node()
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/apps/oozie/src/oozie/models.py", line 720, in get_link
return Link.objects.exclude(name__in=Link.META_LINKS).get(parent=self, name=name)
File "/opt/cloudera/parcels/CDH-5.3.5-1.cdh5.3.5.p0.4/lib/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/db/models/query.py", line 366, in get
% self.model._meta.object_name)
DoesNotExist: Link matching query does not exist.
[25/Feb/2016 06:16:43 -0800] middleware INFO Processing exception: Link matching query does not exist. (the same traceback as above is logged a second time, ending in:)
DoesNotExist: Link matching query does not exist.
Note: when I click on the workflow button, a new tab does open at the location of all the jars and the workflow.xml... Note: this is the only workflow that I cannot export (I have already exported other workflows...). Thank you!
02-25-2016
05:31 AM
Hello, if it's for test purposes, and since you have that much disk, I suppose you could add some swap. That should help (not with performance, but with getting access to the platform). Requirements: http://www.cloudera.com/downloads/quickstart_vms/5-4.html If you count the OS, the browser that you may have open, and the VM itself if you are running in one... I think you may be at the memory limit.
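A hedged sketch of adding a swap file inside the VM (the 4 GB size is an arbitrary assumption; adjust it to the free disk):
sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -m   # confirm the new swap shows up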