Member since: 01-08-2018
Posts: 133
Kudos Received: 31
Solutions: 21
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 17286 | 07-18-2018 01:29 AM |
| | 3096 | 06-26-2018 06:21 AM |
| | 5245 | 06-26-2018 04:33 AM |
| | 2706 | 06-21-2018 07:48 AM |
| | 2231 | 05-04-2018 04:04 AM |
06-09-2018
03:00 AM
While running a WordCount program I am getting the following error.

[cloudera@localhost ~]$ hadoop jar WordCount.jar WordCount /inputnew2/inputfile.txt /output_new
18/06/09 00:29:06 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:8021. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
18/06/09 00:29:07 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:8021. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
18/06/09 00:29:08 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:8021. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
18/06/09 00:29:09 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:8021. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
18/06/09 00:29:10 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:8021. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
18/06/09 00:29:11 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:8021. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
18/06/09 00:29:12 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:8021. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
18/06/09 00:29:13 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:8021. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
18/06/09 00:29:14 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:8021. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
18/06/09 00:29:15 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:8021. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
18/06/09 00:29:15 ERROR security.UserGroupInformation: PriviledgedActionException as:cloudera (auth:SIMPLE) cause:java.net.ConnectException: Call From localhost.localdomain/127.0.0.1 to localhost.localdomain:8021 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Exception in thread "main" java.net.ConnectException: Call From localhost.localdomain/127.0.0.1 to localhost.localdomain:8021 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:782)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:729)
    at org.apache.hadoop.ipc.Client.call(Client.java:1241)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:225)
    at org.apache.hadoop.mapred.$Proxy10.getStagingAreaDir(Unknown Source)
    at org.apache.hadoop.mapred.JobClient.getStagingAreaDir(JobClient.java:1324)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:102)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:951)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:945)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:945)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
    at WordCount.main(WordCount.java:132)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:207)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:528)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:492)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:509)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:603)
    at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:252)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1290)
    at org.apache.hadoop.ipc.Client.call(Client.java:1208)
    ... 18 more
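A hedged first check for this "Connection refused" error (assuming a CDH quickstart-style VM with MRv1, where 8021 is the default JobTracker RPC port; the commands and config path below are assumptions, not taken from the original post):

# Is anything listening on the JobTracker port?
sudo netstat -tlnp | grep 8021
# Which Hadoop daemons are actually running? JobTracker should appear for MRv1.
sudo jps
# What address is the client configured to contact?
grep -A 1 'mapred.job.tracker' /etc/hadoop/conf/mapred-site.xml

If the JobTracker daemon is not running, starting the MapReduce service (for example from Cloudera Manager) should stop the retry loop.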
06-05-2018
12:01 AM
Hi, I have a similar requirement where we cannot get an AD admin account due to security policy. We are using the CDH 5.11.2 Express version. Could you please help by providing the steps for this approach? Thanks in advance. Regards, Dinu
05-09-2018
07:44 PM
Thanks all, and especially @GeKas. Just to update that I was able to solve the issue: it was a leftover from enabling Kerberos on the cluster. I had installed the Oracle JDK, which installed java1.7_cloudera; once I removed this package from the node, the LZO error was gone.
05-09-2018
07:55 AM
Could you please mark it as answered so the community will benefit?
05-04-2018
04:04 AM
2 Kudos
Just select "None" in Sentry service section of Kafka. You don't have to delete rules. The rules are stored in sentry and since kafka will not ask, rules are useless.
05-04-2018
02:27 AM
1 Kudo
You are using Sqoop 1. Sqoop 1 is not a service; it is a tool that submits the job to YARN. So, apart from your stdout and the YARN logs, there are no Sqoop logs. The number of mappers (-m 4) means that your job will open 4 connections to your database. If there is no indication in the YARN logs of an out-of-memory error or an illegal value in a column, then you should check that your DB can accept 4 concurrent connections.
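For reference, a minimal Sqoop 1 invocation showing where the mapper count comes from (the JDBC URL, table, and user below are placeholders for illustration, not values from this thread):

sqoop import \
  --connect jdbc:mysql://dbhost/mydb \
  --username myuser -P \
  --table mytable \
  -m 4

Each of the 4 mappers opens its own JDBC connection, so the database must allow at least 4 concurrent sessions for that user; dropping to -m 1 is a quick way to test whether connection concurrency is the problem.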
05-01-2018
01:28 AM
The reason you were seeing HdfsParquetTableWriter::ColumnWriter is that I was testing the bug using this syntax:

CREATE TABLE db.newTable STORED AS PARQUET AS
SELECT a.topLevelField, b.priceFromNestedField
FROM db.table a
LEFT JOIN a.nestedField b

This was purely to force the bug to occur: if you just ran the SELECT in Hue it would often succeed, because Hue only brings back the first 100 rows. To consistently trigger the crash I had to make Impala read from both Parquet files. No other query was running at the time. Anyway, as Chris says, the bug appears to be fixed in 5.14.2. The job which originally triggered the crash consistently has now been running unchanged over the same source data for 20 hours without a hitch. Thanks for your help. Matt
04-18-2018
07:57 AM
Awesome, you are right! I have successfully enabled TLS/SSL with Level 3 encryption. Thank you @GeKas for all your inputs.
04-18-2018
06:55 AM
@dpugazhe Generally the / mount on Linux servers is small. Could you share the output of df -h from your Linux box? I would suggest changing the location of the parcels and logs. For example, if you have a larger mount on your Linux box called /xxxxx, change /var/lib and /var/log to /xxxxx/hadoop/lib and /xxxxx/hadoop/log, and do the same for the parcels. Since you are using Cloudera Manager, these changes can be done quickly. To do that (see the sketch below):
1. Stop the Cloudera Manager services.
2. Move the old logs to the new partition.
3. Delete the old logs.
4. Start the Cloudera Manager services.
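A rough sketch of those four steps on the Cloudera Manager host (a sketch under assumptions: /xxxxx is the placeholder mount from this post, the cloudera-scm-* service names are the usual init services, and only the CM server log directory is shown; repeat for agents and other roles):

# 1. Stop the Cloudera Manager services
sudo service cloudera-scm-server stop
sudo service cloudera-scm-agent stop
# 2. Move the old logs to the new partition
sudo mkdir -p /xxxxx/hadoop/log/cloudera-scm-server
sudo rsync -a /var/log/cloudera-scm-server/ /xxxxx/hadoop/log/cloudera-scm-server/
# 3. Delete the old logs once the copy is verified
sudo rm -rf /var/log/cloudera-scm-server/*
# 4. Start the Cloudera Manager services
sudo service cloudera-scm-agent start
sudo service cloudera-scm-server start

Remember to also update the log and parcel directory settings in Cloudera Manager to the new paths; otherwise new logs will keep landing on the / mount.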