Member since: 11-11-2014
Posts: 21
Kudos Received: 3
Solutions: 0
08-14-2016
09:51 PM
Thanks Artem, you are correct, but due to some constraints we cannot wait until the upgrade. I have been unable to find a fix for this.
07-28-2016
08:13 PM
I have disabled audit logging to a database by setting XAAUDIT.DB.IS_ENABLED=false, but it still asks for a username.
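For context, the database-audit section of Ranger's install.properties looks roughly like the sketch below. The key names besides XAAUDIT.DB.IS_ENABLED are recalled from older Ranger releases and the values are placeholders, so treat this as an illustration rather than a definitive reference:

```properties
# Sketch of the audit section in Ranger's install.properties.
# Key names other than XAAUDIT.DB.IS_ENABLED are from memory of older
# Ranger releases; values are placeholders, not real credentials.
XAAUDIT.DB.IS_ENABLED=false
XAAUDIT.DB.FLAVOUR=MYSQL
XAAUDIT.DB.HOSTNAME=localhost
XAAUDIT.DB.DATABASE_NAME=ranger_audit
# Even with IS_ENABLED=false, some setup script versions still prompt for
# the fields below, which matches the behavior described above.
XAAUDIT.DB.USER_NAME=placeholder_user
XAAUDIT.DB.PASSWORD=placeholder_password
```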
02-05-2016
08:24 PM
@Rakesh Gupta, excellent question. Are you still having issues with this? Can you accept the best answer or post your own solution?
05-05-2015
07:30 AM
Thanks Brad!!!
01-15-2015
05:47 AM
Some log output from cloudera-scm-server:
2015-01-15 11:00:48,683 INFO Metric-schema-update:com.cloudera.cmon.components.MetricSchemaManager: Updating schema work aggregates
2015-01-15 11:00:50,314 INFO Metric-schema-update:com.cloudera.cmon.components.MetricSchemaManager: Registering work aggregates
2015-01-15 11:00:50,656 INFO CMMetricsForwarder-0:com.cloudera.server.cmf.components.ClouderaManagerMetricsForwarder: Failed to send metrics.
java.lang.reflect.UndeclaredThrowableException
at com.sun.proxy.$Proxy100.writeMetrics(Unknown Source)
at com.cloudera.server.cmf.components.ClouderaManagerMetricsForwarder.sendWithAvro(ClouderaManagerMetricsForwarder.java:287)
at com.cloudera.server.cmf.components.ClouderaManagerMetricsForwarder.sendMetrics(ClouderaManagerMetricsForwarder.java:274)
at com.cloudera.server.cmf.components.ClouderaManagerMetricsForwarder.run(ClouderaManagerMetricsForwarder.java:129)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.avro.AvroRemoteException: java.net.ConnectException: Connection refused
at org.apache.avro.ipc.specific.SpecificRequestor.invoke(SpecificRequestor.java:88)
... 11 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1091)
at org.apache.avro.ipc.HttpTransceiver.writeBuffers(HttpTransceiver.java:71)
at org.apache.avro.ipc.Transceiver.transceive(Transceiver.java:58)
at org.apache.avro.ipc.Transceiver.transceive(Transceiver.java:72)
at org.apache.avro.ipc.Requestor.request(Requestor.java:147)
at org.apache.avro.ipc.Requestor.request(Requestor.java:101)
at org.apache.avro.ipc.specific.SpecificRequestor.invoke(SpecificRequestor.java:72)
... 11 more
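The root cause at the bottom of the trace is java.net.ConnectException: Connection refused, i.e. the metrics forwarder cannot reach the monitoring service's endpoint. One quick way to confirm from the host itself is to probe the port directly; the host and port below are placeholders, not values taken from the log, so substitute the ones from your Cloudera Manager configuration:

```shell
# Probe a TCP port: prints "open" if a connect succeeds, otherwise
# "refused-or-unreachable". HOST and PORT are placeholders; substitute
# the monitoring service's address from your CM configuration.
probe() {
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "refused-or-unreachable"
  fi
}

# Port 1 on localhost is almost never listening, so this demonstrates
# the "refused" case the stack trace is reporting.
probe 127.0.0.1 1
```

If the probe reports refused for the real port, the service behind it is likely down or listening on a different interface, which is worth checking before digging further into the Java stack.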
12-11-2014
06:57 AM
So concretely:
1) I have to remove the CDH components like hadoop-namenode, etc. (which I installed from the repository with apt-get install).
2) Remove all the directories I created for the different Hadoop components, like /data/1/dfs/nn etc., basically doing 'hadoop fs -rmr /' and also removing the local directories.
3) Start the CM admin console and run the CM wizard to install the CDH components using parcels (which will download everything again and install it).
Can't we make use of the existing installation of CDH components? Maybe the CM wizard would find that the components are already installed and move on to the next step of defining roles and services? Do we really need to remove them in step 1? Thanks for the support!
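The cleanup in steps 1 and 2 above could be sketched as below. Everything here is an assumption to adapt: the package names and data directory are examples for a Debian/Ubuntu host, and the script only prints what it would do unless CONFIRM=yes is set, since the real commands are destructive:

```shell
#!/usr/bin/env bash
# Sketch of the package-based CDH cleanup described above.
# Package names and paths are illustrative assumptions; adjust to your layout.
# Dry-run by default: set CONFIRM=yes to actually execute the commands.
set -euo pipefail

run() {
  if [ "${CONFIRM:-no}" = yes ]; then
    "$@"
  else
    echo "would run: $*"
  fi
}

# 1) Remove the package-based CDH components.
run sudo apt-get remove -y hadoop-hdfs-namenode hadoop-hdfs-datanode

# 2) Remove the local data directories created for them.
run sudo rm -rf /data/1/dfs/nn

# 3) Afterwards, let the CM wizard reinstall the components via parcels.
```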
11-19-2014
01:53 AM
It looks like you asked for more resources than you configured YARN to offer, so check how much you can allocate in YARN and how much Spark asked for. I don't know about the ERROR; it may be a red herring. Please have a look at http://spark.apache.org/docs/latest/ for pretty good Spark docs.
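The comparison suggested above can be sketched as follows: pull YARN's per-container memory cap from yarn-site.xml and compare it with what was requested for the executors. The file content, cap, and requested value below are illustrative stand-ins, not values from the original question:

```shell
# Illustrative only: compare YARN's per-container memory cap against a
# Spark executor request. The XML here stands in for a real
# /etc/hadoop/conf/yarn-site.xml.
cat > /tmp/yarn-site-example.xml <<'EOF'
<configuration>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
  </property>
</configuration>
EOF

# Extract the cap (the digits on the <value> line following the property name).
cap_mb=$(grep -A1 'yarn.scheduler.maximum-allocation-mb' /tmp/yarn-site-example.xml \
         | grep -o '[0-9]\+')

requested_mb=8192   # e.g. what spark-submit --executor-memory 8g would ask for

if [ "$requested_mb" -gt "$cap_mb" ]; then
  echo "request exceeds YARN cap: ${requested_mb}MB > ${cap_mb}MB"
fi
```

When the request exceeds the cap, YARN will not grant the container, which is consistent with the symptom described in the answer.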