Member since: 09-11-2014
Posts: 15
Kudos Received: 0
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 21587 | 03-20-2015 08:45 AM
| 2783 | 10-07-2014 01:26 PM
06-13-2016
03:01 PM
Does Spark Streaming also have the double-counting problem for transformations on DStreams? I assume that under the hood they operate the same way, but I figured I'd better double-check.
06-13-2016
08:07 AM
I appreciate that suggestion, but for my use case, counters need to be incremented during the transformation, when there's context to know what to count. That's my fault for not being clearer earlier, but I need to know whether I failed to parse because of invalid xml, or because of an OOME, or because the xml didn't conform to the schema, etc. That information can't be derived from an output of "null"; it needs to happen inside the various `catch` blocks. I think the only suggestion so far that would "work" would be to run duplicate functions, first as an action for the counters, and then as a transformation to actually get my results. But that is likely to be inefficient, and will make going back to Hadoop more attractive.
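One pattern that avoids a second pass, sketched below in plain Python with no Spark involved (the parse function, tag names, and sample documents are all hypothetical): instead of returning null on failure, have each `catch` block return a (tag, result) pair, so the error category survives into the transformation's output and can be counted reliably from the output itself rather than from an accumulator.

```python
from collections import Counter
from xml.etree import ElementTree

def parse_record(xml_text):
    """Parse one document, tagging the outcome instead of returning null.

    The tags ("ok", "invalid_xml", "schema_mismatch") are made-up names for
    illustration; real code would use whatever categories its catch blocks
    distinguish.
    """
    try:
        root = ElementTree.fromstring(xml_text)
    except ElementTree.ParseError:
        return ("invalid_xml", None)
    if root.tag != "bean":            # stand-in for real schema validation
        return ("schema_mismatch", None)
    return ("ok", root.attrib.get("x"))

docs = ['<bean x="1"/>', "<not-closed", "<other/>"]
tagged = [parse_record(d) for d in docs]          # the "map" step
beans = [v for tag, v in tagged if tag == "ok"]   # the real output
counts = Counter(tag for tag, _ in tagged)        # counts derived from output
```

Because the counts are computed from the transformation's output rather than as a side effect, a retried task produces the same tagged records and the totals stay stable.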
06-13-2016
07:54 AM
So I need the reliable counters. That's a must-have. You're right, I could do the same process twice, once for the counters (action) and once for the transformation (no counters), but that's not efficient.
06-13-2016
07:39 AM
For me, since I need a collection (or RDD) of Bean objects as the result of processing, a void function won't cut it. Using a reduce is an interesting idea, but it would require all the information I want to count to be derivable from the Beans, which it won't be. If I want to count the different types of parse errors (not just success vs. failure), that information will be lost by the time I just have beans. It seems a shame that Spark didn't manage to capture this one critical piece of the Hadoop framework.
06-10-2016
01:43 PM
So at first I was really excited about your answer - until I realized that I really can't perform transformations on my RDD items. So here's a revised way to ask my question: say I have an RDD of xml documents, and I want to run them through a function where I parse them into some sort of Bean (a transformation), and I want to count how many xml documents can't be parsed, how many of my beans have property x, etc. Is there no way to count these things reliably, since the `forEach()` action takes a `VoidFunction` and the `map()` transformation doesn't keep reliable counters?
06-10-2016
10:05 AM
Is there a reliable way to count things when using Spark? In Hadoop jobs, counters from failed tasks are never aggregated, and if speculative execution causes two processes to perform the same operation, only the first completed task's counters are counted. My understanding is that Spark's accumulators are not the same, and that both stage failures and speculative execution can lead to double-counting, meaning my counters are not actually reliable. Surely there's a way to get around this?
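To illustrate the concern with a toy sketch (plain Python, no Spark; the function and data are made up): a side-effect counter incremented inside the work function sees every *attempt*, so a retried or speculatively duplicated task inflates it, while a count derived from the final output stays correct.

```python
counter = 0  # global side-effect counter, like an accumulator bumped in a map

def process_partition(records):
    """Double each record, incrementing the counter once per record per attempt."""
    global counter
    out = []
    for r in records:
        counter += 1
        out.append(r * 2)
    return out

records = [1, 2, 3]
process_partition(records)           # first attempt (imagine it fails downstream)
result = process_partition(records)  # the scheduler re-runs the same task

attempt_count = counter              # saw both attempts: inflated to 6
output_count = len(result)           # derived from the output: still 3
```

The same asymmetry is why counting from a transformation's output is stable under retries, while counting via side effects inside the transformation is not.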
Labels:
- Apache Hadoop
- Apache Spark
03-20-2015
08:45 AM
Update: I found the solution on another user's post here: https://community.cloudera.com/t5/Cloudera-Manager-Installation/aftper-config-kerberos-on-CDH5-Service-Monitor-HDFS-can-not/td-p/18712 It looks like Cloudera Manager only uses TCP, even though it is not recommended to use TCP on your KDC, as there is little protection against denial-of-service attacks (http://linux.die.net/man/5/kdc.conf). Why is this not noted in the documentation as a requirement for the KDC?
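For reference, on an MIT KDC the fix is typically to make krb5kdc listen on TCP as well as UDP. A minimal sketch of the relevant kdc.conf stanza (the option names are from MIT Kerberos; the port values are the defaults and assumed here):

```
[kdcdefaults]
    # listen for KDC requests on both UDP and TCP port 88
    kdc_ports = 88
    kdc_tcp_ports = 88
```

Restarting the KDC after this change should let TCP-only clients such as Cloudera Manager reach it.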
03-20-2015
07:24 AM
I had CM create and deploy the krb5.conf files originally. I've tried modifying them manually a few times since, as I can no longer use CM to push changes to them while the services refuse to start. The content of the file is:

```
[libdefaults]
default_realm = MY.REALM.COM
dns_lookup_kdc = true
dns_lookup_realm = false
ticket_lifetime = 36000
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = aes256-cts:normal
default_tkt_enctypes = aes256-cts:normal
permitted_enctypes = aes256-cts:normal
udp_preference_limit = 1

[realms]
MY.REALM.COM = {
  kdc = MY.KDC.HOST.COM
  admin_server = MY.KDC.HOST.COM
  default_domain = MY.KDC.HOST.COM
}

[domain_realm]
.my.realm.com = MY.REALM.COM
my.realm.com = MY.REALM.COM

[logging]
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
default = FILE:/var/log/krb5lib.log

includedir /etc/krb5.conf.d/
```

I do have AES strong encryption in use, but both the security jars are present:

```
# ls -lah /usr/java/latest/jre/lib/security/*.jar
-rw-rw-r-- 1 root root 2.5K May 31 2011 /usr/java/latest/jre/lib/security/local_policy.jar
-rw-rw-r-- 1 root root 2.5K May 31 2011 /usr/java/latest/jre/lib/security/US_export_policy.jar
```

Lookups also seem to be returning as expected. Where should I go from here?
03-19-2015
03:01 PM
I installed the latest Cloudera Manager Server and Agent packages (5.3.2), started everything up fine, created my cluster, and installed parcels for HDFS, HBase, Hive, Hue, Impala, Oozie, YARN, and ZooKeeper. I fixed all configuration and health issues, and everything was working exactly as expected. I then began following the instructions I found here: http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cm_sg_intro_kerb.html to get my cluster secured with Kerberos. I have my KDC fully configured and running. The wizard was working great until just after it prompted me for the admin principal (cloudera-scm/admin) and its password. After I entered those, it indicated that it had successfully authenticated that principal and began to configure my services to be used with Kerberos. At some point, though (I believe as it was trying to turn the Cloudera Management Service back on), it failed and emitted this:

```
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: Login failure for hue/my.namenode.com@MY.REALM.COM from keytab hue.keytab
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at com.cloudera.cmf.cdhclient.CdhExecutorFactory.createExecutor(CdhExecutorFactory.java:274)
at com.cloudera.cmf.cdhclient.CdhExecutorFactory.createExecutor(CdhExecutorFactory.java:309)
at com.cloudera.enterprise.AbstractCDHVersionAwarePeriodicService.<init>(AbstractCDHVersionAwarePeriodicService.java:73)
at com.cloudera.cmon.firehose.AbstractHBasePoller.<init>(AbstractHBasePoller.java:95)
at com.cloudera.cmon.firehose.HBaseFsckPoller.<init>(HBaseFsckPoller.java:53)
at com.cloudera.cmon.firehose.Firehose.createSecurityAwarePollers(Firehose.java:446)
at com.cloudera.cmon.firehose.Firehose.setupServiceMonitoringPollers(Firehose.java:436)
at com.cloudera.cmon.firehose.Firehose.<init>(Firehose.java:311)
at com.cloudera.cmon.firehose.Main.main(Main.java:527)
Caused by: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: Login failure for hue/my.namenode.com@MY.REALM.COM from keytab hue.keytab
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at com.cloudera.cmf.cdhclient.CdhExecutorFactory.createExecutor(CdhExecutorFactory.java:268)
... 8 more
Caused by: java.lang.RuntimeException: java.io.IOException: Login failure for hue/my.namenode.com@MY.REALM.COM from keytab hue.keytab
at com.google.common.base.Throwables.propagate(Throwables.java:160)
at com.cloudera.cmf.cdhclient.CdhExecutorFactory$SecureClassLoaderSetupTask.run(CdhExecutorFactory.java:491)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.IOException: Login failure for hue/my.namenode.com@MY.REALM.COM from keytab hue.keytab
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:855)
at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:279)
at com.cloudera.cmf.cdh4client.CDH4ObjectFactoryImpl.login(CDH4ObjectFactoryImpl.java:194)
at com.cloudera.cmf.cdhclient.CdhExecutorFactory$SecureClassLoaderSetupTask.run(CdhExecutorFactory.java:485)
... 5 more
Caused by: javax.security.auth.login.LoginException: Connection refused
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:767)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:784)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:718)
at javax.security.auth.login.LoginContext.login(LoginContext.java:590)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:846)
... 8 more
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.security.krb5.internal.TCPClient.<init>(NetClient.java:65)
at sun.security.krb5.internal.NetClient.getInstance(NetClient.java:43)
at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:372)
at sun.security.krb5.KdcComm$KdcCommunication.run(KdcComm.java:343)
at java.security.AccessController.doPrivileged(Native Method)
at sun.security.krb5.KdcComm.send(KdcComm.java:327)
at sun.security.krb5.KdcComm.send(KdcComm.java:219)
at sun.security.krb5.KdcComm.send(KdcComm.java:191)
at sun.security.krb5.KrbAsReqBuilder.send(KrbAsReqBuilder.java:319)
at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:364)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:735)
... 21 more
```

The wizard offered no recourse or suggestions for how to continue. Going back to the CM "home," I get the same error every time I try to restart the Cloudera Management Service. I get a different error if I try to restart my cluster first instead, but it also seems to indicate that Kerberos is not working:

```
2015-03-19 21:25:43,255 ERROR org.apache.zookeeper.server.ZooKeeperServerMain: Unexpected exception, exiting abnormally
java.io.IOException: Could not configure server because SASL configuration did not allow the ZooKeeper server to authenticate itself properly: javax.security.auth.login.LoginException: Connection refused
at org.apache.zookeeper.server.ServerCnxnFactory.configureSaslLogin(ServerCnxnFactory.java:207)
at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:87)
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:116)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:91)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:53)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:121)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:79)
```

However, the "security inspector" tool executes successfully, and all the principals listed under the Kerberos->credentials page match the principals I can see in my KDC's kadmin shell. Any idea what's gone wrong, or how I can troubleshoot it?
10-07-2014
01:26 PM
I ended up just running `hue/build/env/bin/pip install windmill`, and then `hue/build/env/bin/hue test specific myapp.tests` worked just fine. It seems weird that that dependency wouldn't have installed itself during `make apps`. Anyhow, thanks for your help.