Member since
11-08-2018
Posts: 96
Kudos Received: 3
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 6145 | 09-12-2019 10:04 AM |
| | 5715 | 02-12-2019 06:56 AM |
03-24-2020
05:30 AM
Hello @Shelton,

Thanks for your immediate response. Please find the outputs below:

```
HOSTNAME]$ ls /disk1/yarn/nm/usercache
mcaf
HOSTNAME]$ ls /disk1/yarn/nm/usercache/mcaf
appcache  filecache
HOSTNAME]$ ls -lrt /disk1/yarn/nm/usercache/mcaf
total 20
drwx--x--- 397 yarn yarn 16384 Mar  4 01:18 filecache
drwx--x---   2 yarn yarn  4096 Mar  4 02:22 appcache
HOSTNAME]$ ls -lrt /disk1/yarn/nm/usercache
total 4
drwxr-s--- 4 mcaf yarn 4096 Feb 24 01:26 mcaf
```

Q1: If we enable Kerberos, do we need to modify the permissions on the directories above? Also, mcaf has sudo access.

Q2: We are using two edge nodes. Can I use the merged keytab above on the other edge node, or do I need to generate it there the same way I did on the current edge node?

Best Regards,
Vinod
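For Q2: user principals such as mcaf@Domain.ORG are not tied to a particular host, so the same merged keytab can be copied to the second edge node and used there (the host-specific service entries merged into it simply will not be useful on that host). A minimal sketch, with placeholder hostname and paths:

```bash
# Copy the merged keytab to the second edge node (placeholder host/path).
scp /home/mcaf/mcafmerged.keytab mcaf@edgenode2.example.org:/home/mcaf/

# On the second edge node: restrict the file and authenticate as mcaf.
chmod 600 /home/mcaf/mcafmerged.keytab
kinit -kt /home/mcaf/mcafmerged.keytab mcaf@Domain.ORG

# Verify the active principal and basic access before submitting jobs.
klist
hadoop fs -ls /user
```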
03-23-2020
11:27 PM
Hello @Shelton @venkatsambath,

As you mentioned above, I merged the keytab files (mcaf.keytab, yarn.keytab, and the other service keytabs) into mcafmerged.keytab and ran:

```
kinit -kt mcafmerged.keytab mcaf@Domain.ORG
```

After that I am able to access HDFS, query HBase tables from the hbase shell, and see the output of `yarn application -list`. But when I run the sample YARN job below,

```
yarn jar /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/jars/hadoop-examples.jar teragen 500000000 /tmp/teragen44
```

I get the following errors:

```
Can't create directory /disk1/yarn/nm/usercache/mcaf/appcache/application_1585026002165_0001 - Permission denied
Can't create directory /disk2/yarn/nm/usercache/mcaf/appcache/application_1585026002165_0001 - Permission denied
Can't create directory /disk3/yarn/nm/usercache/mcaf/appcache/application_1585026002165_0001 - Permission denied
Can't create directory /disk4/yarn/nm/usercache/mcaf/appcache/application_1585026002165_0001 - Permission denied
Can't create directory /disk5/yarn/nm/usercache/mcaf/appcache/application_1585026002165_0001 - Permission denied
Did not create any app directories.
```

I also gave a trial run of my application job, and it is failing as well with the errors below:

```
org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:308) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:149) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:57) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:293) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:268) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:140) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:135) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:888) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at com.class.name.dmxsloader.main.DMXSLoaderMain.hasStagingData(DMXSLoaderMain.java:304) [DMXSLoader-0.0.31.jar:0.0.31]
    at com.class.name.dmxsloader.main.DMXSLoaderMain.main(DMXSLoaderMain.java:375) [DMXSLoader-0.0.31.jar:0.0.31]
Caused by: java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method) ~[?:1.7.0_67]
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47) ~[?:1.7.0_67]
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93) ~[?:1.7.0_67]
    at sun.nio.ch.IOUtil.write(IOUtil.java:65) ~[?:1.7.0_67]
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487) ~[?:1.7.0_67]
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63) ~[hadoop-common-2.6.0-cdh5.4.7.jar:?]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142) ~[hadoop-common-2.6.0-cdh5.4.7.jar:?]
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159) ~[hadoop-common-2.6.0-cdh5.4.7.jar:?]
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117) ~[hadoop-common-2.6.0-cdh5.4.7.jar:?]
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82) ~[?:1.7.0_67]
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140) ~[?:1.7.0_67]
    at java.io.DataOutputStream.flush(DataOutputStream.java:123) ~[?:1.7.0_67]
    at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:246) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.ipc.IPCUtil.write(IPCUtil.java:234) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:895) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:850) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1184) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:31865) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1580) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1294) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1126) ~[DMXSLoader-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:299) ~[DMXSLoader-0.0.31.jar:0.0.31]
    ... 10 more
```

NOTE: I have put the command below on the first line of my application script, before the job is launched:

```
kinit -kt mcafmerged.keytab mcaf@MWKRBCDH.ORG
```

Please let me know what I am missing here.

Thanks & Regards,
Vinod
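A common cause of these appcache "Permission denied" errors right after enabling Kerberos is that the NodeManager usercache directories were created earlier by the DefaultContainerExecutor and now carry ownership the LinuxContainerExecutor cannot use. A minimal cleanup sketch, assuming the local dirs are /disk1 through /disk5 as in the errors above and that the NodeManager role on each node is stopped first (e.g. via Cloudera Manager):

```bash
# Run on every NodeManager host, with the NodeManager role stopped.
# The usercache trees are recreated automatically with the correct
# ownership on the next container launch.
for d in /disk1 /disk2 /disk3 /disk4 /disk5; do
  sudo rm -rf "${d}/yarn/nm/usercache/"*
done
# Restart the NodeManager role afterwards and re-run the teragen job.
```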
03-23-2020
09:34 AM
@venkatsambath Sorry for the late response, and thank you for your valuable input; I now see where I was making the mistake.

What I want is to create a keytab file for one user so that this user can access all the services running in the cluster (HDFS, HBase, and the others). I tried the following steps; please let me know your suggestions.

```
sudo ktutil
ktutil:  addent -password -p mcaf@Domain.ORG -k 1 -e RC4-HMAC
Password for mcaf@Domain.ORG:
ktutil:  wkt mcaf.keytab
ktutil:  q

klist -kt mcaf.keytab
Keytab name: FILE:mcaf.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   1 03/23/20 11:58:38 mcaf@Domain.ORG

sudo kinit -kt mcaf.keytab mcaf@Domain.ORG
```

With this ticket I am able to access HDFS, e.g.:

```
hadoop fs -ls /
```

But in HBase I cannot see the tables:

```
hbase(main):001:0> list
TABLE
0 row(s) in 0.4090 seconds

=> []
```

When I use the latest keytab copied from the hbase-master process directory,

```
dayrhemwkq001:~:HADOOP QA]$ kinit -kt hbase.keytab hbase/dayrhemwkq001.enterprisenet.org@MWKRBCDH.ORG
```

I can see the tables.

My question is: how can I give this one user access to HBase, HDFS, and the other services running in the cluster?

Best Regards,
Vinod
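If HBase authorization (the AccessController coprocessor) is enabled on the cluster, the mcaf principal additionally needs HBase-level grants; authenticating as mcaf is not enough on its own. A hedged sketch of how that could be done from an edge node, acting as the HBase superuser (the keytab path and table name below are placeholders, not taken from this thread):

```bash
# Authenticate as the HBase service principal (keytab copied from the
# hbase-master process directory, as in the post above).
kinit -kt hbase.keytab hbase/dayrhemwkq001.enterprisenet.org@MWKRBCDH.ORG

# Grant mcaf read/write/execute/create/admin rights globally ...
echo "grant 'mcaf', 'RWXCA'" | hbase shell

# ... or on a single table (table name is a placeholder):
echo "grant 'mcaf', 'RWX', 'my_table'" | hbase shell

# Switch back to the mcaf principal and re-check.
kinit -kt mcaf.keytab mcaf@Domain.ORG
echo "list" | hbase shell
```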
03-05-2020
12:29 AM
Hi @venkatsambath,

As you suggested, I put the kinit commands as the first step of my scripts, so kinit runs whenever the commands are executed. I am still facing the same issue, but this time I see zookeeper as the user.

The commands I am using:

```
kinit -kt /home/mcaf/hdfs.keytab hdfs/hostname@Domain.ORG
kinit -kt /home/mcaf/hdfs.keytab HTTP/hostname@Domain.ORG
kinit -kt /home/mcaf/hbase.keytab hbase/hostname@Domain.ORG
kinit -kt /home/mcaf/yarn.keytab HTTP/hostname@Domain.ORG
kinit -kt /home/mcaf/yarn.keytab yarn/hostname@Domain.ORG
kinit -kt /home/mcaf/zookeeper.keytab zookeeper/hostname@Domain.org
```

Error logs:

```
20/03/04 02:00:42 WARN security.UserGroupInformation: PriviledgedActionException as:zookeeper/hostname@Domain.ORG (auth:KERBEROS) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=zookeeper, access=WRITE, inode="/user":mcaf:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:216)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:145)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6581)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6533)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4337)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4307)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4280)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:853)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)
org.apache.hadoop.security.AccessControlException: Permission denied: user=zookeeper, access=WRITE, inode="/user":mcaf:supergroup:drwxr-xr-x
org.apache.hadoop.security.AccessControlException: Permission denied: user=zookeeper, access=WRITE, inode="/user":mcaf:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:257)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:238)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:216)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:145)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6599)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6581)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6533)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4337)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4307)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4280)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:853)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:321)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:601)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
```

Can you please help me with this issue?

Best Regards,
Vinod
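One thing worth noting: each kinit re-initializes the same credential cache, so after the sequence above the cache holds only the last principal (zookeeper/...), which is why the job now runs as the zookeeper user. A minimal sketch of how the script could authenticate as the actual job user instead (the keytab path is a placeholder; elsewhere in this thread the user keytab is created with ktutil):

```bash
# Authenticate once, as the user that owns the job, not as the service principals.
kinit -kt /home/mcaf/mcaf.keytab mcaf@Domain.ORG

# Confirm which principal is active before submitting anything.
klist

# Then launch the job, e.g.:
# yarn jar /opt/cloudera/parcels/CDH-.../jars/hadoop-examples.jar teragen 500000000 /tmp/teragen_test
```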
02-18-2020
11:08 PM
Thank you @venkatsambath.

After changing the min.user.id value to 500, I am able to run the sample MapReduce job and I can see it under YARN applications in Cloudera Manager.

Now I have tried my regular job on the same cluster, but it is failing with the error messages below:

```
ERROR 2020Feb19 02:01:21,086 main com.client.engineering.group.JOB.main.JOBMain: org.apache.hadoop.hbase.client.RetriesExhaustedException thrown: Can't get the location
org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:308) ~[JOB-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:149) ~[JOB-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:57) ~[JOB-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200) ~[JOB-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:293) ~[JOB-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:268) ~[JOB-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:140) ~[JOB-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:135) ~[JOB-0.0.31.jar:0.0.31]
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:888) ~[JOB-0.0.31.jar:0.0.31]
    at com.client.engineering.group.JOB.main.JOBMain.hasStagingData(JOBMain.java:304) [JOB-0.0.31.jar:0.0.31]
    at com.client.engineering.group.JOB.main.JOBMain.main(JOBMain.java:375) [JOB-0.0.31.jar:0.0.31]
Caused by: java.io.IOException: Broken pipe

ERROR 2020Feb19 02:01:30,198 main com.client.engineering.group.job.main.jobMain: _v.1.0.0a_ org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException thrown: Failed 1 action: IOException: 1 time,
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time,
```

NOTE: I executed `kinit mcaf` before running the job.

Do we need to execute `kinit mcaf` every time before submitting a job? And how can we configure scheduled jobs? Please help me understand.

Best Regards,
Vinod
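On the scheduling question, the usual pattern is to authenticate from a keytab at the top of the job script (so no interactive password is needed) and, for recurring jobs, refresh the ticket periodically, since tickets expire (the klist output elsewhere in this thread shows a 24-hour lifetime). A hedged sketch, with placeholder paths:

```bash
#!/bin/bash
# First line of the job script: obtain a fresh ticket non-interactively
# (keytab path and principal are placeholders).
kinit -kt /home/mcaf/mcaf.keytab mcaf@Domain.ORG || exit 1

# ... rest of the job (hadoop / hbase / yarn commands) now runs as mcaf ...

# For recurring jobs, a crontab entry can keep the ticket fresh, e.g.:
#   0 */8 * * * kinit -kt /home/mcaf/mcaf.keytab mcaf@Domain.ORG
```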
02-18-2020
10:07 PM
Hello @venkatsambath,

FYI, in my YARN configuration:

- `min.user.id` is set to 1000
- `allowed.system.users` is set to impala,nobody,llama,hive

Thanks,
Vinod
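For reference, these two settings end up in container-executor.cfg on each NodeManager, so the effective values can be cross-checked there. A rough sketch for a Cloudera Manager–managed node (the search paths below are assumptions; adjust them to wherever container-executor.cfg actually lives on the NodeManagers):

```bash
# Locate container-executor.cfg and print the whitelisting-related settings.
sudo find /etc/hadoop /var/run/cloudera-scm-agent/process /opt/cloudera/parcels \
     -name container-executor.cfg 2>/dev/null \
  | xargs -r sudo grep -H -E 'min.user.id|allowed.system.users|banned.users'
```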
02-18-2020
09:55 PM
Hi @venkatsambath,

First I verified whether I could access HDFS before doing `kinit mcaf`, and the access failed. Then I did `kinit mcaf` and verified HDFS access again; I am able to list files and create directories.

Next I triggered a sample YARN job:

```
hostname.com:~:HADOOP QA]$ yarn jar /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/jars/hadoop-examples.jar teragen 500000000 /tmp/teragen4
20/02/19 00:46:30 INFO client.RMProxy: Connecting to ResourceManager at resourcemanager/IP_ADDRESS:8032
20/02/19 00:46:30 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 8 for mcaf on ha-hdfs:nameservice1
20/02/19 00:46:30 INFO security.TokenCache: Got dt for hdfs://nameservice1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (HDFS_DELEGATION_TOKEN token 8 for mcaf)
20/02/19 00:46:31 INFO terasort.TeraSort: Generating 500000000 using 2
20/02/19 00:46:31 INFO mapreduce.JobSubmitter: number of splits:2
20/02/19 00:46:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1582090413480_0002
20/02/19 00:46:31 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (HDFS_DELEGATION_TOKEN token 8 for mcaf)
20/02/19 00:46:32 INFO impl.YarnClientImpl: Submitted application application_1582090413480_0002
20/02/19 00:46:32 INFO mapreduce.Job: The url to track the job: http://resourcemanager:8088/proxy/application_1582090413480_0002/
20/02/19 00:46:32 INFO mapreduce.Job: Running job: job_1582090413480_0002
20/02/19 00:46:34 INFO mapreduce.Job: Job job_1582090413480_0002 running in uber mode : false
20/02/19 00:46:34 INFO mapreduce.Job:  map 0% reduce 0%
20/02/19 00:46:34 INFO mapreduce.Job: Job job_1582090413480_0002 failed with state FAILED due to: Application application_1582090413480_0002 failed 2 times due to AM Container for appattempt_1582090413480_0002_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://resourcemanager:8088/proxy/application_1582090413480_0002/ Then, click on links to logs of each attempt.
Diagnostics: Application application_1582090413480_0002 initialization failed (exitCode=255) with output: Requested user mcaf is not whitelisted and has id 779, which is below the minimum allowed 1000
Failing this attempt. Failing the application.
20/02/19 00:46:34 INFO mapreduce.Job: Counters: 0
```

Can you please take a look and let me know?

Regards,
Vinod
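The key line is the diagnostics message: the LinuxContainerExecutor refuses to launch containers for users whose UID is below min.user.id unless they are explicitly whitelisted. A quick check and the usual options, sketched (these are standard Linux/Cloudera Manager steps, not something taken verbatim from this thread):

```bash
# Confirm the UID of the submitting user on the NodeManager hosts.
id -u mcaf        # here it reports 779, below the configured minimum of 1000

# Typical remedies (pick one, then restart YARN):
#  1. Lower min.user.id in the YARN configuration (e.g. to 500) so UID 779 is allowed, or
#  2. Add mcaf to allowed.system.users in the YARN configuration, or
#  3. Recreate the mcaf account with a UID >= 1000 on all hosts.
```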
02-18-2020
02:33 AM
Hello @venkatsambath,

Thank you for your response!

Actually we use mcaf as the user to execute the jobs, so why does the HTTP user come into the picture?

```
hostname.com:~:HADOOP QA]$ groups
mcaf supergroup
hostname.com:~:HADOOP QA]$ users
mcaf
hostname.com:~:HADOOP QA]$ hadoop fs -ls /
Found 4 items
drwx------   - hbase supergroup          0 2020-02-18 02:46 /hbase
drwxr-xr-x   - hdfs  supergroup          0 2015-02-04 11:44 /system
drwxrwxrwt   - hdfs  supergroup          0 2020-02-17 05:07 /tmp
drwxr-xr-x   - mcaf  supergroup          0 2019-03-28 03:12 /user
hostname.com:~:HADOOP QA]$ getent group supergroup
supergroup:x:25290:hbase,mcaf,zookeeper,hdfs
hostname.com:~:HADOOP QA]$ getent group hadoop
hadoop:x:497:mapred,yarn,hdfs
```

Can you please have a look and suggest what to do?

Note: I am trying to enable Kerberos first, and once it is running without any interruptions or issues we plan to integrate with AD.

Thanks,
Vinod
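On why HTTP shows up: Hadoop derives the effective user from the short name of whatever principal is currently in the Kerberos ticket cache, so if the last kinit was done with an HTTP/... service principal, jobs are submitted as user "HTTP" regardless of the OS login. A quick way to check and correct this (the mcaf principal is assumed to exist; elsewhere in this thread it is created with ktutil):

```bash
# Show which principal the next hadoop/yarn command will run as.
klist

# If the default principal is HTTP/<host>@<REALM>, switch to the job user:
kdestroy
kinit mcaf            # or: kinit -kt /home/mcaf/mcaf.keytab mcaf@Domain.ORG
klist
hadoop fs -ls /user
```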
02-17-2020
11:08 PM
Hello Team,
I have enabled MIT Kerberos and integrated it with my cluster, and initialized the principals for hdfs, hbase, and yarn.
I am able to access HDFS and the HBase tables.
But when I try to run a sample MapReduce job it fails; find the error logs below.
```
yarn jar /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/jars/hadoop-examples.jar teragen 500000000 /tmp/teragen2
```

Logs:

```
WARN security.UserGroupInformation: PriviledgedActionException as:HTTP/hostname.org@FQDN.COM (auth:KERBEROS) cause:org.apache.hadoop.security.AccessControlException: Permission denied: user=HTTP, access=WRITE, inode="/user":mcaf:supergroup:drwxr-xr-x
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=HTTP, access=WRITE, inode="/user":mcaf:supergroup:drwxr-xr-x
```

```
hostname.org:~:HADOOP QA]$ klist
Ticket cache: FILE:/tmp/krb5cc_251473
Default principal: HTTP/hostname.org@FQDN.COM

Valid starting     Expires            Service principal
02/18/20 01:55:32  02/19/20 01:55:32  krbtgt/FQDN.COM@FQDN.COM
        renew until 02/23/20 01:55:32
```
Can someone please look into this issue and help us?
Thanks & Regards,
Vinod
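From the klist output above, the active principal is HTTP/hostname.org@FQDN.COM, so HDFS sees the job submitter as user "HTTP", which cannot write under /user. A hedged sketch of one way this is typically resolved (the principal and keytab names below are assumptions; the teragen output path is just an example):

```bash
# 1. Switch the ticket cache to the user that should own the job.
kdestroy
kinit mcaf@FQDN.COM          # or: kinit -kt <mcaf keytab> mcaf@FQDN.COM
klist                        # the default principal should now be mcaf@...

# 2. Make sure the user's HDFS home directory exists; /user is owned by mcaf
#    here, so mcaf can create it itself (otherwise do this as the hdfs principal).
hadoop fs -mkdir -p /user/mcaf

# 3. Re-run the sample job.
yarn jar /opt/cloudera/parcels/CDH-5.4.7-1.cdh5.4.7.p0.3/jars/hadoop-examples.jar teragen 500000000 /tmp/teragen2
```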
Labels:
- Apache YARN
- Kerberos
01-17-2020
08:43 AM
Hi @EricL,

Can you please share your comments? The strange thing is that I have cleaned up everything and freshly installed and set up a cluster, but we are still facing the same issue: the jobs run in Local mode, not in YARN mode. I have run the job on the Active ResourceManager server and on the edge node, but no luck.

Can someone please tell us where I can debug and fix this issue? Is it an OS-level issue, a YARN issue, or something else? When I ran the host inspector I did not find any issues such as firewall or SELinux problems.

Please do the needful and help us.

Best Regards,
Vinod
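Jobs falling back to Local mode usually mean the client is not picking up a configuration that points MapReduce at YARN. A minimal sketch of what to check on the submitting host (the paths assume the standard CDH client configuration location and are not taken from this thread):

```bash
# Which configuration directory is the client actually using?
echo "HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf (default)}"

# MapReduce should be told to use YARN, not the local runner.
grep -A1 'mapreduce.framework.name' /etc/hadoop/conf/mapred-site.xml

# The ResourceManager address must also be present and resolvable from this host.
grep -A1 'yarn.resourcemanager' /etc/hadoop/conf/yarn-site.xml | head -20
```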