Member since: 04-12-2019
Posts: 105
Kudos Received: 3
Solutions: 7

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3487 | 05-28-2019 07:41 AM
 | 2071 | 05-28-2019 06:49 AM
 | 1677 | 12-20-2018 10:54 AM
 | 1214 | 06-27-2018 09:05 AM
 | 6646 | 06-27-2018 09:02 AM
04-18-2018
12:06 PM
Hi, we are using HDP 2.5.2 and I'm new to the security side. I have integrated Kerberos with Unix-based authentication (we are not using LDAP), and we have also configured SPNEGO to restrict UI access, which is working fine. Now we are looking at Knox. Per the documentation, Knox provides SSO. I have a few questions about Knox: 1. Is it mandatory to install AD/LDAP with Knox? We want to run Knox without AD/LDAP. 2. How is Knox useful for web-based URLs? Is there a specific link or configuration guide I can read to understand this? 3. Do we have to remove SPNEGO if we want to configure Knox with SSO? Please assist; any help will be much appreciated.
Labels:
- Apache Knox
04-13-2018
05:07 AM
Yes, it's working fine; it's not an issue. The command executes successfully via the same method. No change is required.
04-09-2018
08:04 AM
Thanks for the reply. But I'm still in doubt: why is it not 126 MB or 132 MB?
04-09-2018
08:01 AM
I have configured a simple test Oozie workflow and am running the configuration below:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<workflow-app xmlns="uri:oozie:workflow:0.5" name="shell-action-asif-test">
<start to="shell_1"/>
<action name="shell_1">
<shell xmlns="uri:oozie:shell-action:0.3">
<job-tracker>${resourceManager}</job-tracker>
<name-node>${nameNode}</name-node>
<exec>echo</exec>
<argument>"This is testing by ME"</argument>
<capture-output/>
</shell>
<ok to="end"/>
<error to="kill"/>
</action>
<kill name="kill">
<message>${wf:errorMessage(wf:lastErrorNode())}</message>
</kill>
<end name="end"/>
</workflow-app>

While running the above configuration, I'm getting the following in the Oozie logs:

2018-04-09 12:03:14,036 INFO ActionStartXCommand:520 - SERVER[slave3.bd-coe.com] USER[admin] GROUP[-] TOKEN[] APP[shell-action-asif-test] JOB[0000011-180330162120442-oozie-oozi-W] ACTION[0000011-180330162120442-oozie-oozi-W@:start:] Start action [0000011-180330162120442-oozie-oozi-W@:start:] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
2018-04-09 12:03:14,036 INFO ActionStartXCommand:520 - SERVER[slave3.bd-coe.com] USER[admin] GROUP[-] TOKEN[] APP[shell-action-asif-test] JOB[0000011-180330162120442-oozie-oozi-W] ACTION[0000011-180330162120442-oozie-oozi-W@:start:] [***0000011-180330162120442-oozie-oozi-W@:start:***]Action status=DONE
2018-04-09 12:03:14,036 INFO ActionStartXCommand:520 - SERVER[slave3.bd-coe.com] USER[admin] GROUP[-] TOKEN[] APP[shell-action-asif-test] JOB[0000011-180330162120442-oozie-oozi-W] ACTION[0000011-180330162120442-oozie-oozi-W@:start:] [***0000011-180330162120442-oozie-oozi-W@:start:***]Action updated in DB!
2018-04-09 12:03:14,087 INFO WorkflowNotificationXCommand:520 - SERVER[slave3.bd-coe.com] USER[-] GROUP[-] TOKEN[-] APP[-] JOB[0000011-180330162120442-oozie-oozi-W] ACTION[] No Notification URL is defined. Therefore nothing to notify for job 0000011-180330162120442-oozie-oozi-W

Output in the YARN logs:

Log Type: stderr
Log Upload Time: Mon Apr 09 12:02:54 +0530 2018
Log Length: 1728
Apr 09, 2018 12:02:39 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
Apr 09, 2018 12:02:39 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class

Log Type: stdout
Log Upload Time: Mon Apr 09 12:02:54 +0530 2018
Log Length: 0

Can someone help with this? Any help will be appreciated.
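For completeness, the workflow references ${resourceManager} and ${nameNode}, which are normally supplied in a job.properties file submitted alongside it. A minimal sketch; the host names and application path below are placeholders, not values taken from this cluster:

```
# job.properties -- illustrative values; substitute your cluster's
# actual NameNode and ResourceManager addresses and the HDFS path
# where workflow.xml was uploaded.
nameNode=hdfs://<namenode-host>:8020
resourceManager=<resourcemanager-host>:8050
oozie.wf.application.path=${nameNode}/user/admin/shell-action-asif-test
oozie.use.system.libpath=true
```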
Labels:
- Apache Oozie
- Apache YARN
04-04-2018
11:16 AM
1 Kudo
Hi, hope all is well. I'm looking for the reason why the data block size is 128 MB in Hadoop 2.x. What logic was used to decide the size should be 128 MB? Why wasn't it defined as 100 MB?
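A common back-of-the-envelope rationale (the reasoning usually given, not an official Hadoop statement) is that a block should be large enough that disk seek time is a tiny fraction of transfer time, and it should be a power of two so it aligns cleanly with OS pages and buffers. A sketch of that arithmetic, with illustrative hardware numbers:

```shell
# With ~10 ms average seek time and ~100 MB/s sequential transfer,
# keeping seek overhead near 1% of read time suggests blocks of
# roughly 100 MB; HDFS rounds that up to a power of two, 128 MB.
echo $((128 * 1024 * 1024))                        # 134217728 bytes = 2^27

# Powers of two satisfy n & (n - 1) == 0; 100 MB does not:
n=$((128 * 1024 * 1024)); echo $((n & (n - 1)))    # 0
m=$((100 * 1024 * 1024)); echo $((m & (m - 1)))    # non-zero
```

That is why arbitrary sizes like 126 MB or 132 MB were never candidates: they are not powers of two.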
Labels:
- Apache Hadoop
03-22-2018
06:20 AM
It's working now. The checkpointing period was 6-7 hours, and during that period the NN was down. Thanks.
03-20-2018
11:27 AM
@Geoffrey S. O. Maybe that point is not meaningful. Thanks for helping me. You guys spend your precious time in the community; that is appreciated.
03-20-2018
11:12 AM
@Jay Kumar SenSharma Below is the output I found:

[root@slave0 centos]# curl "http://slave1.dl.com:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem" | grep 'LastCheckpointTime'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1427    0  1427    0     0   183k      0 --:--:-- --:--:-- --:--:--  199k
    "LastCheckpointTime" : 1521460953000,
[root@slave0 centos]# date -d @1521460953000
Fri Mar 7 00:00:00 IST 50183

It may be that automatic checkpointing is not happening, even though time is in sync between the servers.
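Note that the JMX value is in milliseconds since the epoch, while `date -d @N` expects seconds, which is why the command prints the nonsense year 50183. Dividing by 1000 first gives the real checkpoint time; a sketch using the timestamp from the output above (GNU date assumed):

```shell
# LastCheckpointTime from JMX is in milliseconds; divide by 1000
# before handing it to date(1), which expects epoch seconds.
ms=1521460953000
date -u -d "@$((ms / 1000))"   # Mon Mar 19 12:02:33 UTC 2018
```

So the last checkpoint was on 19 March 2018, not in the year 50183.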
03-20-2018
10:59 AM
@Geoffrey Shelton If you are not comfortable with it, I can remove the 2nd point. I had found that point in the ZooKeeper documentation.
03-20-2018
10:31 AM
Hi, I'm looking into a checkpoint alert in a NameNode HA environment, where the last checkpoint completed 22 hours ago. I'm running the checkpoint manually from the command line. How can I make it happen automatically, and how can we silence these checkpoint alerts in the UI? Last Checkpoint: [22 hours, 19 minutes, 45507 transactions]
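For reference, automatic checkpointing by the Standby NameNode is governed by two standard hdfs-site.xml properties; a checkpoint fires when either threshold is crossed. The property names below are the real HDFS ones, but the values are only illustrative, not taken from this cluster's configuration:

```xml
<!-- hdfs-site.xml: checkpoint triggers (illustrative values) -->
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>21600</value>
  <description>Seconds between checkpoints, e.g. every 6 hours.</description>
</property>
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <value>1000000</value>
  <description>Or checkpoint after this many uncheckpointed transactions.</description>
</property>
```

If these are set but the alert persists, it is worth checking that the Standby NameNode (which performs the checkpoint in an HA setup) is actually up and healthy.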
Labels:
- Apache Hadoop