Member since: 01-25-2017
Posts: 396
Kudos Received: 28
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1395 | 10-19-2023 04:36 PM |
|  | 5154 | 12-08-2018 06:56 PM |
|  | 6759 | 10-05-2018 06:28 AM |
|  | 23331 | 04-19-2018 02:27 AM |
|  | 23353 | 04-18-2018 09:40 AM |
10-27-2017 05:02 PM
Hi Arpit, I'm using Hadoop 2.6.
1. I'm starting the DataNode as the superuser.
2. No, HADOOP_SECURE_DN_USER is commented out in /etc/default/hadoop-hdfs-datanode, and there is no config for JSVC_HOME.
3. dfs.data.transfer.protection is none.
Do I need to add these two parameters to my hadoop-env.sh under /etc/hadoop/conf?
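For what it's worth, a minimal sketch of what those two parameters typically look like in hadoop-env.sh on Hadoop 2.x when the DataNode uses privileged ports; the jsvc path below is an assumption and varies by installation:

```bash
# Sketch for hadoop-env.sh: run the DataNode's privileged ports under jsvc.
# JSVC_HOME must point at the directory containing the jsvc binary; the
# path below is an assumption (a common location on CDH/Bigtop installs).
export HADOOP_SECURE_DN_USER=hdfs
export JSVC_HOME=/usr/lib/bigtop-utils
```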
10-27-2017 04:36 PM
Hi guys, I'm unable to start the DataNode after enabling Kerberos in my cluster.
I tried all the solutions suggested in the community and on the Internet, without any success.
All other services started, and my cluster nodes are able to authenticate against Active Directory. Here is the relevant HDFS config:
dfs.datanode.http.address = 1006
dfs.datanode.address = 1004
hadoop.security.authentication = kerberos
hadoop.security.authorization = true
hadoop.rpc.protection = authentication
Enable Kerberos Authentication for HTTP Web-Consoles = true

And here is the log:
STARTUP_MSG: java = 1.8.0_101
************************************************************/
2017-10-23 06:56:02,698 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-10-23 06:56:03,449 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user hdfs/aopr-dhc001.lpdomain.com@LPDOMAIN.COM using keytab file hdfs.keytab
2017-10-23 06:56:03,812 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-10-23 06:56:03,891 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-10-23 06:56:03,891 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2017-10-23 06:56:03,899 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2017-10-23 06:56:03,900 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: File descriptor passing is enabled.
2017-10-23 06:56:03,903 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is aopr-dhc001.lpdomain.com
2017-10-23 06:56:03,908 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.lang.RuntimeException: Cannot start secure DataNode without configuring either privileged resources or SASL RPC data transfer protection and SSL for HTTP. Using privileged resources in combination with SASL RPC data transfer protection is not supported.
at org.apache.hadoop.hdfs.server.datanode.DataNode.checkSecureConfig(DataNode.java:1371)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1271)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:464)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2583)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2470)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2517)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2699)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2723)
2017-10-23 06:56:03,919 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2017-10-23 06:56:03,921 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at aopr-dhc001.lpdomain.com/10.16.144.131
************************************************************/
2017-10-23 06:56:08,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = aopr-dhc001.lpdomain.com/10.16.144.131
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.6.0-cdh5.13.0
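For context on the FATAL above: a secure DataNode must either bind privileged ports (below 1024, started as root under jsvc) or use SASL data transfer protection together with HTTPS, and it refuses to start with a mix of the two. A minimal hdfs-site.xml sketch of the SASL route, with illustrative port values and assuming TLS keystores are already configured (this is not necessarily the fix ultimately applied here):

```xml
<!-- Sketch only: SASL data transfer protection instead of privileged ports.
     Assumes HTTPS/TLS keystores are already configured. -->
<property>
  <name>dfs.data.transfer.protection</name>
  <value>authentication</value>
</property>
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<!-- With SASL in place the DataNode ports must be non-privileged (>= 1024);
     the port numbers below are assumptions. -->
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:10004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:10006</value>
</property>
```

The alternative route keeps the privileged ports 1004/1006 but requires starting the DataNode as root with HADOOP_SECURE_DN_USER and JSVC_HOME set, as discussed in the reply above.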
Labels:
- Apache Hadoop
- HDFS
- Kerberos
- Security
09-23-2017 10:52 AM
I noticed that yarn.resourcemanager.max-completed-applications is set to 10,000, which covers only about 2-3 days of completed applications. I want to increase it to 40,000 to cover a 7-day retention window.
- What other changes should I take into consideration?
- Should I increase the Java heap size for the ResourceManager?
- Should I change any other configuration at the NodeManager level?
- What should I monitor after increasing max-completed-applications to 40,000?
- How will this impact ResourceManager recovery performance?
Thanks in advance.
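For reference, a minimal sketch of that change in yarn-site.xml, assuming YARN is configured directly rather than through a Cloudera Manager safety valve:

```xml
<!-- Sketch: raise the ResourceManager's completed-application cache.
     Each retained entry is held in RM heap and replayed from the state
     store on recovery, so heap size and recovery time both deserve a look. -->
<property>
  <name>yarn.resourcemanager.max-completed-applications</name>
  <value>40000</value>
</property>
```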
09-17-2017 04:23 AM
I see that the permissions on /tmp/logs are 770. When I run yarn logs -applicationId <appId> as the application owner, I can see the logs from the CLI, but from the UI no one can see them. The group on the directory is cloudera-scm, and I don't have an LDAP entry for that group. How should I let users access these logs from the ResourceManager UI?
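A sketch of the kind of inspection and group change this usually involves; the "hadoop" group below is an assumption and should be whatever group the YARN and JobHistory service users actually belong to:

```bash
# Sketch: check the aggregated-log root, then hand it to a group that the
# YARN/JobHistory service users are members of ("hadoop" is an assumption).
hdfs dfs -ls -d /tmp/logs
hdfs dfs -chgrp -R hadoop /tmp/logs
```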
09-16-2017 07:16 AM
Hi everybody, I'm trying to increase the YARN application log retention in my cluster and have set the following parameters:
- yarn.log-aggregation-enable is set to true
- spark.eventLog.enabled is set to true
- yarn.log-aggregation.retain-seconds is set to 7 days
But in the ResourceManager I can see application logs for only 3 days, and from the ResourceManager application in CM I can access logs from only 1 day back. Can you please help with these two issues?
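For concreteness, a sketch of those settings as they would appear in yarn-site.xml, with 7 days expressed in seconds:

```xml
<!-- Sketch of the retention settings described above. -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>604800</value> <!-- 7 days = 7 * 24 * 3600 seconds -->
</property>
```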
Labels:
- Apache YARN
08-26-2017 09:25 PM
To turn on debug mode at the job level, issue the following command before executing the job: export HADOOP_ROOT_LOGGER=DEBUG,console, or add -Dhadoop.root.logger=DEBUG,console to the command.
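A quick usage sketch of the tip above; the job jar and class names are placeholders:

```bash
# Sketch: run a single job with client-side DEBUG logging, then restore
# the default logger (my-job.jar and com.example.MyJob are placeholders).
export HADOOP_ROOT_LOGGER=DEBUG,console
hadoop jar my-job.jar com.example.MyJob /input /output
unset HADOOP_ROOT_LOGGER
```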
08-17-2017 10:41 AM
I figured out the issue. The difference comes from /tmp/logs. It's odd that hdfs dfs -du -h -s / doesn't account for /tmp/logs.
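For reference, a sketch of the comparison that surfaces this kind of discrepancy, using standard HDFS CLI commands:

```bash
# Sketch: break usage down per top-level directory and compare it with
# the root summary to see where the "missing" space lives.
hdfs dfs -du -h /              # usage per top-level directory
hdfs dfs -du -h -s /tmp/logs   # aggregated YARN application logs
hdfs dfs -du -h -s /           # root summary for comparison
```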