Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 612 | 06-04-2025 11:36 PM |
| | 1181 | 03-23-2025 05:23 AM |
| | 585 | 03-17-2025 10:18 AM |
| | 2190 | 03-05-2025 01:34 PM |
| | 1376 | 03-03-2025 01:09 PM |
04-26-2019
01:26 AM
@Geoffrey Shelton Okot Thank you. I think decommissioning is the best way to do this. Do I need to turn on maintenance mode while I decommission the DataNode? Also, do I need to stop services like NodeManager and Ambari Metrics on that DataNode after decommissioning?
04-23-2019
08:31 PM
Very interesting question!
04-30-2019
11:04 AM
@Naveenraj Devadoss What was the solution? Did you update the network config in /etc/hosts?
04-15-2019
04:47 AM
@Sandeep R It seems to be an SSL issue. Can you validate your LDAP setup? Port 636 is LDAPS and 389 is plain LDAP. To enable LDAPS, you must install a certificate that meets the following requirements:

- The LDAPS certificate is located in the Local Computer's Personal certificate store (programmatically known as the computer's MY certificate store).
- A private key that matches the certificate is present in the Local Computer's store and is correctly associated with the certificate. The private key must not have strong private key protection enabled.
- The Enhanced Key Usage extension includes the Server Authentication (1.3.6.1.5.5.7.3.1) object identifier (OID).
- The Active Directory fully qualified domain name of the domain controller (for example, DC01.DOMAIN.COM) must appear in either the Common Name (CN) in the Subject field or a DNS entry in the Subject Alternative Name extension.
- The certificate was issued by a CA that the domain controller and the LDAPS clients trust. Trust is established by configuring the clients and the server to trust the root CA to which the issuing CA chains.
- You must use the Schannel cryptographic service provider (CSP) to generate the key.

Hope that helps
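The certificate requirements above can be checked with `openssl`. A minimal sketch follows: `dc01.domain.com` is a placeholder FQDN, and a throwaway self-signed certificate is generated here purely to demonstrate the checks; in practice you would inspect the real certificate exported from your domain controller (requires OpenSSL 1.1.1+ for `-addext`/`-ext`).

```shell
# Generate a throwaway cert that meets the LDAPS requirements (demo only;
# a real LDAPS cert must come from a CA the clients trust).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/ldaps-key.pem -out /tmp/ldaps-cert.pem \
  -subj "/CN=dc01.domain.com" \
  -addext "subjectAltName=DNS:dc01.domain.com" \
  -addext "extendedKeyUsage=serverAuth" 2>/dev/null

# The DC's FQDN must appear in the Subject CN or the SAN extension:
openssl x509 -in /tmp/ldaps-cert.pem -noout -subject -ext subjectAltName
# The EKU must include Server Authentication (1.3.6.1.5.5.7.3.1):
openssl x509 -in /tmp/ldaps-cert.pem -noout -ext extendedKeyUsage
```

To check a live endpoint instead, `openssl s_client -connect dc01.domain.com:636 -showcerts` will fetch the presented certificate chain for the same inspection.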
02-12-2019
03:14 PM
@Michael Bronson Just create the home directory as follows:

```shell
# su - hdfs
$ hdfs dfs -mkdir /user/slider
$ hdfs dfs -chown slider:hdfs /user/slider
```

That should be enough. Good luck!
02-13-2019
09:13 PM
@Sampath Kumar Cheers
02-09-2019
07:05 AM
@Howchoy Nice to know it worked, but the real issue is that toolkit.sh interprets the $ sign as a special character; that's the reason you MUST use an escape character for it to work, and the password needs a length of more than 13 characters. I am sure if you tried "Ce\$18C" it wouldn't work either.
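The expansion problem can be seen directly in the shell, using the "Ce$18C" value from the thread as the example. Inside double quotes the shell treats `$1` as a positional parameter, mangling the string; an escaped `\$` or single quotes preserve it:

```shell
# $1 is (normally) unset here, so "Ce$18C" collapses to "Ce8C"
printf '%s\n' "Ce$18C"     # prints "Ce8C"
# Escaping the $ preserves the literal value:
printf '%s\n' "Ce\$18C"    # prints "Ce$18C"
# Single quotes suppress all expansion, so they work too:
printf '%s\n' 'Ce$18C'     # prints "Ce$18C"
```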
02-08-2019
02:58 PM
The steps below describe how to change the NameNode log level while logged on as hdfs, without needing to restart the NameNode.

Get the current log level:

```shell
$ hadoop daemonlog -getlevel {namenode_host}:50070 BlockStateChange
```

Desired output:

```
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Effective level: INFO
```

Change to DEBUG:

```shell
$ hadoop daemonlog -setlevel {namenode_host}:50070 BlockStateChange DEBUG
```

Desired output:

```
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange&level=DEBUG
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Submitted Level: DEBUG
Setting Level to DEBUG ...
Effective level: DEBUG
```

Validate DEBUG mode:

```shell
$ hadoop daemonlog -getlevel {namenode_host}:50070 BlockStateChange
```

Desired output:

```
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Effective level: DEBUG
```

You should notice that the logging level in namenode.log has been updated, without restarting the service. After finishing your diagnostics you can reset the logging level back to INFO.

Reset to INFO:

```shell
$ hadoop daemonlog -setlevel {namenode_host}:50070 BlockStateChange INFO
```

Desired output:

```
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange&level=INFO
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Submitted Level: INFO
Setting Level to INFO ...
Effective level: INFO
```

Validate INFO:

```shell
$ hadoop daemonlog -getlevel {namenode_host}:50070 BlockStateChange
```

Output:

```
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Effective level: INFO
```

Happy hadooping!
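As the connection lines in the output show, `hadoop daemonlog` simply calls the NameNode's `/logLevel` HTTP servlet, so you can drive it with curl as well. A minimal sketch, with `namenode.example.com` as a placeholder host:

```shell
# Build /logLevel servlet URLs for a logger and an optional new level.
NN_HOST=namenode.example.com

loglevel_url() {
  # $1 = logger name, $2 = optional level to set
  if [ -n "${2:-}" ]; then
    printf 'http://%s:50070/logLevel?log=%s&level=%s' "$NN_HOST" "$1" "$2"
  else
    printf 'http://%s:50070/logLevel?log=%s' "$NN_HOST" "$1"
  fi
}

# curl -s "$(loglevel_url BlockStateChange)"        # query current level
# curl -s "$(loglevel_url BlockStateChange DEBUG)"  # switch to DEBUG
```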
02-12-2019
07:39 AM
When I use WASB as storage while creating a cluster, I need to have only the master node and compute nodes, right? There is no need for worker nodes since I am using WASB, not HDFS?