Member since: 01-19-2017
3679 Posts
632 Kudos Received
372 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1562 | 06-04-2025 11:36 PM |
| | 2035 | 03-23-2025 05:23 AM |
| | 959 | 03-17-2025 10:18 AM |
| | 3645 | 03-05-2025 01:34 PM |
| | 2529 | 03-03-2025 01:09 PM |
02-08-2019
02:58 PM
The steps below describe how to change the NameNode log level while logged in as the hdfs user, without needing to restart the NameNode.

**Get the current log level**

```
$ hadoop daemonlog -getlevel {namenode_host}:50070 BlockStateChange
```

Expected output:

```
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Effective level: INFO
```

**Change to DEBUG**

```
$ hadoop daemonlog -setlevel {namenode_host}:50070 BlockStateChange DEBUG
```

Expected output:

```
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange&level=DEBUG
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Submitted Level: DEBUG
Setting Level to DEBUG ...
Effective level: DEBUG
```

**Validate DEBUG mode**

```
$ hadoop daemonlog -getlevel {namenode_host}:50070 BlockStateChange
```

Expected output:

```
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Effective level: DEBUG
```

You should now notice that the logging level in namenode.log has been updated, without restarting the service. After finishing your diagnostics, reset the logging level back to INFO.

**Reset to INFO**

```
$ hadoop daemonlog -setlevel {namenode_host}:50070 BlockStateChange INFO
```

Expected output:

```
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange&level=INFO
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Submitted Level: INFO
Setting Level to INFO ...
Effective level: INFO
```

**Validate INFO**

```
$ hadoop daemonlog -getlevel {namenode_host}:50070 BlockStateChange
```

Expected output:

```
Connecting to http://{namenode_host}:50070/logLevel?log=BlockStateChange
Submitted Log Name: BlockStateChange
Log Class: org.apache.commons.logging.impl.Log4JLogger
Effective level: INFO
```

Happy hadooping!
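As the "Connecting to ..." lines show, `hadoop daemonlog` simply drives the NameNode's `/logLevel` HTTP servlet, so the same change can be made with curl. A minimal sketch, where `nn01.example.com` is a hypothetical NameNode host (substitute your own FQDN, and your HTTP port if it differs from 50070):

```shell
# Hypothetical NameNode host; replace with your own FQDN.
NN_HOST="nn01.example.com"
LOG_NAME="BlockStateChange"
LEVEL="DEBUG"

# daemonlog builds and fetches exactly this kind of URL.
URL="http://${NN_HOST}:50070/logLevel?log=${LOG_NAME}&level=${LEVEL}"
echo "$URL"
# → http://nn01.example.com:50070/logLevel?log=BlockStateChange&level=DEBUG

# Uncomment to apply the change (requires network access to the NameNode):
# curl -s "$URL"
```

Either route takes effect immediately; note that levels set this way revert to the log4j defaults on the next NameNode restart.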
02-10-2019
10:14 PM
1 Kudo
@Michael Bronson HWX doesn't recommend upgrading an individual HDP component: one never knows what incompatibilities could impact the other components, and selectively upgraded components tend to be a nightmare during a version upgrade. The latest HDP Kafka version is 11-2.1.x, delivered with HDP 3.1, but ASF has its own release cadence and naming convention. HTH
02-09-2019
10:32 PM
1 Kudo
@christophe VALMIR Usually, after the restart, give the processes a minute or two to pick up the change. Since the issue is resolved, please don't forget to vote for the helpful answer; that way other HCC users can quickly find the solution when they encounter the same issue. HTH
02-06-2019
01:03 PM
@Chris Jenkins My pleasure, glad I made your day, and welcome to the Big Data space. Having gone through all of this will make you better technically; you've now seen the different facets of resolving a problem. Happy Hadooping!
02-07-2019
03:42 PM
@Shraddha Singh Where machine is the FQDN and {rangerkms_password} is the rangerkms user's password. The FQDN is the output of `hostname -f`. Re-run the commands below:

```
grant all privileges on rangerkms.* to 'rangerkms'@'machine' identified by '{rangerkms_password}';
grant all privileges on rangerkms.* to 'rangerkms'@'machine' with grant option;
```

And let me know.
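The two grants can also be scripted so the FQDN is filled in automatically. A sketch, assuming it runs on the Ranger KMS host; `{rangerkms_password}` stays a placeholder for the real password, and the mysql invocation is left commented out:

```shell
# The grants must name the exact host, so resolve this machine's FQDN first.
FQDN=$(hostname -f)

# Build the grant statements with the FQDN substituted in.
# {rangerkms_password} is a placeholder; use the real rangerkms password.
SQL="GRANT ALL PRIVILEGES ON rangerkms.* TO 'rangerkms'@'${FQDN}' IDENTIFIED BY '{rangerkms_password}';
GRANT ALL PRIVILEGES ON rangerkms.* TO 'rangerkms'@'${FQDN}' WITH GRANT OPTION;
FLUSH PRIVILEGES;"
echo "$SQL"

# Run against MySQL as root (uncomment on the database host):
# mysql -u root -p -e "$SQL"
```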
01-29-2019
01:50 AM
Thanks again! I believe I found my issue: the repos were not complete and accurate on my Ubuntu 18.04 builds, so I copied the repos from my Xenial 16.04 box, replaced xenial with bionic, and after updating I was able to install the Kerberos client.
Here was my final repo list for Ubuntu 18.04:
deb http://us.archive.ubuntu.com/ubuntu/ bionic main restricted
deb http://us.archive.ubuntu.com/ubuntu/ bionic-updates main restricted
deb http://us.archive.ubuntu.com/ubuntu/ bionic universe
deb http://us.archive.ubuntu.com/ubuntu/ bionic-updates universe
deb http://us.archive.ubuntu.com/ubuntu/ bionic multiverse
deb http://us.archive.ubuntu.com/ubuntu/ bionic-updates multiverse
deb http://us.archive.ubuntu.com/ubuntu/ bionic-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu bionic-security main restricted
deb http://security.ubuntu.com/ubuntu bionic-security universe
deb http://security.ubuntu.com/ubuntu bionic-security multiverse
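That retargeting can be done with sed rather than hand-editing. A sketch on an illustrative copy of one repo line (the real list lives at /etc/apt/sources.list, and krb5-user is Ubuntu's Kerberos client package):

```shell
# Illustrative temp paths; the real file is /etc/apt/sources.list.
printf 'deb http://us.archive.ubuntu.com/ubuntu/ xenial main restricted\n' \
    > /tmp/sources.list.xenial

# Retarget every xenial (16.04) entry at bionic (18.04).
sed 's/xenial/bionic/g' /tmp/sources.list.xenial > /tmp/sources.list.bionic
cat /tmp/sources.list.bionic
# → deb http://us.archive.ubuntu.com/ubuntu/ bionic main restricted

# Then refresh the index and install the Kerberos client:
# apt-get update && apt-get install -y krb5-user
```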
01-28-2019
11:57 AM
1 Kudo
@Michael Bronson If you have exhausted all other avenues, yes.

Step 1: Check and compare the /usr/hdp/current/kafka-broker symlinks.

Step 2: Download both kafka-env configurations as backups, from the problematic and the functioning cluster. Upload the functioning cluster's env to the problematic one (you have a backup), then start Kafka through Ambari.

Step 3: `sed -i 's/verify=platform_default/verify=disable/' /etc/python/cert-verification.cfg`

Step 4: Lastly, if the above steps don't remedy the issue, remove and re-install the ambari-agent, and remember to manually point it at the correct Ambari server in ambari-agent.ini.
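Step 3's in-place edit can be tried safely on a copy first. A sketch using a temp file standing in for /etc/python/cert-verification.cfg (the path and its `verify=` key are the HDP originals; the temp location is just for a dry run):

```shell
# Temp stand-in for /etc/python/cert-verification.cfg.
CFG=/tmp/cert-verification.cfg
printf '[https]\nverify=platform_default\n' > "$CFG"

# Same substitution as Step 3, pointed at the copy; note the space
# between the sed expression and the file path.
sed -i 's/verify=platform_default/verify=disable/' "$CFG"
grep verify= "$CFG"
# → verify=disable
```

Once the result looks right, run the same sed against the real file under /etc/python/.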
01-25-2019
07:47 PM
Thanks Geoffrey. I copied the backup of the ambari.properties to expected location and ran the upgrade command again and it worked this time.
01-28-2019
08:50 AM
@Bhushan Kandalkar Good that it worked out, but you shouldn't have omitted the information about the architecture, i.e. the load balancer; such info is critical in the analysis. :-) Happy hadooping
01-24-2019
04:07 PM
@Lokesh Mukku Good to know it has given you a better understanding. If this answer addressed your question, please take a moment to log in and click the "accept" link on it. That would be a great help to Community users looking for the solution to these kinds of errors. Happy hadooping!