Member since 09-20-2018 · 26 Posts · 1 Kudos Received · 0 Solutions
02-12-2019 03:52 PM
Hi @Jay Kumar SenSharma, below is the log that was generated:
************************************************************/
2019-02-12 11:29:49,759 INFO datanode.DataNode (LogAdapter.java:info(51)) - registered UNIX signal handlers for [TERM, HUP, INT]
2019-02-12 11:29:50,817 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/hadoop/hdfs/data
2019-02-12 11:29:50,842 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/data/0/hadoop/hdfs/data
2019-02-12 11:29:50,846 INFO checker.ThrottledAsyncChecker (ThrottledAsyncChecker.java:schedule(137)) - Scheduling a check for [DISK]file:/data/1/hadoop/hdfs/data
2019-02-12 11:29:51,087 INFO impl.MetricsConfig (MetricsConfig.java:loadFirst(118)) - Loaded properties from hadoop-metrics2.properties
2019-02-12 11:29:52,011 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(85)) - Initializing Timeline metrics sink.
2019-02-12 11:29:52,012 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(105)) - Identified hostname = devaz02, serviceName = datanode
2019-02-12 11:29:52,117 INFO availability.MetricSinkWriteShardHostnameHashingStrategy (MetricSinkWriteShardHostnameHashingStrategy.java:findCollectorShard(42)) - Calculated collector shard devaz03 based on hostname: devaz02
2019-02-12 11:29:52,117 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(130)) - Collector Uri: http://devaz03:6188/ws/v1/timeline/metrics
2019-02-12 11:29:52,122 INFO timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(131)) - Container Metrics Uri: http://devaz03:6188/ws/v1/timeline/containermetrics
2019-02-12 11:29:52,136 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(204)) - Sink timeline started
2019-02-12 11:29:52,269 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 10 second(s).
2019-02-12 11:29:52,269 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - DataNode metrics system started
2019-02-12 11:29:52,733 INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-02-12 11:29:52,736 INFO datanode.BlockScanner (BlockScanner.java:<init>(184)) - Initialized block scanner with targetBytesPerSec 1048576
2019-02-12 11:29:52,747 INFO datanode.DataNode (DataNode.java:<init>(486)) - File descriptor passing is enabled.
2019-02-12 11:29:52,748 INFO datanode.DataNode (DataNode.java:<init>(499)) - Configured hostname is devaz02
2019-02-12 11:29:52,753 INFO common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-02-12 11:29:52,757 INFO datanode.DataNode (DataNode.java:startDataNode(1399)) - Starting DataNode with maxLockedMemory = 0
2019-02-12 11:29:52,805 INFO datanode.DataNode (DataNode.java:initDataXceiver(1147)) - Opened streaming server at /0.0.0.0:50010
2019-02-12 11:29:52,807 INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 6250000 bytes/s
2019-02-12 11:29:52,808 INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
2019-02-12 11:29:52,815 INFO datanode.DataNode (DataXceiverServer.java:<init>(78)) - Balancing bandwidth is 6250000 bytes/s
2019-02-12 11:29:52,816 INFO datanode.DataNode (DataXceiverServer.java:<init>(79)) - Number threads for balancing is 50
2019-02-12 11:29:52,816 INFO datanode.DataNode (DataNode.java:initDataXceiver(1165)) - Listening on UNIX domain socket: /var/lib/hadoop-hdfs/dn_socket
2019-02-12 11:29:52,905 INFO util.log (Log.java:initialized(192)) - Logging initialized @4548ms
2019-02-12 11:29:53,107 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-02-12 11:29:53,115 INFO http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.datanode is not defined
2019-02-12 11:29:53,127 INFO http.HttpServer2 (HttpServer2.java:addGlobalFilter(968)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-02-12 11:29:53,135 INFO http.HttpServer2 (HttpServer2.java:addFilter(941)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context datanode
2019-02-12 11:29:53,136 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context logs
2019-02-12 11:29:53,136 INFO http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context static
2019-02-12 11:29:53,136 INFO security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2019-02-12 11:29:53,185 INFO http.HttpServer2 (HttpServer2.java:bindListener(1185)) - Jetty bound to port 44838
2019-02-12 11:29:53,186 INFO server.Server (Server.java:doStart(346)) - jetty-9.3.19.v20170502
2019-02-12 11:29:53,253 INFO server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-02-12 11:29:53,260 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@2a79d4b1{/logs,file:///var/log/hadoop/hdfs/,AVAILABLE}
2019-02-12 11:29:53,261 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@17cdf2d0{/static,file:///usr/hdp/3.0.1.0-187/hadoop-hdfs/webapps/static/,AVAILABLE}
2019-02-12 11:29:53,399 INFO handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@662f5666{/,file:///usr/hdp/3.0.1.0-187/hadoop-hdfs/webapps/datanode/,AVAILABLE}{/datanode}
2019-02-12 11:29:53,416 INFO server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@4879f0f2{HTTP/1.1,[http/1.1]}{localhost:44838}
2019-02-12 11:29:53,417 INFO server.Server (Server.java:doStart(414)) - Started @5060ms
2019-02-12 11:29:53,770 INFO web.DatanodeHttpServer (DatanodeHttpServer.java:start(255)) - Listening HTTP traffic on /0.0.0.0:50075
2019-02-12 11:29:53,781 INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2019-02-12 11:29:53,860 INFO datanode.DataNode (DataNode.java:startDataNode(1427)) - dnUserName = hdfs
2019-02-12 11:29:53,860 INFO datanode.DataNode (DataNode.java:startDataNode(1428)) - supergroup = hdfs
2019-02-12 11:29:53,956 INFO ipc.CallQueueManager (CallQueueManager.java:<init>(84)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2019-02-12 11:29:53,986 INFO ipc.Server (Server.java:run(1074)) - Starting Socket Reader #1 for port 8010
2019-02-12 11:29:54,518 INFO datanode.DataNode (DataNode.java:initIpcServer(1033)) - Opened IPC server at /0.0.0.0:8010
2019-02-12 11:29:54,548 INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(149)) - Refresh request received for nameservices: null
2019-02-12 11:29:54,569 INFO datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(210)) - Starting BPOfferServices for nameservices: <default>
2019-02-12 11:29:54,600 INFO datanode.DataNode (BPServiceActor.java:run(810)) - Block pool <registering> (Datanode Uuid unassigned) service to devaz01/10.161.137.4:8020 starting to offer service
2019-02-12 11:29:54,612 INFO ipc.Server (Server.java:run(1314)) - IPC Server Responder: starting
2019-02-12 11:29:54,613 INFO ipc.Server (Server.java:run(1153)) - IPC Server listener on 8010: starting
2019-02-12 11:29:54,911 INFO datanode.DataNode (BPOfferService.java:verifyAndSetNamespaceInfo(378)) - Acknowledging ACTIVE Namenode during handshakeBlock pool <registering> (Datanode Uuid unassigned) service to devaz01/10.161.137.4:8020
2019-02-12 11:29:54,915 INFO common.Storage (DataStorage.java:getParallelVolumeLoadThreadsNum(354)) - Using 3 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=3, dataDirs=3)
2019-02-12 11:29:54,963 INFO common.Storage (Storage.java:tryLock(905)) - Lock on /hadoop/hdfs/data/in_use.lock acquired by nodename 32007@devaz02
2019-02-12 11:29:55,014 INFO common.Storage (Storage.java:tryLock(905)) - Lock on /data/0/hadoop/hdfs/data/in_use.lock acquired by nodename 32007@devaz02
2019-02-12 11:29:55,016 WARN common.Storage (DataStorage.java:loadDataStorage(418)) - Failed to add storage directory [DISK]file:/data/0/hadoop/hdfs/data
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /data/0/hadoop/hdfs/data is in an inconsistent state: cluster Id is incompatible with others.
at org.apache.hadoop.hdfs.server.common.StorageInfo.setClusterId(StorageInfo.java:193)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.setFieldsFromProperties(DataStorage.java:620)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.setFieldsFromProperties(DataStorage.java:605)
at org.apache.hadoop.hdfs.server.common.StorageInfo.readProperties(StorageInfo.java:134)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:714)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:294)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:407)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:387)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:551)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1718)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1678)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:817)
at java.lang.Thread.run(Thread.java:745)
2019-02-12 11:29:55,038 INFO common.Storage (Storage.java:tryLock(905)) - Lock on /data/1/hadoop/hdfs/data/in_use.lock acquired by nodename 32007@devaz02
2019-02-12 11:29:55,039 WARN common.Storage (DataStorage.java:loadDataStorage(418)) - Failed to add storage directory [DISK]file:/data/1/hadoop/hdfs/data
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /data/1/hadoop/hdfs/data is in an inconsistent state: cluster Id is incompatible with others.
at org.apache.hadoop.hdfs.server.common.StorageInfo.setClusterId(StorageInfo.java:193)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.setFieldsFromProperties(DataStorage.java:620)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.setFieldsFromProperties(DataStorage.java:605)
at org.apache.hadoop.hdfs.server.common.StorageInfo.readProperties(StorageInfo.java:134)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:714)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:294)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:407)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:387)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:551)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1718)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1678)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:817)
at java.lang.Thread.run(Thread.java:745)
2019-02-12 11:29:55,072 INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(251)) - Analyzing storage directories for bpid BP-49067060-10.161.137.4-1549610122990
2019-02-12 11:29:55,072 INFO common.Storage (Storage.java:lock(864)) - Locking is disabled for /hadoop/hdfs/data/current/BP-49067060-10.161.137.4-1549610122990
2019-02-12 11:29:55,087 INFO datanode.DataNode (DataNode.java:initStorage(1721)) - Setting up storage: nsid=1887919669;bpid=BP-49067060-10.161.137.4-1549610122990;lv=-57;nsInfo=lv=-64;cid=CID-20f8f873-2247-415f-ab5d-580d50c81e0a;nsid=1887919669;c=1549610122990;bpid=BP-49067060-10.161.137.4-1549610122990;dnuuid=2d5b2949-fa6a-405a-b56d-e8a0ef860e37
2019-02-12 11:29:55,115 ERROR datanode.DataNode (BPServiceActor.java:run(829)) - Initialization failed for Block pool <registering> (Datanode Uuid 2d5b2949-fa6a-405a-b56d-e8a0ef860e37) service to devaz01/10.161.137.4:8020. Exiting.
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 1, volumes configured: 3, volumes failed: 2, volume failures tolerated: 0
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:311)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1732)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1678)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:390)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:817)
at java.lang.Thread.run(Thread.java:745)
2019-02-12 11:29:55,115 WARN datanode.DataNode (BPServiceActor.java:run(853)) - Ending block pool service for: Block pool <registering> (Datanode Uuid 2d5b2949-fa6a-405a-b56d-e8a0ef860e37) service to devaz01/10.161.137.4:8020
2019-02-12 11:29:55,218 INFO datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool <registering> (Datanode Uuid 2d5b2949-fa6a-405a-b56d-e8a0ef860e37)
2019-02-12 11:29:57,218 WARN datanode.DataNode (DataNode.java:secureMain(2890)) - Exiting Datanode
2019-02-12 11:29:57,223 INFO datanode.DataNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at devaz02/10.161.137.5
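The WARNs above show two of the three volumes rejected for an incompatible clusterID. A quick way to confirm the mismatch is to compare the clusterID line in each data directory's current/VERSION file. The sketch below fabricates two VERSION files just to show the check; on the real host you would read /hadoop/hdfs/data/current/VERSION and the /data/0 and /data/1 equivalents:

```shell
# Demo of the clusterID check on fabricated VERSION files.
tmp=$(mktemp -d)
mkdir -p "$tmp/data0/current" "$tmp/data1/current"
# data0 carries the clusterID the NameNode reports in the log above:
echo "clusterID=CID-20f8f873-2247-415f-ab5d-580d50c81e0a" > "$tmp/data0/current/VERSION"
# data1 simulates a volume left over from an earlier cluster install:
echo "clusterID=CID-00000000-stale-old-cluster" > "$tmp/data1/current/VERSION"
for d in "$tmp"/data*; do
  printf '%s -> %s\n' "$d" "$(grep clusterID "$d/current/VERSION")"
done
```

If the IDs differ and the stale volumes hold no blocks you need, the usual remedy is to stop the DataNode, remove the contents of the stale data directories, and restart it so they re-initialize with the current clusterID. That also explains the DiskErrorException above: with volume failures tolerated set to 0, two failed volumes are fatal.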
02-12-2019 11:17 AM
I have installed Ambari 2.7 and HDP 3.0. The NameNode and secondary NameNode are up, but the DataNode goes down after a few seconds. I tried cd /hadoop/hdfs/data and removing the current folder, but that did not work. The logs are attached from /var/log/hadoop/hdfs/hadoop-hdfs-datanode-machine.log
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-machine.log <==
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:817)
at java.lang.Thread.run(Thread.java:745)
2019-02-12 11:10:34,999 WARN datanode.DataNode (BPServiceActor.java:run(853)) - Ending block pool service for: Block pool <registering> (Datanode Uuid 3b87ba72-4e41-425f-afb0-17d2b37dde4b) service to machine/xx.xxx.x.xxx.x:8020
2019-02-12 11:10:35,102 INFO datanode.DataNode (BlockPoolManager.java:remove(102)) - Removed Block pool <registering> (Datanode Uuid 3b87ba72-4e41-425f-afb0-17d2b37dde4b)
2019-02-12 11:10:37,102 WARN datanode.DataNode (DataNode.java:secureMain(2890)) - Exiting Datanode
2019-02-12 11:10:37,108 INFO datanode.DataNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at machine/xx.xxx.xxx.x.4
Labels: Apache Ambari, Apache Hadoop
02-12-2019 11:00 AM
Thanks @Jay Kumar SenSharma, that worked for me!
02-11-2019 04:27 AM
Hi @Geoffrey Shelton Okot, I am getting the above error while starting the NameNode from the Ambari UI. When I click 'Start All', the services do not start because the NameNode fails to start with the above error.
02-08-2019 02:58 PM
I am getting an error while installing the NameNode where it says "-safemode get | grep 'Safe mode is OFF'". Ambari: 2.7.0.0, HDP: 3.0. I got the below error, and I ran:
> hdfs dfsadmin -safemode leave
> Output: safemode: Call From devaz01/10.161.137.4 to devaz01:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Then I ran:
> chmod -R 644 /var/log/hadoop/hdfs
Since then I am getting:
Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/3.0.1.0-187/hadoop/bin/hdfs --config /usr/hdp/3.0.1.0-187/hadoop/conf --daemon start namenode'' returned 1. ERROR: Unable to write in /var/log/hadoop/hdfs. Aborting.
2019-02-08 03:31:21,394 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://machine:8020 -safemode get | grep 'Safe mode is OFF'' returned 1. safemode: Call From machine/xx.xxx.xxx.x to machine:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
safemode: Call From machine/10.161.137.4 to machine:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
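A likely cause of the "Unable to write in /var/log/hadoop/hdfs" error is the recursive chmod -R 644: it strips the execute (traverse) bit from the log directories themselves, so the hdfs user can no longer enter or write them. A minimal demo on a scratch directory:

```shell
# Demo of why `chmod -R 644` broke the log directory, on a scratch path
# (on the real host the path is /var/log/hadoop/hdfs).
logdir="$(mktemp -d)/hdfs"
mkdir -p "$logdir"
chmod -R 644 "$logdir"                        # the mistake: the directory loses +x
find "$logdir" -type d -exec chmod 755 {} +   # the fix: dirs back to 755; files may stay 644
stat -c '%a' "$logdir"                        # prints 755
```

On the real host the equivalent fix would be chown -R hdfs:hadoop /var/log/hadoop/hdfs followed by find /var/log/hadoop/hdfs -type d -exec chmod 755 {} + (the hadoop group name is an assumption; check the ownership of sibling log directories first).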
Labels: Apache Hadoop
02-07-2019 03:12 PM (1 Kudo)
Hi @Geoffrey Shelton Okot, I am getting this error even after running the above commands on MySQL. stderr:
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of 'ambari-python-wrap /usr/hdp/current/ranger-kms/dba_script.py -q' returned 1. 2019-02-07 12:52:26,435 [I] Running DBA setup script. QuiteMode:True
2019-02-07 12:52:26,435 [I] Using Java:/usr/jdk64/jdk1.8.0_112/bin/java
2019-02-07 12:52:26,435 [I] DB FLAVOR:MYSQL
2019-02-07 12:52:26,435 [I] DB Host:machine
2019-02-07 12:52:26,435 [I] ---------- Verifing DB root password ----------
2019-02-07 12:52:26,436 [I] DBA root user password validated
2019-02-07 12:52:26,436 [I] ---------- Verifing Ranger KMS db user password ----------
2019-02-07 12:52:26,436 [I] KMS user password validated
2019-02-07 12:52:26,436 [I] ---------- Creating Ranger KMS db user ----------
2019-02-07 12:52:26,436 [JISQL] /usr/jdk64/jdk1.8.0_112/bin/java -cp /usr/hdp/current/ranger-kms/ews/webapp/lib/mysql-connector-java.jar:/usr/hdp/current/ranger-kms/jisql/lib/* org.apache.util.sql.Jisql -driver mysqlconj -cstring jdbc:mysql://machine/mysql -u rangerkms -p '********' -noheader -trim -c \; -query "SELECT version();"
SQLException : SQL state: 28000 java.sql.SQLException: Access denied for user 'rangerkms'@'machine' (using password: YES) ErrorCode: 1045
2019-02-07 12:52:27,212 [E] Can't establish db connection.. Exiting..
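The ErrorCode 1045 access-denied line suggests MySQL has no matching grant for 'rangerkms' connecting from that host. A hedged sketch of the grants that are typically needed ('machine', the rangerkms database name, and the password are placeholders; run as the MySQL root user on the DB host and adjust to your setup):

```shell
mysql -u root -p <<'SQL'
-- 'machine' stands for the Ranger KMS host as MySQL sees it; replace it,
-- and use the rangerkms password you configured in Ambari.
CREATE USER IF NOT EXISTS 'rangerkms'@'machine' IDENTIFIED BY 'rangerkms_password';
GRANT ALL PRIVILEGES ON rangerkms.* TO 'rangerkms'@'machine';
FLUSH PRIVILEGES;
SQL
```

Afterwards, re-verify with the same connectivity check the dba_script runs, i.e. connecting as rangerkms and issuing SELECT version();.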
02-07-2019 01:34 PM
I deleted the conf file under /usr/hdp/3.0.1.0-187/hadoop/ and reinstalled the services using the Ambari UI, and it worked.
02-06-2019 02:23 PM
I have reinstalled and ran yum clean all; below is the output I am getting.
# hdp-select versions
Traceback (most recent call last):
File "/bin/hdp-select", line 456, in <module>
printVersions()
File "/bin/hdp-select", line 295, in printVersions
for f in os.listdir(root):
OSError: [Errno 2] No such file or directory: '/usr/hdp'
# hdp-select | grep hadoop
Traceback (most recent call last):
File "/bin/hdp-select", line 456, in <module>
printVersions()
File "/bin/hdp-select", line 295, in printVersions
for f in os.listdir(root):
OSError: [Errno 2] No such file or directory: '/usr/hdp'
[root@devaz01 ~]# hdp-select | grep hadoop
hadoop-client - None
hadoop-hdfs-client - None
hadoop-hdfs-datanode - None
hadoop-hdfs-journalnode - None
hadoop-hdfs-namenode - None
hadoop-hdfs-nfs3 - None
hadoop-hdfs-portmap - None
hadoop-hdfs-secondarynamenode - None
hadoop-hdfs-zkfc - None
hadoop-httpfs - None
hadoop-mapreduce-client - None
hadoop-mapreduce-historyserver - None
hadoop-yarn-client - None
hadoop-yarn-nodemanager - None
hadoop-yarn-registrydns - None
hadoop-yarn-resourcemanager - None
hadoop-yarn-timelinereader - None
hadoop-yarn-timelineserver - None
# ls -lart /usr/hdp
ls: cannot access /usr/hdp: No such file or directory
# ls -lart /usr/hdp/current/
ls: cannot access /usr/hdp/current/: No such file or directory
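Since /usr/hdp is missing entirely while hdp-select still runs, the stack packages were likely never laid down (or were removed). A hedged checklist for each failing agent, assuming a yum-based install with reachable HDP 3.0.1 repos (the hadoop package name below follows the ranger_3_0_1_0_187-kms naming seen in the install log, but is an assumption; match it to your HDP build):

```shell
rpm -q hdp-select                   # confirm the selector package itself is installed
yum reinstall -y hdp-select         # should recreate /usr/hdp and the selector layout
yum install -y hadoop_3_0_1_0_187   # package name is an assumption; match your HDP build
ls -ld /usr/hdp /usr/hdp/current    # both should exist afterwards
hdp-select versions                 # should now list 3.0.1.0-187
```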
02-06-2019 12:38 PM
@Jay Kumar SenSharma While checking the /usr/hdp/current folder, I find no files available. Can you suggest a way to install HDP correctly so that the files are present and the error is resolved? I tried scp'ing them from the one node out of 5 that is working correctly, but they are stored as directories.
02-06-2019 10:01 AM
This is the output I am getting after running the above commands:
run_as_user = root
nxautom+ 6789 1 0 Feb02 ? 00:01:09 python /opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/DSCResources/MSFT_nxOMSAutomationWorkerResource/automationworker/worker/main.py /var/opt/microsoft/omsagent/state/automationworker/oms.conf rworkspace:2cdafa97-1d38-4ad7-afc8-68a74145a77c 1.6.3.0
root 97419 97415 9 08:48 pts/0 00:02:02 /usr/bin/python /usr/lib/ambari-agent/lib/ambari_agent/main.py start
root 109563 96746 0 09:11 pts/0 00:00:00 grep --color=auto main.py
02-06-2019 08:37 AM
Hi @Jay Kumar SenSharma, I am running the ambari-agent command as the root user only.
02-06-2019 07:18 AM
Hi @Akhil S Naik, I am getting a 'permission denied' error on two hosts, and on the other hosts it is 'command not found'.
02-04-2019 06:49 PM
I am installing a 6-node cluster with 1 node as the ambari-server and the rest as agents. Ambari version: 2.7.0.0, HDP version: 3.0. I am getting an error at the installation stage, where it is not able to find the hdp-select file for versions.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
BeforeInstallHook().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 363, in execute
self.save_component_version_to_structured_out(self.command_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 222, in save_component_version_to_structured_out
stack_select_package_name = stack_select.get_package_name()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 109, in get_package_name
package = get_packages(PACKAGE_SCOPE_STACK_SELECT, service_name, component_name)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 223, in get_packages
supported_packages = get_supported_packages()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/stack_select.py", line 147, in get_supported_packages
raise Fail("Unable to query for supported packages using {0}".format(stack_selector_path))
resource_management.core.exceptions.Fail: Unable to query for supported packages using /usr/bin/hdp-select
02-04-2019 02:15 PM
Thanks @Geoffrey Shelton Okot! It worked for me!
02-03-2019 01:19 AM
I am trying to install HDP 3.0 on a 6-node cluster where 5 nodes act as ambari-agents and 1 node as the server. Here is the log file:
stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms_server.py", line 137, in <module>
KmsServer().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms_server.py", line 51, in install
kms.setup_kms_db()
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms.py", line 68, in setup_kms_db
copy_jdbc_connector(kms_home)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/RANGER_KMS/package/scripts/kms.py", line 359, in copy_jdbc_connector
Please run 'ambari-server setup --jdbc-db={db_name} --jdbc-driver={path_to_jdbc} on server host.'".format(params.db_flavor, params.jdk_location)
KeyError: 'db_name'
stdout:
2019-02-02 04:43:22,791 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=None -> 3.0
2019-02-02 04:43:22,810 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf
2019-02-02 04:43:22,813 - Group['kms'] {}
2019-02-02 04:43:22,815 - Group['livy'] {}
2019-02-02 04:43:22,815 - Group['spark'] {}
2019-02-02 04:43:22,819 - Group['ranger'] {}
2019-02-02 04:43:22,820 - Group['hdfs'] {}
2019-02-02 04:43:22,820 - Group['hadoop'] {}
2019-02-02 04:43:22,820 - Group['users'] {}
2019-02-02 04:43:22,821 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,824 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,829 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,831 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-02-02 04:43:22,833 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,835 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}
2019-02-02 04:43:22,840 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-02-02 04:43:22,842 - User['kms'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['kms', 'hadoop'], 'uid': None}
2019-02-02 04:43:22,844 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2019-02-02 04:43:22,849 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2019-02-02 04:43:22,851 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-02-02 04:43:22,853 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,858 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2019-02-02 04:43:22,860 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,862 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,867 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-02-02 04:43:22,868 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-02-02 04:43:22,871 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-02-02 04:43:22,884 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-02-02 04:43:22,885 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2019-02-02 04:43:22,887 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-02-02 04:43:22,889 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-02-02 04:43:22,891 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2019-02-02 04:43:22,905 - call returned (0, '1017')
2019-02-02 04:43:22,906 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2019-02-02 04:43:22,914 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] due to not_if
2019-02-02 04:43:22,915 - Group['hdfs'] {}
2019-02-02 04:43:22,916 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2019-02-02 04:43:22,917 - FS Type: HDFS
2019-02-02 04:43:22,918 - Directory['/etc/hadoop'] {'mode': 0755}
2019-02-02 04:43:22,964 - File['/usr/hdp/3.0.1.0-187/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-02-02 04:43:22,966 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-02-02 04:43:23,007 - Repository['HDP-3.0-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.1.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2019-02-02 04:43:23,032 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2019-02-02 04:43:23,034 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2019-02-02 04:43:23,039 - Repository['HDP-3.0-GPL-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.0.1.0', 'action': ['create'], 'components': [u'HDP-GPL', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2019-02-02 04:43:23,049 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.0-GPL-repo-1]\nname=HDP-3.0-GPL-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2019-02-02 04:43:23,050 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2019-02-02 04:43:23,050 - Repository['HDP-UTILS-1.1.0.22-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2019-02-02 04:43:23,060 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-3.0-repo-1]\nname=HDP-3.0-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-3.0-GPL-repo-1]\nname=HDP-3.0-GPL-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos7/3.x/updates/3.0.1.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-1]\nname=HDP-UTILS-1.1.0.22-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2019-02-02 04:43:23,060 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2019-02-02 04:43:23,061 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-02-02 04:43:23,519 - Skipping installation of existing package unzip
2019-02-02 04:43:23,519 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-02-02 04:43:23,570 - Skipping installation of existing package curl
2019-02-02 04:43:23,570 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-02-02 04:43:23,626 - Skipping installation of existing package hdp-select
2019-02-02 04:43:23,639 - The repository with version 3.0.1.0-187 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2019-02-02 04:43:24,364 - Package['ranger_3_0_1_0_187-kms'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2019-02-02 04:43:24,819 - Skipping installation of existing package ranger_3_0_1_0_187-kms
2019-02-02 04:43:24,822 - Stack Feature Version Info: Cluster Stack=3.0, Command Stack=None, Command Version=None -> 3.0
2019-02-02 04:43:24,892 - Using hadoop conf dir: /usr/hdp/3.0.1.0-187/hadoop/conf
2019-02-02 04:43:24,910 - Execute[('cp', '-f', u'/usr/hdp/current/ranger-kms/install.properties', u'/usr/hdp/current/ranger-kms/install-backup.properties')] {'not_if': 'ls /usr/hdp/current/ranger-kms/install-backup.properties', 'sudo': True, 'only_if': 'ls /usr/hdp/current/ranger-kms/install.properties'}
2019-02-02 04:43:24,923 - Skipping Execute[('cp', '-f', u'/usr/hdp/current/ranger-kms/install.properties', u'/usr/hdp/current/ranger-kms/install-backup.properties')] due to not_if
2019-02-02 04:43:24,924 - Password validated
2019-02-02 04:43:24,930 - The repository with version 3.0.1.0-187 for this command has been marked as resolved. It will be used to report the version of the component which was installed
Command failed after 1 tries
Labels: Apache Ranger
12-04-2018
06:44 AM
Thanks @Akhil S Naik, the issue was resolved using the above method.
11-27-2018
02:56 PM
Hi @Akhil S Naik, I got the following result after running the command:
WARNING 2018-11-27 13:42:59,507 NetUtil.py:124 - Server at https://devdp30.eng.ssn:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-11-27 13:43:09,507 NetUtil.py:70 - Connecting to https://devdp30.eng.ssn:8440/ca
ERROR 2018-11-27 13:43:09,509 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:618)
ERROR 2018-11-27 13:43:09,509 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-11-27 13:43:09,509 NetUtil.py:124 - Server at https://devdp30.eng.ssn:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-11-27 13:43:19,510 main.py:439 - Connecting to Ambari server at https://devdp30.eng.ssn:8440 (172.23.222.59)
INFO 2018-11-27 13:43:19,510 NetUtil.py:70 - Connecting to https://devdp30.eng.ssn:8440/ca
ERROR 2018-11-27 13:43:19,511 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:618)
ERROR 2018-11-27 13:43:19,511 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-11-27 13:43:19,511 NetUtil.py:124 - Server at https://devdp30.eng.ssn:8440 is not reachable, sleeping for 10 seconds...
INFO 2018-11-27 13:43:29,512 NetUtil.py:70 - Connecting to https://devdp30.eng.ssn:8440/ca
ERROR 2018-11-27 13:43:29,513 NetUtil.py:96 - EOF occurred in violation of protocol (_ssl.c:618)
ERROR 2018-11-27 13:43:29,513 NetUtil.py:97 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2018-11-27 13:43:29,514 NetUtil.py:124 - Server at https://devdp30.eng.ssn:8440 is not reachable, sleeping for 10 seconds...
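The repeated SSLError above usually points at a mismatch between the agent's Python/OpenSSL build and the protocols the server accepts (see the Red Hat bug linked in the log). A quick way to see which OpenSSL build the agent's Python is linked against — a minimal diagnostic sketch, not Ambari code:

```python
# Print the OpenSSL build that Python's ssl module is linked against.
# An outdated build here is consistent with the "check openssl library
# versions" hint in the agent log above.
import ssl

print(ssl.OPENSSL_VERSION)       # human-readable build string
print(ssl.OPENSSL_VERSION_INFO)  # version tuple for programmatic checks
```

If the build turns out to be old, one commonly suggested workaround (verify against your Ambari version's documentation before applying) is to pin the agent to a specific protocol via `force_https_protocol=PROTOCOL_TLSv1_2` in the `[security]` section of `/etc/ambari-agent/conf/ambari-agent.ini`.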
11-27-2018
11:46 AM
The output of the command is attached. I changed the URL port to 8433, but the server is still not reachable.
11-27-2018
08:11 AM
Hi @Sampath Kumar, I have already followed all the steps stated above for the installation, but I am still getting an error while registering. devdp30.eng.ssn is the hostname I have given, and I have changed it in all the required places.
11-27-2018
06:24 AM
Hi @Akhil S Naik, I have manually installed the agent, made the changes in the ambari-agent .ini file, and restarted everything, but I am still getting the error that registration with the server failed. I have set the server name in the .ini file to the output of the hostname -f command.
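For reference, the relevant fragment of the agent config looks roughly like this — a sketch based on default Ambari settings (paths, ports, and the hostname value are assumptions; verify them on your cluster):

```ini
; /etc/ambari-agent/conf/ambari-agent.ini (fragment)
[server]
; Must be the Ambari *server* FQDN, i.e. the output of `hostname -f`
; run on the server host, not on the agent host.
hostname=devdp30.eng.ssn
url_port=8440
secured_url_port=8441
```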
11-27-2018
05:54 AM
OS: RHEL 7. I am still not clear about the FQDN of the system needed while registering the host. The screenshot is attached.
Labels: Apache Ambari
09-24-2018
11:52 AM
@amarnath reddy pappu Could you help me figure out the error in the images below? Even after 3 days, I am not able to see any host when querying http://ambari-server-host:8080/api/v1/hosts.
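For anyone checking the same endpoint: the hosts API needs HTTP basic auth. A minimal sketch of building the request (the server name and credentials below are placeholders, not values from this thread):

```python
import base64
from urllib.request import Request

def hosts_request(server, user="admin", password="admin"):
    """Build an authenticated GET for Ambari's /api/v1/hosts endpoint.
    `server`, `user`, and `password` are placeholders for your cluster."""
    url = "http://%s:8080/api/v1/hosts" % server
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return Request(url, headers={"Authorization": "Basic " + token})

req = hosts_request("ambari-server-host")
print(req.full_url)  # http://ambari-server-host:8080/api/v1/hosts
# urlopen(req) returns JSON with an "items" list, one entry per registered
# host; an empty list matches the symptom described above.
```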
09-24-2018
09:44 AM
Please find attached a screenshot of the result of the executed command.
09-24-2018
08:18 AM
I have tried uninstalling and reinstalling the Ambari agent and can confirm that it is running on all three hosts, with the /etc/hosts file edited to include the IP addresses. How can I get the agents' heartbeat back so they contact the server?
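A heartbeat that never arrives is often a name-resolution mismatch. A quick sanity check to run on each agent host — a minimal sketch, nothing Ambari-specific:

```python
# Check that the name this host resolves for itself is the FQDN the
# Ambari server expects, and that the /etc/hosts edits took effect.
import socket

fqdn = socket.getfqdn()
print("FQDN:", fqdn)
try:
    # Should return the address you put in /etc/hosts for this host.
    print("Resolves to:", socket.gethostbyname(fqdn))
except socket.gaierror as err:
    # This is exactly the failure mode that breaks the agent heartbeat.
    print("Resolution failed:", err)
```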
Labels: Apache Ambari
09-21-2018
08:52 AM
Thanks Akhil, that worked for me.
09-20-2018
01:15 PM
RHEL 7 on an Amazon EC2 large instance, with the necessary security checks done. The HDP version is 2.5. After completing the first two steps, it fails at the registration of nodes with the server, and the same error persists after uninstalling and reinstalling. Any help would be appreciated.
Labels: Apache Ambari