Explorer
Posts: 13
Registered: 12-30-2014

Upgrade not working

Hi, I am trying to upgrade Apache Hadoop 1.2.1 to 2.6, but the NameNode upgrade is hanging.

Can you please check the log below and let me know what is wrong with it? I even deleted the one block that it reported as corrupt, but nothing worked.
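
For reference, this is roughly what I did before the upgrade attempt (a rough sketch from memory, so the file path below is only a placeholder):

# On the old 1.2.1 cluster: locate the corrupt block and delete the file that owned it
hadoop fsck / -files -blocks | grep -i corrupt
hadoop fsck /path/to/corrupt/file -delete    # placeholder path for the corrupt file
# Then start the 2.6 NameNode with the upgrade flag
sbin/hadoop-daemon.sh start namenode -upgrade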

STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.6.0_30
************************************************************/
2015-03-08 14:10:22,814 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-03-08 14:10:22,816 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode [-upgrade]
2015-03-08 14:10:23,178 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-03-08 14:10:23,260 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-03-08 14:10:23,260 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-03-08 14:10:23,262 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://node01:54310
2015-03-08 14:10:23,263 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use node01:54310 to access this namenode/service.
2015-03-08 14:10:30,819 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2015-03-08 14:10:30,850 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-03-08 14:10:30,852 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-03-08 14:10:30,862 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-03-08 14:10:30,864 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-03-08 14:10:30,864 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-03-08 14:10:30,865 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-03-08 14:10:30,894 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-03-08 14:10:30,901 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-03-08 14:10:30,927 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-03-08 14:10:30,927 INFO org.mortbay.log: jetty-6.1.26
2015-03-08 14:10:31,470 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-03-08 14:10:46,602 WARN org.apache.hadoop.hdfs.server.common.Util: Path /app/hadoop/tmp/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-03-08 14:10:46,602 WARN org.apache.hadoop.hdfs.server.common.Util: Path /app/hadoop/tmp/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-03-08 14:10:46,602 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-03-08 14:10:46,602 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-03-08 14:10:46,613 WARN org.apache.hadoop.hdfs.server.common.Util: Path /app/hadoop/tmp/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-03-08 14:10:46,613 WARN org.apache.hadoop.hdfs.server.common.Util: Path /app/hadoop/tmp/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-03-08 14:10:46,658 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2015-03-08 14:10:46,663 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-03-08 14:10:46,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-03-08 14:10:46,691 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true

2015-03-08 14:10:46,977 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files under construction = 0
2015-03-08 14:10:46,977 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Image file /app/hadoop/tmp/dfs/name/current/fsimage of size 971 bytes loaded in 0 seconds.
2015-03-08 14:10:46,978 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /app/hadoop/tmp/dfs/name/current/fsimage
2015-03-08 14:10:46,988 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading /app/hadoop/tmp/dfs/name/current/edits expecting start txid #1
2015-03-08 14:10:46,988 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file /app/hadoop/tmp/dfs/name/current/edits
2015-03-08 14:10:47,017 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file /app/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds
2015-03-08 14:10:47,019 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Starting upgrade of local storage directories.
   old LV = -41; old CTime = 0.
   new LV = -60; new CTime = 1425841847018
2015-03-08 14:10:47,019 INFO org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil: Starting upgrade of storage directory /app/hadoop/tmp/dfs/name
2015-03-08 14:10:47,240 INFO org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector: No version file in /app/hadoop/tmp/dfs/name
2015-03-08 14:10:47,244 INFO org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil: Performing upgrade of storage directory /app/hadoop/tmp/dfs/name
2015-03-08 14:10:47,249 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2015-03-08 14:10:47,249 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2015-03-08 14:10:47,296 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2015-03-08 14:10:47,296 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 508 msecs
2015-03-08 14:10:47,450 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to node01:54310
2015-03-08 14:10:47,455 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-03-08 14:10:47,469 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 54310
2015-03-08 14:10:47,545 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2015-03-08 14:10:47,545 WARN org.apache.hadoop.hdfs.server.common.Util: Path /app/hadoop/tmp/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
2015-03-08 14:10:47,560 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2015-03-08 14:10:47,560 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2015-03-08 14:10:47,560 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: initializing replication queues
2015-03-08 14:10:47,560 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 0 secs
2015-03-08 14:10:47,560 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2015-03-08 14:10:47,560 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2015-03-08 14:10:47,580 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Total number of blocks            = 1
2015-03-08 14:10:47,581 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: Number of invalid blocks          = 0

2015-03-08 14:10:47,581 INFO org.apache.hadoop.hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 20 msec
2015-03-08 14:10:47,612 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-03-08 14:10:47,613 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 54310: starting
2015-03-08 14:10:47,628 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: node01/192.168.171.132:54310
2015-03-08 14:10:47,628 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2015-03-08 14:10:47,636 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2015-03-08 14:10:47,636 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 90605862 milliseconds
2015-03-08 14:10:47,636 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2015-03-08 14:11:17,639 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30003 milliseconds
2015-03-08 14:11:17,639 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
2015-03-08 14:11:47,640 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
2015-03-08 14:11:47,640 INFO org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).

Cloudera Employee
Posts: 578
Registered: 01-20-2014

Re: Upgrade not working

Does the log actually end there? Does the NameNode quit? Have you taken a
jstack of the NameNode, say 5 times, 5 seconds apart? Is there any change
in the process's stack trace?
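
For example, something along these lines (assuming jps is on the PATH and lists the process as "NameNode"):

# Capture 5 stack dumps of the NameNode, 5 seconds apart
NN_PID=$(jps | awk '$2 == "NameNode" {print $1}')
for i in 1 2 3 4 5; do
  jstack "$NN_PID" > /tmp/namenode-jstack-$i.txt
  sleep 5
done

If the dumps look identical each time, that should show where the upgrade is stuck.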

Regards,
Gautam Gopalakrishnan