Member since: 09-01-2024
Posts: 11
Kudos Received: 8
Solutions: 0
10-03-2024
03:09 AM
1 Kudo
Dear Cloudera Support Team,

I am experiencing an issue when running the ImportTsv command in HBase. The command is:

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator='\t' -Dimporttsv.columns='HBASE_ROW_KEY,cf:gender,cf:age,cf:income,cf:exp' Customers /hbase/data.txt

The job fails with the following error:

2024-10-03T15:00:41,530 INFO [main] mapreduce.Job: Running job: job_1727859616916_0004
2024-10-03T15:00:50,594 INFO [main] mapreduce.Job: Job job_1727859616916_0004 running in uber mode : false
2024-10-03T15:00:50,597 INFO [main] mapreduce.Job: map 0% reduce 0%
2024-10-03T15:00:50,616 INFO [main] mapreduce.Job: 9616916_0004_02_000001 Exit code: 1
[2024-10-03 15:00:49.597]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

For more detailed output, I have also checked the application tracking page: http://dc1-apache-hbase.mobitel.lk:8088/cluster/app/application_1727859616916_0004

Here is the result for application_1727859616916_0004:

User: super
Name: importtsv_Customers
Application Type: MAPREDUCE
Application Tags: (none)
Application Priority: 0 (Higher Integer value indicates higher priority)
YarnApplicationState: FAILED
Queue: default
FinalStatus Reported by AM: FAILED
Started: Thu Oct 03 15:00:41 +0530 2024
Launched: Thu Oct 03 15:00:42 +0530 2024
Finished: Thu Oct 03 15:00:49 +0530 2024
Elapsed: 8sec
Tracking URL: History
Log Aggregation Status: DISABLED
Application Timeout (Remaining Time): Unlimited
Diagnostics: Application application_1727859616916_0004 failed 2 times due to AM Container for appattempt_1727859616916_0004_000002 exited with exitCode: 1. Failing this attempt.
[2024-10-03 15:00:49.594]Exception from container-launch.
Container id: container_1727859616916_0004_02_000001
Exit code: 1
[2024-10-03 15:00:49.597]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[2024-10-03 15:00:49.598]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
For more detailed output, check the application tracking page: http://dc1-apache-hbase.mobitel.lk:8088/cluster/app/application_1727859616916_0004 Then click on links to logs of each attempt. Failing the application.
Unmanaged Application: false
Application Node Label expression: <Not set>
AM container Node Label expression: <DEFAULT_PARTITION>

Could you please assist in resolving this issue? Thank you for your support.
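For reference, when the AM container exits with code 1 and stderr only shows SLF4J falling back to the NOP logger, a commonly suggested first check is that the MapReduce job is launched with the HBase jars on its classpath. A minimal sketch under that assumption (the install path is taken from the other posts and may differ); note also that tab is already ImportTsv's default separator and '\t' inside single quotes reaches the tool as two literal characters, so it may be safer to drop the separator flag or write it as $'\t':

# Sketch only -- /home/super/hbase is an assumed install path; adjust as needed.
export HADOOP_CLASSPATH="$(/home/super/hbase/bin/hbase mapredcp):/home/super/hbase/conf"
/home/super/hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:gender,cf:age,cf:income,cf:exp \
  Customers /hbase/data.txt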
Labels:
- Apache HBase
- HDFS
09-29-2024
09:55 PM
Hi rki_,

It seems that both the hbase:meta and hbase:namespace tables are not online. I am attaching the master log for your review; if you know a way to fix this, could you please take a look?

2024-09-30 10:11:28,981 WARN [master/dc1-apache-hbase:16000:becomeActiveMaster] master.HMaster (HMaster.java:isRegionOnline(1373)) - hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1727422999057, server=dc1-apache-hbase.mobitel.lk,16020,1727159057270}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined.
2024-09-30 10:12:28,982 WARN [master/dc1-apache-hbase:16000:becomeActiveMaster] master.HMaster (HMaster.java:isRegionOnline(1373)) - hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1727422999057, server=dc1-apache-hbase.mobitel.lk,16020,1727159057270}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined.
2024-09-30 10:13:19,391 ERROR [ActiveMasterInitializationMonitor-1727422999267] master.MasterInitializationMonitor (MasterInitializationMonitor.java:run(67)) - Master failed to complete initialization after 900000ms. Please consider submitting a bug report including a thread dump of this process.
2024-09-30 10:13:28,982 WARN [master/dc1-apache-hbase:16000:becomeActiveMaster] master.HMaster (HMaster.java:isRegionOnline(1373)) - hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1727422999057, server=dc1-apache-hbase.mobitel.lk,16020,1727159057270}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined.
2024-09-30 10:13:36,668 INFO [master:store-WAL-Roller] monitor.StreamSlowMonitor (StreamSlowMonitor.java:<init>(122)) - New stream slow monitor dc1-apache-hbase.mobitel.lk%2C16000%2C1727422992087.1727671416667
2024-09-30 10:13:36,684 INFO [master:store-WAL-Roller] wal.AbstractFSWAL (AbstractFSWAL.java:logRollAndSetupWalProps(834)) - Rolled WAL /hbase/MasterData/WALs/dc1-apache-hbase.mobitel.lk,16000,1727422992087/dc1-apache-hbase.mobitel.lk%2C16000%2C1727422992087.1727670516635 with entries=0, filesize=85 B; new WAL /hbase/MasterData/WALs/dc1-apache-hbase.mobitel.lk,16000,1727422992087/dc1-apache-hbase.mobitel.lk%2C16000%2C1727422992087.1727671416667
2024-09-30 10:13:37,089 INFO [WAL-Archive-0] wal.AbstractFSWAL (AbstractFSWAL.java:archiveLogFile(815)) - Archiving hdfs://192.168.6.205:9000/hbase/MasterData/WALs/dc1-apache-hbase.mobitel.lk,16000,1727422992087/dc1-apache-hbase.mobitel.lk%2C16000%2C1727422992087.1727670516635 to hdfs://192.168.6.205:9000/hbase/MasterData/oldWALs/dc1-apache-hbase.mobitel.lk%2C16000%2C1727422992087.1727670516635
2024-09-30 10:13:37,092 INFO [WAL-Archive-0] region.MasterRegionUtils (MasterRegionUtils.java:moveFilesUnderDir(50)) - Moved hdfs://192.168.6.205:9000/hbase/MasterData/oldWALs/dc1-apache-hbase.mobitel.lk%2C16000%2C1727422992087.1727670516635 to hdfs://192.168.6.205:9000/hbase/oldWALs/dc1-apache-hbase.mobitel.lk%2C16000%2C1727422992087.1727670516635$masterlocalwal$
2024-09-30 10:14:28,982 WARN [master/dc1-apache-hbase:16000:becomeActiveMaster] master.HMaster (HMaster.java:isRegionOnline(1373)) - hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1727422999057, server=dc1-apache-hbase.mobitel.lk,16020,1727159057270}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined.
2024-09-30 10:15:28,983 WARN [master/dc1-apache-hbase:16000:becomeActiveMaster] master.HMaster (HMaster.java:isRegionOnline(1373)) - hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1727422999057, server=dc1-apache-hbase.mobitel.lk,16020,1727159057270}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined.
2024-09-30 10:16:28,983 WARN [master/dc1-apache-hbase:16000:becomeActiveMaster] master.HMaster (HMaster.java:isRegionOnline(1373)) - hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1727422999057, server=dc1-apache-hbase.mobitel.lk,16020,1727159057270}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined.
2024-09-30 10:16:41,861 INFO [RS-EventLoopGroup-1-1] hbase.Server (ServerRpcConnection.java:processConnectionHeader(550)) - Connection from 192.168.6.205:57364, version=2.5.10, sasl=false, ugi=super (auth:SIMPLE), service=MasterService
2024-09-30 10:17:28,984 WARN [master/dc1-apache-hbase:16000:becomeActiveMaster] master.HMaster (HMaster.java:isRegionOnline(1373)) - hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1727422999057, server=dc1-apache-hbase.mobitel.lk,16020,1727159057270}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined.
2024-09-30 10:18:28,985 WARN [master/dc1-apache-hbase:16000:becomeActiveMaster] master.HMaster (HMaster.java:isRegionOnline(1373)) - hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1727422999057, server=dc1-apache-hbase.mobitel.lk,16020,1727159057270}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined.
2024-09-30 10:19:28,985 WARN [master/dc1-apache-hbase:16000:becomeActiveMaster] master.HMaster (HMaster.java:isRegionOnline(1373)) - hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1727422999057, server=dc1-apache-hbase.mobitel.lk,16020,1727159057270}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined.
2024-09-30 10:20:28,985 WARN [master/dc1-apache-hbase:16000:becomeActiveMaster] master.HMaster (HMaster.java:isRegionOnline(1373)) - hbase:meta,,1.1588230740 is NOT online; state={1588230740 state=OPEN, ts=1727422999057, server=dc1-apache-hbase.mobitel.lk,16020,1727159057270}; ServerCrashProcedures=true. Master startup cannot progress, in holding-pattern until region onlined.

Thank you!
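For context, when hbase:meta stays pinned to a server start-code that is no longer alive and ServerCrashProcedures block master startup, the recovery path usually discussed for HBase 2.x is the HBCK2 operator tool. A minimal sketch, not a verified fix for this cluster; the jar path and version are assumptions, and 1588230740 is simply hbase:meta's fixed encoded region name:

# Sketch only -- the hbase-hbck2 jar location/version is an assumption; it is downloaded or built separately.
hbase hbck -j /path/to/hbase-hbck2-1.3.0.jar assigns 1588230740
# If a stuck ServerCrashProcedure keeps reclaiming the region, its pid (shown on the
# Master UI "Procedures & Locks" page) can be bypassed first:
# hbase hbck -j /path/to/hbase-hbck2-1.3.0.jar bypass -o <pid>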
09-25-2024
07:41 PM
1 Kudo
Hello,

I am currently testing HBase for bulk loading, and I am running it in pseudo-distributed mode on a single host. After completing all the configurations, I ran the jps command, which showed all the processes needed for testing:

[super@dc1-apache-hbase bin]$ jps
25808 ResourceManager
25585 SecondaryNameNode
36961 HMaster
25203 NameNode
37347 Jps
25925 NodeManager
36805 HQuorumPeer
25334 DataNode
37135 HRegionServer
[super@dc1-apache-hbase bin]$

I did not start ZooKeeper manually; instead I set export HBASE_MANAGES_ZK=true. However, once I enter the HBase shell, I can list tables but cannot create any new ones. The error is:

hbase:001:0> list
TABLE
0 row(s)
Took 0.7732 seconds
=> []
hbase:002:0>
hbase:003:0> create 'table_1','cf'

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:3215)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2330)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:694)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)

I tried troubleshooting based on suggestions found online, such as wiping the ZooKeeper data directory and restarting HBase, but this did not resolve the problem.

Here are the details of my setup:

Hadoop Version: 3.3.6
HBase Version: 2.5.10
Java Version: OpenJDK 1.8.0_422

hbase-site.xml:

<property> <name>hbase.cluster.distributed</name> <value>true</value> </property>
<property> <name>hbase.rootdir</name> <value>hdfs://localhost:9000/hbase</value> </property>
<property> <name>hbase.wal.provider</name> <value>filesystem</value> </property>
<property> <name>hbase.zookeeper.property.dataDir</name> <value>/home/super/hbase/zookeeper</value> </property>
<property> <name>hbase.zookeeper.quorum</name> <value>localhost</value> </property>
<property> <name>hbase.zookeeper.property.clientPort</name> <value>2181</value> </property>

hbase-env.sh:

export HBASE_OPTS="$HBASE_OPTS -Dlog4j.configuration=file:/home/super/hbase/conf/log4j2.properties"
export HBASE_LOG_DIR=/home/super/hbase/logs
export HBASE_LOG_PREFIX=hbase
export HBASE_ROOT_LOGGER="INFO,DRFA"
export HBASE_SECURITY_LOGGER="INFO,DRFA"
export JAVA_HOME=/usr/local/java-1.8.0-openjdk-1.8.0.422.b05-2.el9.x86_64
export HBASE_MANAGES_ZK=true

Could anyone please assist me in resolving this issue? Additionally, if anyone has experience running HBase on HDFS, could you please help me with the setup? Thank you!
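For context, PleaseHoldException: Master is initializing generally means the master is still waiting for hbase:meta (and then hbase:namespace) to come online, so the master log and the ZooKeeper znodes are the first places to look. A small sketch of those checks; the log file name below is an assumption based on the default hbase-<user>-master-<host>.log pattern and the HBASE_LOG_DIR set above:

# Sketch: tail the master log to see what initialization is waiting on (file name assumed).
tail -n 200 /home/super/hbase/logs/hbase-super-master-dc1-apache-hbase.log
# Inspect the znodes HBase registered under its default parent znode /hbase.
/home/super/hbase/bin/hbase zkcli ls /hbase
/home/super/hbase/bin/hbase zkcli get /hbase/meta-region-server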
Labels:
- Apache HBase
- HDFS
09-23-2024
10:15 AM
Dear Cloudera Community,

I am currently testing HBase for bulk loading, and for this we are using pseudo-distributed mode on a single host. After completing all the configurations, I executed the jps command, which showed all the processes needed for testing:

[super@dc1-apache-hbase bin]$ jps
1119857 Jps
1118448 NodeManager
1119235 HQuorumPeer
1117730 NameNode
1117863 DataNode
1119627 HRegionServer
1118125 SecondaryNameNode
1119405 HMaster
1118332 ResourceManager

I did not start ZooKeeper manually; instead I set export HBASE_MANAGES_ZK=true. However, once I enter the HBase shell, the HMaster process disappears and I get the following error message:

ERROR: KeeperErrorCode = NoNode for /hbase/master

[super@dc1-apache-hbase conf]$ jps
1122225 Jps
1118448 NodeManager
1121283 HQuorumPeer
1122019 JarBootstrapMain
1117730 NameNode
1117863 DataNode
1121675 HRegionServer
1118125 SecondaryNameNode
1118332 ResourceManager

I have attached the configuration files and version details I used for this setup. Could you please assist me in resolving this issue?

Hadoop Version: 3.3.6
HBase Version: 2.5.10
Java Version: openjdk version "1.8.0_422"
OpenJDK Runtime Environment (build 1.8.0_422-b05)
OpenJDK 64-Bit Server VM (build 25.422-b05, mixed mode)

hbase-site.xml:

<property> <name>hbase.cluster.distributed</name> <value>true</value> </property>
<property> <name>hbase.zookeeper.property.dataDir</name> <value>/home/super/zookeeper</value> </property>
<property> <name>hbase.zookeeper.quorum</name> <value>localhost</value> </property>
<property> <name>hbase.rootdir</name> <value>hdfs://localhost:8030/hbase</value> </property>
<property> <name>hbase.zookeeper.property.clientPort</name> <value>2181</value> </property>

hbase-env.sh:

# The java implementation to use. Java 1.8+ required.
# export JAVA_HOME=/usr/java/jdk1.8.0/
export JAVA_HOME=/usr/local/java-1.8.0-openjdk-1.8.0.422.b05-2.el9.x86_64
# Extra Java CLASSPATH elements. Optional.
# export HBASE_CLASSPATH=
export HBASE_CLASSPATH=$HADOOP_CONF_DIR:$HBASE_CLASSPATH

Thank you for your support.
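One thing that stands out in this configuration: hbase.rootdir has to point at the same NameNode URI as fs.defaultFS in core-site.xml, and port 8030 is normally the YARN ResourceManager scheduler port rather than the HDFS NameNode RPC port. A sketch of the property, assuming the NameNode listens on 9000 as in the later 09-25-2024 post; the actual value should be copied from core-site.xml:

<!-- Sketch: make hbase.rootdir match fs.defaultFS from core-site.xml (port 9000 is an assumption). -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
</property>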
Labels:
- Apache HBase
09-11-2024
08:02 PM
1 Kudo
Hi @shubham_sharma,

I ran a simple job as you requested, and it seems to have run without any issues. I am attaching the output for your reference.

Output:

[super@dc1-apache-hbase mapreduce]$ /home/super/hadoop/bin/hadoop jar /home/super/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.4.0.jar pi 10 100
Number of Maps  = 10
Samples per Map = 100
2024-09-12 08:25:59,664 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
2024-09-12 08:26:01,082 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at /0.0.0.0:8032
2024-09-12 08:26:01,580 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/super/.staging/job_1725184331906_0019
2024-09-12 08:26:01,730 INFO input.FileInputFormat: Total input files to process : 10
2024-09-12 08:26:01,775 INFO mapreduce.JobSubmitter: number of splits:10
2024-09-12 08:26:01,931 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1725184331906_0019
2024-09-12 08:26:01,931 INFO mapreduce.JobSubmitter: Executing with tokens: []
2024-09-12 08:26:02,186 INFO conf.Configuration: resource-types.xml not found
2024-09-12 08:26:02,187 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2024-09-12 08:26:02,284 INFO impl.YarnClientImpl: Submitted application application_1725184331906_0019
2024-09-12 08:26:02,317 INFO mapreduce.Job: The url to track the job: http://dc1-apache-hbase.mobitel.lk:8088/proxy/application_1725184331906_0019/
2024-09-12 08:26:02,318 INFO mapreduce.Job: Running job: job_1725184331906_0019
2024-09-12 08:26:10,443 INFO mapreduce.Job: Job job_1725184331906_0019 running in uber mode : false
2024-09-12 08:26:10,445 INFO mapreduce.Job: map 0% reduce 0%
2024-09-12 08:26:20,637 INFO mapreduce.Job: map 60% reduce 0%
2024-09-12 08:26:27,696 INFO mapreduce.Job: map 100% reduce 0%
2024-09-12 08:26:29,713 INFO mapreduce.Job: map 100% reduce 100%
2024-09-12 08:26:31,745 INFO mapreduce.Job: Job job_1725184331906_0019 completed successfully
2024-09-12 08:26:31,912 INFO mapreduce.Job: Counters: 54
  File System Counters
    FILE: Number of bytes read=67
    FILE: Number of bytes written=3407061
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=2650
    HDFS: Number of bytes written=215
    HDFS: Number of read operations=45
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=3
    HDFS: Number of bytes read erasure-coded=0
  Job Counters
    Launched map tasks=10
    Launched reduce tasks=1
    Data-local map tasks=10
    Total time spent by all maps in occupied slots (ms)=68883
    Total time spent by all reduces in occupied slots (ms)=6614
    Total time spent by all map tasks (ms)=68883
    Total time spent by all reduce tasks (ms)=6614
    Total vcore-milliseconds taken by all map tasks=68883
    Total vcore-milliseconds taken by all reduce tasks=6614
    Total megabyte-milliseconds taken by all map tasks=70536192
    Total megabyte-milliseconds taken by all reduce tasks=6772736
  Map-Reduce Framework
    Map input records=10
    Map output records=20
    Map output bytes=180
    Map output materialized bytes=250
    Input split bytes=1470
    Combine input records=0
    Combine output records=0
    Reduce input groups=2
    Reduce shuffle bytes=250
    Reduce input records=20
    Reduce output records=0
    Spilled Records=40
    Shuffled Maps =10
    Failed Shuffles=0
    Merged Map outputs=10
    GC time elapsed (ms)=1565
    CPU time spent (ms)=6180
    Physical memory (bytes) snapshot=3867213824
    Virtual memory (bytes) snapshot=28280057856
    Total committed heap usage (bytes)=3555196928
    Peak Map Physical memory (bytes)=369606656
    Peak Map Virtual memory (bytes)=2578915328
    Peak Reduce Physical memory (bytes)=284368896
    Peak Reduce Virtual memory (bytes)=2575523840
  Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
  File Input Format Counters
    Bytes Read=1180
  File Output Format Counters
    Bytes Written=97
Job Finished in 31.068 seconds
Estimated value of Pi is 3.14800000000000000000
09-11-2024
03:37 AM
1 Kudo
Hello Shubham,

I am encountering errors while running the following command:

/home/super/hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,Name,Age,Gender my_1 /hbase/test2.txt

The error message is:

2024-09-11 15:53:44,808 INFO [main] impl.YarnClientImpl: Submitted application application_1725184331906_0018
2024-09-11 15:53:44,847 INFO [main] mapreduce.Job: The url to track the job: http://dc1-apache-hbase.mobitel.lk:8088/proxy/application_1725184331906_0018/
2024-09-11 15:53:44,848 INFO [main] mapreduce.Job: Running job: job_1725184331906_0018
2024-09-11 15:53:52,941 INFO [main] mapreduce.Job: Job job_1725184331906_0018 running in uber mode : false
2024-09-11 15:53:52,943 INFO [main] mapreduce.Job: map 0% reduce 0%
2024-09-11 15:53:52,952 INFO [main] mapreduce.Job: 2]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[2024-09-11 15:53:51.942]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
For more detailed output, check the application tracking page: http://___________________________________________/cluster/app/application_1725184331906_0018 Then click on links to logs of each attempt. Failing the application.
2024-09-11 15:53:52,967 INFO [main] mapreduce.Job: Counters: 0
[super@dc1-apache-hbase mapreduce-job]$

What are the possible solutions, and how can I fix this?
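For reference, apart from the container-side log4j issue, ImportTsv expects every column other than HBASE_ROW_KEY to be written as family:qualifier. A sketch of the corrected column mapping, assuming the table my_1 has a column family named cf (adjust to the real schema):

# Sketch only -- 'cf' is an assumed column family name for table my_1.
/home/super/hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:Name,cf:Age,cf:Gender \
  my_1 /hbase/test2.txt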
09-08-2024
10:20 PM
Dear Cloudera Community,

We are currently conducting warehouse testing using Apache HBase and need to load large files into HBase tables. Could you kindly suggest any tools or methods specifically designed for bulk loading large datasets into HBase?

Thank You!
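For context, the tooling that ships with HBase for this is ImportTsv run in bulk-output mode (it writes HFiles instead of going through the RegionServers) followed by completebulkload. A minimal sketch; the table name, column mapping, and HDFS paths below are placeholders:

# Sketch: step 1 -- generate HFiles with ImportTsv (table/columns/paths are placeholders).
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.columns=HBASE_ROW_KEY,cf:col1,cf:col2 \
  -Dimporttsv.bulk.output=hdfs:///tmp/hfiles \
  my_table hdfs:///data/input.tsv
# Step 2 -- hand the generated HFiles to the table's regions.
hbase completebulkload hdfs:///tmp/hfiles my_table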
Labels:
- Apache HBase
09-03-2024
02:10 AM
1 Kudo
@rki_ Here I am attaching the ResourceManager log:

For more detailed output, check the application tracking page: http://dc1-apache/cluster/app/application_1725184331906_0014 Then click on links to logs of each attempt. Failing the application.
2024-09-03 14:36:38,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1725184331906_0014_000002 is done. finalState=FAILED
2024-09-03 14:36:38,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1725184331906_0014 requests cleared
2024-09-03 14:36:38,511 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractLeafQueue: Application removed - appId: application_1725184331906_0014 user: super queue: root.default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2024-09-03 14:36:38,512 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1725184331906_0014 State change from FINAL_SAVING to FAILED on event = APP_UPDATE_SAVED
2024-09-03 14:36:38,515 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.AbstractParentQueue: Application removed - appId: application_1725184331906_0014 user: super leaf-queue of parent: root #applications: 0
2024-09-03 14:36:38,515 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1725184331906_0014,name=importtsv_my_1,user=super,queue=root.default,state=FAILED,trackingUrl=http://dc1/cluster/app/application_1725184331906_0014,appMasterHost=dc1-apache-hbase.mobitel.lk,submitTime=1725354391682,startTime=1725354391696,launchTime=1725354392197,finishTime=1725354398511,finalStatus=FAILED,memorySeconds=10919,vcoreSeconds=5,preemptedMemorySeconds=0,preemptedVcoreSeconds=0,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=MAPREDUCE,resourceSeconds=10919 MB-seconds\, 5 vcore-seconds,preemptedResourceSeconds=0 MB-seconds\, 0 vcore-seconds,applicationTags=,applicationNodeLabel=,diagnostics=Application application_1725184331906_0014 failed 2 times due to AM Container for appattempt_1725184331906_0014_000002 exited with exitCode: 1\nFailing this attempt.Diagnostics: [2024-09-03 14:36:38.499]Exception from container-launch.\nContainer id: container_1725184331906_0014_02_000001\nExit code: 1\n\n[2024-09-03 14:36:38.503]Container exited with a non-zero exit code 1. Error file: prelaunch.err.\nLast 4096 bytes of prelaunch.err :\nLast 4096 bytes of stderr :\nlog4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).\nlog4j:WARN Please initialize the log4j system properly.\nlog4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.\n\n\n[2024-09-03 14:36:38.504]Container exited with a non-zero exit code 1. Error file: prelaunch.err.\nLast 4096 bytes of prelaunch.err :\nLast 4096 bytes of stderr :\nlog4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).\nlog4j:WARN Please initialize the log4j system properly.\nlog4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.\n\n\nFor more detailed output\, check the application tracking page: http://dc1/cluster/app/application_1725184331906_0014 Then click on links to logs of each attempt.\n. Failing the application.,totalAllocatedContainers=2
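For reference, the ResourceManager summary only repeats the last stderr lines of the container; the full ApplicationMaster log, where the underlying exception usually appears, can be pulled with yarn logs when log aggregation is enabled, or read from the NodeManager's local log directory otherwise. A small sketch:

# Sketch: fetch the container logs for the failed application.
yarn logs -applicationId application_1725184331906_0014
# If log aggregation is disabled, look under the NodeManager's local log dir instead, e.g.
# <yarn.nodemanager.log-dirs>/application_1725184331906_0014/container_1725184331906_0014_02_000001/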
09-02-2024
04:07 AM
1 Kudo
The RegionServer host is localhost, and when I run jps -l | grep QuorumPeerMain there is no output. (The HBase-managed ZooKeeper shows up as HQuorumPeer in jps, which may be why the grep for QuorumPeerMain returns nothing.) Connecting to the RegionServer host over ssh and running the importtsv command still results in the same error as before.

jps:
359952 SecondaryNameNode
360176 ResourceManager
361026 DataNode
400017 HQuorumPeer
400353 HRegionServer
400159 HMaster
406172 Jps
401800 ThriftServer
359578 NameNode
360298 NodeManager
09-02-2024
12:11 AM
1 Kudo
@rki_ Thank you for the response. Could you please suggest a solution that doesn't involve using the RegionServer host? Here are the details for port 2181:

[@dc1-apache-hbase ~]$ ss -tuln | grep 2181
tcp LISTEN 0 50 *:2181 *:*

[@dc1-apache-hbase ~]$ lsof -i :2181
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 388228 _____ 558u IPv6 10088613 0t0 TCP *:eforward (LISTEN)
java 388228 _____ 562u IPv6 10092015 0t0 TCP localhost:eforward->localhost:56476 (ESTABLISHED)
java 388228 _____ 565u IPv6 10092023 0t0 TCP localhost:eforward->localhost:56492 (ESTABLISHED)
java 388372 _____ 575u IPv6 10092011 0t0 TCP localhost:56476->localhost:eforward (ESTABLISHED)
java 388566 _____ 575u IPv6 10088671 0t0 TCP localhost:56492->localhost:eforward (ESTABLISHED)
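For reference, beyond confirming that port 2181 is listening, the master znode itself can be queried through the ZooKeeper client bundled with HBase to check whether an active master has registered. A small sketch:

# Sketch: inspect the HBase znodes via the bundled ZooKeeper CLI.
hbase zkcli ls /hbase
hbase zkcli stat /hbase/master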