Member since: 04-05-2016
Posts: 188
Kudos Received: 19
Solutions: 11
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 993 | 10-30-2017 07:05 AM
 | 1280 | 10-12-2017 07:03 AM
 | 5434 | 10-12-2017 06:59 AM
 | 7642 | 03-01-2017 09:56 AM
 | 22085 | 01-26-2017 11:52 AM
07-20-2016
09:10 AM
@Rajkumar Singh Find below the file system check output:
Status: HEALTHY
Total size: 102660394304099 B (Total open files size: 1000539276 B)
Total dirs: 278839
Total files: 8403467
Total symlinks: 0 (Files currently being written: 1044)
Total blocks (validated): 8364313 (avg. block size 12273619 B) (Total open file blocks (not validated): 658)
Minimally replicated blocks: 8364313 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 40135 (0.47983617 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.9917786
Corrupt blocks: 0
Missing replicas: 54630 (0.21783337 %)
Number of data-nodes: 8
Number of racks: 1
FSCK ended at Wed Jul 20 10:51:25 SAST 2016 in 152457 milliseconds
The filesystem under path '/' is HEALTHY
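To see which files the under-replicated count above refers to, fsck itself can list them. A minimal sketch, assuming a standard HDFS 2.x client (the grep string matches fsck's usual per-file output and may need adjusting):
# List files that currently have under-replicated blocks (output can be large)
hdfs fsck / -files -blocks | grep -i "Under replicated"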
07-20-2016
09:00 AM
I was running an insert query in Hive when I encountered the error below:
ERROR : Status: Failed
ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1468411845662_1749_1_00, diagnostics=[Task failed, taskId=task_1468411845662_1749_1_00_000024, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.io.IOException: Cannot obtain block length for LocatedBlock{BP-1426797840-<ip_address>-1461158403571:blk_1090740708_17008023;
Any clue on how to get past this? Note that when I run the hdfs fsck command, it returns a healthy status. Please find the full Hive error log and fsck status report attached.
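A common cause of "Cannot obtain block length for LocatedBlock" is a file that was never properly closed (for example, an interrupted client write), so the NameNode still considers it open for write; a plain hdfs fsck / will not flag this. One approach I am considering, assuming Hadoop 2.7+ where hdfs debug recoverLease is available (both paths below are placeholders, not my real paths):
# List files still open for write under the table's directory (placeholder path)
hdfs fsck /apps/hive/warehouse/mydb.db/mytable -files -openforwrite
# Force-close a stuck file by recovering its lease (placeholder file path)
hdfs debug recoverLease -path /apps/hive/warehouse/mydb.db/mytable/stuck_file -retries 3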
Labels:
- Apache Hadoop
- Apache Hive
07-08-2016
04:37 PM
Here is another one below, Josh.
Status: HEALTHY
Total size: 84184775260004 B (Total open files size: 36288883 B)
Total dirs: 255954
Total files: 7102482
Total symlinks: 0 (Files currently being written: 456)
Total blocks (validated): 7143238 (avg. block size 11785240 B) (Total open file blocks (not validated): 79)
Minimally replicated blocks: 7143238 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 130 (0.0018199029 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.9979758
Corrupt blocks: 0
Missing replicas: 257 (0.0012000647 %)
Number of data-nodes: 8
Number of racks: 1
FSCK ended at Fri Jul 08 18:29:19 SAST 2016 in 239594 milliseconds
The filesystem under path '/' is HEALTHY
07-08-2016
04:00 PM
@srai Please find below the report I got from running hdfs fsck /:
Status: HEALTHY
Total size: 84086783290897 B (Total open files size: 35725218 B)
Total dirs: 255918
Total files: 7090531
Total symlinks: 0 (Files currently being written: 464)
Total blocks (validated): 7131287 (avg. block size 11791249 B) (Total open file blocks (not validated): 86)
Corrupt blocks: 0
Number of data-nodes: 8
Number of racks: 1
FSCK ended at Fri Jul 08 16:43:47 SAST 2016 in 141368 milliseconds
The filesystem under path '/' is HEALTHY
07-08-2016
03:44 PM
I get the error below:
"Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 18, <datanode>): java.io.IOException: Cannot obtain block length for LocatedBlock{BP-1426797840-1461158403571:blk_1089439824_15699635; getBlockSize()=0; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[,DISK], DatanodeInfoWithStorage[,DISK], DatanodeInfoWithStorage[,DISK]]}"
The background to this is: we changed our Oozie DB to MySQL, and on restarting the cluster the NameNode failed with a ConnectionRefused error. It was started manually from the CLI, then restarted with Ambari, and it worked fine. I have used hdfs fsck to check for corrupt files, but I get a 'healthy' status report. Any clue as to how I can get past this issue? @Kuldeep Kulkarni @Artem Ervits @Sagar Shimpi @Benjamin Leonhardi
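One approach I am considering is mapping the block ID from the stack trace back to its owning file, then checking whether that file is still open for write (which fsck's default summary does not report). A sketch, assuming Hadoop 2.7+ where fsck supports -blockId; the second path is a placeholder for whatever file the first command reports:
# Find which file the problem block belongs to
hdfs fsck -blockId blk_1089439824
# Then check whether that file is still open for write
hdfs fsck /path/from/previous/output -openforwrite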
Labels:
- Apache Hadoop
- Apache HBase
07-06-2016
10:23 AM
It's working fine now. Thank you @Sagar Shimpi
07-06-2016
09:16 AM
The Oozie server starts, runs for less than a minute, and then stops. I'm still looking for a fix, as this is a critical issue. 2016-07-06 08:03:37,483 FATAL Services:514 - SERVER['server'] Runtime Exception during Services Load. Check your list of 'oozie.services' or 'oozie.services.ext'
2016-07-06 08:03:37,487 FATAL Services:514 - SERVER['server'] E0103: Could not load service classes, Cannot load JDBC driver class 'com.mysql.jdbc.Driver'
org.apache.oozie.service.ServiceException: E0103: Could not load service classes, Cannot load JDBC driver class 'com.mysql.jdbc.Driver'
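From what I have read, E0103 with "Cannot load JDBC driver class" usually means the MySQL connector jar is not on the Oozie server's classpath. The usual fix I have seen suggested, assuming a standard HDP 2.x layout (both paths below are assumptions for my setup), is to place the connector in Oozie's libext and rebuild the WAR:
# Copy the connector into Oozie's libext directory (source path may differ)
cp /usr/share/java/mysql-connector-java.jar /usr/hdp/current/oozie-server/libext/
# Rebuild the Oozie WAR so the jar is picked up, then restart Oozie from Ambari
/usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war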
07-06-2016
04:54 AM
Thank you @Benjamin Leonhardi
07-05-2016
02:03 PM
I just reinstalled Oozie on HDP 2.4, and I am not clear on why the process needed to restart YARN and HDFS. Any clue as to why Oozie restarted the whole cluster?
Labels:
- Apache Hadoop
- Apache Oozie
- Apache YARN
07-05-2016
10:25 AM
Thank you @Sagar Shimpi. The mysql-connector-java.jar exists already; I have been using Hive with MySQL on the same node. Is it possible for mysql-connector-java.jar to have issues while the DB connection test is OK in the Ambari Web UI?
-rw-r--r--. 1 oozie hadoop 819803 Jul 5 10:14 mysql-connector-java.jar
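One sanity check, assuming the jar sits in Oozie's libext directory (the path below is an assumption for my layout), is to confirm the archive is valid and actually contains the driver class, since Ambari's connection test may use its own copy of the driver:
# Verify the archive is readable and contains the JDBC driver class
unzip -l /usr/hdp/current/oozie-server/libext/mysql-connector-java.jar | grep 'com/mysql/jdbc/Driver'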