Member since
06-13-2018
6
Posts
2
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2215 | 02-26-2019 11:34 AM |
02-19-2020
10:49 PM
With newer versions of Spark, sqlContext is not loaded by default; you have to create it explicitly:

scala> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
warning: there was one deprecation warning; re-run with -deprecation for details
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@6179af64

scala> import sqlContext.implicits._
import sqlContext.implicits._

scala> sqlContext.sql("describe mytable")
res2: org.apache.spark.sql.DataFrame = [col_name: string, data_type: string ... 1 more field]

I'm working with Spark 2.3.2.
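For anyone on Spark 2.x: the SQLContext constructor above is what triggers the deprecation warning, since spark-shell already exposes a SparkSession as spark. A rough sketch of the SparkSession equivalent (the builder/appName part is only needed outside the shell, and "mytable" is just the example table above):

import org.apache.spark.sql.SparkSession

// In spark-shell this session already exists as `spark`; building it explicitly
// is only needed in a standalone application.
val spark = SparkSession.builder()
  .appName("describe-table-example") // hypothetical app name
  .enableHiveSupport()               // assumes a Hive-enabled build, as on HDP
  .getOrCreate()

import spark.implicits._

// Same query as above, issued through the SparkSession instead of the SQLContext.
spark.sql("describe mytable").show(truncate = false)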
12-16-2019
04:43 AM
Dear all,

I'm having an issue on a kerberized HDP 3.1 (with Ranger on an Active Directory): the Timeline Service V2 has never worked, whether embedded or not. We are currently trying to configure an external HBase for Timeline Service 2.0 as described in the HDP manual (https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/data-operating-system/content/configure_hbase_for_timeline_service_2.0.html), but the following step fails: create the required HBase tables using the following command:

export HBASE_CLASSPATH_PREFIX=/usr/hdp/current/hadoop-yarn-client/timelineservice/*; /usr/hdp/current/hbase-client/bin/hbase org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator -Dhbase.client.retries.number=35 -create -s

Before launching the command, we ran kinit with the hbase keytab (we also tried the yarn-ats.hbase-client one), but we got the following errors:

[root@MON_SERVEUR_3 hbase]# kinit -kt /etc/security/keytabs/yarn-ats.hbase-client.headless.keytab yarn-ats-datalake_prod
[root@MON_SERVEUR_3 hbase]# /usr/hdp/current/hbase-client/bin/hbase org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator -create -s
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/phoenix/phoenix-5.0.0.3.1.0.0-78-server.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2019-12-16 16:10:50,372 INFO [main] storage.TimelineSchemaCreator: Starting the schema creation
2019-12-16 16:10:50,686 INFO [main] common.HBaseTimelineStorageUtils: Using hbase configuration at file:///usr/hdp/3.1.0.0-78/hadoop/conf/embedded-yarn-ats-hbase/hbase-site.xml
2019-12-16 16:10:50,811 INFO [main] storage.TimelineSchemaCreator: Will skip existing tables and continue on htable creation exceptions!
2019-12-16 16:10:51,127 INFO [main] zookeeper.ReadOnlyZKClient: Connect 0x13b6aecc to MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181 with session timeout=90000ms, retries 6, retry interval 1000ms, keepAlive=60000ms 2019-12-16 16:10:51,143 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-78--1, built on 12/06/2018 12:30 GMT 2019-12-16 16:10:51,143 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:host.name=MON_SERVEUR_3.MY_DOMAIN 2019-12-16 16:10:51,143 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:java.version=1.8.0_191 2019-12-16 16:10:51,143 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation 2019-12-16 16:10:51,143 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre 2019-12-16 16:10:51,144 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:java.class.path=... 2019-12-16 16:10:51,152 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/3.1.0.0-78/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.1.0.0-78/hadoop/lib/native/Linux-amd64-64:/usr/hdp/3.1.0.0-78/hadoop/lib/native 2019-12-16 16:10:51,153 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp 2019-12-16 16:10:51,153 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:java.compiler=<NA> 2019-12-16 16:10:51,153 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:os.name=Linux 2019-12-16 16:10:51,153 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:os.arch=amd64 2019-12-16 16:10:51,153 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:os.version=3.10.0-862.el7.x86_64 2019-12-16 16:10:51,153 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:user.name=root 2019-12-16 16:10:51,153 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:user.home=/root 2019-12-16 16:10:51,153 INFO 
[ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Client environment:user.dir=/mnt/hadoop/log/hbase 2019-12-16 16:10:51,156 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc] zookeeper.ZooKeeper: Initiating client connection, connectString=MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$13/688367151@5dddec88 2019-12-16 16:10:51,185 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc-SendThread(MON_SERVEUR_01.MY_DOMAIN:2181)] zookeeper.Login: successfully logged in. 2019-12-16 16:10:51,186 INFO [Thread-4] zookeeper.Login: TGT refresh thread started. 2019-12-16 16:10:51,190 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc-SendThread(MON_SERVEUR_01.MY_DOMAIN:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism. 2019-12-16 16:10:51,202 INFO [Thread-4] zookeeper.Login: TGT valid starting at: Mon Dec 16 16:10:35 RET 2019 2019-12-16 16:10:51,202 INFO [Thread-4] zookeeper.Login: TGT expires: Tue Dec 17 02:10:35 RET 2019 2019-12-16 16:10:51,202 INFO [Thread-4] zookeeper.Login: TGT refresh sleeping until: Tue Dec 17 00:16:20 RET 2019 2019-12-16 16:10:51,222 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc-SendThread(MON_SERVEUR_01.MY_DOMAIN:2181)] zookeeper.ClientCnxn: Opening socket connection to server MON_SERVEUR_01.MY_DOMAIN/192.168.82.78:2181. Will attempt to SASL-authenticate using Login Context section 'Client' 2019-12-16 16:10:51,228 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc-SendThread(MON_SERVEUR_01.MY_DOMAIN:2181)] zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.82.83:58714, server: MON_SERVEUR_01.MY_DOMAIN/192.168.82.78:2181 2019-12-16 16:10:51,241 INFO [ReadOnlyZKClient-MON_SERVEUR_02.MY_DOMAIN:2181,MON_SERVEUR_01.MY_DOMAIN:2181,MON_SERVEUR_3.MY_DOMAIN:2181@0x13b6aecc-SendThread(MON_SERVEUR_01.MY_DOMAIN:2181)] zookeeper.ClientCnxn: Session establishment complete on server MON_SERVEUR_01.MY_DOMAIN/192.168.82.78:2181, sessionid = 0x16f0d3e151100d1, negotiated timeout = 60000 2019-12-16 16:10:56,131 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=6, retries=8, started=4611 ms ago, cancelled=false, msg=Call to MY_DATANODE_07.MY_DOMAIN/192.168.82.71:16020 failed on local exception: java.io.IOException: Can not send request because relogin is in progress., details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=MY_DATANODE_07.MY_DOMAIN,16020,1576496376743, seqNum=-1 2019-12-16 16:10:57,770 WARN [Relogin-pool4-t1] security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 60 seconds before. 
Last Login=1576498252658 2019-12-16 16:11:00,172 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=7, retries=8, started=8653 ms ago, cancelled=false, msg=Call to MY_DATANODE_07.MY_DOMAIN/192.168.82.71:16020 failed on local exception: java.io.IOException: org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=MY_DATANODE_07.MY_DOMAIN,16020,1576496376743, seqNum=-1 2019-12-16 16:11:02,657 WARN [Relogin-pool4-t1] security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 60 seconds before. Last Login=1576498252658 2019-12-16 16:11:04,415 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=6, retries=8, started=4141 ms ago, cancelled=false, msg=Call to MY_DATANODE_07.MY_DOMAIN/192.168.82.71:16020 failed on local exception: java.io.IOException: org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=MY_DATANODE_07.MY_DOMAIN,16020,1576496376743, seqNum=-1 2019-12-16 16:11:08,018 WARN [Relogin-pool4-t1] security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 60 seconds before. Last Login=1576498252658 2019-12-16 16:11:08,446 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=7, retries=8, started=8172 ms ago, cancelled=false, msg=Call to MY_DATANODE_07.MY_DOMAIN/192.168.82.71:16020 failed on local exception: java.io.IOException: org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=MY_DATANODE_07.MY_DOMAIN,16020,1576496376743, seqNum=-1 2019-12-16 16:11:11,950 WARN [Relogin-pool4-t1] security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 60 seconds before. Last Login=1576498252658 2019-12-16 16:11:12,790 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=6, retries=8, started=4142 ms ago, cancelled=false, msg=Call to MY_DATANODE_07.MY_DOMAIN/192.168.82.71:16020 failed on local exception: java.io.IOException: org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=MY_DATANODE_07.MY_DOMAIN,16020,1576496376743, seqNum=-1 2019-12-16 16:11:16,744 WARN [Relogin-pool4-t1] security.UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 60 seconds before. 
Last Login=1576498252658
2019-12-16 16:11:16,805 INFO [main] client.RpcRetryingCallerImpl: Call exception, tries=7, retries=8, started=8157 ms ago, cancelled=false, msg=Call to MY_DATANODE_07.MY_DOMAIN/192.168.82.71:16020 failed on local exception: java.io.IOException: org.apache.hbase.thirdparty.io.netty.handler.codec.DecoderException: org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed, details=row 'prod.timelineservice.entity' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=MY_DATANODE_07.MY_DOMAIN,16020,1576496376743, seqNum=-1

We tried creating a few ordinary tables manually and that worked. We also tried making HBase public with Ranger, but the script still fails with the same error.

Any idea why it doesn't work?

BR,
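For reference, here is a rough Scala sketch of the same path the schema creator follows (keytab login through UserGroupInformation, then an HBase client connection). It is only a diagnostic sketch: it assumes the external HBase's hbase-site.xml is on the classpath, MY_REALM is a placeholder for our realm, and the principal and keytab are the ones used in the kinit above.

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.ConnectionFactory
import org.apache.hadoop.security.UserGroupInformation

object HBaseKerberosCheck {
  def main(args: Array[String]): Unit = {
    // Picks up hbase-site.xml from the classpath; it should be the external
    // HBase config, not the embedded-yarn-ats-hbase one shown in the log above.
    val conf = HBaseConfiguration.create()
    UserGroupInformation.setConfiguration(conf)
    // Placeholder realm; same keytab as the kinit above.
    UserGroupInformation.loginUserFromKeytab(
      "yarn-ats-datalake_prod@MY_REALM",
      "/etc/security/keytabs/yarn-ats.hbase-client.headless.keytab")

    val conn = ConnectionFactory.createConnection(conf)
    try {
      // Succeeds when the Kerberos handshake works instead of "GSS initiate failed".
      println("hbase:meta reachable: " + conn.getAdmin.tableExists(TableName.valueOf("hbase:meta")))
    } finally {
      conn.close()
    }
  }
}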
02-26-2019
11:34 AM
2 Kudos
Hi,

Found out that there was a snapshot file hanging around with a reference to the old unsecured URL. I deleted the /var/lib/nifi/state/local/snapshot file and it nearly works: I still get an authorization error, but some Ranger tuning should overcome it.

BR
02-22-2019
07:51 PM
Hello,

We have installed a secured HDP 3.1 cluster on CentOS 7.5. Then we installed the mpack in order to add a single-node NiFi. The unsecured version worked correctly (at least it displayed the UI correctly), but after activating SSL (with an auto-generated certificate) and Kerberos for authentication, we get the following error when connecting:

Cannot replicate request to Node my_nifi_FDQN_node:9090 because the node is not connected

This is strange because we use the secure version and connect to NiFi via https://my_nifi_FDQN_node:9091/nifi/, so it should not try to connect to 9090.

In nifi-user.log, we can see:

2019-02-22 11:19:10,767 INFO [NiFi Web Server-21] o.a.n.w.s.NiFiAuthenticationFilter Authentication success for my_ldap_user
2019-02-22 11:19:10,772 INFO [NiFi Web Server-21] o.a.n.w.a.c.IllegalClusterStateExceptionMapper org.apache.nifi.cluster.manager.exception.IllegalClusterStateException: Cannot replicate request to Node my_nifi_FDQN_node:9090 because the node is not connected. Returning Conflict response.

I don't know if it is related, but I also get the following audit-log error in nifi-app.log:

2019-02-22 11:19:08,991 INFO [Clustering Tasks Thread-1] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2019-02-22 11:19:08,861 and sent to my_nifi_FDQN_node:9088 at 2019-02-22 11:19:08,991; send took 130 millis
2019-02-22 11:19:11,865 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0] o.a.r.audit.provider.BaseAuditHandler Audit Status Log: name=nifi.async.batch.hdfs, interval=11:42.021 minutes, events=1, deferredCount=1, totalEvents=5, totalDeferredCount=5
2019-02-22 11:19:11,866 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0] o.a.r.a.destination.HDFSAuditDestination Returning HDFS Filesystem Config: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml
2019-02-22 11:19:11,879 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0] o.a.r.a.destination.HDFSAuditDestination Checking whether log file exists. hdfPath=hdfs://my_master_node:8020/ranger/audit/nifi/20190222/nifi_ranger_audit_my_nifi_FDQN_node.log, UGI=nifi/_HOST@REALM (auth:KERBEROS)
2019-02-22 11:19:11,887 ERROR [org.apache.ranger.audit.queue.AuditBatchQueue0] o.a.r.audit.provider.BaseAuditHandler Error writing to log file. java.io.IOException: DestHost:destPort my_master_node:8020 , LocalHost:localPort my_nifi_FDQN_node/my_nifi_IP_node:0. Failed on local exception: java.io.IOException: Couldn't set up IO streams: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name
2019-02-22 11:19:11,887 INFO [org.apache.ranger.audit.queue.AuditBatchQueue0] o.a.r.a.destination.HDFSAuditDestination Flushing HDFS audit. Event Size:1
2019-02-22 11:19:11,887 WARN [org.apache.ranger.audit.queue.AuditBatchQueue0] o.a.r.audit.provider.BaseAuditHandler failed to log audit event: {"repoType":10,"repo":"datalake_prod_nifi","reqUser":"XXXX","evtTime":"2019-02-22 11:19:10.770","access":"READ","resource":"/flow","resType":"nifi-resource","action":"READ","result":1,"policy":18,"enforcer":"ranger-acl","cliIP":"client_ip","agentHost":"my_nifi_FDQN_node","logType":"RangerAudit","id":"cf2fd979-945c-4461-a1df-c40c42defdd1-5","seq_num":11,"event_count":1,"event_dur_ms":0,"tags":[]}, errorMessage=
2019-02-22 11:19:11,887 WARN [org.apache.ranger.audit.queue.AuditBatchQueue0] o.a.r.audit.provider.BaseAuditHandler Log failure count: 1 in past 11:42.022 minutes; 6 during process lifetime

NiFi is very new to me, so I'm not sure what information to look for.

BR,
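For reference, a rough Scala sketch of the HDFS access the Ranger audit destination is attempting. As far as I understand, the "Failed to specify server's Kerberos principal name" message comes from the Hadoop IPC client when the NameNode principal is missing from the configuration it is handed, so the sketch sets it explicitly. The principal pattern, NiFi keytab path, and REALM below are placeholders, not values taken from this cluster.

import java.net.URI

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.security.UserGroupInformation

object RangerAuditHdfsCheck {
  def main(args: Array[String]): Unit = {
    val conf = new Configuration()
    conf.set("hadoop.security.authentication", "kerberos")
    // Property the IPC client needs to validate the NameNode identity;
    // placeholder value, adjust to the actual NameNode principal.
    conf.set("dfs.namenode.kerberos.principal", "nn/_HOST@REALM")
    UserGroupInformation.setConfiguration(conf)
    // Placeholder NiFi service principal and keytab path.
    UserGroupInformation.loginUserFromKeytab(
      "nifi/my_nifi_FDQN_node@REALM",
      "/etc/security/keytabs/nifi.service.keytab")

    val fs = FileSystem.get(new URI("hdfs://my_master_node:8020"), conf)
    // Same directory the audit destination writes to.
    println("audit dir exists: " + fs.exists(new Path("/ranger/audit/nifi")))
  }
}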
Labels:
- Apache NiFi
06-14-2018
04:23 AM
Yes, I already tried that, but with no results. After some time, the Hadoop process ends up correcting the issue by itself, but I'm trying to understand the difference between the two commands; they should return the same diagnosis.

BR,
06-13-2018
02:33 PM
Hello,

I'm getting a warning about some corrupt block replicas when using Ambari or the hdfs dfsadmin -report command:

$ hdfs dfsadmin -report
Configured Capacity: 89582347079680 (81.47 TB)
Present Capacity: 84363014526231 (76.73 TB)
DFS Remaining: 46423774937044 (42.22 TB)
DFS Used: 37939239589187 (34.51 TB)
DFS Used%: 44.97%
Under replicated blocks: 0
Blocks with corrupt replicas: 8
Missing blocks: 0
Missing blocks (with replication factor 1): 0

But when I try to locate them using the hdfs fsck / command, it doesn't find anything wrong:

Status: HEALTHY
Total size: 11520245257654 B (Total open files size: 912343310015 B)
Total dirs: 1269459
Total files: 1035091
Total symlinks: 0 (Files currently being written: 118)
Total blocks (validated): 1071649 (avg. block size 10750017 B) (Total open file blocks (not validated): 6893)
Minimally replicated blocks: 1071649 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 4
Number of racks: 1
FSCK ended at Wed Jun 13 16:29:25 RET 2018 in 25763 milliseconds
The filesystem under path '/' is HEALTHY

How can I find those corrupted replicas and fix them? The NameNode also tells me it's OK, but I've already encountered issues with Spark jobs dealing with those files:

2018-06-13 16:30:25,870 INFO blockmanagement.BlockManager (BlockManager.java:computeReplicationWorkForBlocks(1660)) - Blocks chosen but could not be replicated = 8; of which 0 have no target, 0 have no source, 0 are UC, 0 are abandoned, 8 already have enough replicas.
BR,
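For reference, a rough Scala sketch that asks the NameNode for the files that currently have corrupt blocks (the same information as hdfs fsck -list-corruptfileblocks). As far as I can tell, this only lists blocks where every replica is bad, which may explain why it stays empty while dfsadmin still counts blocks with some corrupt replicas. It assumes the cluster's core-site.xml/hdfs-site.xml are on the classpath:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.hdfs.DistributedFileSystem

object ListCorruptFileBlocks {
  def main(args: Array[String]): Unit = {
    // Resolves the NameNode from the Hadoop configuration on the classpath.
    val fs = FileSystem.get(new Configuration()).asInstanceOf[DistributedFileSystem]
    // Files under "/" whose blocks are reported corrupt by the NameNode.
    val it = fs.listCorruptFileBlocks(new Path("/"))
    var count = 0
    while (it.hasNext) {
      println(it.next())
      count += 1
    }
    println(s"$count corrupt file block(s) reported")
  }
}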
Labels:
- Apache Hadoop