MR apps getting stuck at map 100% reduce 100%


In an HDP 3.1.5 three-node cluster, MapReduce applications are getting stuck at Map 100% and Reduce 100%, with the following message in the container log:

2020-02-25 08:51:36,661 INFO [ContainerLauncher #3] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_COMPLETED for container container_e27_1582619129875_0003_01_000004 taskAttempt attempt_1582619129875_0003_r_000000_0
2020-02-25 08:51:36,691 INFO [Thread-83] org.apache.hadoop.yarn.event.AsyncDispatcher: Waiting for AsyncDispatcher to drain. Thread state is :WAITING

The ResourceManager log also shows the following error message:

2020-02-25 09:25:35,544 INFO  client.RpcRetryingCallerImpl (RpcRetryingCallerImpl.java:callWithRetries(134)) - Call exception, tries=6, retries=6, started=4198 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on node_name,17020,1582617600929
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3341)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3318)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1428)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2983)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3320)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42190)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
, details=row 'prod.timelineservice.app_flow,^?���.���^?���,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=node_name,17020,15826136588998, seqNum=-1
2020-02-25 09:25:39,572 INFO  client.RpcRetryingCallerImpl (RpcRetryingCallerImpl.java:callWithRetries(134)) - Call exception, tries=7, retries=7, started=8226 ms ago, cancelled=false, msg=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on node_name,17020,1582617600929
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3341)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:3318)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1428)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2983)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3320)
        at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42190)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
, details=row 'prod.timelineservice.app_flow,^?���.���^?���,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=node_name,17020,15826136588
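One detail worth checking in these retries: HBase server names carry a start code (the trailing timestamp in `host,port,startcode`), and a region server gets a new start code every time it restarts. The exception names one start code while the `details=` line shows `hbase:meta` pointing at another for the same host and port, which suggests the published meta location refers to a previous incarnation of the region server. A small sketch of that comparison (the first start code is taken from the log above; the second is truncated there, so an illustrative full-length value is used):

```python
def parse_server(server):
    """Split an HBase server name 'host,port,startcode' into its parts."""
    host, port, startcode = server.rsplit(",", 2)
    return host, int(port), int(startcode)

# First value copied from the exception message; second adapted from the
# details= line (truncated in the log, so this exact value is illustrative).
error_server = "node_name,17020,1582617600929"  # server the client asked
meta_server = "node_name,17020,1582613658899"   # server hbase:meta points at

e_host, e_port, e_start = parse_server(error_server)
m_host, m_port, m_start = parse_server(meta_server)

same_endpoint = (e_host, e_port) == (m_host, m_port)
stale_location = same_endpoint and e_start != m_start

# Same host/port but different start codes -> meta still references an
# earlier incarnation of the region server (a stale location entry).
print(stale_location)  # True
```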

Already tried:

  • removing meta-region-server from hbase zkcli
  • removing the timeline.db files from the yarn.timeline-service.leveldb-state-store.path folder
  • restarting ZooKeeper, HBase, and the other dependent services
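For reference, the steps above can be sketched as a dry run that only prints the commands rather than executing them (they are destructive). The znode parent and state-store file name here are assumptions; on a Kerberized cluster the embedded ATS HBase usually registers under /atsv2-hbase-secure, so verify zookeeper.znode.parent in the embedded hbase-site.xml first:

```shell
#!/bin/sh
# Dry-run sketch of the recovery steps tried above. The znode parent and
# state-store file name are assumptions -- check zookeeper.znode.parent
# in the embedded hbase-site.xml before running anything for real.
ZK_QUORUM="node_name:2181"            # from hadoop.registry.zk.quorum
ZNODE_PARENT="/atsv2-hbase-secure"    # assumed for a Kerberized cluster
STATE_STORE="/hadoop/yarn/timeline"   # yarn.timeline-service.leveldb-state-store.path

run() { echo "WOULD RUN: $*"; }       # replace the echo with "$@" to execute

run zookeeper-client -server "$ZK_QUORUM" rmr "$ZNODE_PARENT/meta-region-server"
run rm -rf "$STATE_STORE/timeline-state-store.ldb"
# Then restart ZooKeeper, the embedded ats-hbase, and YARN (e.g. via Ambari).
```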

Nothing has helped; the error persists. Any help would be appreciated.

Attaching yarn-site.xml and hbase-site.xml for reference.

yarn-site.xml

  <configuration xmlns:xi="http://www.w3.org/2001/XInclude">

    <property>
      <name>hadoop.http.cross-origin.allowed-origins</name>
      <value>regex:.*[.]node_name[.]com(:\d*)?</value>
    </property>

    <property>
      <name>hadoop.registry.client.auth</name>
      <value>kerberos</value>
    </property>

    <property>
      <name>hadoop.registry.dns.bind-address</name>
      <value>0.0.0.0</value>
    </property>

    <property>
      <name>hadoop.registry.dns.bind-port</name>
      <value>53</value>
      <hidden>true</hidden>
    </property>

    <property>
      <name>hadoop.registry.dns.domain-name</name>
      <value>EXAMPLE.COM</value>
    </property>

    <property>
      <name>hadoop.registry.dns.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>hadoop.registry.dns.zone-mask</name>
      <value>255.255.255.0</value>
    </property>

    <property>
      <name>hadoop.registry.dns.zone-subnet</name>
      <value>172.17.0.0</value>
    </property>

    <property>
      <name>hadoop.registry.jaas.context</name>
      <value>Client</value>
    </property>

    <property>
      <name>hadoop.registry.secure</name>
      <value>true</value>
    </property>

    <property>
      <name>hadoop.registry.system.accounts</name>
      <value>sasl:yarn,sasl:jhs,sasl:hdfs-hdp721ea,sasl:rm,sasl:hive,sasl:spark</value>
    </property>

    <property>
      <name>hadoop.registry.zk.quorum</name>
      <value>node_name:2181</value>
    </property>

    <property>
      <name>manage.include.files</name>
      <value>false</value>
    </property>

    <property>
      <name>yarn.acl.enable</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.admin.acl</name>
      <value>activity_analyzer,yarn</value>
    </property>

    <property>
      <name>yarn.application.classpath</name>
      <value>$HADOOP_CONF_DIR,/usr/hdp/3.1.5.0-152/hadoop/*,/usr/hdp/3.1.5.0-152/hadoop/lib/*,/usr/hdp/current/hadoop-hdfs-client/*,/usr/hdp/current/hadoop-hdfs-client/lib/*,/usr/hdp/current/hadoop-yarn-client/*,/usr/hdp/current/hadoop-yarn-client/lib/*</value>
    </property>

    <property>
      <name>yarn.client.nodemanager-connect.max-wait-ms</name>
      <value>60000</value>
    </property>

    <property>
      <name>yarn.client.nodemanager-connect.retry-interval-ms</name>
      <value>10000</value>
    </property>

    <property>
      <name>yarn.http.policy</name>
      <value>HTTP_ONLY</value>
    </property>

    <property>
      <name>yarn.log-aggregation-enable</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.log-aggregation.retain-seconds</name>
      <value>2592000</value>
    </property>

    <property>
      <name>yarn.log.server.url</name>
      <value>node_name:19888/jobhistory/logs</value>
    </property>

    <property>
      <name>yarn.log.server.web-service.url</name>
      <value>node_name:8188/ws/v1/applicationhistory</value>
    </property>

    <property>
      <name>yarn.node-labels.enabled</name>
      <value>false</value>
    </property>

    <property>
      <name>yarn.node-labels.fs-store.retry-policy-spec</name>
      <value>2000, 500</value>
    </property>

    <property>
      <name>yarn.node-labels.fs-store.root-dir</name>
      <value>/system/yarn/node-labels</value>
    </property>

    <property>
      <name>yarn.nodemanager.address</name>
      <value>0.0.0.0:45454</value>
    </property>

    <property>
      <name>yarn.nodemanager.admin-env</name>
      <value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX</value>
    </property>

    <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle,spark2_shuffle,timeline_collector</value>
    </property>

    <property>
      <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

    <property>
      <name>yarn.nodemanager.aux-services.spark2_shuffle.class</name>
      <value>org.apache.spark.network.yarn.YarnShuffleService</value>
    </property>

    <property>
      <name>yarn.nodemanager.aux-services.spark2_shuffle.classpath</name>
      <value>/usr/hdp/3.1.5.0-152/spark2/aux/*</value>
    </property>

    <property>
      <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
      <value>org.apache.spark.network.yarn.YarnShuffleService</value>
    </property>

    <property>
      <name>yarn.nodemanager.aux-services.spark_shuffle.classpath</name>
      <value>/usr/hdp/${hdp.version}/spark/aux/*</value>
    </property>

    <property>
      <name>yarn.nodemanager.aux-services.timeline_collector.class</name>
      <value>org.apache.hadoop.yarn.server.timelineservice.collector.PerNodeTimelineCollectorsAuxService</value>
    </property>

    <property>
      <name>yarn.nodemanager.bind-host</name>
      <value>0.0.0.0</value>
    </property>

    <property>
      <name>yarn.nodemanager.container-executor.class</name>
      <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
    </property>

    <property>
      <name>yarn.nodemanager.container-metrics.unregister-delay-ms</name>
      <value>60000</value>
    </property>

    <property>
      <name>yarn.nodemanager.container-monitor.interval-ms</name>
      <value>3000</value>
    </property>

    <property>
      <name>yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.nodemanager.delete.debug-delay-sec</name>
      <value>0</value>
    </property>

    <property>
      <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
      <value>90</value>
    </property>

    <property>
      <name>yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb</name>
      <value>1000</value>
    </property>

    <property>
      <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
      <value>0.25</value>
    </property>

    <property>
      <name>yarn.nodemanager.health-checker.interval-ms</name>
      <value>135000</value>
    </property>

    <property>
      <name>yarn.nodemanager.health-checker.script.timeout-ms</name>
      <value>60000</value>
    </property>

    <property>
      <name>yarn.nodemanager.keytab</name>
      <value>/etc/security/keytabs/nm.service.keytab</value>
    </property>

    <property>
      <name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
      <value>false</value>
    </property>

    <property>
      <name>yarn.nodemanager.linux-container-executor.group</name>
      <value>hadoop</value>
    </property>

    <property>
      <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.nodemanager.local-dirs</name>
      <value>/hadoop/yarn/local</value>
    </property>

    <property>
      <name>yarn.nodemanager.log-aggregation.compression-type</name>
      <value>gz</value>
    </property>

    <property>
      <name>yarn.nodemanager.log-aggregation.debug-enabled</name>
      <value>false</value>
    </property>

    <property>
      <name>yarn.nodemanager.log-aggregation.num-log-files-per-app</name>
      <value>30</value>
    </property>

    <property>
      <name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name>
      <value>3600</value>
    </property>

    <property>
      <name>yarn.nodemanager.log-dirs</name>
      <value>/hadoop/yarn/log</value>
    </property>

    <property>
      <name>yarn.nodemanager.log.retain-seconds</name>
      <value>604800</value>
    </property>

    <property>
      <name>yarn.nodemanager.principal</name>
      <value>node_name</value>
    </property>

    <property>
      <name>yarn.nodemanager.recovery.dir</name>
      <value>/var/log/hadoop-yarn/nodemanager/recovery-state</value>
    </property>

    <property>
      <name>yarn.nodemanager.recovery.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.nodemanager.recovery.supervised</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.nodemanager.remote-app-log-dir</name>
      <value>/app-logs</value>
    </property>

    <property>
      <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
      <value>logs</value>
    </property>

    <property>
      <name>yarn.nodemanager.resource-plugins</name>
      <value></value>
    </property>

    <property>
      <name>yarn.nodemanager.resource-plugins.gpu.allowed-gpu-devices</name>
      <value></value>
    </property>

    <property>
      <name>yarn.nodemanager.resource-plugins.gpu.docker-plugin</name>
      <value></value>
    </property>

    <property>
      <name>yarn.nodemanager.resource-plugins.gpu.docker-plugin.nvidiadocker-v1.endpoint</name>
      <value></value>
    </property>

    <property>
      <name>yarn.nodemanager.resource-plugins.gpu.path-to-discovery-executables</name>
      <value></value>
    </property>

    <property>
      <name>yarn.nodemanager.resource.cpu-vcores</name>
      <value>6</value>
    </property>

    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>52224</value>
    </property>

    <property>
      <name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
      <value>80</value>
    </property>

    <property>
      <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
      <value>default,docker</value>
    </property>

    <property>
      <name>yarn.nodemanager.runtime.linux.docker.allowed-container-networks</name>
      <value>host,none,bridge</value>
    </property>

    <property>
      <name>yarn.nodemanager.runtime.linux.docker.capabilities</name>
      <value>
      CHOWN,DAC_OVERRIDE,FSETID,FOWNER,MKNOD,NET_RAW,SETGID,SETUID,SETFCAP,
      SETPCAP,NET_BIND_SERVICE,SYS_CHROOT,KILL,AUDIT_WRITE</value>
    </property>

    <property>
      <name>yarn.nodemanager.runtime.linux.docker.default-container-network</name>
      <value>host</value>
    </property>

    <property>
      <name>yarn.nodemanager.runtime.linux.docker.privileged-containers.acl</name>
      <value></value>
    </property>

    <property>
      <name>yarn.nodemanager.runtime.linux.docker.privileged-containers.allowed</name>
      <value>false</value>
    </property>

    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>

    <property>
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>2.1</value>
    </property>

    <property>
      <name>yarn.nodemanager.webapp.cross-origin.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.nodemanager.webapp.spnego-keytab-file</name>
      <value>/etc/security/keytabs/spnego.service.keytab</value>
    </property>

    <property>
      <name>yarn.nodemanager.webapp.spnego-principal</name>
      <value>node_name</value>
    </property>

    <property>
      <name>yarn.resourcemanager.address</name>
      <value>node_name:8050</value>
    </property>

    <property>
      <name>yarn.resourcemanager.admin.address</name>
      <value>node_name:8141</value>
    </property>

    <property>
      <name>yarn.resourcemanager.am.max-attempts</name>
      <value>2</value>
    </property>

    <property>
      <name>yarn.resourcemanager.bind-host</name>
      <value>0.0.0.0</value>
    </property>

    <property>
      <name>yarn.resourcemanager.connect.max-wait.ms</name>
      <value>900000</value>
    </property>

    <property>
      <name>yarn.resourcemanager.connect.retry-interval.ms</name>
      <value>30000</value>
    </property>

    <property>
      <name>yarn.resourcemanager.display.per-user-apps</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.resourcemanager.fs.state-store.retry-policy-spec</name>
      <value>2000, 500</value>
    </property>

    <property>
      <name>yarn.resourcemanager.fs.state-store.uri</name>
      <value> </value>
    </property>

    <property>
      <name>yarn.resourcemanager.ha.enabled</name>
      <value>false</value>
    </property>

    <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>node_name</value>
    </property>

    <property>
      <name>yarn.resourcemanager.keytab</name>
      <value>/etc/security/keytabs/rm.service.keytab</value>
    </property>

    <property>
      <name>yarn.resourcemanager.monitor.capacity.preemption.intra-queue-preemption.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval</name>
      <value>15000</value>
    </property>

    <property>
      <name>yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor</name>
      <value>1</value>
    </property>

    <property>
      <name>yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round</name>
      <value>0.33</value>
    </property>

    <property>
      <name>yarn.resourcemanager.nodes.exclude-path</name>
      <value>/etc/hadoop/conf/yarn.exclude</value>
    </property>

    <property>
      <name>yarn.resourcemanager.placement-constraints.handler</name>
      <value>scheduler</value>
    </property>

    <property>
      <name>yarn.resourcemanager.principal</name>
      <value>node_name</value>
    </property>

    <property>
      <name>yarn.resourcemanager.proxy-user-privileges.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.resourcemanager.proxyuser.*.groups</name>
      <value></value>
    </property>

    <property>
      <name>yarn.resourcemanager.proxyuser.*.hosts</name>
      <value></value>
    </property>

    <property>
      <name>yarn.resourcemanager.proxyuser.*.users</name>
      <value></value>
    </property>

    <property>
      <name>yarn.resourcemanager.recovery.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.resourcemanager.resource-tracker.address</name>
      <value>node_name:8025</value>
    </property>

    <property>
      <name>yarn.resourcemanager.scheduler.address</name>
      <value>node_name:8030</value>
    </property>

    <property>
      <name>yarn.resourcemanager.scheduler.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>

    <property>
      <name>yarn.resourcemanager.scheduler.monitor.enable</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.resourcemanager.state-store.max-completed-applications</name>
      <value>${yarn.resourcemanager.max-completed-applications}</value>
    </property>

    <property>
      <name>yarn.resourcemanager.store.class</name>
      <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>

    <property>
      <name>yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size</name>
      <value>10</value>
    </property>

    <property>
      <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.resourcemanager.webapp.address</name>
      <value>node_name:8088</value>
    </property>

    <property>
      <name>yarn.resourcemanager.webapp.cross-origin.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled</name>
      <value>false</value>
    </property>

    <property>
      <name>yarn.resourcemanager.webapp.https.address</name>
      <value>node_name:8090</value>
    </property>

    <property>
      <name>yarn.resourcemanager.webapp.spnego-keytab-file</name>
      <value>/etc/security/keytabs/spnego.service.keytab</value>
    </property>

    <property>
      <name>yarn.resourcemanager.webapp.spnego-principal</name>
      <value>node_name</value>
    </property>

    <property>
      <name>yarn.resourcemanager.work-preserving-recovery.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms</name>
      <value>10000</value>
    </property>

    <property>
      <name>yarn.resourcemanager.zk-acl</name>
      <value>sasl:rm:rwcda</value>
    </property>

    <property>
      <name>yarn.resourcemanager.zk-address</name>
      <value>node_name:2181,%HOSTGROUP:slave%:2181</value>
    </property>

    <property>
      <name>yarn.resourcemanager.zk-num-retries</name>
      <value>1000</value>
    </property>

    <property>
      <name>yarn.resourcemanager.zk-retry-interval-ms</name>
      <value>1000</value>
    </property>

    <property>
      <name>yarn.resourcemanager.zk-state-store.parent-path</name>
      <value>/rmstore</value>
    </property>

    <property>
      <name>yarn.resourcemanager.zk-timeout-ms</name>
      <value>10000</value>
    </property>

    <property>
      <name>yarn.rm.system-metricspublisher.emit-container-events</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.ordering-policy.priority-utilization.underutilized-preemption.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>52224</value>
    </property>

    <property>
      <name>yarn.scheduler.maximum-allocation-vcores</name>
      <value>6</value>
    </property>

    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>1024</value>
    </property>

    <property>
      <name>yarn.scheduler.minimum-allocation-vcores</name>
      <value>1</value>
    </property>

    <property>
      <name>yarn.service.framework.path</name>
      <value>/hdp/apps/${hdp.version}/yarn/service-dep.tar.gz</value>
    </property>

    <property>
      <name>yarn.service.system-service.dir</name>
      <value>/services</value>
    </property>

    <property>
      <name>yarn.system-metricspublisher.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.timeline-service.address</name>
      <value>node_name:10200</value>
    </property>

    <property>
      <name>yarn.timeline-service.bind-host</name>
      <value>0.0.0.0</value>
    </property>

    <property>
      <name>yarn.timeline-service.client.max-retries</name>
      <value>30</value>
    </property>

    <property>
      <name>yarn.timeline-service.client.retry-interval-ms</name>
      <value>1000</value>
    </property>

    <property>
      <name>yarn.timeline-service.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.timeline-service.entity-group-fs-store.active-dir</name>
      <value>/ats/active/</value>
    </property>

    <property>
      <name>yarn.timeline-service.entity-group-fs-store.app-cache-size</name>
      <value>10</value>
    </property>

    <property>
      <name>yarn.timeline-service.entity-group-fs-store.cleaner-interval-seconds</name>
      <value>3600</value>
    </property>

    <property>
      <name>yarn.timeline-service.entity-group-fs-store.done-dir</name>
      <value>/ats/done/</value>
    </property>

    <property>
      <name>yarn.timeline-service.entity-group-fs-store.group-id-plugin-classes</name>
      <value>org.apache.hadoop.yarn.applications.distributedshell.DistributedShellTimelinePlugin</value>
    </property>

    <property>
      <name>yarn.timeline-service.entity-group-fs-store.group-id-plugin-classpath</name>
      <value></value>
    </property>

    <property>
      <name>yarn.timeline-service.entity-group-fs-store.retain-seconds</name>
      <value>604800</value>
    </property>

    <property>
      <name>yarn.timeline-service.entity-group-fs-store.scan-interval-seconds</name>
      <value>60</value>
    </property>

    <property>
      <name>yarn.timeline-service.entity-group-fs-store.summary-store</name>
      <value>org.apache.hadoop.yarn.server.timeline.RollingLevelDBTimelineStore</value>
    </property>

    <property>
      <name>yarn.timeline-service.generic-application-history.save-non-am-container-meta-info</name>
      <value>false</value>
    </property>

    <property>
      <name>yarn.timeline-service.generic-application-history.store-class</name>
      <value>org.apache.hadoop.yarn.server.applicationhistoryservice.NullApplicationHistoryStore</value>
    </property>

    <property>
      <name>yarn.timeline-service.hbase-schema.prefix</name>
      <value>prod.</value>
    </property>

    <property>
      <name>yarn.timeline-service.hbase.configuration.file</name>
      <value>file:///usr/hdp/3.1.5.0-152/hadoop/conf/embedded-yarn-ats-hbase/hbase-site.xml</value>
    </property>

    <property>
      <name>yarn.timeline-service.hbase.coprocessor.jar.hdfs.location</name>
      <value>file:///usr/hdp/3.1.5.0-152/hadoop-yarn/timelineservice/hadoop-yarn-server-timelineservice-hbase-coprocessor.jar</value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.cookie.domain</name>
      <value></value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.cookie.path</name>
      <value></value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.kerberos.keytab</name>
      <value>/etc/security/keytabs/spnego.service.keytab</value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.kerberos.name.rules</name>
      <value></value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.kerberos.principal</name>
      <value>node_name</value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.proxyuser.*.groups</name>
      <value></value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.proxyuser.*.hosts</name>
      <value></value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.proxyuser.*.users</name>
      <value></value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.proxyuser.ambari-server-hdp721ea.groups</name>
      <value>*</value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.proxyuser.ambari-server-hdp721ea.hosts</name>
      <value>node_name</value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.signature.secret</name>
      <value></value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.signature.secret.file</name>
      <value></value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.signer.secret.provider</name>
      <value></value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.signer.secret.provider.object</name>
      <value></value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.simple.anonymous.allowed</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.token.validity</name>
      <value></value>
    </property>

    <property>
      <name>yarn.timeline-service.http-authentication.type</name>
      <value>kerberos</value>
    </property>

    <property>
      <name>yarn.timeline-service.http-cross-origin.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.timeline-service.keytab</name>
      <value>/etc/security/keytabs/yarn.service.keytab</value>
    </property>

    <property>
      <name>yarn.timeline-service.leveldb-state-store.path</name>
      <value>/hadoop/yarn/timeline</value>
    </property>

    <property>
      <name>yarn.timeline-service.leveldb-timeline-store.path</name>
      <value>/hadoop/yarn/timeline</value>
    </property>

    <property>
      <name>yarn.timeline-service.leveldb-timeline-store.read-cache-size</name>
      <value>104857600</value>
    </property>

    <property>
      <name>yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size</name>
      <value>10000</value>
    </property>

    <property>
      <name>yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size</name>
      <value>10000</value>
    </property>

    <property>
      <name>yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms</name>
      <value>300000</value>
    </property>

    <property>
      <name>yarn.timeline-service.principal</name>
      <value>node_name</value>
    </property>

    <property>
      <name>yarn.timeline-service.reader.webapp.address</name>
      <value>node_name:8198</value>
    </property>

    <property>
      <name>yarn.timeline-service.reader.webapp.https.address</name>
      <value>node_name:8199</value>
    </property>

    <property>
      <name>yarn.timeline-service.recovery.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.timeline-service.state-store-class</name>
      <value>org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore</value>
    </property>

    <property>
      <name>yarn.timeline-service.store-class</name>
      <value>org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore</value>
    </property>

    <property>
      <name>yarn.timeline-service.ttl-enable</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.timeline-service.ttl-ms</name>
      <value>2678400000</value>
    </property>

    <property>
      <name>yarn.timeline-service.version</name>
      <value>2.0f</value>
    </property>

    <property>
      <name>yarn.timeline-service.versions</name>
      <value>1.5f,2.0f</value>
    </property>

    <property>
      <name>yarn.timeline-service.webapp.address</name>
      <value>node_name:8188</value>
    </property>

    <property>
      <name>yarn.timeline-service.webapp.https.address</name>
      <value>node_name:8190</value>
    </property>

    <property>
      <name>yarn.webapp.api-service.enable</name>
      <value>true</value>
    </property>

    <property>
      <name>yarn.webapp.ui2.enable</name>
      <value>true</value>
    </property>

  </configuration>

hbase-site.xml

  <configuration xmlns:xi="http://www.w3.org/2001/XInclude">

    <property>
      <name>dfs.domain.socket.path</name>
      <value>/var/lib/hadoop-hdfs/dn_socket</value>
    </property>

    <property>
      <name>hbase.bulkload.staging.dir</name>
      <value>/apps/hbase/staging</value>
    </property>

    <property>
      <name>hbase.client.keyvalue.maxsize</name>
      <value>1048576</value>
    </property>

    <property>
      <name>hbase.client.retries.number</name>
      <value>35</value>
    </property>

    <property>
      <name>hbase.client.scanner.caching</name>
      <value>100</value>
    </property>

    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
    </property>

    <property>
      <name>hbase.coprocessor.master.classes</name>
      <value>org.apache.hadoop.hbase.security.access.AccessController</value>
    </property>

    <property>
      <name>hbase.coprocessor.region.classes</name>
      <value>org.apache.hadoop.hbase.security.access.AccessController,org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
    </property>

    <property>
      <name>hbase.coprocessor.regionserver.classes</name>
      <value>org.apache.hadoop.hbase.security.access.AccessController</value>
    </property>

    <property>
      <name>hbase.defaults.for.version.skip</name>
      <value>true</value>
    </property>

    <property>
      <name>hbase.hregion.majorcompaction</name>
      <value>604800000</value>
    </property>

    <property>
      <name>hbase.hregion.majorcompaction.jitter</name>
      <value>0.50</value>
    </property>

    <property>
      <name>hbase.hregion.max.filesize</name>
      <value>10737418240</value>
    </property>

    <property>
      <name>hbase.hregion.memstore.block.multiplier</name>
      <value>4</value>
    </property>

    <property>
      <name>hbase.hregion.memstore.flush.size</name>
      <value>134217728</value>
    </property>

    <property>
      <name>hbase.hregion.memstore.mslab.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>hbase.hstore.blockingStoreFiles</name>
      <value>100</value>
    </property>

    <property>
      <name>hbase.hstore.compaction.max</name>
      <value>10</value>
    </property>

    <property>
      <name>hbase.hstore.compactionThreshold</name>
      <value>3</value>
    </property>

    <property>
      <name>hbase.local.dir</name>
      <value>${hbase.tmp.dir}/local</value>
    </property>

    <property>
      <name>hbase.master.info.bindAddress</name>
      <value>0.0.0.0</value>
    </property>

    <property>
      <name>hbase.master.info.port</name>
      <value>16010</value>
    </property>

    <property>
      <name>hbase.master.kerberos.principal</name>
      <value>hbase/_HOST@node_name.COM</value>
    </property>

    <property>
      <name>hbase.master.keytab.file</name>
      <value>/etc/security/keytabs/hbase.service.keytab</value>
    </property>

    <property>
      <name>hbase.master.namespace.init.timeout</name>
      <value>2400000</value>
    </property>

    <property>
      <name>hbase.master.port</name>
      <value>16000</value>
    </property>

    <property>
      <name>hbase.master.ui.readonly</name>
      <value>true</value>
    </property>

    <property>
      <name>hbase.master.wait.on.regionservers.timeout</name>
      <value>30000</value>
    </property>

    <property>
      <name>hbase.region.server.rpc.scheduler.factory.class</name>
      <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
    </property>

    <property>
      <name>hbase.regionserver.executor.openregion.threads</name>
      <value>20</value>
    </property>

    <property>
      <name>hbase.regionserver.global.memstore.size</name>
      <value>0.4</value>
    </property>

    <property>
      <name>hbase.regionserver.handler.count</name>
      <value>70</value>
    </property>

    <property>
      <name>hbase.regionserver.info.port</name>
      <value>16030</value>
    </property>

    <property>
      <name>hbase.regionserver.kerberos.principal</name>
      <value>hbase/_HOST@node_name.COM</value>
    </property>

    <property>
      <name>hbase.regionserver.keytab.file</name>
      <value>/etc/security/keytabs/hbase.service.keytab</value>
    </property>

    <property>
      <name>hbase.regionserver.port</name>
      <value>16020</value>
    </property>

    <property>
      <name>hbase.regionserver.thread.compaction.small</name>
      <value>3</value>
    </property>

    <property>
      <name>hbase.regionserver.wal.codec</name>
      <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
    </property>

    <property>
      <name>hbase.rootdir</name>
      <value>/apps/hbase/data</value>
    </property>

    <property>
      <name>hbase.rpc.controllerfactory.class</name>
      <value></value>
    </property>

    <property>
      <name>hbase.rpc.protection</name>
      <value>authentication</value>
    </property>

    <property>
      <name>hbase.rpc.timeout</name>
      <value>90000</value>
    </property>

    <property>
      <name>hbase.security.authentication</name>
      <value>kerberos</value>
    </property>

    <property>
      <name>hbase.security.authentication.spnego.kerberos.keytab</name>
      <value>/etc/security/keytabs/spnego.service.keytab</value>
    </property>

    <property>
      <name>hbase.security.authentication.spnego.kerberos.principal</name>
      <value>HTTP/_HOST@node_name.COM</value>
    </property>

    <property>
      <name>hbase.security.authorization</name>
      <value>true</value>
    </property>

    <property>
      <name>hbase.superuser</name>
      <value>hbase</value>
    </property>

    <property>
      <name>hbase.tmp.dir</name>
      <value>/tmp/hbase-${user.name}</value>
    </property>

    <property>
      <name>hbase.zookeeper.property.clientPort</name>
      <value>2181</value>
    </property>

    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>tnode107.node_name.com,tnode108.node_name.com,wnode101.node_name.com</value>
    </property>

    <property>
      <name>hbase.zookeeper.useMulti</name>
      <value>true</value>
    </property>

    <property>
      <name>hfile.block.cache.size</name>
      <value>0.4</value>
    </property>

    <property>
      <name>phoenix.functions.allowUserDefinedFunctions</name>
      <value>true</value>
    </property>

    <property>
      <name>phoenix.query.timeoutMs</name>
      <value>60000</value>
    </property>

    <property>
      <name>phoenix.rpc.index.handler.count</name>
      <value>20</value>
    </property>

    <property>
      <name>zookeeper.recovery.retry</name>
      <value>6</value>
    </property>

    <property>
      <name>zookeeper.session.timeout</name>
      <value>90000</value>
    </property>

    <property>
      <name>zookeeper.znode.parent</name>
      <value>/hbase-secure</value>
    </property>

  </configuration>

All other applications (Spark, Tez, etc.) run just fine. Only MR has this problem.

2 REPLIES


From the stack trace, the hbase:meta region itself is not available, which means no other operation can proceed. We need to find out from the HBase master/regionserver logs why the hbase:meta region is not getting assigned. Would you mind sharing the logs?
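For anyone hitting the same error, a quick way to check the hbase:meta assignment state before digging into logs. This is a hedged sketch using standard HBase CLI commands, not steps from this thread; the znode parent matches the `/hbase-secure` value in the hbase-site.xml posted above, and the log path is a typical HDP default that may differ on your cluster.

```shell
# Hedged diagnostic sketch -- znode parent and log path are assumptions
# based on the posted hbase-site.xml, adjust for your environment.

# 1) Is hbase:meta scannable at all? This errors or retries if the
#    region is unassigned (the NotServingRegionException case above).
echo "scan 'hbase:meta', {LIMIT => 1}" | hbase shell

# 2) Which regionserver does ZooKeeper record as holding meta?
#    Compare this against the host/port in the exception message.
hbase zkcli get /hbase-secure/meta-region-server

# 3) Look for meta assignment errors in the active master's log.
grep -i "hbase:meta" /var/log/hbase/hbase-hbase-master-*.log | tail -n 20
```

If step 2 reports a server that is down (or a stale start timestamp, as in the exception's `node_name,17020,1582617600929`), restarting the HBase master usually triggers a fresh meta assignment.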

Explorer

@rchintaguntla 

All the logs are here

https://we.tl/t-cCQWq6tdmZ