Member since: 01-21-2016
Posts: 11
Kudos Received: 0
Solutions: 3

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2900 | 04-20-2018 12:51 PM
 | 1202 | 02-14-2018 12:28 PM
 | 2048 | 01-21-2016 03:51 PM
04-20-2018 12:51 PM
I finally solved the issue. I removed the Ambari Metrics service from Ambari and installed it again with default parameters. It's working without the previous issues now.
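For anyone who prefers to script it, a minimal sketch of the same remove-and-reinstall cycle over the Ambari REST API; the cluster name "mycluster", host, and credentials are placeholders, and re-adding is then done through the normal Add Service wizard with defaults:

# stop the service first by setting its state to INSTALLED
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop AMS"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://ambari-host:8080/api/v1/clusters/mycluster/services/AMBARI_METRICS
# then remove the service definition from the cluster
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE \
  http://ambari-host:8080/api/v1/clusters/mycluster/services/AMBARI_METRICS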
04-09-2018 08:33 AM
Hi, I have a strange issue with Ambari 2.6.1.0 on HDP 2.6.4.0. When I stop Metrics Collector, delete all files in /var/lib/ambari-metrics-collector, and start Metrics Collector again, new checkpoint, hbase, and hbase-tmp dirs are re-created. Metrics Collector shows graphs and everything seems to be all right for a couple of minutes, but then Metrics Collector crashes and I see the following output:

ambari-metrics-collector.log:

java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at org.apache.helix.controller.stages.ClusterEventBlockingQueue.take(ClusterEventBlockingQueue.java:85)
at org.apache.helix.controller.GenericHelixController$ClusterEventProcessor.run(GenericHelixController.java:594)
2018-04-09 09:27:20,582 INFO org.apache.helix.controller.GenericHelixController: END ClusterEventProcessor thread
2018-04-09 09:27:20,580 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@host.domain.net:6188
2018-04-09 09:27:20,585 WARN org.apache.hadoop.yarn.webapp.GenericExceptionHandler: INTERNAL_SERVER_ERROR
javax.ws.rs.WebApplicationException: org.apache.phoenix.exception.PhoenixIOException: Interrupted calling coprocessor service org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService for row \x00\x00METRIC_RECORD
at org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TimelineWebServices.getTimelineMetrics(TimelineWebServices.java:377)
at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:895)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:843)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:804)
at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
[org.mortbay.jetty.EofException]
at com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.writeTo(AbstractRootElementProvider.java:159)
at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:306)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1437)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:895)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:843)
at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:804)
at com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1426)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
Caused by: javax.xml.bind.MarshalException
- with linked exception:
[org.mortbay.jetty.EofException]
at com.sun.xml.bind.v2.runtime.MarshallerImpl.write(MarshallerImpl.java:325)
at com.sun.xml.bind.v2.runtime.MarshallerImpl.marshal(MarshallerImpl.java:249)
at javax.xml.bind.helpers.AbstractMarshallerImpl.marshal(AbstractMarshallerImpl.java:95)
at com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.writeTo(AbstractRootElementProvider.java:179)
at com.sun.jersey.core.provider.jaxb.AbstractRootElementProvider.writeTo(AbstractRootElementProvider.java:157)
... 37 more
Caused by: org.mortbay.jetty.EofException
at org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:634)
at org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:580)
at com.sun.jersey.spi.container.servlet.WebComponent$Writer.write(WebComponent.java:307)
at com.sun.jersey.spi.container.ContainerResponse$CommittingOutputStream.write(ContainerResponse.java:134)
at com.sun.xml.bind.v2.runtime.output.UTF8XmlOutput.flushBuffer(UTF8XmlOutput.java:416)
at com.sun.xml.bind.v2.runtime.output.UTF8XmlOutput.endDocument(UTF8XmlOutput.java:141)
at com.sun.xml.bind.v2.runtime.XMLSerializer.endDocument(XMLSerializer.java:856)
at com.sun.xml.bind.v2.runtime.MarshallerImpl.postwrite(MarshallerImpl.java:374)
at com.sun.xml.bind.v2.runtime.MarshallerImpl.write(MarshallerImpl.java:321)
hbase-ams-master-host.domain.net.log:

WARN [RpcServer.FifoWFPBQ.default.handler=27,queue=0,port=35935] io.FSDataInputStreamWrapper: Failed to invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream . So there may be a TCP socket connection left open in CLOSE_WAIT state.
java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.io.FSDataInputStreamWrapper.unbuffer(FSDataInputStreamWrapper.java:263)
at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.unbufferStream(HFileBlock.java:1788)
at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.unbufferStream(HFileReaderV2.java:1403)
at org.apache.hadoop.hbase.io.hfile.AbstractHFileReader$Scanner.close(AbstractHFileReader.java:343)
at org.apache.hadoop.hbase.regionserver.StoreFileScanner.close(StoreFileScanner.java:252)
at org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:222)
at org.apache.hadoop.hbase.regionserver.StoreScanner.close(StoreScanner.java:449)
at org.apache.hadoop.hbase.regionserver.KeyValueHeap.close(KeyValueHeap.java:217)
at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.close(HRegion.java:6198)
at org.apache.phoenix.cache.aggcache.SpillableGroupByCache$2.close(SpillableGroupByCache.java:347)
at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$1.close(BaseScannerRegionObserver.java:244)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.closeScanner(RSRpcServices.java:2717)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2674)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32385)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2150)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
Caused by: java.lang.UnsupportedOperationException: this stream does not support unbuffering.
at org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:233)
All Metrics Collector configs have the recommended values, and I use the embedded mode for around 10 hosts. Please suggest what to check.
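For completeness, the exact reset sequence described above, as I run it on the collector host (paths are the defaults of my embedded-mode setup):

# with Metrics Collector stopped in Ambari:
rm -rf /var/lib/ambari-metrics-collector/*
# then start Metrics Collector from Ambari again;
# the checkpoint, hbase and hbase-tmp dirs are re-created on startup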
Labels:
- Apache Ambari
02-11-2018 09:52 AM
Hi, after an Express Upgrade to 2.6.4 failed because of the Solr mpack, I performed a downgrade back to 2.5.0. The downgrade finished with:

org.apache.ambari.server.AmbariException: The following 3 host component(s) have not been downgraded to their desired versions:
host3-slave.domain.net: SOLR_SERVER (current = UNKNOWN, desired = 2.5.0.0-1245)
host4-slave.domain.net: SOLR_SERVER (current = UNKNOWN, desired = 2.5.0.0-1245)
host5-slave.domain.net: SOLR_SERVER (current = UNKNOWN, desired = 2.5.0.0-1245)
at org.apache.ambari.server.serveraction.upgrades.FinalizeUpgradeAction.finalizeDowngrade(FinalizeUpgradeAction.java:276)
at org.apache.ambari.server.serveraction.upgrades.FinalizeUpgradeAction.execute(FinalizeUpgradeAction.java:93)
at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(ServerActionExecutor.java:550)
at org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(ServerActionExecutor.java:466)
at java.lang.Thread.run(Thread.java:748)

This happened because I had installed solr-service-mpack-5.5.2.2.5.tar.gz: Ambari expected the Solr version of the HDP 2.5.0 stack, but the actual version was 5.5.2.2.5 from the mpack. This was also the reason the upgrade did not finish. Anyway, I removed the mpack's Solr from all slave hosts and successfully deleted the Solr service from Ambari. Yet when I try to start a new upgrade, I get:

Reason: There is an existing downgrade from 2.6.4.0-91 which has not completed. This downgrade must be completed before a new upgrade or downgrade can begin.

I already tried these commands:

[hdfs@host1-master ~]$ hdfs dfsadmin -finalizeUpgrade
Finalize upgrade successful for host1-master.domain.net/12.34.56.78:8020
Finalize upgrade successful for host2-master.domain.net/12.34.56.79:8020
[hdfs@host1-master ~]$ hdfs dfsadmin -rollingUpgrade finalize
FINALIZE rolling upgrade ...
There is no rolling upgrade in progress or rolling upgrade has already been finalized.

I also tried disabling "preUpgradeCheck" in Ambari's experimental mode, but that didn't help either; I get:

An internal system exception occurred: Unable to perform upgrade as another downgrade (request ID 449) is in progress.

Can you please help me get unstuck on this upgrade?
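In case it helps with diagnosis, the stuck request can at least be inspected, and tentatively aborted, over the Ambari REST API; aborting it this way is my own assumption, not something I found documented for this exact situation (host, credentials, and cluster name "mycluster" are placeholders):

# show the downgrade request Ambari still considers in progress
curl -u admin:admin -H 'X-Requested-By: ambari' \
  http://ambari-host:8080/api/v1/clusters/mycluster/requests/449
# tentatively mark it aborted so a new upgrade can start
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"Requests":{"request_status":"ABORTED","abort_reason":"Stuck downgrade"}}' \
  http://ambari-host:8080/api/v1/clusters/mycluster/requests/449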
02-03-2016 07:07 AM
Hi, I posted my follow-up question about the downgrade as a new question here.
02-03-2016 07:02 AM
Hi,
I need advice on how to correctly downgrade Ambari from 2.2 to 2.1; I haven't seen any docs about it.
My issue is described here. I already upgraded Ambari from 2.1 to 2.2, but I have HDP 2.0.6 (HDFS/MapReduce2/YARN on version 2.1.0.2.0), so my Ambari 2.2 doesn't work correctly with HDP 2.0. I have to either downgrade Ambari or upgrade HDP, but when I want to upgrade HDP, the documentation says:

Ambari 2.2 does not support managing an HDP 2.0 cluster. If you are running HDP 2.0, in order to use Ambari 2.2 you must first upgrade to HDP 2.1 or higher using either Ambari 1.7 or 2.0 prior to upgrading to Ambari 2.2. Once completed, upgrade your current Ambari to Ambari 2.2.

I know how to change the repo for a downgrade, but the Ambari server DB schema from 2.2 does not work with Ambari 2.1. Is there any script to change the DB schema back? Or is there any chance to upgrade my HDP 2.0.6 to a version of HDP that works with Ambari 2.2 without using Ambari? I see there is a Non-Ambari Cluster Upgrade Guide.
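For reference, the repo change I mean looks roughly like this on CentOS 6; the public-repo URL pattern is my assumption based on the install docs, and the DB schema question remains open either way:

# point yum at the older Ambari release (URL pattern assumed)
wget -O /etc/yum.repos.d/ambari.repo \
  http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.1.2/ambari.repo
# drop back to the packages from that repo
yum downgrade ambari-server ambari-agent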
Labels:
- Apache Ambari
01-24-2016 08:14 AM
Didn't help me.
01-22-2016 06:00 AM
Hi, I migrated Ambari from 2.1 to 2.2 and have a strange issue with folder permissions, using HDP 2.0.6 (HDFS, YARN, and MapReduce2 on 2.1.0.2) on CentOS 6. When I start the MapReduce2 service through the init.d script, everything works well, but when I start it through Ambari 2.2, it automatically changes the permissions of the /app-logs and /mr-history folders on HDFS to:

dr----x--t   - yarn   hadoop          0 2014-02-14 07:24 /app-logs
dr----x--t   - mapred hadoop          0 2014-02-12 08:52 /mr-history

With the permissions on those folders changed, starting the MapReduce2 service through Ambari results in:

Error creating done directory: [hdfs://nnha:8020/mr-history/done]
Permission denied: user=mapred, access=EXECUTE, inode="/mr-history":mapred:hadoop:dr----x--t

When I change them back to the defaults as follows:

hdfs dfs -chmod -R 1777 /app-logs
hdfs dfs -chmod -R 1777 /mr-history
hdfs dfs -chown -R yarn:hadoop /app-logs
hdfs dfs -chown -R mapred:hadoop /mr-history

Ambari changes the permissions of the folders to "dr----x--t" again. Do you have any recommendations, please?
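To watch the reset happen, I check the modes right after starting the service from Ambari (plain HDFS commands, nothing Ambari-specific):

# list the two directories and their current permission bits
hdfs dfs -ls / | egrep 'app-logs|mr-history'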
Labels:
- Apache Ambari
- Apache Hadoop
01-21-2016 03:51 PM
I solved the issue with starting the YARN service by inserting the missing yarn.timeline-service property lines.
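Concretely, the key property named in the traceback can be set in yarn-site with Ambari's bundled configs.sh helper; a sketch, where "mycluster" is a placeholder and the leveldb path is the usual default rather than anything mandated (the same can be done through the Ambari web UI):

/var/lib/ambari-server/resources/scripts/configs.sh set localhost mycluster yarn-site \
  "yarn.timeline-service.leveldb-timeline-store.path" "/hadoop/yarn/timeline"

A restart of YARN is needed afterwards for the agents to pick the value up.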
01-21-2016 01:28 PM
Hi, I just upgraded Ambari from 2.1 to 2.2; my HDP version is 2.1.0, all on CentOS 6.7. I have a strange new issue with starting the MapReduce2 and YARN services. Every attempt ends with this message:

stderr: /var/lib/ambari-agent/data/errors-4003.txt

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/resourcemanager.py", line 221, in <module>
Resourcemanager().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/resourcemanager.py", line 110, in start
import params
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/params.py", line 28, in <module>
from params_linux import *
File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/params_linux.py", line 153, in <module>
ats_leveldb_lock_file = os.path.join(ats_leveldb_dir, "leveldb-timeline-store.ldb", "LOCK")
File "/usr/lib64/python2.6/posixpath.py", line 67, in join
elif path == '' or path.endswith('/'):
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 81, in __getattr__
raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'yarn.timeline-service.leveldb-timeline-store.path' was not found in configurations dictionary!

stdout: /var/lib/ambari-agent/data/output-4003.txt

2016-01-21 14:05:32,174 - Using hadoop conf dir: /etc/hadoop/conf
2016-01-21 14:05:32,282 - Using hadoop conf dir: /etc/hadoop/conf
2016-01-21 14:05:32,284 - Group['hadoop'] {}
2016-01-21 14:05:32,286 - Group['users'] {}
2016-01-21 14:05:32,286 - User['mapred'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-01-21 14:05:32,288 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}
2016-01-21 14:05:32,289 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-01-21 14:05:32,290 - User['hdfs'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-01-21 14:05:32,291 - User['yarn'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-01-21 14:05:32,292 - User['ams'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-01-21 14:05:32,292 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-01-21 14:05:32,294 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-01-21 14:05:32,343 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-01-21 14:05:32,344 - User['hdfs'] {'ignore_failures': False}
2016-01-21 14:05:32,345 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop']}
2016-01-21 14:05:32,345 - Directory['/etc/hadoop'] {'mode': 0755}
2016-01-21 14:05:32,346 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'hadoop', 'recursive': True}
2016-01-21 14:05:32,346 - Link['/etc/hadoop/conf'] {'not_if': 'ls /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
2016-01-21 14:05:32,393 - Skipping Link['/etc/hadoop/conf'] due to not_if
2016-01-21 14:05:32,407 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-01-21 14:05:32,408 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-01-21 14:05:32,423 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-01-21 14:05:32,477 - Skipping Execute[('setenforce', '0')] due to not_if
2016-01-21 14:05:32,477 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-01-21 14:05:32,480 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-01-21 14:05:32,480 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-01-21 14:05:32,481 - File['/var/lib/ambari-agent/lib/fast-hdfs-resource.jar'] {'content': StaticFile('fast-hdfs-resource.jar'), 'mode': 0644}
2016-01-21 14:05:32,538 - File['/etc/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-01-21 14:05:32,540 - File['/etc/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-01-21 14:05:32,541 - File['/etc/hadoop/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-01-21 14:05:32,549 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-01-21 14:05:32,549 - File['/etc/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-01-21 14:05:32,550 - File['/etc/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-01-21 14:05:32,554 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-01-21 14:05:32,602 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-01-21 14:05:32,845 - Using hadoop conf dir: /etc/hadoop/conf
2016-01-21 14:05:32,845 - Skipping get_hdp_version since hdp-select is not yet available
2016-01-21 14:05:32,846 - Using hadoop conf dir: /etc/hadoop/conf

Can you please recommend what I should do? Thank you so much. Regards, Pavel
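A quick way to confirm which yarn.timeline-service properties are actually present in yarn-site (configs.sh ships with ambari-server; "mycluster" is a placeholder):

# dump the current yarn-site config and filter for timeline-service keys
/var/lib/ambari-server/resources/scripts/configs.sh get localhost mycluster yarn-site | grep timeline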
Labels:
- Apache YARN