Member since: 03-02-2016
Posts: 19
Kudos Received: 28
Solutions: 4

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4867 | 03-22-2016 11:15 AM |
| | 1896 | 03-16-2016 06:34 AM |
| | 3447 | 03-09-2016 04:30 PM |
| | 2306 | 03-04-2016 03:53 AM |
06-08-2016
07:55 AM
I already attached the log. Below are the links to the scripts:
- runBenchmark script: https://github.com/intel-hadoop/Big-Data-Benchmark-for-Big-Bench/blob/master/bin/runBenchmark
- BigBench bin folder: https://github.com/intel-hadoop/Big-Data-Benchmark-for-Big-Bench/blob/master/bin
- BigBench documentation: https://github.com/intel-hadoop/Big-Data-Benchmark-for-Big-Bench
- Exact script (q20): https://github.com/intel-hadoop/Big-Data-Benchmark-for-Big-Bench/blob/master/engines/hive/queries/q20/run.sh
06-08-2016
06:55 AM
q20-hive-engine-validation-power-test-0.txt

Hello, I am working on the BigBench performance benchmark on HDP 2.3. While running query 20, I'm facing the issue below:

Files /root/BigBench/Big-Data-Benchmark-for-Big-Bench/engines/hive/queries/q20/results/q20-result and /dev/fd/62 differ
Validation of /root/BigBench/Big-Data-Benchmark-for-Big-Bench/engines/hive/queries/q20/results/q20-result failed: Query returned incorrect results
Validation failed: Query results are not OK
cat: Unable to write to output stream.

Please find the error log attached.
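For context on the "differ" message: when a bash script diffs a file against a process substitution, diff reports the substitution side as /dev/fd/NN. A minimal sketch of that pattern (the reference file and sort step are assumptions, not the actual q20 validator):

# /dev/fd/62 in the error is just diff's name for the <(...) input
diff results/q20-result <(sort q20-reference) \
  && echo "Validation passed" \
  || echo "Validation failed: Query results are not OK"

So the failure means the query output genuinely differs from the expected reference, not that a file is missing.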
05-20-2016
01:52 PM
1 Kudo
I am trying to fetch data from ArcSight into Elasticsearch using Flume 1.6. Unfortunately, the official Flume release does not support Elasticsearch 2.3, so I used the third-party sink from https://github.com/lucidfrontier45/ElasticsearchSink2. Everything mentioned in that link works, but while executing Flume it produces the serializer error below:

java.lang.IllegalArgumentException: org.apache.flume.sink.elasticsearch.ElasticSearchDynamicSerializer is not an ElasticSearchEventSerializer
at com.frontier45.flume.sink.elasticsearch2.ElasticSearchSink.configure(ElasticSearchSink.java:278)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:413)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:98)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-05-20 09:36:25,003 (conf-file-poller-0) [ERROR - org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:427)] Sink k1 has been removed due to an error during configuration
java.lang.IllegalArgumentException: org.apache.flume.sink.elasticsearch.ElasticSearchDynamicSerializer is not an ElasticSearchEventSerializer
at com.frontier45.flume.sink.elasticsearch2.ElasticSearchSink.configure(ElasticSearchSink.java:278)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:413)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:98)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
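The exception says the configured serializer, org.apache.flume.sink.elasticsearch.ElasticSearchDynamicSerializer (the stock Flume class), does not implement the ElasticSearchEventSerializer interface the third-party sink expects. If ElasticsearchSink2 ships its own serializers under its com.frontier45 package (as the project's README suggests), the config should point there. A hedged sketch; the agent and sink names (a1, k1) and connection values are placeholders, so verify the class names against the jar you built:

# flume agent .conf sketch
a1.sinks.k1.type = com.frontier45.flume.sink.elasticsearch2.ElasticSearchSink
a1.sinks.k1.serializer = com.frontier45.flume.sink.elasticsearch2.ElasticSearchDynamicSerializer
a1.sinks.k1.hostNames = 127.0.0.1:9300
a1.sinks.k1.indexName = arcsight
a1.sinks.k1.clusterName = elasticsearch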
Labels:
- Apache Flume
- Apache Hadoop
03-22-2016
11:15 AM
1 Kudo
The problem is with Java 1.8. BigBench is compatible with Java 1.7; after rerunning with Java 1.7, it works fine.
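For reference, a minimal sketch of pinning the run to Java 1.7 (the JDK path below is an example for CentOS 7, not the exact one from my cluster):

# point the shell that launches BigBench at a 1.7 JDK
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk
export PATH="$JAVA_HOME/bin:$PATH"
java -version   # should report 1.7.x before rerunning the benchmark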
03-16-2016
05:43 PM
2 Kudos
While running the BigBench benchmark on HDP 2.3.0.0 (deployed via Ambari), the following error occurred in the data generation stage:

JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.51-2.4.5.5.el7.x86_64/jre-abrt/bin/java

/tmp/pdgfLog/1/pdgf.log:

DEBUG main pdgf.generator.BigBenchReviewGenerator - 'Clothing & Accessories_Tops & Tees'
DEBUG main pdgf.generator.BigBenchReviewGenerator - 'Toys & Games_Electronics for Kids'
DEBUG main pdgf.generator.BigBenchReviewGenerator - 'Toys & Games_Vehicles & Remote-Control'
DEBUG main pdgf.core.dataGenerator.scheduler.DefaultPartitioner - Using default Pre-Partitioner from class pdgf.core.dataGenerator.scheduler.TemplatePartitioner
11: <generation>
84: <schema name="default">
85: <tables>
554: <table name="product_reviews">
555: <scheduler name="DefaultScheduler">
556: <partitioner name="pdgf.core.dataGenerator.scheduler.TemplatePartitioner">
DEBUG main pdgf.output.FileOutputSkeleton - def path != null: '"/user/root/benchmarks/bigbench/data_refresh/"+table.getName()+"/"' && !Constants.OUTPUT_FILE_KEEP_OUTPUTDIR:false => ignoring specified <ouputDir> nodes
DEBUG main pdgf.core.dataGenerator.scheduler.DefaultPartitioner - Using default Pre-Partitioner from class pdgf.core.dataGenerator.scheduler.TemplatePartitioner 11: <generation>
59: <scheduler name="DefaultScheduler">
60: <partitioner name="pdgf.core.dataGenerator.scheduler.TemplatePartitioner" staticTableOnAllNodes="false">
DEBUG main pdgf.output.FileOutputSkeleton - def path != null: '"/user/root/benchmarks/bigbench/data_refresh/"+table.getName()+"/"' && !Constants.OUTPUT_FILE_KEEP_OUTPUTDIR:false => ignoring specified <ouputDir> nodes
DEBUG main pdgf.core.dataGenerator.DataGenerator - MemoryAllocatorInterface: add Element: <schema name="bigbench"><table name="store"><field name="s_rec_start_date"><gen name="DateTimeGenerator">
WARN main pdgf.core.dataGenerator.DataGenerator - A 'pdgf.core.exceptions.ConfigurationException Exception occurred during initialization.
Message: The template contains errors: java.lang.RuntimeException: java.io.IOException: invalid constant type: 18
Copy this class in an IDE of your choice to ease debugging:
private class TemplateTester extends pdgf.generator.template.NextValueTemplate {
public void getValue(pdgf.plugin.AbstractPDGFRandom rng,pdgf.core.dataGenerator.beans.FieldValueDTO fvdto, pdgf.core.dataGenerator.beans.GenerationContext gc) throws Exception{
fvdto.setBothValues(generator(0, rng, gc, fvdto) + " " + generator(1, rng, gc, fvdto));
}
}
Location:
14: <schema name="bigbench">
2076: <table name="store">
2170: <field name="s_manager" primary="false" size="40" type="VARCHAR">
2171: <gen_NullGenerator name="NullGenerator" probability="${NULL_CHANCE}">
2172: <gen_TemplateGenerator name="TemplateGenerator">
DebugInformation:
:pdgf.core.exceptions.ConfigurationException: The template contains errors: java.lang.RuntimeException: java.io.IOException: invalid constant type: 18
Copy this class in an IDE of your choice to ease debugging:
private class TemplateTester extends pdgf.generator.template.NextValueTemplate {
public void getValue(pdgf.plugin.AbstractPDGFRandom rng,pdgf.core.dataGenerator.beans.FieldValueDTO fvdto, pdgf.core.dataGenerator.beans.GenerationContext gc) throws Exception{
fvdto.setBothValues(generator(0, rng, gc, fvdto) + " " + generator(1, rng, gc, fvdto));
}
}
at pdgf.generator.template.NextValueTemplate.instance(NextValueTemplate.java:97)
at pdgf.generator.TemplateGenerator.initialize(TemplateGenerator.java:102)
at pdgf.core.dbSchema.Element.initStage8_initialize_(Element.java:514)
at pdgf.core.dbSchema.Element.initStage8_initialize_(Element.java:528)
at pdgf.core.dbSchema.Element.initStage8_initialize_(Element.java:528)
at pdgf.core.dbSchema.Element.initStage8_initialize_(Element.java:528)
at pdgf.core.dbSchema.Element.initStage8_initialize_(Element.java:528)
at pdgf.core.dbSchema.Project.initStage8_initialize_(Project.java:722)
at pdgf.core.dataGenerator.DataGenerator.initRootProject(DataGenerator.java:171)
at pdgf.core.dataGenerator.DataGenerator.initialize(DataGenerator.java:139)
at pdgf.core.dataGenerator.DataGenerator.start(DataGenerator.java:214)
at pdgf.actions.StartAction.execute(StartAction.java:112)
at pdgf.actions.ActionPrioritySortObject.execute(ActionPrioritySortObject.java:50)
at pdgf.Controller.parseCmdLineArgs(Controller.java:1248)
at pdgf.Controller.start(Controller.java:1385)
at pdgf.Controller.main(Controller.java:1226)
03-16-2016
06:34 AM
5 Kudos
Initially I installed the Ambari agent manually and did not uninstall it properly. So, when Ambari deployed the agent, the script pointed to the wrong link:

resource_management -> /usr/lib/ambari-agent/lib/resource_management

instead of:

resource_management -> /usr/lib/ambari-server/lib/resource_management

Solution: go to /usr/lib/python2.6/site-packages/ and create the correct link:

cd /usr/lib/python2.6/site-packages/
ln -s /usr/lib/ambari-server/lib/resource_management

After that, agent deployment works fine.
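A hedged sketch of the full repair; the rm step is an assumption on my part, since ln -s will not overwrite the existing wrong symlink:

cd /usr/lib/python2.6/site-packages/
rm -f resource_management   # drop the stale link left by the manual agent install
ln -s /usr/lib/ambari-server/lib/resource_management resource_management
ls -l resource_management   # should now point at the ambari-server copy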
03-16-2016
06:33 AM
2 Kudos
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/ambari_server/bootstrap.py", line 41, in <module>
    from resource_management.core.shell import quote_bash_args
ImportError: No module named resource_management.core.shell
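A quick way to confirm whether this is the symlink problem fixed in the answer above; the one-liner below is just an exercise of the failing import, not part of Ambari:

python -c "from resource_management.core.shell import quote_bash_args; print('resource_management resolves')"
ls -l /usr/lib/python2.6/site-packages/resource_management   # check where the link points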
03-11-2016
11:53 AM
1 Kudo
Added the path below to flume-env.sh:

FLUME_CLASSPATH="/home/hadoop/hadoop/share/hadoop/hdfs/"

The HDFS sink is now working fine.
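After editing flume-env.sh, the agent has to be restarted to pick up the new classpath. A sketch; the conf directory, config file, and agent name are assumptions:

bin/flume-ng agent --conf conf --conf-file conf/hdfs-agent.conf --name a1 -Dflume.root.logger=INFO,console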
03-09-2016
04:30 PM
3 Kudos
Set FLUME_CLASSPATH=/root/flume/lib/ and copied the common jar files from the Hadoop folder to the Flume folder:

cp /root/hadoop/share/hadoop/common/*.jar /root/flume/lib
cp /root/hadoop/share/hadoop/common/lib/*.jar /root/flume/lib

The above error is now fixed.
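To confirm the previously missing class is now on Flume's classpath, you can grep the copied jars for it; a sketch assuming unzip is installed:

for j in /root/flume/lib/hadoop-common-*.jar; do
  unzip -l "$j" | grep -q 'org/apache/hadoop/io/SequenceFile' && echo "SequenceFile found in $j"
done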
03-09-2016
07:22 AM
3 Kudos
org.apache.flume.sink.DefaultSinkFactory.create:42) - Creating instance of sink: hdfs-sink, type: hdfs
09 Mar 2016 02:07:33,594 ERROR [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:145) - Failed to start agent because dependencies were not found in classpath. Error follows.
java.lang.NoClassDefFoundError: org/apache/hadoop/io/SequenceFile$CompressionType
at org.apache.flume.sink.hdfs.HDFSEventSink.configure(HDFSEventSink.java:239)
at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:413)
at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:98)
at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.io.SequenceFile$CompressionType
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
Labels:
- Apache Flume
- Apache Hadoop
03-07-2016
05:00 AM
It was a network issue. I just ran the command below:

iptables --flush

Now the metrics are showing in the dashboard.
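Note: iptables --flush only clears the rules until the firewall service reloads them. On CentOS 7 a persistent fix would target the service itself; the commands below are a suggestion on my part, not what I actually ran:

iptables --flush    # immediate, but non-persistent
systemctl stop firewalld && systemctl disable firewalld    # persistent, if firewalld manages the rules
# alternatively, open only the required Ambari Metrics ports instead of disabling the firewall entirely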
03-04-2016
04:47 AM
1 Kudo
It was a network issue. I just ran the command below:

iptables --flush

Now the metrics are showing in the dashboard.
03-04-2016
03:53 AM
1 Kudo
I installed it on a fresh VM and now it is working fine. I also installed Ambari Metrics, but it is still not capturing any metrics.
03-03-2016
06:18 AM
1 Kudo
While installing from Ambari, it points to the HDP 2.3.4.0 version only. Is there any other method to upgrade it now? I uploaded the error log, Ambari log, and output log in my main question.
03-03-2016
05:57 AM
1 Kudo
var-lib-ambari-agent-data-output-91.txt
03-03-2016
05:56 AM
1 Kudo
ambari-server-02.txt
var-lib-ambari-agent-data-errors-91.txt
03-02-2016
06:56 PM
2 Kudos
I'm installing HDP-2.3.0.0-2557 on a single node through Ambari 2.2.0.0 on CentOS 7, and the installation fails during Restart App Timeline Server. Below is the log.

stderr: /var/lib/ambari-agent/data/errors-91.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 147, in <module>
    ApplicationTimelineServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 524, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 44, in start
    service('timelineserver', action='start')
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/service.py", line 79, in service
    try_sleep=1,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'' returned 1.
/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid PID TTY TIME CMD stdout: /var/lib/ambari-agent/data/output-91.txt 2016-03-02 10:24:11,693 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.0.0-2557 2016-03-02 10:24:11,693 - Checking if need to create versioned conf dir /etc/hadoop/2.3.0.0-2557/0 2016-03-02 10:24:11,694 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1} 2016-03-02 10:24:11,713 - call returned (1, '/etc/hadoop/2.3.0.0-2557/0 exist already', '') 2016-03-02 10:24:11,713 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False} 2016-03-02 10:24:11,732 - checked_call returned (0, '/usr/hdp/2.3.0.0-2557/hadoop/conf -> /etc/hadoop/2.3.0.0-2557/0') 2016-03-02 10:24:11,732 - Ensuring that hadoop has the correct symlink structure 2016-03-02 10:24:11,732 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2016-03-02 10:24:11,820 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.0.0-2557 2016-03-02 10:24:11,820 - Checking if need to create versioned conf dir /etc/hadoop/2.3.0.0-2557/0 2016-03-02 10:24:11,820 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1} 2016-03-02 10:24:11,841 - call returned (1, '/etc/hadoop/2.3.0.0-2557/0 exist already', '') 2016-03-02 10:24:11,841 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False} 2016-03-02 10:24:11,860 - checked_call returned (0, '/usr/hdp/2.3.0.0-2557/hadoop/conf -> /etc/hadoop/2.3.0.0-2557/0') 2016-03-02 10:24:11,860 - Ensuring that hadoop has the correct symlink structure 2016-03-02 10:24:11,860 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2016-03-02 10:24:11,861 - Group['hadoop'] {} 2016-03-02 10:24:11,862 - Group['users'] {} 2016-03-02 10:24:11,862 - User['zookeeper'] {'gid': 'hadoop', 'groups': [u'hadoop']} 2016-03-02 10:24:11,863 - User['ams'] {'gid': 'hadoop', 'groups': [u'hadoop']} 2016-03-02 10:24:11,863 - User['ambari-qa'] {'gid': 'hadoop', 'groups': [u'users']} 2016-03-02 10:24:11,864 - User['hdfs'] {'gid': 'hadoop', 'groups': [u'hadoop']} 2016-03-02 10:24:11,864 - User['yarn'] {'gid': 'hadoop', 'groups': [u'hadoop']} 2016-03-02 10:24:11,865 - User['mapred'] {'gid': 'hadoop', 'groups': [u'hadoop']} 2016-03-02 10:24:11,865 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555} 2016-03-02 10:24:11,867 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'} 2016-03-02 10:24:11,872 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if 2016-03-02 10:24:11,872 - Group['hdfs'] {'ignore_failures': False} 2016-03-02 10:24:11,872 - User['hdfs'] {'ignore_failures': False, 'groups': [u'hadoop', u'hdfs']} 2016-03-02 10:24:11,873 - Directory['/etc/hadoop'] {'mode': 0755} 2016-03-02 10:24:11,883 - 
File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'} 2016-03-02 10:24:11,884 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777} 2016-03-02 10:24:11,895 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'} 2016-03-02 10:24:11,901 - Skipping Execute[('setenforce', '0')] due to not_if 2016-03-02 10:24:11,902 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:11,903 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:11,903 - Changing owner for /var/run/hadoop from 1003 to root 2016-03-02 10:24:11,903 - Changing group for /var/run/hadoop from 1000 to root 2016-03-02 10:24:11,904 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:11,907 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'} 2016-03-02 10:24:11,908 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'} 2016-03-02 10:24:11,909 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644} 2016-03-02 10:24:11,915 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'} 2016-03-02 10:24:11,915 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755} 2016-03-02 10:24:11,916 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'} 2016-03-02 10:24:11,919 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'} 2016-03-02 10:24:11,922 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755} 2016-03-02 10:24:12,065 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.0.0-2557 2016-03-02 10:24:12,065 - Checking if need to create versioned conf dir /etc/hadoop/2.3.0.0-2557/0 2016-03-02 10:24:12,065 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1} 2016-03-02 10:24:12,086 - call returned (1, '/etc/hadoop/2.3.0.0-2557/0 exist already', '') 2016-03-02 10:24:12,086 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False} 2016-03-02 10:24:12,104 - checked_call returned (0, '/usr/hdp/2.3.0.0-2557/hadoop/conf -> /etc/hadoop/2.3.0.0-2557/0') 2016-03-02 10:24:12,104 - Ensuring that hadoop has the correct symlink structure 2016-03-02 10:24:12,104 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2016-03-02 10:24:12,123 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.3.0.0-2557 2016-03-02 10:24:12,123 - Checking if need to create 
versioned conf dir /etc/hadoop/2.3.0.0-2557/0 2016-03-02 10:24:12,123 - call['conf-select create-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1} 2016-03-02 10:24:12,142 - call returned (1, '/etc/hadoop/2.3.0.0-2557/0 exist already', '') 2016-03-02 10:24:12,142 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.3.0.0-2557 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False} 2016-03-02 10:24:12,162 - checked_call returned (0, '/usr/hdp/2.3.0.0-2557/hadoop/conf -> /etc/hadoop/2.3.0.0-2557/0') 2016-03-02 10:24:12,162 - Ensuring that hadoop has the correct symlink structure 2016-03-02 10:24:12,162 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf 2016-03-02 10:24:12,165 - Execute['export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh --config /usr/hdp/current/hadoop-client/conf stop timelineserver'] {'user': 'yarn'} 2016-03-02 10:24:12,236 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state'] {'owner': 'yarn', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,236 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,237 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,237 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,237 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,238 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,238 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,238 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,238 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'ignore_failures': True, 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,239 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2016-03-02 10:24:12,245 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml 2016-03-02 10:24:12,245 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2016-03-02 10:24:12,257 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...} 2016-03-02 10:24:12,263 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml 2016-03-02 10:24:12,263 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2016-03-02 10:24:12,296 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 
2016-03-02 10:24:12,302 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml 2016-03-02 10:24:12,303 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2016-03-02 10:24:12,332 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 1005 to yarn 2016-03-02 10:24:12,332 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2016-03-02 10:24:12,339 - Generating config: /usr/hdp/current/hadoop-client/conf/yarn-site.xml 2016-03-02 10:24:12,339 - File['/usr/hdp/current/hadoop-client/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2016-03-02 10:24:12,399 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...} 2016-03-02 10:24:12,405 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml 2016-03-02 10:24:12,405 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'} 2016-03-02 10:24:12,413 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 1003 to yarn 2016-03-02 10:24:12,413 - Directory['/mongodb/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,414 - Directory['/mongodb/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:12,414 - HdfsResource['/ats/done'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://dbnode1.dev.local:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'change_permissions_for_parents': True, 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0755} 2016-03-02 10:24:12,416 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmppOZiFG 2>/tmp/tmppGwVeJ''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,470 - call returned (0, '') 2016-03-02 10:24:12,471 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/done?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmpQSW9ms 2>/tmp/tmpQgpORU''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,524 - call returned (0, '') 2016-03-02 10:24:12,525 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmp7cw0fP 2>/tmp/tmpdL22N4''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,576 - call returned (0, '') 2016-03-02 10:24:12,577 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT 
'"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/done/?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmp1cM5ex 2>/tmp/tmptE9t39''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,627 - call returned (0, '') 2016-03-02 10:24:12,628 - HdfsResource['/ats/done/'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://dbnode1.dev.local:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0700} 2016-03-02 10:24:12,628 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/done/?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpMrhe9r 2>/tmp/tmphZCDYZ''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,681 - call returned (0, '') 2016-03-02 10:24:12,682 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/done/?op=SETPERMISSION&user.name=hdfs&permission=700'"'"' 1>/tmp/tmpzB10Zg 2>/tmp/tmpr5AniY''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,734 - call returned (0, '') 2016-03-02 10:24:12,735 - HdfsResource['/ats/active'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://dbnode1.dev.local:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'change_permissions_for_parents': True, 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0755} 2016-03-02 10:24:12,736 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/active?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmp_v3VuR 2>/tmp/tmppyk0MP''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,790 - call returned (0, '') 2016-03-02 10:24:12,791 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/active?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmpC_1I3y 2>/tmp/tmpFy8XD8''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,843 - call returned (0, '') 2016-03-02 10:24:12,843 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmp3GkTbJ 2>/tmp/tmpbO9rNK''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,895 - call returned (0, '') 2016-03-02 10:24:12,896 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/active/?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmpQrzvfB 2>/tmp/tmp9msR_O''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:12,950 - call returned (0, '') 2016-03-02 10:24:12,951 - HdfsResource['/ats/active/'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://dbnode1.dev.local:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'owner': 
'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 01777} 2016-03-02 10:24:12,952 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/active/?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpcNejNF 2>/tmp/tmpTpYnwR''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:13,015 - call returned (0, '') 2016-03-02 10:24:13,016 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://dbnode1.dev.local:50070/webhdfs/v1/ats/active/?op=SETPERMISSION&user.name=hdfs&permission=1777'"'"' 1>/tmp/tmpO5kfCm 2>/tmp/tmpm59TAS''] {'logoutput': None, 'quiet': False} 2016-03-02 10:24:13,066 - call returned (0, '') 2016-03-02 10:24:13,066 - HdfsResource[None] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'default_fs': 'hdfs://dbnode1.dev.local:8020', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf'} 2016-03-02 10:24:13,066 - File['/etc/hadoop/conf/yarn.exclude'] {'owner': 'yarn', 'group': 'hadoop'} 2016-03-02 10:24:13,069 - File['/etc/security/limits.d/yarn.conf'] {'content': Template('yarn.conf.j2'), 'mode': 0644} 2016-03-02 10:24:13,071 - File['/etc/security/limits.d/mapreduce.conf'] {'content': Template('mapreduce.conf.j2'), 'mode': 0644} 2016-03-02 10:24:13,075 - File['/usr/hdp/current/hadoop-client/conf/yarn-env.sh'] {'content': InlineTemplate(...), 'owner': 'yarn', 'group': 'hadoop', 'mode': 0755} 2016-03-02 10:24:13,075 - Writing File['/usr/hdp/current/hadoop-client/conf/yarn-env.sh'] because contents don't match 2016-03-02 10:24:13,075 - File['/usr/hdp/current/hadoop-yarn-timelineserver/bin/container-executor'] {'group': 'hadoop', 'mode': 02050} 2016-03-02 10:24:13,077 - File['/usr/hdp/current/hadoop-client/conf/container-executor.cfg'] {'content': Template('container-executor.cfg.j2'), 'group': 'hadoop', 'mode': 0644} 2016-03-02 10:24:13,077 - Directory['/cgroups_test/cpu'] {'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:13,078 - File['/usr/hdp/current/hadoop-client/conf/mapred-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'mode': 0755} 2016-03-02 10:24:13,080 - File['/usr/hdp/current/hadoop-client/conf/taskcontroller.cfg'] {'content': Template('taskcontroller.cfg.j2'), 'owner': 'hdfs'} 2016-03-02 10:24:13,081 - XmlConfig['mapred-site.xml'] {'owner': 'mapred', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...} 2016-03-02 10:24:13,087 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml 2016-03-02 10:24:13,087 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'mapred', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2016-03-02 10:24:13,112 - Changing owner for /usr/hdp/current/hadoop-client/conf/mapred-site.xml from 1004 to mapred 2016-03-02 10:24:13,113 - XmlConfig['capacity-scheduler.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...} 2016-03-02 10:24:13,119 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml 2016-03-02 10:24:13,119 - 
File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2016-03-02 10:24:13,127 - Changing owner for /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml from 1004 to hdfs 2016-03-02 10:24:13,127 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...} 2016-03-02 10:24:13,133 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml 2016-03-02 10:24:13,133 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2016-03-02 10:24:13,138 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'} 2016-03-02 10:24:13,138 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...} 2016-03-02 10:24:13,144 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml 2016-03-02 10:24:13,144 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2016-03-02 10:24:13,148 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...} 2016-03-02 10:24:13,155 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml 2016-03-02 10:24:13,155 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'} 2016-03-02 10:24:13,160 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml.example'] {'owner': 'mapred', 'group': 'hadoop'} 2016-03-02 10:24:13,160 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml.example'] {'owner': 'mapred', 'group': 'hadoop'} 2016-03-02 10:24:13,161 - File['/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid'] {'action': ['delete'], 'not_if': "ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'"} 2016-03-02 10:24:13,205 - File['/mongodb/hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK'] {'action': ['delete'], 'not_if': "ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'", 'ignore_failures': True, 'only_if': 'ls /mongodb/hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK'} 2016-03-02 10:24:13,251 - Skipping File['/mongodb/hadoop/yarn/timeline/leveldb-timeline-store.ldb/LOCK'] due to only_if 2016-03-02 10:24:13,251 - Execute['ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec && /usr/hdp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh --config /usr/hdp/current/hadoop-client/conf start timelineserver'] {'not_if': "ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'", 'user': 'yarn'} 2016-03-02 10:24:14,361 - Execute['ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls 
/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`''] {'not_if': "ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'", 'tries': 5, 'try_sleep': 1} 2016-03-02 10:24:14,468 - Retrying after 1 seconds. Reason: Execution of 'ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'' returned 1. /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid PID TTY TIME CMD 2016-03-02 10:24:15,530 - Retrying after 1 seconds. Reason: Execution of 'ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'' returned 1. /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid PID TTY TIME CMD 2016-03-02 10:24:16,593 - Retrying after 1 seconds. Reason: Execution of 'ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'' returned 1. /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid PID TTY TIME CMD 2016-03-02 10:24:17,657 - Retrying after 1 seconds. Reason: Execution of 'ambari-sudo.sh su yarn -l -s /bin/bash -c 'ls /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid && ps -p `cat /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid`'' returned 1. /var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid PID TTY TIME CMD
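Reading the failure: the liveness check Ambari retries (visible at the end of the log) succeeds only when the pid file exists and that process is alive. Here the file exists but ps -p prints only its header, so the Timeline Server dies right after start or left a stale pid file behind. A manual check, with the pid path taken from the log and the log-file pattern an assumption for this HDP layout:

PIDFILE=/var/run/hadoop-yarn/yarn/yarn-yarn-timelineserver.pid
# same test Ambari runs: pid file present AND that pid alive; exit status 1 means dead or stale
ls "$PIDFILE" && ps -p "$(cat "$PIDFILE")"
# the Timeline Server's own log usually says why it exited
tail -n 100 /var/log/hadoop-yarn/yarn/yarn-yarn-timelineserver-*.log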