
Configuration parameter 'yarn.timeline-service.leveldb-timeline-store.path' was not found in configurations dictionary!

Hi,

I just upgraded Ambari from 2.1 to 2.2; my HDP version is 2.1.0, all on CentOS 6.7. Since the upgrade I have a strange new issue starting the MapReduce2 and YARN services. Both fail with this message:

stderr: /var/lib/ambari-agent/data/errors-4003.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/resourcemanager.py", line 221, in <module>
    Resourcemanager().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/resourcemanager.py", line 110, in start
    import params
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/params.py", line 28, in <module>
    from params_linux import *
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/params_linux.py", line 153, in <module>
    ats_leveldb_lock_file = os.path.join(ats_leveldb_dir, "leveldb-timeline-store.ldb", "LOCK")
  File "/usr/lib64/python2.6/posixpath.py", line 67, in join
    elif path == '' or path.endswith('/'):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/config_dictionary.py", line 81, in __getattr__
    raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'yarn.timeline-service.leveldb-timeline-store.path' was not found in configurations dictionary!
stdout: /var/lib/ambari-agent/data/output-4003.txt
2016-01-21 14:05:32,174 - Using hadoop conf dir: /etc/hadoop/conf
2016-01-21 14:05:32,282 - Using hadoop conf dir: /etc/hadoop/conf
2016-01-21 14:05:32,284 - Group['hadoop'] {}
2016-01-21 14:05:32,286 - Group['users'] {}
2016-01-21 14:05:32,286 - User['mapred'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-01-21 14:05:32,288 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}
2016-01-21 14:05:32,289 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-01-21 14:05:32,290 - User['hdfs'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-01-21 14:05:32,291 - User['yarn'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-01-21 14:05:32,292 - User['ams'] {'gid': 'hadoop', 'groups': ['hadoop']}
2016-01-21 14:05:32,292 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-01-21 14:05:32,294 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-01-21 14:05:32,343 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-01-21 14:05:32,344 - User['hdfs'] {'ignore_failures': False}
2016-01-21 14:05:32,345 - User['hdfs'] {'ignore_failures': False, 'groups': ['hadoop']}
2016-01-21 14:05:32,345 - Directory['/etc/hadoop'] {'mode': 0755}
2016-01-21 14:05:32,346 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'hadoop', 'recursive': True}
2016-01-21 14:05:32,346 - Link['/etc/hadoop/conf'] {'not_if': 'ls /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
2016-01-21 14:05:32,393 - Skipping Link['/etc/hadoop/conf'] due to not_if
2016-01-21 14:05:32,407 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-01-21 14:05:32,408 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-01-21 14:05:32,423 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-01-21 14:05:32,477 - Skipping Execute[('setenforce', '0')] due to not_if
2016-01-21 14:05:32,477 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-01-21 14:05:32,480 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-01-21 14:05:32,480 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-01-21 14:05:32,481 - File['/var/lib/ambari-agent/lib/fast-hdfs-resource.jar'] {'content': StaticFile('fast-hdfs-resource.jar'), 'mode': 0644}
2016-01-21 14:05:32,538 - File['/etc/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-01-21 14:05:32,540 - File['/etc/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-01-21 14:05:32,541 - File['/etc/hadoop/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-01-21 14:05:32,549 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2016-01-21 14:05:32,549 - File['/etc/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-01-21 14:05:32,550 - File['/etc/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-01-21 14:05:32,554 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-01-21 14:05:32,602 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-01-21 14:05:32,845 - Using hadoop conf dir: /etc/hadoop/conf
2016-01-21 14:05:32,845 - Skipping get_hdp_version since hdp-select is not yet available
2016-01-21 14:05:32,846 - Using hadoop conf dir: /etc/hadoop/conf

Can you please recommend what I should do? Thank you very much.

Regards,

Pavel

1 ACCEPTED SOLUTION

I solved the issue with starting the YARN service by inserting the missing yarn.timeline-service property lines into the YARN configuration.

View solution in original post

6 REPLIES

Master Mentor
@Pavel Hladík

See the compatibility matrix http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.0.0/bk_Installing_HDP_AMB/content/_determine_sta...

HDP 2.1.0 is compatible but deprecated, so you should look into upgrading HDP.

I don't see any existing JIRA for this error message.

Check the YARN logs for more details.
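
A quick sketch of where to look, assuming the default HDP log locations (the directory and daemon log file names below are assumptions, not confirmed in this thread; adjust for your cluster):

# ResourceManager log on the node where YARN fails to start
# (assumed default HDP location; adjust if your YARN log directory differs)
less /var/log/hadoop-yarn/yarn/yarn-yarn-resourcemanager-$(hostname).log

# Application Timeline Server log, if the ATS component is installed
less /var/log/hadoop-yarn/yarn/yarn-yarn-timelineserver-$(hostname).log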

Master Mentor
@Pavel Hladík

The Timeline Server was introduced in Hadoop 2.0 to decouple job history from resource management. Here is the documentation for configuring the Timeline Server; choose the doc version appropriate for your cluster.
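
Once it is configured, a quick sanity check is to query the Timeline Server REST endpoint. A sketch, assuming the default webapp port 8188 (set by yarn.timeline-service.webapp.address) and a placeholder hostname:

# Query the Timeline Server REST API on its webapp port
# (8188 is the usual default; ats-host.example.com is a placeholder)
curl -s "http://ats-host.example.com:8188/ws/v1/timeline"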

Master Mentor

@Pavel Hladík Additionally, you will have to configure Timeline Server specific properties when going from HDP 2.1 to HDP 2.2 and 2.3. You can read more about it here.
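
If you prefer to script the change rather than click through the Ambari UI, Ambari ships a helper script for reading and writing configurations. A sketch, assuming the stock script location and a cluster named c1 (the cluster name and the property value are placeholders):

# Set a yarn-site property through Ambari's bundled configs.sh helper
# (cluster name "c1" and the path value are placeholders)
/var/lib/ambari-server/resources/scripts/configs.sh set localhost c1 yarn-site \
  "yarn.timeline-service.leveldb-timeline-store.path" "/hadoop/yarn/timeline"

Then restart YARN from Ambari so the new property is picked up.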

I solved the issue with starting the YARN service by inserting the missing yarn.timeline-service property lines into the YARN configuration.
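
For reference, a minimal sketch of the kind of property lines involved, added under Custom yarn-site in Ambari (or directly in yarn-site.xml). The path and hostname values below are placeholders, and the exact set of properties required depends on your HDP version:

<!-- Minimal sketch of timeline-service properties for yarn-site.xml.
     The value entries are placeholders, not confirmed defaults. -->
<property>
  <name>yarn.timeline-service.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.timeline-service.leveldb-timeline-store.path</name>
  <!-- Placeholder: a local directory on the Timeline Server host -->
  <value>/hadoop/yarn/timeline</value>
</property>
<property>
  <name>yarn.timeline-service.hostname</name>
  <!-- Placeholder: the host running the Application Timeline Server -->
  <value>ats-host.example.com</value>
</property>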

Master Mentor

@Pavel Hladík Look in your MapReduce logs. I would not change any permissions; Ambari will handle them as necessary.

Master Mentor

Excellent! Please accept the best answer to close out the thread. @Pavel Hladík