Member since: 02-16-2016
Posts: 21
Kudos Received: 8
Solutions: 0
06-14-2016
05:54 PM
1 Kudo
@Chris Nauroth, thank you very much! You saved me a lot of time, and your solution finally solved my problem. I had a feeling that the issue was with buffers, but I didn't guess that the value object may be reused in the mapper...
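For reference, a minimal sketch of the kind of change that addresses the reused value object (my assumption about what the fix looks like, not necessarily the exact code from the answer): decode only the valid portion of the BytesWritable instead of its whole backing array.

    // In the mapper: honour getLength() so stale bytes left over from a longer
    // previous record are not included in the decoded string.
    final String json = new String(value.getBytes(), 0, value.getLength(), "UTF-8");
    // Alternatively, at the cost of an extra copy per record:
    // final String json = new String(value.copyBytes(), "UTF-8");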
06-14-2016
05:22 PM
Hi, @Chris Nauroth. Here's the implementation of the mapper and reducer. As I've said, everything is trivial.

public class ArchiveMergeMapper extends Mapper<LongWritable, BytesWritable, Text, Text> {
    private final Logger log = Logger.getLogger(ArchiveMergeMapper.class);
    private Text outKey = new Text();

    @Override
    protected void map(LongWritable key, BytesWritable value,
            Mapper<LongWritable, BytesWritable, Text, Text>.Context context)
            throws IOException, InterruptedException {
        final String json = new String(value.getBytes(), "UTF-8");
        IMyInterface myObj = MyUtil.parseJson(json);
        if (myObj.getId() != null) {
            outKey.set(myObj.getId());
            context.write(outKey, new Text(json));
        } else {
            log.warn("Incorrect string: " + json);
        }
    }
}

public class ArchiveMergeReducer extends Reducer<Text, Text, LongWritable, Text> {
    private LongWritable keyLW = new LongWritable(1);

    @Override
    protected void reduce(Text key, Iterable<Text> values,
            Reducer<Text, Text, LongWritable, Text>.Context context)
            throws IOException, InterruptedException {
        if (values.iterator().hasNext()) {
            context.write(keyLW, values.iterator().next());
        }
    }
}
06-14-2016
02:56 PM
My job is rather simple. It just reads the input and emits everything to the output:

JobConf jobConf = new JobConf(getConf(), ArchiveMergeJob.class);
jobConf.setJobName(JOB_NAME);
Job job = Job.getInstance(jobConf);
job.setJarByClass(ArchiveMergeRunner.class);
SequenceFileInputFormat.addInputPath(job, new Path(args[0]));
job.setInputFormatClass(SequenceFileInputFormat.class);
job.setMapperClass(ArchiveMergeMapper.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setReducerClass(ArchiveMergeReducer.class);
job.setOutputKeyClass(LongWritable.class);
job.setOutputValueClass(Text.class);
TextOutputFormat.setOutputPath(job, new Path(args[1]));
return job.waitForCompletion(true) ? 0 : 1;
06-14-2016
02:38 PM
1 Kudo
Hi everybody.
Today I ran into a strange situation which is hard to explain.
I use MapReduce to read a sequence file, where every row represents a JSON entry. It was a big surprise for me that some rows, which are SHORTER than the previous ones, contain chunks of data from the previous rows.
For example: {"id":"121B5A8FE08B1F13E050007F010016F6","data":"foo1=603; foo2=31; foo14=foo15; foo9=0; foo10=foo39; foo3 foo28=foo29; foo30 foo28=foo31; foo3 foo26=foo29; foo27=foo32; foo25=foo32; foo19=180,000; foo44=foo24 ","docId":"EF989D8481C4EE9CE040600AB8000D36","foo21":"ins603bh","ts":1389341504951,"foo13":"603","docType":"foo17","operationType":"Modify"}
{"id":"121B5A8FE08C1F13E050007F010016F6","data":"foo1=613;foo3=foo47;foo40=foo35;foo41=4 foo45 foo46;foo36;foo37=0;foo38=foo20;foo33=foo20;foo34;foo12=foo42,foo19 foo43=715554;","docId":"EF9A4646E84E3C73E040600AB8003289","foo21":"64_613","ts":1389341548640,"foo13":"613","docType":"foo18","operationType":"Create"}51,"foo13":"603","docType":"foo17","operationType":"Modify"}
{"id":"121B5A8FE08D1F13E050007F010016F6","data":"foo1=619; foo3=foo5; foo6=33; foo7=foo8; foo9=1001; foo10=foo11; foo12=foo20; foo19=142,211,020","docId":"EF9A2796D8BC2F01E040600AB8002F81","foo21":"foo22","ts":1389341549845,"foo13":"619","docType":"foo23","operationType":"Create"}6E84E3C73E040600AB8003289","foo21":"64_613","ts":1389341548640,"foo13":"613","docType":"foo18","operationType":"Create"}51,"foo13":"603","docType":"foo17","operationType":"Modify"}
{"id":"121B5A8FE08E1F13E050007F010016F6","data":"foo1=619; foo3=foo5; foo6=33; foo7=foo8; foo9=0901; foo10=foo11; foo12=foo20; foo19=32,937","docId":"EF9A2796D8C02F01E040600AB8002F81","foo21":"foo22","ts":1389341549866,"foo13":"619","docType":"foo23","operationType":"Create"}ate"}6E84E3C73E040600AB8003289","foo21":"64_613","ts":1389341548640,"foo13":"613","docType":"foo18","operationType":"Create"}51,"foo13":"603","docType":"foo17","operationType":"Modify"}
{"id":"121B5A8FE08F1F13E050007F010016F6","data":"foo1=619; foo3=foo5; foo6=33; foo7=foo8; foo9=0202; foo10=foo39; foo12=foo20; foo19=80,000,000","docId":"EF9A2796D8C72F01E040600AB8002F81","foo21":"foo22","ts":1389341549895,"foo13":"619","docType":"foo23","operationType":"Create"}e":"Create"}ate"}6E84E3C73E040600AB8003289","foo21":"64_613","ts":1389341548640,"foo13":"613","docType":"foo18","operationType":"Create"}51,"foo13":"603","docType":"foo17","operationType":"Modify"}
{"id":"121B5A8FE0901F13E050007F010016F6","data":"foo1=619; foo3=foo5; foo6=M0; foo7=foo8; foo9=1001; foo10=foo11; foo12=foo20; foo19=142,211,020","docId":"EF9A2796D8CB2F01E040600AB8002F81","foo21":"foo22","ts":1389341549929,"foo13":"619","docType":"foo23","operationType":"Create"}6E84E3C73E040600AB8003289","foo21":"64_613","ts":1389341548640,"foo13":"613","docType":"foo18","operationType":"Create"}51,"foo13":"603","docType":"foo17","operationType":"Modify"} As you can see, starting from the second JSON item, we got incorrect JSON with appended text after the closing bracket '}': "51,"foo13":"603","docType":"foo17","operationType":"Modify"}" (which is actually is a chunk of the tail of the first record). It looks like there is some kind of byte buffer somewhere in mapreduce, which is used to read sequence file data, and it is not emptied after each line. And in case when the following line is shorter than the previous one, we get some chunks on old data. Please, can anyone help me with this issue?
Labels: Apache Hadoop
06-10-2016
10:08 AM
Thank you @Rajkumar Singh! I would only add that it's rather convenient for me to use the following approach:

yarn logs -applicationId application_1465548978834_0004 | grep my.foo.class > /home/user/log2.txt

With this command I can filter all the log entries for the class I want to analyze.
06-10-2016
09:58 AM
I have a YARN MapReduce application. In the mapper I use log4j to log certain cases. After the job execution is finished, I want to analyze the logs from ALL mappers.
As there are a lot of mappers in my job, log analysis becomes a rather painful task... Is there a way to write the logs from the mappers to some aggregated file, so that all the records are in one place? Or perhaps there's an approach to combine the log files from all the mappers of a concrete job?
Labels: Apache Hadoop
05-25-2016
12:29 PM
@Sindhu, here's what env returns:

[root@h1 hive]# env | grep LAN
LANG=ru_RU.UTF-8
05-25-2016
12:15 PM
Hi, I'm trying to install and run Hive on my test environment. During Hive startup I get a message that the Hive Metastore startup has failed. Here's the output:

stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 245, in <module>
HiveMetastore().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_metastore.py", line 60, in start
hive_service('metastore', action='start', upgrade_type=upgrade_type)
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_service.py", line 68, in hive_service
pid = get_user_call_output.get_user_call_output(format("cat {pid_file}"), user=params.hive_user, is_checked_call=False)[1]
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 58, in get_user_call_output
err_msg = Logger.filter_text(("Execution of '%s' returned %d. %s") % (command_string, code, all_output))
File "/usr/lib/python2.6/site-packages/resource_management/core/logger.py", line 101, in filter_text
text = text.replace(unprotected_string, protected_string)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 117: ordinal not in range(128)
stdout:
2016-05-25 14:58:18,934 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-05-25 14:58:18,935 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-05-25 14:58:18,935 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-25 14:58:18,955 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-05-25 14:58:18,956 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-25 14:58:18,976 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-05-25 14:58:18,976 - Ensuring that hadoop has the correct symlink structure
2016-05-25 14:58:18,976 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-25 14:58:19,065 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-05-25 14:58:19,065 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-05-25 14:58:19,065 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-25 14:58:19,085 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-05-25 14:58:19,086 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-25 14:58:19,105 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-05-25 14:58:19,105 - Ensuring that hadoop has the correct symlink structure
2016-05-25 14:58:19,105 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-25 14:58:19,107 - Group['spark'] {}
2016-05-25 14:58:19,108 - Group['hadoop'] {}
2016-05-25 14:58:19,108 - Group['users'] {}
2016-05-25 14:58:19,108 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,109 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,109 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-05-25 14:58:19,110 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,110 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-05-25 14:58:19,111 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,111 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-05-25 14:58:19,112 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,112 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,113 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,113 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,114 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,114 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,115 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-05-25 14:58:19,115 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-25 14:58:19,117 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-05-25 14:58:19,122 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-05-25 14:58:19,123 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-05-25 14:58:19,123 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-05-25 14:58:19,124 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-05-25 14:58:19,130 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-05-25 14:58:19,130 - Group['hdfs'] {}
2016-05-25 14:58:19,130 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-05-25 14:58:19,131 - FS Type:
2016-05-25 14:58:19,131 - Directory['/etc/hadoop'] {'mode': 0755}
2016-05-25 14:58:19,143 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2016-05-25 14:58:19,144 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-05-25 14:58:19,155 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-05-25 14:58:19,162 - Skipping Execute[('setenforce', '0')] due to not_if
2016-05-25 14:58:19,163 - Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:19,164 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:19,165 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:19,168 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2016-05-25 14:58:19,170 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2016-05-25 14:58:19,171 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,182 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-05-25 14:58:19,183 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-05-25 14:58:19,187 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-05-25 14:58:19,193 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-05-25 14:58:19,355 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.4.0.0-169
2016-05-25 14:58:19,355 - Checking if need to create versioned conf dir /etc/hadoop/2.4.0.0-169/0
2016-05-25 14:58:19,356 - call['conf-select create-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-05-25 14:58:19,377 - call returned (1, '/etc/hadoop/2.4.0.0-169/0 exist already', '')
2016-05-25 14:58:19,377 - checked_call['conf-select set-conf-dir --package hadoop --stack-version 2.4.0.0-169 --conf-version 0'] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-05-25 14:58:19,397 - checked_call returned (0, '/usr/hdp/2.4.0.0-169/hadoop/conf -> /etc/hadoop/2.4.0.0-169/0')
2016-05-25 14:58:19,397 - Ensuring that hadoop has the correct symlink structure
2016-05-25 14:58:19,397 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-05-25 14:58:19,436 - Directory['/etc/hive'] {'mode': 0755}
2016-05-25 14:58:19,437 - Directory['/usr/hdp/current/hive-metastore/conf'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
2016-05-25 14:58:19,438 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-05-25 14:58:19,452 - Generating config: /usr/hdp/current/hive-metastore/conf/mapred-site.xml
2016-05-25 14:58:19,452 - File['/usr/hdp/current/hive-metastore/conf/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-05-25 14:58:19,485 - File['/usr/hdp/current/hive-metastore/conf/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,485 - File['/usr/hdp/current/hive-metastore/conf/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,485 - File['/usr/hdp/current/hive-metastore/conf/hive-exec-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,486 - File['/usr/hdp/current/hive-metastore/conf/hive-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,486 - Directory['/usr/hdp/current/hive-metastore/conf/conf.server'] {'owner': 'hive', 'group': 'hadoop', 'recursive': True}
2016-05-25 14:58:19,486 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-05-25 14:58:19,494 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/mapred-site.xml
2016-05-25 14:58:19,494 - File['/usr/hdp/current/hive-metastore/conf/conf.server/mapred-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-05-25 14:58:19,525 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-default.xml.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,525 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh.template'] {'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,526 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-exec-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,526 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-log4j.properties'] {'content': ..., 'owner': 'hive', 'group': 'hadoop', 'mode': 0644}
2016-05-25 14:58:19,527 - XmlConfig['hive-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hive-metastore/conf/conf.server', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hive', 'configurations': ...}
2016-05-25 14:58:19,534 - Generating config: /usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml
2016-05-25 14:58:19,534 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-site.xml'] {'owner': 'hive', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2016-05-25 14:58:19,638 - File['/usr/hdp/current/hive-metastore/conf/conf.server/hive-env.sh'] {'content': InlineTemplate(...), 'owner': 'hive', 'group': 'hadoop'}
2016-05-25 14:58:19,638 - Directory['/etc/security/limits.d'] {'owner': 'root', 'group': 'root', 'recursive': True}
2016-05-25 14:58:19,641 - File['/etc/security/limits.d/hive.conf'] {'content': Template('hive.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-05-25 14:58:19,642 - File['/usr/lib/ambari-agent/DBConnectionVerification.jar'] {'content': DownloadSource('http://h1.sdd.d4.org:8080/resources/DBConnectionVerification.jar'), 'mode': 0644}
2016-05-25 14:58:19,642 - Not downloading the file from http://h1.sdd.d4.org:8080/resources/DBConnectionVerification.jar, because /var/lib/ambari-agent/tmp/DBConnectionVerification.jar already exists
2016-05-25 14:58:19,642 - File['/var/lib/ambari-agent/tmp/start_metastore_script'] {'content': StaticFile('startMetastore.sh'), 'mode': 0755}
2016-05-25 14:58:19,644 - Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]'] {'not_if': "ambari-sudo.sh su hive -l -s /bin/bash -c 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -info -dbType mysql -userName hive -passWord [PROTECTED]'", 'user': 'hive'}
2016-05-25 14:58:23,524 - Skipping Execute['export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType mysql -userName hive -passWord [PROTECTED]'] due to not_if
2016-05-25 14:58:23,525 - Directory['/var/run/hive'] {'owner': 'hive', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:23,525 - Directory['/var/log/hive'] {'owner': 'hive', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:23,526 - Directory['/var/lib/hive'] {'owner': 'hive', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
2016-05-25 14:58:23,527 - call['ambari-sudo.sh su hive -l -s /bin/bash -c 'cat /var/run/hive/hive.pid 1>/tmp/tmppLcf6w 2>/tmp/tmpw1ccfI''] {'quiet': False}
2016-05-25 14:58:23,565 - call returned (1, '')
I'm using HDP-2.4.0.0-169 stack with Ambari-2.2.2.0
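The traceback above fails inside Ambari's Python code while it handles command output produced under the Russian locale (LANG=ru_RU.UTF-8, as shown in the reply above). A minimal Python 2 sketch of that failure mode, assuming the root cause is the non-ASCII locale output rather than anything Hive-specific (my illustration, not the actual Ambari code):

# -*- coding: utf-8 -*-
# When a byte string holding UTF-8 Cyrillic text meets a unicode argument,
# Python 2 implicitly decodes the bytes with the default 'ascii' codec and
# fails on the first Cyrillic byte (0xd0), as in the traceback above.
command_output = 'cat: /var/run/hive/hive.pid: Нет такого файла или каталога'  # str of UTF-8 bytes (hypothetical localized message)
protected = u'[PROTECTED]'  # unicode

command_output.replace(protected, u'***')
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 ...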
Labels:
04-29-2016
09:47 AM
@Ignacio Pérez Torres, thanks, the solution helped me! I just needed to restart ambari-server.