Member since: 09-29-2015
Posts: 123
Kudos Received: 216
Solutions: 47

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 9128 | 06-23-2016 06:29 PM |
| | 3156 | 06-22-2016 09:16 PM |
| | 6255 | 06-17-2016 06:07 PM |
| | 2879 | 06-16-2016 08:27 PM |
| | 6868 | 06-15-2016 06:44 PM |
06-16-2016
08:27 PM
1 Kudo
@Eric Periard, there is a known issue right now in the way Ambari determines HA status for the NameNodes. Ambari uses a JMX query to each NameNode. The current implementation of that query fetches more data than is strictly necessary for checking HA status, which can delay processing of the query. The symptom is that the Ambari UI misreports the active/standby status of the NameNodes, as you described. The problem is intermittent, so a browser refresh is likely to show correct behavior. A fix is in development now for Ambari to use a lighter-weight JMX query that won't be as prone to this problem. This does not indicate a problem with the health of HDFS. As you noted, users are still able to read and write files. The problem is limited to the reporting of HA status displayed in Ambari.
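If you want to confirm the HA state directly, independent of Ambari, you can query each NameNode's JMX servlet yourself. A minimal sketch, assuming the default non-HTTPS NameNode web port of 50070 and a placeholder hostname:

# Ask a NameNode for its HA state via JMX (hostname and port are examples):
curl 'http://nn1.example.com:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeStatus'
# The JSON response includes a "State" field that reports "active" or "standby".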
06-16-2016
08:03 PM
@Zack Riesland, the S3N file system buffers to a local disk area before flushing data to the S3 bucket. I suspect that, depending on the amount of concurrent copy activity on the node (the number of DistCp mapper tasks actively copying to S3N at once), you might be hitting the limit of available disk space for that buffering. The directory used by S3N for this buffering is configurable via the fs.s3.buffer.dir property in core-site.xml. See below for the full specification of that property and its default value. I recommend reviewing this in your cluster to make sure that it points to a volume large enough to support the workload. You can also specify a comma-separated list of multiple paths if you want to spread the buffering across multiple disks; see the sketch after the property definition below.
<property>
<name>fs.s3.buffer.dir</name>
<value>${hadoop.tmp.dir}/s3</value>
<description>Determines where on the local filesystem the s3:/s3n: filesystem
should store files before sending them to S3
(or after retrieving them from S3).
</description>
</property>
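For example, a configuration that spreads the buffering across two local volumes might look like this (the paths are placeholders for whatever large volumes exist on your nodes):

<property>
  <name>fs.s3.buffer.dir</name>
  <!-- Comma-separated list of local directories used to buffer S3N uploads. -->
  <value>/data01/s3-buffer,/data02/s3-buffer</value>
</property>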
06-16-2016
04:13 PM
@Payel Datta, thank you for sharing the full stack trace. I expect this will turn out to be some kind of misconfiguration, either of the host network settings or of ZooKeeper's connection configuration. On the ZooKeeper side, I recommend reviewing the zoo.cfg files and the myid files. On each host, the myid file must match the corresponding address settings in zoo.cfg. For example, on the node with myid=1, look in zoo.cfg for the server.1 settings and make sure they have the correct host or IP address. Perhaps the addresses in zoo.cfg do not match a network interface on the host; if the settings in zoo.cfg refer to a hostname or IP address on which the host has no listening network interface, then the bind cannot succeed. On the networking side, you might try using basic tools like NetCat to see if it's possible to set up a listening server bound to port 3888. If that succeeds, then it's likely not a host OS networking problem. If it fails, then that's worth further investigation on the networking side, independent of ZooKeeper.
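As a concrete sketch of those checks (the paths and hostname are examples and may differ in your deployment):

# On the host whose myid file contains 1, confirm the matching server.1 entry:
cat /hadoop/zookeeper/myid                   # should print: 1
grep 'server.1' /etc/zookeeper/conf/zoo.cfg  # e.g. server.1=zk1.example.com:2888:3888

# Independently of ZooKeeper, confirm the host can listen on the election port:
nc -l 3888    # some netcat variants expect: nc -l -p 3888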
06-15-2016
09:30 PM
@Zack Riesland, yes, there is a -bandwidth option. For full documentation of the available command line options, refer to the Apache documentation on DistCp.
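For instance, a run that caps each map task at roughly 10 MB/s might look like this (the URIs and map count are placeholders):

# -bandwidth is specified in MB/second per map task; -m caps the number of simultaneous maps.
hadoop distcp -m 20 -bandwidth 10 hdfs://source-nn:8020/data/orc s3a://my-bucket/data/orc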
06-15-2016
07:11 PM
@Zack Riesland, your understanding of DistCp is correct. It performs a raw byte-by-byte copy from the source to the destination. If that data is compressed ORC at the source, then that's what it will be at the destination too.
According to AWS blog posts, Elastic MapReduce does support use of ORC. This is not a scenario I have tested myself, though. I'd recommend a quick end-to-end prototyping test to make sure it meets your requirements: DistCp a small ORC data set to S3, and then see if you can query it successfully from EMR.
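A sketch of that prototype step, with placeholder paths and bucket names:

# Copy a small ORC data set to S3 for testing (URIs are examples only):
hadoop distcp hdfs://source-nn:8020/warehouse/my_table_orc/dt=2016-06-01 s3a://my-bucket/prototype/my_table_orc/
# Then define a table over that S3 location on the EMR side and run a few representative queries.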
06-15-2016
06:49 PM
@Payel Datta, you won't need to declare the leader explicitly. The ZooKeeper ensemble negotiates a leader node automatically by itself. Do you have more details on that bind exception? Is it possible that something else on the host is already using that port?
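One quick way to check whether another process already holds the port from the bind exception (commands vary by OS; these are common on Linux, using ZooKeeper's election port 3888 as an example):

# Show any process currently listening on port 3888:
netstat -tlnp | grep 3888
# or, equivalently:
lsof -i :3888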
06-15-2016
06:44 PM
2 Kudos
@Zack Riesland, have you considered trying DistCp to copy the raw files from a source hdfs: URI to a destination s3n: or s3a: URI? It's possible this would move the data more quickly than the Hive INSERT INTO ... SELECT FROM approach. If it's still important to have Hive metadata referencing the table at the s3n: or s3a: location, then you could handle that by creating an external table after the DistCp completes.
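As a rough sketch of that follow-up step (the table name, columns, and bucket are placeholders, and the DDL assumes the copied files are ORC):

-- Point a Hive external table at the files DistCp placed in S3:
CREATE EXTERNAL TABLE my_table_s3 (
  id BIGINT,
  name STRING
)
STORED AS ORC
LOCATION 's3a://my-bucket/path/to/table/';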
06-15-2016
06:32 PM
@Thees Gieselmann, here are a few follow-ups. Note that by setting HADOOP_ROOT_LOGGER, you would set this for multiple Hadoop processes, not just the NameNode. I just wanted to make sure that was your goal. The duplication of arguments on the command line is a known bug, which will be fixed in Apache Hadoop 3. You can refer to configs at the path /etc/hadoop/conf. There are symlinks in place there that will redirect to the correct configuration directory based on the currently active HDP version, such as 2.3.4.0-3485. Unfortunately, I don't have any further experience with logstash to address your remaining question. Passing the logger configuration looks right at this point, so perhaps it's time to investigate what logstash itself provides for troubleshooting.
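For example, you can follow the symlink chain to see which versioned directory is active; the exact targets below are illustrative of an HDP 2.x layout and may differ on your nodes:

ls -l /etc/hadoop/conf
# /etc/hadoop/conf -> /usr/hdp/current/hadoop-client/conf
ls -l /usr/hdp/current/hadoop-client
# /usr/hdp/current/hadoop-client -> /usr/hdp/2.3.4.0-3485/hadoop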
06-14-2016
05:40 PM
1 Kudo
@Dennis Fridlyand, thank you for sharing the mapper and reducer code. I think I've spotted a bug in the mapper that I can help with.

protected void map(LongWritable key, BytesWritable value,
    Mapper<LongWritable, BytesWritable, Text, Text>.Context context) throws IOException,
    InterruptedException {
  final String json = new String(value.getBytes(), "UTF-8");

Here, value is an instance of BytesWritable. A BytesWritable is a wrapper over an underlying byte array, and that buffer may be reused to represent multiple different records. The length of the data within the buffer that is actually valid is tracked separately in a numeric size field; for more details, please see the source code for BytesWritable. By calling value.getBytes(), the mapper accesses the raw underlying buffer, which might still contain trailing data from a previous record. Only the data up to the length returned by value.getLength() is truly valid. The recommendation is to switch to the copyBytes() method, which copies only the currently valid bytes of the underlying buffer. I recommend making the following change in the mapper code:

final String json = new String(value.copyBytes(), "UTF-8");

Would you please try that?