Member since: 09-29-2015
Posts: 123
Kudos Received: 216
Solutions: 47
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 9064 | 06-23-2016 06:29 PM
 | 3113 | 06-22-2016 09:16 PM
 | 6178 | 06-17-2016 06:07 PM
 | 2833 | 06-16-2016 08:27 PM
 | 6635 | 06-15-2016 06:44 PM
06-14-2016
05:29 PM
1 Kudo
@Thees Gieselmann, Hadoop daemons may override the setting of hadoop.root.logger at process launch time by passing a -Dhadoop.root.logger argument.

hadoop.root.logger=INFO,console,logstash

If you have the above setting in log4j.properties, then it acts as the default if the process launch does not pass -Dhadoop.root.logger. However, if the argument is passed, then that argument acts as an override, and the value specified in log4j.properties is ignored. One way to check if this is happening is to look at the process table, such as by running this:

ps auxwww | grep NameNode | grep 'hadoop.root.logger'

Then, look at the full command line of the NameNode process. If it does not include your logstash appender, then this is the likely explanation for what you are seeing. To change the arguments passed at launch of the NameNode, edit hadoop-env.sh and find the variable HADOOP_NAMENODE_OPTS. Within the value for that environment variable, you can add the setting -Dhadoop.root.logger=INFO,console,logstash. If you add your logstash appender there and restart the NameNode, then it will pass that argument value down to the new process launch, and I expect it will activate your logstash appender. Also, the more common setting in HDP deployments is INFO,RFA, so it might be more appropriate for you to set the value to -Dhadoop.root.logger=INFO,RFA,logstash.
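If a concrete example helps, here is a minimal sketch of what that hadoop-env.sh change could look like. This assumes you place the line after the existing HADOOP_NAMENODE_OPTS definition so the current flags are preserved and the logger setting is appended at the end; if your existing definition already contains a -Dhadoop.root.logger flag, it is simpler to edit that occurrence in place instead:

export HADOOP_NAMENODE_OPTS="${HADOOP_NAMENODE_OPTS} -Dhadoop.root.logger=INFO,RFA,logstash"

After saving the file, restart the NameNode and re-run the ps check above to confirm the new argument shows up on the process command line.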
06-14-2016
05:08 PM
1 Kudo
@Payel Datta, these exceptions indicate that this member of the ZooKeeper ensemble cannot connect to port 3888 on the other 2 members of the ensemble.

2016-06-14 06:38:38,664 - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@383] - Cannot open channel to 2 at election address zookeeper-2/54.253.26.67:3888
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:404)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:840)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:795)
2016-06-14 06:38:38,665 - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@383] - Cannot open channel to 3 at election address zookeeper-3/54.66.23.197:3888
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:404)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:840)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:795)

Port 3888 is used for ZooKeeper's leader election protocol. Without a successful connection, the ensemble cannot elect a leader. Note that the message occurs for the connection to both of the other hosts: zookeeper-2/54.253.26.67 and zookeeper-3/54.66.23.197.

The following warning indicates that client connections were rejected because the ZooKeeper ensemble is not fully initialized. This is expected behavior if an ensemble cannot elect a leader and complete its initialization.

2016-06-14 06:38:40,477 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running

I recommend reviewing the ZooKeeper logs from all 3 nodes in the ensemble to try to find the root cause. If netstat reports that there is nothing listening on port 3888 on all 3 nodes, then look earlier in the logs to see if there was possibly a bind error when ZooKeeper tried to use port 3888. If nothing is easily found in the logs, try restarting all 3 ZooKeeper processes to get a fresh run. That might make it easier to see what is happening when each node tries to bind to port 3888.
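For the netstat check mentioned above, a simple hypothetical invocation like the following (run it on each of the 3 ZooKeeper hosts; the exact flags vary slightly between operating systems) will show whether anything is listening on the election port:

netstat -an | grep 3888

If no LISTEN entry appears on a node, look earlier in that node's ZooKeeper log for a bind error on port 3888, as described above.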
06-14-2016
04:50 PM
@Dennis Fridlyand, would you please also share the code for the ArchiveMergeMapper and ArchiveMergeReducer classes?
06-14-2016
05:43 AM
1 Kudo
@subacini balakrishnan, HTTP calls to both the NameNode and the DataNode will utilize SSL. Since it utilizes SSL for the data transfer performed with the DataNode, the bytes in transit are encrypted and cannot be read by a man-in-the-middle attacker. The way this works is that the HTTP client first initiates a call to the NameNode using either the "http" or "https" scheme. For a file read or write operation, the NameNode will select an appropriate DataNode and send an HTTP 302 redirect response back to the client telling it to reconnect to that DataNode to complete its request. When the NameNode performs this redirect, it detects the scheme of the incoming call that was sent to it and preserves that scheme in the Location header of the HTTP 302 redirect response. Thus, for a request originating at the NameNode via "http", the redirection will point to an "http" URL on a DataNode, and for a request originating at the NameNode via "https", the redirection will point to an "https" URL on a DataNode.
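If you want to observe this behavior directly, a hypothetical check with curl would look something like the following (the hostname, port, and file path are placeholders; the HTTPS port comes from your dfs.namenode.https-address setting, and authentication options are omitted for brevity):

curl -i -k "https://<namenode-host>:<https-port>/webhdfs/v1/<path-to-file>?op=OPEN"

The -i flag prints the response headers, so you can confirm that the Location header in the redirect points to an https:// URL on a DataNode, preserving the scheme of the original request. The -k flag skips certificate verification and is only appropriate for quick testing.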
06-14-2016
05:32 AM
1 Kudo
@Alexander Yau, the error shown here is caused by a mismatch between the value class configured for the job at job submission time and what the reducer is attempting to write for the job output. The exception text indicates it expects IntWritable, but instead received an instance of MapWritable.

java.io.IOException: wrong value class: class org.apache.hadoop.io.MapWritable is not class org.apache.hadoop.io.IntWritable

At job submission time, the output class is set to IntWritable:

job.setOutputValueClass(IntWritable.class);

However, the reducer class parameterizes the output value type to MapWritable:

public static class IntSumReducer extends Reducer<Text, IntWritable, Text, MapWritable> {

Likewise, the reducer logic writes a MapWritable instance to the context:

private MapWritable result = new MapWritable();
...
result.put(myurl, new IntWritable(sum));
context.write(mykey, result);

To fix this error, you'll need to set up the job submission and the reducer to use the same output value class. Judging from the description you gave for what you're trying to achieve with this job, it sounds like you want MapWritable for the outputs. Therefore, I recommend testing again with the line of code from the job submission changed to this:

job.setOutputValueClass(MapWritable.class);
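If it helps to see the corrected pieces together, here is a minimal sketch of a reducer whose declared output value type matches MapWritable, to be paired with job.setOutputValueClass(MapWritable.class) in the driver. It is shown as a top-level class for readability (yours is a nested static class), and the "count" map key is a placeholder for illustration, not taken from your code:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class IntSumReducer extends Reducer<Text, IntWritable, Text, MapWritable> {

    private final MapWritable result = new MapWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        // The value written here is a MapWritable, which must match the class
        // passed to job.setOutputValueClass at job submission time.
        result.put(new Text("count"), new IntWritable(sum));
        context.write(key, result);
    }
}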
06-13-2016
06:38 PM
@Tom Ellis, you mentioned finding the SaslRpcClient class. That's a very important piece. This is the class that handles SASL authentication for any client-server interaction that uses Hadoop's common RPC framework. The core Hadoop daemons in HDFS and YARN, such as NameNode and ResourceManager, make use of this RPC framework. Many other services throughout the Hadoop ecosystem also use this RPC framework. Clients of those servers will use the SaslRpcClient class as the entry point for SASL negotiation. This is typically performed on connection establishment to a server, such as the first time a Hadoop process attempts an RPC to the NameNode or the ResourceManager. The exact service to use is negotiated between client and server at the beginning of the connection establishment, during the negotiation code that you mentioned finding. The service value will be different per Hadoop daemon, driven by the shortened principal name, e.g. "nn". However, you won't find anything in the Hadoop source code that explicitly references the TGS. Instead, the Hadoop code delegates to the GSS API provided by the JDK for the low-level implementation of the Kerberos protocol, including handling of the TGS. If you're interested in digging into that, the code is visible in the OpenJDK project. Here is a link to the relevant Java package in the OpenJDK 7 tree: http://hg.openjdk.java.net/jdk7u/jdk7u/jdk/file/f51368baecd9/src/share/classes/sun/security/jgss/krb5 Some of the most relevant classes there would be Krb5InitCredential and Krb5Context.
06-13-2016
06:59 AM
1 Kudo
@Sumit Nigam, answering your individual questions:

1. No, I expect SmartSense would not be able to analyze an HBase instance deployed via Slider at this time. SmartSense works by constructing a model of the cluster, including its configuration files, and then running a set of rules against those configuration files to generate recommendations. In the case of Slider, the HBase configuration files would reside inside its Slider Application Package, not the typical file system location. I don't believe SmartSense currently is equipped to inspect Slider application packages. @sheetal or @Paul Codding, could you please confirm (or deny) this?

2. SmartSense operates by running a set of rules against captured information of a cluster's configuration, including HDP component configuration files and host OS configuration. It does not perform an exhaustive capture of all logs in the cluster and execute rules against those logs. Some of the cases you described likely would be better served by runtime operational monitoring in Ambari.

3. SmartSense is capable of identifying and recommending use of secure mode and additional security best practices. There is also a rule that checks configured open file limits and makes recommendations if limits are not within an acceptable range.
06-13-2016
06:28 AM
2 Kudos
@Artem Ervits, the risk of executing as the yarn user relates to several statements from the Apache Hadoop documentation on Secure Mode. Specifically, the section on the NodeManager states the following:
For maximum security, this executor sets up restricted permissions and user/group ownership of local files and directories used by the containers such as the shared objects, jars, intermediate files, log files etc. Particularly note that, because of this, except the application owner and NodeManager, no other user can access any of the local files/directories including those localized as part of the distributed cache.
Therefore, by executing YARN containers as user "yarn", which is the same as the user running the NodeManager, the container process can get full access to localized file content. This would open a risk of users writing arbitrary application code that scans the local disk looking for localized files that potentially contain sensitive data, or even changing the contents of user-submitted executables to mount a code injection attack. It would also be possible to access files owned by the yarn user on HDFS.
06-13-2016
06:13 AM
@ScipioTheYounger, I expect this is similar to another question you asked.
https://community.hortonworks.com/questions/35574/switch-namenode-ha-zookeeper-access-from-no-securi.html

I'll repeat the same information here for simplicity. Change ha.zookeeper.acl in core-site.xml to this:

<property>
  <name>ha.zookeeper.acl</name>
  <value>sasl:nn:rwcda</value>
</property>

Then, you'd want to run the following to reformat ZooKeeper for NameNode HA, which would reinitialize the znode used by NameNode HA to coordinate automatic failover.

hdfs zkfc -formatZK -force

The tricky part, as you noticed, is getting that command to authenticate with SASL. The ZooKeeper and SASL guide in the Apache documentation discusses implementation and configuration of SASL in ZooKeeper in detail. For this particular command, you can use this procedure. First, create a JAAS configuration file at /etc/hadoop/conf/hdfs_jaas.conf:

Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=false
keyTab="/etc/security/keytabs/nn.service.keytab"
principal="nn/<HOST>@EXAMPLE.COM";
};

Note that the <HOST> value will be different depending on the NameNode hostnames in your environment. Likewise, you'll need to change EXAMPLE.COM to the correct Kerberos realm. Next, edit /etc/hadoop/conf/hadoop-env.sh, and add the following line to enable SASL when running the zkfc command.

export HADOOP_ZKFC_OPTS="-Dzookeeper.sasl.client=true -Dzookeeper.sasl.client.username=zookeeper -Djava.security.auth.login.config=/etc/hadoop/conf/hdfs_jaas.conf -Dzookeeper.sasl.clientconfig=Client ${HADOOP_ZKFC_OPTS}"

Then, run the "hdfs zkfc -formatZK -force" command.
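If you'd like to confirm the result afterwards, one hypothetical check (the zkCli.sh path, ZooKeeper host, and nameservice ID below are placeholders for your environment) is to open a ZooKeeper client session and inspect the ACL on the znode that NameNode HA uses:

/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server <zookeeper-host>:2181
getAcl /hadoop-ha/<nameservice-id>

After reformatting with the SASL settings above, the ACL on that znode should show the sasl scheme for the nn principal instead of open world access.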
06-13-2016
06:07 AM
1 Kudo
@Manoj Dhake, the error you mentioned comes from a Hadoop class named Path. This class is used widely throughout the Hadoop ecosystem to represent a path to a particular file or directory in a file system. Conceptually, it is similar to a URI with some additional validation and normalizing logic specific to the Hadoop ecosystem's expectations. As stated in the exception, it is illegal to attempt to make a Path from an empty string. This looks like something in the call sequence either failed to obtain the correct information to create a Path, or possibly dropped information. Is there a longer stack trace that accompanies the error? If so, then I recommend looking at that stack trace in more detail. Depending on the classes and methods referenced in the stack trace, I expect it will pinpoint whether the issue is happening in Hive or Oozie.
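As a small standalone illustration of the failure itself (a sketch for demonstration only, not taken from your Hive or Oozie job), constructing a Hadoop Path from an empty string produces exactly this error:

import org.apache.hadoop.fs.Path;

public class EmptyPathExample {
    public static void main(String[] args) {
        // Throws IllegalArgumentException: Can not create a Path from an empty string
        Path p = new Path("");
        System.out.println(p);
    }
}

In your case, the longer stack trace should show which Hive or Oozie class passed the empty string into the Path constructor, which is the best clue to the root cause.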