Member since: 07-16-2016
Posts: 13
Kudos Received: 0
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2416 | 09-18-2017 07:26 PM
 | 2242 | 06-19-2017 05:47 AM
 | 2133 | 06-19-2017 05:35 AM
10-18-2017
09:30 AM
The API call you provided does not restart only the components with stale configs; it actually restarts all components. It also has two typos. This version worked for me:
curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"RequestInfo":{"command":"RESTART","context":"Restart all required services","operation_level":"host_components"},"Requests/resource_filters":[{"hosts_predicate":"HostRoles/stale_configs=true"}]}' http://erie1.example.com:8080/api/v1/clusters/ErieCluster/requests
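In case it helps, a minimal sketch of how to follow up on the restart request; the request id `42` is a placeholder, the real id comes back in the response of the POST above:

```bash
# Hedged sketch: poll the Ambari request created by the POST above.
# "42" is a placeholder; use the id returned under "Requests" in the POST response.
curl -s -u admin:admin -H "X-Requested-By: ambari" \
  "http://erie1.example.com:8080/api/v1/clusters/ErieCluster/requests/42" \
  | grep -E '"request_status"|"progress_percent"'
```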
09-18-2017
07:26 PM
For anyone getting the same error: I have finally solved it. I had forgotten to set a couple of properties in hdfs-site.xml and core-site.xml, as you can see in this commit: https://github.com/Knappek/docker-hadoop-secure/commit/2214e8723048bc5403a006a57cbb9732d5cec838
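A hedged way to confirm that the daemons actually pick up changed values from hdfs-site.xml and core-site.xml; the property keys below are only illustrative and not necessarily the ones touched in the commit:

```bash
# Hedged sketch: print the value a Hadoop client resolves for a given key.
# The keys are examples, not confirmed to match the linked commit.
hdfs getconf -confKey dfs.data.transfer.protection
hdfs getconf -confKey hadoop.security.authentication
```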
08-02-2017
05:04 PM
Any help?
07-15-2017
04:13 PM
For testing purposes I have set up a one-node Kerberos-secured cluster. Now I am trying to start HBase in this cluster; ZooKeeper starts, but the HBase master gives me the following error (the hostname is phoenix.docker.com, as I further want to install Phoenix):
2017-07-11 23:37:06,304 INFO [master/phoenix.docker.com/172.21.0.3:16000] client.ZooKeeperRegistry: ClusterId read in ZooKeeper is null
2017-07-11 23:37:06,620 FATAL [phoenix:16000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/hbase":root:root:drwxr-xr-x
I am wondering why it tries to use the hbase user instead of the root user, since root is the user I start HBase with via the `start-hbase.sh` script. Anyway, when I manually create the HDFS dir `/hbase` and give permissions to the hbase user, I get the next error:
2017-07-11 23:43:11,512 INFO [Thread-66] hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1998)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1356)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
2017-07-11 23:43:11,527 INFO [Thread-66] hdfs.DFSClient: Abandoning BP-1176169754-172.21.0.3-1499814944202:blk_1073741837_1013
2017-07-11 23:43:11,557 INFO [Thread-66] hdfs.DFSClient: Excluding datanode 172.21.0.3:50010
2017-07-11 23:43:11,604 WARN [Thread-66] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
Checking the datanode logs, it seems there is a problem with the SASL connection:
2017-07-11 23:43:41,885 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Failed to read expected SASL data transfer protection handshake from client at /172.21.0.3:49322. Perhaps the client is running an older version of Hadoop which does not support SASL data transfer protection
Does anyone have an idea how to solve this? You can reproduce the error using my Docker project: https://github.com/Knappek/docker-phoenix-secure
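For reference, the manual workaround mentioned above (pre-creating `/hbase` and handing it to the hbase user) looked roughly like this; the keytab path and principal are assumptions based on a typical secure setup, not taken from the linked project:

```bash
# Hedged sketch: create the HBase root dir as the HDFS superuser and chown it to hbase.
# Keytab path and principal are illustrative assumptions, not from the linked Docker project.
kinit -kt /etc/security/keytabs/nn.service.keytab nn/phoenix.docker.com@EXAMPLE.COM
hdfs dfs -mkdir -p /hbase
hdfs dfs -chown hbase:hbase /hbase
hdfs dfs -ls /    # verify ownership of /hbase
```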
Labels:
- Apache Hadoop
- Apache HBase
06-19-2017
05:47 AM
I finally solved it myself: instead of using JSVC, I use SASL as described in the official Apache documentation and in the HWX docs. I've created a single-node Hadoop cluster using Docker, which is on GitHub and Docker Hub.
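A hedged way to double-check that the SASL-based setup (rather than JSVC and privileged ports) is what the cluster is actually running with; the property keys are the ones described in the Apache secure DataNode docs, and the expected values shown are just examples:

```bash
# Hedged sketch: inspect the properties that enable SASL data transfer protection.
hdfs getconf -confKey dfs.data.transfer.protection   # e.g. authentication / integrity / privacy
hdfs getconf -confKey dfs.http.policy                # HTTPS_ONLY is expected for SASL on unprivileged ports
hdfs getconf -confKey dfs.datanode.address           # should now be an unprivileged port (>1024)
```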
06-19-2017
05:35 AM
I finally solved it myself by re-building the container-executor binary from the source code manually. This post explains how to do that. Everything can also be seen in the Dockerfile of my single-node Kerberized Hadoop cluster Docker project on GitHub and Docker Hub.
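For reference, a sketch of what such a rebuild typically looks like; the module path, Maven flags, and output location are assumptions for a Hadoop 2.x source tree and are not taken from the linked Dockerfile:

```bash
# Hedged sketch: rebuild the native container-executor against the local glibc.
# Paths and flags are typical for Hadoop 2.x source trees, not verified against the linked project.
cd hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
mvn package -Pnative -DskipTests \
  -Dcontainer-executor.conf.dir=/usr/local/hadoop/etc/hadoop
# The rebuilt binary typically lands under target/native/target/usr/local/bin/container-executor;
# copy it over the shipped one and restore ownership and setuid permissions (chown root, chmod 6050).
```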
06-18-2017
02:47 PM
@Jay SenSharma, actually I don't really want to upgrade glibc, I just want to run the NodeManager in Kerberos-secured mode on CentOS 6.6. Upgrading glibc was just my own idea for solving this issue; I assume there should be another way to run the NodeManager with Kerberos enabled on CentOS 6.6. For the sake of completeness, here are my yarn-site.xml and container-executor.cfg. This is my yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.application.classpath</name>
<value>/usr/local/hadoop/etc/hadoop, /usr/local/hadoop/share/hadoop/common/*, /usr/local/hadoop/share/hadoop/common/lib/*, /usr/local/hadoop/share/hadoop/hdfs/*, /usr/local/hadoop/share/hadoop/hdfs/lib/*, /usr/local/hadoop/share/hadoop/mapreduce/*, /usr/local/hadoop/share/hadoop/mapreduce/lib/*, /usr/local/hadoop/share/hadoop/yarn/*, /usr/local/hadoop/share/hadoop/yarn/lib/*</value>
</property>
<property>
<description>
Number of seconds after an application finishes before the nodemanager's
DeletionService will delete the application's localized file directory
and log directory.
To diagnose Yarn application problems, set this property's value large
enough (for example, to 600 = 10 minutes) to permit examination of these
directories. After changing the property's value, you must restart the
nodemanager in order for it to have an effect.
The roots of Yarn applications' work directories is configurable with
the yarn.nodemanager.local-dirs property (see below), and the roots
of the Yarn applications' log directories is configurable with the
yarn.nodemanager.log-dirs property (see also below).
</description>
<name>yarn.nodemanager.delete.debug-delay-sec</name>
<value>600</value>
</property>
<property>
<name>yarn.resourcemanager.principal</name>
<value>rm/HOSTNAME@EXAMPLE.COM</value>
</property>
<property>
<name>yarn.resourcemanager.keytab</name>
<value>/etc/security/keytabs/rm.service.keytab</value>
</property>
<property>
<name>yarn.nodemanager.principal</name>
<value>nm/HOSTNAME@EXAMPLE.COM</value>
</property>
<property>
<name>yarn.nodemanager.keytab</name>
<value>/etc/security/keytabs/nm.service.keytab</value>
</property>
<property>
<name>yarn.nodemanager.container-executor.class</name>
<value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.path</name>
<value>/usr/local/hadoop/bin/container-executor</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.group</name>
<value>root</value>
</property>
<property>
<name>yarn.timeline-service.principal</name>
<value>yarn/HOSTNAME@EXAMPLE.COM</value>
</property>
<property>
<name>yarn.timeline-service.keytab</name>
<value>/etc/security/keytabs/yarn.service.keytab</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.timeline-service.http-authentication.type</name>
<value>kerberos</value>
</property>
<property>
<name>yarn.timeline-service.http-authentication.kerberos.principal</name>
<value>HTTP/HOSTNAME@EXAMPLE.COM</value>
</property>
<property>
<name>yarn.timeline-service.http-authentication.kerberos.keytab</name>
<value>/etc/security/keytabs/yarn.service.keytab</value>
</property>
</configuration>
The container-executor.cfg looks like this:
yarn.nodemanager.local-dirs=/usr/local/hadoop/nodemanager
yarn.nodemanager.linux-container-executor.group=root
yarn.nodemanager.log-dirs=/usr/local/hadoop/nodemanager
banned.users=bin
min.user.id=1000
Am I missing something? Could it be that I have a wrong container-executor binary? The binary seems to be compiled for RHEL 7, but I need one for RHEL 6. Can I download such a binary from somewhere manually?
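One hedged way to check whether the shipped binary is the problem; the binary path is the one configured in yarn-site.xml above, and the commands only inspect, they change nothing:

```bash
# Hedged sketch: list the glibc symbol versions the shipped container-executor requires.
objdump -T /usr/local/hadoop/bin/container-executor | grep -o 'GLIBC_[0-9.]*' | sort -u
# If GLIBC_2.14 shows up here, the binary was built against a newer glibc than CentOS 6.6 ships (2.12),
# and rebuilding it from source against the local glibc is one way out.
ldd /usr/local/hadoop/bin/container-executor
```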
06-18-2017
08:16 AM
I have a 1-node Kerberized Hadoop cluster on CentOS 6.6. Starting the NodeManager fails with this error:
container-executor: /lib64/libc.so.6: version `GLIBC_2.14' not found
OK, I did a little research and found that on RHEL 6, glibc 2.12 is the highest supported version. However, according to this StackExchange post it should be possible to run glibc 2.14 alongside 2.12. But I still get the same error. It seems that the `container-executor` binary either does not use the env variable `LD_LIBRARY_PATH`, or the installation of glibc 2.14 failed. How can I verify the installation? How can I start the NodeManager in Kerberos-secured mode?
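A hedged sketch of how one could verify the glibc side; the alternate install path /opt/glibc-2.14 and the binary path are assumptions, adjust to the actual locations. Note also that `container-executor` is a setuid binary, and the dynamic loader ignores `LD_LIBRARY_PATH` for setuid programs, which would explain why exporting it has no effect:

```bash
# Hedged sketch: verify what each libc provides.
strings /lib64/libc.so.6 | grep -c 'GLIBC_2.14'                 # 0 means the system libc lacks 2.14
strings /opt/glibc-2.14/lib/libc.so.6 | grep -c 'GLIBC_2.14'    # assumption: alternate install location
```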
Labels:
- Apache YARN
05-23-2017
08:09 PM
I have set up a single-node Hadoop cluster (not HDP) where I want to enable Kerberos. The NameNode and secondary NameNode start fine, but the DataNode does not start. I try to start it via JSVC. When executing "start-secure-dns.sh" to start the DataNode in secure mode, I don't get any output and nothing is written to the DataNode log file. But I can see the following in "jsvc.err":
23/05/2017 01:12:59 1279 jsvc.exec error: Cannot find daemon loader org/apache/commons/daemon/support/DaemonLoader
23/05/2017 01:12:59 1257 jsvc.exec error: Service exit with a return value of 1
I have installed JSVC via the package manager with "yum install jsvc" and I export JSVC_HOME properly (I verified this by setting a wrong JSVC_HOME, in which case the start script fails with an error). Did I install jsvc incorrectly?
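A hedged first check for the "Cannot find daemon loader" error: that class lives in the Apache commons-daemon jar, so jsvc can only find it if the jar is on the classpath the start script passes to jsvc. The package and jar locations below are typical yum paths, not verified for this setup:

```bash
# Hedged sketch: locate the commons-daemon jar and the jsvc binary installed by yum.
rpm -ql apache-commons-daemon-jsvc 2>/dev/null || rpm -ql jsvc 2>/dev/null
ls /usr/share/java/*commons-daemon*.jar 2>/dev/null
which jsvc
# If the jar exists, make sure the classpath used for the secure DataNode
# (e.g. HADOOP_CLASSPATH in hadoop-env.sh) includes it before re-running start-secure-dns.sh.
```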
Labels:
- Apache Hadoop
05-23-2017
08:08 PM
I have set up a single-node Hadoop cluster (not HDP) where I want to enable Kerberos. The NameNode and secondary NameNode start fine, but the DataNode does not start.
I try to start it via JSVC. When executing "start-secure-dns.sh" to start the DataNode in secure mode, I don't get any output and nothing is written to the DataNode log file. But I can see the following in "jsvc.err":
23/05/2017 01:12:59 1279 jsvc.exec error: Cannot find daemon loader org/apache/commons/daemon/support/DaemonLoader
23/05/2017 01:12:59 1257 jsvc.exec error: Service exit with a return value of 1
I have installed JSVC via the package manager with "yum install jsvc" and I export JSVC_HOME properly (I verified this by setting a wrong JSVC_HOME, in which case the start script fails with an error). Did I install jsvc incorrectly?
Labels:
- Apache Hadoop
- Apache Spark