Member since: 04-24-2019
Posts: 8
Kudos Received: 1
Solutions: 0
01-12-2021
12:25 AM
1 Kudo
I want to obtain credentials by creating a support case at https://my.cloudera.com. But when I click "Support cases" at https://my.cloudera.com/support.html, the next page shows: "Access to page is restricted. Accessing the requested page requires special permissions." Can someone help me? Thanks.
Labels:
- Hortonworks Data Platform (HDP)
04-17-2020
10:03 PM
You can set the SPNEGO auth in the configuration, for example:

configuration.set("yarn.nodemanager.webapp.spnego-principal", "HTTP/_HOST@DEMO.CN");
configuration.set("yarn.resourcemanager.webapp.spnego-principal", "HTTP/_HOST@DEMO.CN");

Because of the cache files in the cluster, the job can fetch data from other nodes while the reducer is running, so setting the YARN web auth is needed. So your full auth config is:

package demo.utils;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class Auth {

    private final String keytab;

    public Auth(String keytab) {
        this.keytab = keytab;
    }

    public void authorization(Configuration configuration) {
        // Point the JVM at the Kerberos client configuration.
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
        configuration.set("hadoop.security.authentication", "Kerberos");
        configuration.set("fs.defaultFS", "hdfs://m1.DEMO.CN");
        // Service principals for the NameNode, NodeManagers and ResourceManager.
        configuration.set("dfs.namenode.kerberos.principal.pattern", "nn/*@DEMO.CN");
        configuration.set("yarn.nodemanager.principal", "nm/_HOST@DEMO.CN");
        configuration.set("yarn.resourcemanager.principal", "rm/_HOST@DEMO.CN");
        // SPNEGO principals for the YARN web endpoints, needed when the job
        // fetches cached data from other nodes during the reduce phase.
        configuration.set("yarn.nodemanager.webapp.spnego-principal", "HTTP/_HOST@DEMO.CN");
        configuration.set("yarn.resourcemanager.webapp.spnego-principal", "HTTP/_HOST@DEMO.CN");
        try {
            UserGroupInformation.setConfiguration(configuration);
            // Log in from the keytab; the principal must match the keytab's entries.
            UserGroupInformation.loginUserFromKeytab("user@DEMO.CN", this.keytab);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
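For reference, a minimal sketch of how the class above might be called from a driver before any HDFS or YARN access; the keytab path here is only an assumption for illustration:

import org.apache.hadoop.conf.Configuration;

import demo.utils.Auth;

public class AuthDemo {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Hypothetical keytab path; use the one issued for your principal.
        new Auth("/etc/security/keytabs/user.keytab").authorization(conf);
        // From here on, HDFS/YARN calls run as user@DEMO.CN.
    }
}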
05-21-2019
07:13 AM
Environment:
- Ambari 2.7
- HDP 3.1
- HDF 3.4
- UTC/GMT +8 (Asia/Shanghai)

When I use Log Search, the service logs are generated in local time (UTC/GMT +8), but the ` logtime ` field value stored in Solr is UTC time; no time transformation was done. I have to shift the current time by +8 hours to see the logs in Log Search, which I think is a false indication. I verified the Log Search JSON input configuration against a log entry, and the result shows the ` logtime ` timestamp is correct. I think this is a Logfeeder problem, but I don't know how to change it. If you know, thank you for helping me.
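To make the mismatch concrete, a small sketch (assuming the raw log timestamps carry no zone information) showing that the same wall-clock string is 8 hours apart depending on whether it is interpreted as Asia/Shanghai or as UTC, which matches the offset I have to compensate for:

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class LogtimeDemo {
    public static void main(String[] args) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss,SSS");
        LocalDateTime raw = LocalDateTime.parse("2019-05-21 07:13:00,000", fmt);

        // Interpreted as Asia/Shanghai: what the log line actually means.
        long asShanghai = raw.atZone(ZoneId.of("Asia/Shanghai")).toInstant().toEpochMilli();
        // Interpreted as UTC: what a zone-unaware feeder would store in Solr.
        long asUtc = raw.toInstant(ZoneOffset.UTC).toEpochMilli();

        System.out.println((asUtc - asShanghai) / 3_600_000 + " hours apart"); // 8 hours apart
    }
}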
Labels:
- Apache Solr
04-29-2019
01:14 PM
After I upgraded from HDP 2.6.5 (Ambari 2.6.2) to HDP 3.1.0 (Ambari 2.7.3), I don't know why this configuration appears in the HDFS core-site.xml:

<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>${clusterHostInfo/webhcat_server_host|append(core-site/hadoop.proxyuser.HTTP.hosts, \\,, true)}</value>
</property>

This configuration leads to a startup error when YARN is used. At first I manually modified the value in core-site.xml, but while I was reinstalling other services, whenever one of them changed core-site.xml the value came back when I restarted the service. Finally, I deleted Hive, HBase, YARN, MapReduce2 and ZooKeeper, cleared the relevant local data directories and the relevant directories on HDFS, and reinstalled. The problem still occurred with Hive. I am really helpless. I have been looking through the HDP security documents, but I have never seen this parameter described. By accidentally clicking "Cluster Admin" > "Kerberos" > "ADVANCED" on the left of the Ambari dashboard, I found such a parameter in the Hive-related configuration. When I edit it and then restart, the configuration in core-site.xml also changes, so the original parameter was configured there. But the problem remained: a second attempt to uninstall and reinstall Hive failed, although after all affected services were restarted, Hive was up and running. I also found this parameter configured with a host name other than the Hive Metastore and HiveServer2 hosts. It is configured automatically, and I'm not sure how that works, but that is how I solved it. If anyone knows the reason, please let me know, thank you.

And I have another question: I don't find Hive under "Cluster Admin" > "Kerberos" > "GENERAL" > "Ambari Principals", but for Kerberos and AD there are Hive principals. It is possible that I actively clicked "Regenerate Keytabs" and checked "Only regenerate keytabs for missing hosts and components".
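For comparison, this is roughly what the property would look like after Ambari substitutes the template; the host name here is only an illustrative placeholder, not a value from my cluster:

<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>m1.DEMO.CN</value>
</property>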
04-25-2019
12:41 PM
Ambari 2.7, HDP 3.1.0, Kerberos enabled, using OpenLDAP.

When I add the Hive service, the YARN ResourceManager errors:

2019-04-24 13:41:15,621 WARN util.MachineList (MachineList.java:<init>(112)) - Invalid CIDR syntax : ${clusterHostInfo/webhcat_server_host|append(core-site/hadoop.proxyuser.HTTP.hosts
2019-04-24 13:41:15,622 WARN ipc.Server (Server.java:logException(2724)) - IPC Server handler 0 on 8141, call Call#0 Retry#0 org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocolPB.refreshSuperUserGroupsConfiguration from 192.168.10.2:35354
java.lang.IllegalArgumentException: Could not parse [${clusterHostInfo/webhcat_server_host|append(core-site/hadoop.proxyuser.HTTP.hosts]
at org.apache.commons.net.util.SubnetUtils.calculate(SubnetUtils.java:275)
at org.apache.commons.net.util.SubnetUtils.<init>(SubnetUtils.java:51)
at org.apache.hadoop.util.MachineList.<init>(MachineList.java:108)
at org.apache.hadoop.util.MachineList.<init>(MachineList.java:82)
at org.apache.hadoop.util.MachineList.<init>(MachineList.java:74)
at org.apache.hadoop.security.authorize.DefaultImpersonationProvider.init(DefaultImpersonationProvider.java:98)
at org.apache.hadoop.security.authorize.ProxyUsers.refreshSuperUserGroupsConfiguration(ProxyUsers.java:75)
at org.apache.hadoop.security.authorize.ProxyUsers.refreshSuperUserGroupsConfiguration(ProxyUsers.java:85)
at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshSuperUserGroupsConfiguration(AdminService.java:505)
at org.apache.hadoop.yarn.server.resourcemanager.AdminService.refreshSuperUserGroupsConfiguration(AdminService.java:488)
at org.apache.hadoop.yarn.server.api.impl.pb.service.ResourceManagerAdministrationProtocolPBServiceImpl.refreshSuperUserGroupsConfiguration(ResourceManagerAdministrationProtocolPBServiceImpl.java:163)
at org.apache.hadoop.yarn.proto.ResourceManagerAdministrationProtocol$ResourceManagerAdministrationProtocolService$2.callBlockingMethod(ResourceManagerAdministrationProtocol.java:275)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

The cluster uses Kerberos. I uninstalled Knox after upgrading from 2.6. This happens whenever I install Hive/HBase, and I can't properly install Knox... May I ask what caused this? Has HCat been removed from the HDP 3.x documentation? Why would ` webhcat_server_host ` be configured automatically? Now that this problem has appeared, how should I solve it? Thank you for your help.
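Judging from the stack trace, MachineList treats any proxyuser host entry containing a "/" as a CIDR range and hands it to SubnetUtils, so the unresolved Ambari template above cannot be parsed. A minimal sketch (assuming hadoop-common on the classpath) reproducing the behavior; the host and subnet values are illustrative:

import org.apache.hadoop.util.MachineList;

public class MachineListDemo {
    public static void main(String[] args) {
        // Valid entries: host names, IP addresses, CIDR ranges.
        MachineList ok = new MachineList("m1.DEMO.CN,192.168.10.0/24");
        System.out.println(ok.includes("192.168.10.2")); // true

        // The unresolved template contains "/", so it is parsed as CIDR and
        // throws: java.lang.IllegalArgumentException: Could not parse [...]
        new MachineList("${clusterHostInfo/webhcat_server_host|append(core-site/hadoop.proxyuser.HTTP.hosts");
    }
}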
Labels:
- Apache HBase
- Apache Hive