Member since: 02-17-2015
Posts: 40
Kudos Received: 25
Solutions: 3
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2673 | 01-31-2017 04:47 AM |
 | 2611 | 07-26-2016 05:46 PM |
 | 7600 | 05-02-2016 10:12 AM |
05-24-2016
01:27 PM
1 Kudo
Hi @Jonas Straub, we configured a secure SolrCloud cluster successfully.
There is one MAJOR issue: https://issues.apache.org/jira/browse/RANGER-678 The Ranger plugins (hive, hdfs, kafka, hbase, solr) that generate audit logs are not able to send them to a secure Solr. The bug was reported 06/Oct/15 but has not yet been addressed. How do we get it addressed so people can start using a secure Solr for audit logging? Greetings, Alexander
05-24-2016
01:00 PM
1 Kudo
Great article! When testing the connection to Solr from Ranger, as @Jonas Straub mentions, /var/log/ranger/admin/xa_portal.log shows the URL. It tries to access ${Solr URL}/admin/collections, so you should enter a URL ending with /solr. Then the log gives an Authentication Required (401). Now that Solr is Kerberos-secured, the request from Ranger to fetch collections should also use a Kerberos ticket... Did someone manage to make the lookup from Ranger to Solr (with Kerberos) work?
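For reference, this is how we tested the same endpoint by hand. Hostname and port are placeholders for your environment, and curl must be built with GSS/SPNEGO support:
# obtain a ticket, then issue a SPNEGO-authenticated request to the Collections API:
kinit user@DOMAIN.COM
curl --negotiate -u : "http://solr-host:8983/solr/admin/collections?action=LIST"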
05-02-2016
11:03 AM
I assume you are using the Ambari Metrics System to collect statistics. You need to add a jar to the Flume classpath in order to make the charts work. Edit the 'Advanced flume-env' config in Ambari and make sure that the flume-env template contains: ...
if [ -e "/usr/lib/flume/lib/ambari-metrics-flume-sink.jar" ]; then
  export FLUME_CLASSPATH=$FLUME_CLASSPATH:/usr/lib/flume/lib/ambari-metrics-flume-sink.jar
fi
... Restart Flume; now you should be able to see the collected metrics.
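If the charts stay empty, a quick sanity check (my own, not from the Ambari docs) is to confirm the running Flume JVM actually picked up the jar:
# the sink jar should appear on the java command line of the Flume agent:
ps -ef | grep '[f]lume' | grep -o 'ambari-metrics-flume-sink.jar'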
05-02-2016
10:12 AM
Depending on your OS, the setting might be different than you expect. To check the actual value, become root, switch to the user hbase and print the actual limits:
# on the HBase Region Server:
sudo -i
su hbase
# print limits for the user hbase:
ulimit -a
On our RedHat 6 system a file 90-nproc.conf was deployed in /etc/security/limits.d/, which limits the number of processes per user to 1024. The user ambari received these limits, and when HBase is started from Ambari the limits are passed over somehow. As @rmaruthiyodan mentions, you can check the limits of the running processes:
grep 'open files' /proc/<Ambari Agent PID>/limits
grep 'open files' /proc/<Region Server PID>/limits
The HBase book config suggests: 'Set it to north of 10k'.
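If you need to raise the limits, here is a minimal sketch of an override file following that advice. The values are example picks, adjust them to your workload (e.g. /etc/security/limits.d/hbase.conf):
# raise open-file and process limits for the hbase user ('-' sets soft and hard)
hbase  -  nofile  32768
hbase  -  nproc   16384
A re-login (or service restart) is needed before the new limits take effect.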
04-29-2016
09:15 AM
1 Kudo
You can locate them through Ambari. When you (re)start a service, you can click through operations > operation > tasks and inspect the commands. If you look closely, the script being executed for restarting the NodeManager is at 08:53:13,592. The script is located at /usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh and is shipped with the distribution. Before this file is executed, users are created and config is pushed; the preparation of these steps happens on the Ambari server. You can search for the Python scripts there, for example the NodeManager scripts in /var/lib/ambari-server/resources/common-services/YARN/2.1.0.2.0/package/scripts/. If you change one of these files, don't forget to restart the ambari-server, because the files are cached. Note that after an ambari-server upgrade these changes will be reverted. Hope this helps.
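P.S. For reference, the concrete paths and the restart command (the stack-version path segment is the one from this post and will differ per install):
# daemon script shipped with the distribution:
ls -l /usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh
# Ambari-side Python scripts that prepare the command:
ls /var/lib/ambari-server/resources/common-services/YARN/2.1.0.2.0/package/scripts/
# restart so the cached scripts are re-read:
ambari-server restart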
04-21-2016
07:38 AM
8 Kudos
Hi Stefan Kupstaitis-Dunkler, We are using HDP-2.3.4.0 with Kafka and Spark Streaming (Scala & Python) on a (Kerberos + Ranger) secured cluster. You need to add a JAAS config location to the spark-submit command. We are using it in yarn-client mode. The kafka_client_jaas.conf file is sent as a resource with the --files option and is available in the YARN container. We did not get ticket renewal working yet...
spark-submit (all your stuff) \
--conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=kafka_client_jaas.conf" \
--files "your_other_files,kafka_client_jaas.conf,serviceaccount.headless.keytab" \
(rest of your stuff)
# --principal and --keytab do not work here; they conflict with the --files keytab.
# Spark will place the JAAS file in the YARN containers.
# The file references the keytab file and the principal for Kafka and ZooKeeper:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=false
useKeyTab=true
principal="serviceaccount@DOMAIN.COM"
keyTab="serviceaccount.headless.keytab"
renewTicket=true
storeKey=true
serviceName="kafka";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useTicketCache=false
useKeyTab=true
principal="serviceaccount@DOMAIN.COM"
keyTab="serviceaccount.headless.keytab"
renewTicket=true
storeKey=true
serviceName="zookeeper";
};
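One extra check that saved us some debugging: make sure the keytab really contains the principal the JAAS file names (klist ships with the Kerberos client tools):
# list the principals stored in the keytab:
klist -kt serviceaccount.headless.keytab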
If you need more info, feel free to ask. Greetings, Alexander
02-18-2015
01:16 AM
1 Kudo
I reset authorized_proxy_user_config to the default (hue=*) and it still works.
02-18-2015
01:13 AM
Hi, thanks for your quick response! This solution did indeed solve the problem. I had also tried changing the setting under Clusters > Impala: authorized_proxy_user_config (default: hue=*), which I changed to hue=*;yarn=*. Let me reset this to the default and test without my modifications.
02-17-2015
11:57 PM
Using Cloudera Manager we want to enable Impala on YARN. We did so by adding the Llama ApplicationMaster service, changing the min cores/mem to 0 and enabling cgroups. We restarted the whole cluster. (HDFS works, Hive on YARN works.) Problem:
Shell build version: Impala Shell v2.1.0-cdh5 (e48c2b4) built on Tue Dec 16 19:00:35 PST 2014
[Not connected] > connect data01;
Error connecting: TTransportException, TSocket read 0 bytes
Kerberos ticket found in the credentials cache, retrying the connection with a secure transport.
Connected to data01:21000
Server version: impalad version 2.1.0-cdh5 RELEASE (build e48c2b48c53ea9601b8f47a39373aa83ff7ca6e2)
[data01:21000] > use mydb;
Query: use mydb
[data01:21000] > select * from mytable limit 10;
Query: select * from mytable limit 10
ERROR: com.cloudera.llama.util.LlamaException: AM_CANNOT_REGISTER - cannot register AM 'application_1424245272359_0001' for queue 'root.alexanderbij' : java.lang.reflect.UndeclaredThrowableException
com.cloudera.llama.util.LlamaException: AM_CANNOT_REGISTER - cannot register AM 'application_1424245272359_0001' for queue 'root.alexanderbij' : java.lang.reflect.UndeclaredThrowableException
    at com.cloudera.llama.am.yarn.YarnRMConnector.register(YarnRMConnector.java:270)
    at com.cloudera.llama.am.cache.CacheRMConnector.register(CacheRMConnector.java:178)
    at com.cloudera.llama.am.impl.NormalizerRMConnector.register(NormalizerRMConnector.java:107)
    at com.cloudera.llama.am.impl.PhasingOutRMConnector.register(PhasingOutRMConnector.java:139)
    at com.cloudera.llama.am.impl.SingleQueueLlamaAM.start(SingleQueueLlamaAM.java:158)
    at com.cloudera.llama.am.impl.ThrottleLlamaAM.start(ThrottleLlamaAM.java:164)
    at com.cloudera.llama.am.impl.MultiQueueLlamaAM.getSingleQueueAMInfo(MultiQueueLlamaAM.java:169)
    at com.cloudera.llama.am.impl.MultiQueueLlamaAM.reserve(MultiQueueLlamaAM.java:286)
    at com.cloudera.llama.am.impl.GangAntiDeadlockLlamaAM.reserve(GangAntiDeadlockLlamaAM.java:205)
    at com.cloudera.llama.am.impl.ExpansionReservationsLlamaAM.reserve(ExpansionReservationsLlamaAM.java:131)
    at com.cloudera.llama.am.impl.APIContractLlamaAM.reserve(APIContractLlamaAM.java:144)
    at com.cloudera.llama.am.LlamaAMServiceImpl.Reserve(LlamaAMServiceImpl.java:132)
    at com.cloudera.llama.am.MetricLlamaAMService.Reserve(MetricLlamaAMService.java:140)
    at com.cloudera.llama.thrift.LlamaAMService$Processor$Reserve.getResult(LlamaAMService.java:512)
    at com.cloudera.llama.thrift.LlamaAMService$Processor$Reserve.getResult(LlamaAMService.java:497)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at com.cloudera.llama.server.ClientPrincipalTProcessor.process(ClientPrincipalTProcessor.java:47)
    at com.cloudera.llama.server.AuthzTProcessor.process(AuthzTProcessor.java:89)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.UndeclaredThrowableException
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1655)
    at com.cloudera.llama.am.yarn.YarnRMConnector.register(YarnRMConnector.java:239)
    ... 22 more
Caused by: com.cloudera.llama.util.LlamaException: AM_TIMED_OUT_STARTING_STOPPING - AM 'application_1424245272359_0001' timed out ('30000' ms) in state 'FAILED' transitioning to '[ACCEPTED]' while 'starting'
    at com.cloudera.llama.am.yarn.YarnRMConnector._monitorAppState(YarnRMConnector.java:429)
    at com.cloudera.llama.am.yarn.YarnRMConnector._initYarnApp(YarnRMConnector.java:294)
    at com.cloudera.llama.am.yarn.YarnRMConnector.access$400(YarnRMConnector.java:83)
    at com.cloudera.llama.am.yarn.YarnRMConnector$4.run(YarnRMConnector.java:243)
    at com.cloudera.llama.am.yarn.YarnRMConnector$4.run(YarnRMConnector.java:240)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
    ... 23 more
[data01:21000] >
Looking at the log in Cloudera Manager (Diagnostics):
PriviledgedActionException as:llama (auth:PROXY) via yarn/master01.mydomain.int@MYDOMAIN (auth:KERBEROS) cause:org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: yarn/master01.mydomain.int@MYDOMAIN is not allowed to impersonate llama
In the configuration of YARN Service-Wide > Proxy, all services including llama have a *. Looking at the running YARN ResourceManager process on master01 and inspecting its core-site.xml, I can confirm that these values are applied. Do you have any clue where the problem might be?
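For completeness, this is how I inspected the core-site.xml of the running ResourceManager; the Cloudera Manager process directory path is from memory and may differ on your install:
# CM materializes per-process configs under /var/run/cloudera-scm-agent/process/
grep -A 1 'proxyuser' /var/run/cloudera-scm-agent/process/*RESOURCEMANAGER*/core-site.xml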
02-17-2015
07:56 AM
1 Kudo
The suggestion from Daisuke could be the solution. When you forget to install the JCE you will see messages like:
INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8022: readAndProcess threw exception javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled)] from client 127.0.0.1. Count of bytes read: 0
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled)]
    at com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:159)
You can see more details in the Kerberos security logs using this startup parameter:
HADOOP_OPTS="-Dsun.security.krb5.debug=true"
Greetings, Alexander Bij
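P.S. A quick way to check whether the unlimited-strength JCE policy files are installed; this one-liner is mine, not from the thread (jrunscript ships with the JDK):
# prints 2147483647 with unlimited-strength JCE installed, 128 without:
jrunscript -e 'print(Packages.javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'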