Member since: 05-17-2016
Posts: 41
Kudos Received: 5
Solutions: 3
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4051 | 10-11-2017 05:31 AM |
| | 724 | 08-01-2017 01:16 PM |
| | 1040 | 05-17-2016 01:22 PM |
07-26-2018
06:16 AM
Hello @Matt Clarke, all the messages look similar; they just repeat. I can't see a different message.
2018-07-26 09:04:27,722 WARN [Timer-Driven Process Thread-3] o.a.n.r.util.SiteToSiteRestApiClient Failed to get controller from https://my-nifi-server:9443/nifi-api due to javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
2018-07-26 09:04:27,723 ERROR [Timer-Driven Process Thread-3] o.a.nifi.remote.StandardRemoteGroupPort RemoteGroupPort[name=RGP1,targets=https://my-nifi-server:9443] failed to communicate with https://my-nifi-server:9443 due to org.apache.nifi.remote.exception.UnreachableClusterException: Unable to refresh details from any of the configured remote instances.
2018-07-26 09:04:27,723 WARN [Timer-Driven Process Thread-2] o.a.n.r.util.SiteToSiteRestApiClient Failed to get controller from https://my-nifi-server:9443/nifi-api due to javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
2018-07-26 09:04:27,723 INFO [StandardProcessScheduler Thread-6] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled TailFile[id=7b70e94e-2b44-32a6-0000-000000000000] to run with 1 threads
2018-07-26 09:04:27,723 ERROR [Timer-Driven Process Thread-2] o.a.nifi.remote.StandardRemoteGroupPort RemoteGroupPort[name=RGP2,targets=https://my-nifi-server:9443] failed to communicate with https://my-nifi-server:9443 due to org.apache.nifi.remote.exception.UnreachableClusterException: Unable to refresh details from any of the configured remote instances.
2018-07-26 09:04:27,726 WARN [NiFi Site-to-Site Connection Pool Maintenance] o.a.n.r.util.SiteToSiteRestApiClient Failed to get controller from https://my-nifi-server:9443/nifi-api due to javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
2018-07-26 09:04:27,726 WARN [NiFi Site-to-Site Connection Pool Maintenance] o.a.n.r.util.SiteToSiteRestApiClient Failed to get controller from https://my-nifi-server:9443/nifi-api due to javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
2018-07-26 09:04:27,727 WARN [NiFi Site-to-Site Connection Pool Maintenance] o.apache.nifi.remote.client.PeerSelector org.apache.nifi.remote.client.PeerSelector@4f1c5017 Unable to refresh Remote Group's peers due to Received fatal alert: handshake_failure
2018-07-26 09:04:27,727 WARN [NiFi Site-to-Site Connection Pool Maintenance] o.apache.nifi.remote.client.PeerSelector org.apache.nifi.remote.client.PeerSelector@13a87254 Unable to refresh Remote Group's peers due to Received fatal alert: handshake_failure
2018-07-26 09:04:27,727 WARN [NiFi Site-to-Site Connection Pool Maintenance] o.a.n.r.util.SiteToSiteRestApiClient Failed to get controller from https://my-nifi-server:9443/nifi-api due to javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
2018-07-26 09:04:27,727 WARN [NiFi Site-to-Site Connection Pool Maintenance] o.apache.nifi.remote.client.PeerSelector org.apache.nifi.remote.client.PeerSelector@69c7fbb Unable to refresh Remote Group's peers due to Received fatal alert: handshake_failure
07-25-2018
02:59 PM
Hello, I have a NiFi server and MiNiFi agents running on clients, and I need to set up secure communication between the NiFi server and the MiNiFi agents. I created certificates and keystores with the NiFi Toolkit on the NiFi server and could establish a secure connection to the NiFi server via browser (HTTPS). Then I copied the keystore and truststore files to the agents and updated the security and NiFi URL parts of the MiNiFi config.yml.
Security Properties:
keystore: './conf/keystore.jks'
keystore type: 'JKS'
keystore password: 'mypassword'
key password: 'mypassword'
truststore: './conf/truststore.jks'
truststore type: 'JKS'
truststore password: 'mypassword'
ssl protocol: 'TLS'
Sensitive Props:
key:
algorithm: PBEWITHMD5AND256BITAES-CBC-OPENSSL
provider: BC
...
Remote Process Groups:
- id: a3889178-0571-3798-0000-000000000000
name: ''
url: https://my-nifi-server:9443
... When I restart minifi agent it gives this error; 2018-07-25 17:47:38,681 WARN [NiFi Site-to-Site Connection Pool Maintenance] o.a.n.r.util.SiteToSiteRestApiClient Failed to get controller from https://my-nifi-server:9443/nifi-api due to javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure What is wrong? Regards.
Labels:
- Apache MiNiFi
- Apache NiFi
06-28-2018
08:19 AM
Hello, I created keystore and truststore files from a .pfx file and did the settings in nifi.properties as below:
nifi.security.keystore=my_keystore_file
nifi.security.keystoreType=JKS
nifi.security.keystorePasswd=mypass
nifi.security.keyPasswd=mypass
nifi.security.truststore=my_truststore_file
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=mypass
I enabled HTTPS, disabled HTTP, etc. When I start NiFi there is no error in app.log or bootstrap.log and everything seems to work, but the UI doesn't load.
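For what it's worth, a UI that won't load despite clean logs is often a web-properties issue rather than a keystore issue. A minimal sketch of the HTTPS web settings in nifi.properties (host and port are placeholders; the HTTP entries are left empty when HTTPS is enabled):
# Hypothetical values; adjust host/port to your environment:
nifi.web.http.host=
nifi.web.http.port=
nifi.web.https.host=my-nifi-server
nifi.web.https.port=9443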
Labels:
- Apache NiFi
04-30-2018
02:56 PM
Hi @Geoffrey Shelton Okot, I decided to test with a local user. I also added a principal to Kerberos with the same name. I did the configurations from the URL that you mentioned before.
[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
admin = admin, admin
zptest = password, role1
user1 = user1, role1, role2
user2 = user2, role3
user3 = user3, role2
# Sample LDAP configuration, for user Authentication, currently tested for single Realm
[main]
### A sample for configuring Active Directory Realm
#activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealm
#activeDirectoryRealm.systemUsername = userNameA
#use either systemPassword or hadoopSecurityCredentialPath, more details in http://zeppelin.apache.org/docs/latest/security/shiroauthentication.html
#activeDirectoryRealm.systemPassword = passwordA
#activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/zeppelin.jceks
#activeDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COM
#activeDirectoryRealm.url = ldap://ldap.test.com:389
#activeDirectoryRealm.groupRolesMap = "CN=admin,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"admin","CN=finance,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"finance","CN=hr,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"hr"
#activeDirectoryRealm.authorizationCachingEnabled = false
### A sample PAM configuration
#pamRealm=org.apache.zeppelin.realm.PamRealm
#pamRealm.service=sshd
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
### If caching of user is required then uncomment below lines
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hour
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[roles]
role1 = *
role2 = *
role3 = *
admin = *
[urls]
# This section is used for url-based security.
# You can secure interpreter, configuration and credential information by urls. Comment or uncomment the below urls that you want to hide.
# anon means the access is anonymous.
# authc means Form based Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/version = anon
#/api/interpreter/** = authc, roles[admin]
#/api/configurations/** = authc, roles[admin]
#/api/credential/** = authc, roles[admin]
#/** = anon
/** = authc
I am logging in to Zeppelin with the zptest user, and the "whoami" command displays "zeppelin". The impersonation part of the advanced env is here:
#--- FOR IMPERSONATION -- START ----
ZEPPELIN_IMPERSONATE_USER=`echo ${ZEPPELIN_IMPERSONATE_USER} | cut -d "@" -f1`
export ZEPPELIN_IMPERSONATE_CMD='sudo -H -u zeppelin bash -c'
export SPARK_HOME=/usr/hdp/current/spark2-client/
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
export PYTHONPATH=$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH # This is version based config and needs to be changed based on the spark version
export SPARK_YARN_USER_ENV="PYTHONPATH=${PYTHONPATH}"
#--- FOR IMPERSONATION -- END ----
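Note that with ZEPPELIN_IMPERSONATE_CMD set as above, every notebook command runs as the fixed user zeppelin, which matches the "whoami" output reported here. For comparison, the form shown in the Zeppelin impersonation docs substitutes the logged-in user:
# Documented form: impersonate the authenticated user, not a hard-coded account
export ZEPPELIN_IMPERSONATE_CMD='sudo -H -u ${ZEPPELIN_IMPERSONATE_USER} bash -c'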
04-29-2018
06:58 PM
Hi, here it is:
[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / ...) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INISections
admin = admin, admin
user1 = user1, role1, role2
user2 = user2, role3
user3 = user3, role2
# Sample LDAP configuration, for user Authentication, currently tested for single Realm
[main]
### A sample for configuring Active Directory Realm
#activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealm
#activeDirectoryRealm.systemUsername = userNameA
#use either systemPassword or hadoopSecurityCredentialPath, more details in http://zeppelin.apache.org/docs/latest/security/shiroauthentication.html
#activeDirectoryRealm.systemPassword = passwordA
#activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://file/user/zeppelin/zeppelin.jceks
#activeDirectoryRealm.searchBase = CN=Users,DC=SOME_GROUP,DC=COMPANY,DC=COM
#activeDirectoryRealm.url = ldap://ldap.test.com:389
#activeDirectoryRealm.groupRolesMap = "CN=admin,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"admin","CN=finance,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"finance","CN=hr,OU=groups,DC=SOME_GROUP,DC=COMPANY,DC=COM":"hr"
#activeDirectoryRealm.authorizationCachingEnabled = false
### A sample for configuring LDAP Directory Realm
ldapRealm = org.apache.zeppelin.realm.LdapRealm
## search base for ldap groups (only relevant for LdapGroupRealm):
ldapRealm.contextFactory.environment[ldap.searchBase] = mysearchbase
ldapRealm.contextFactory.url = myldapserver
ldapRealm.userDnTemplate = myuserdntemplate
ldapRealm.contextFactory.authenticationMechanism = SIMPLE
ldapRealm.authorizationEnabled=true
ldapRealm.userSearchBase=myusersearchbase
ldapRealm.groupSearchBase=mygroupsearchbase
ldapRealm.userObjectClass=inetorgperson
ldapRealm.groupObjectClass=groupofnames
ldapRealm.memberAttribute=member
securityManager.realms = $ldapRealm
ldapRealm.rolesByGroup = "admingroup": "admin"
### A sample PAM configuration
#pamRealm=org.apache.zeppelin.realm.PamRealm
#pamRealm.service=sshd
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
### If caching of user is required then uncomment below lines
cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
securityManager.cacheManager = $cacheManager
securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hour
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[roles]
role1 = *
role2 = *
role3 = *
admin = *
[urls]
# This section is used for url-based security.
# You can secure interpreter, configuration and credential information by urls. Comment or uncomment the below urls that you want to hide.
# anon means the access is anonymous.
# authc means Form based Auth Security
# To enforce security, comment the line below and uncomment the next one
/api/version = anon
/** = authc
#/api/interpreter/** = authc, roles[admin]
#/api/configurations/** = authc, roles[admin]
#/api/credential/** = authc, roles[admin]
#/** = anon
04-29-2018
12:40 PM
Thanks @Geoffrey Shelton Okot, but this solution is for local OS users; I need something for LDAP users. I also need something that covers all interpreters; my problem is not specific to the shell interpreter.
04-28-2018
10:50 PM
Hi, I did the following steps for Zeppelin user impersonation, but it doesn't work. Am I missing something? Zeppelin has LDAP and Kerberos integration; the configuration is done in the Shiro settings and authentication works.
1. The sudo configuration for the zeppelin user is done in /etc/sudoers:
zeppelin ALL=(ALL) NOPASSWD: ALL
2. Write permission for other users on the Zeppelin log folder is OK; it is also configured in /var/lib/ambari-server/resources/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py
3. These lines are added to core-site.xml in HDFS:
hadoop.proxyuser.zeppelin.hosts=*
hadoop.proxyuser.zeppelin.groups=*
4. advanced-zeppelin-env is configured with the lines below:
ZEPPELIN_IMPERSONATE_USER=`echo ${ZEPPELIN_IMPERSONATE_USER} | cut -d "@" -f1`
ZEPPELIN_IMPERSONATE_CMD='sudo -H -u zeppelin bash -c'
5. Each interpreter's configuration is changed to "per user, isolated" and "User Impersonate" is selected (I tried shell and spark). After restarting Zeppelin I get the attached errors.
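A couple of quick sanity checks for steps 1 and 3 above (a sketch using standard sudo and hdfs commands):
# Confirm the NOPASSWD sudoers entry is in effect for the zeppelin user:
sudo -l -U zeppelin
# Confirm core-site.xml actually picked up the proxyuser settings:
hdfs getconf -confKey hadoop.proxyuser.zeppelin.hosts
hdfs getconf -confKey hadoop.proxyuser.zeppelin.groups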
Labels:
- Apache Zeppelin
04-18-2018
07:02 AM
Hello, is it possible to provide authentication for Jupyter notebooks via LDAP? I couldn't see anything in the official documents; there are a few GitHub projects.
04-10-2018
10:20 AM
Thanks @Geoffrey Shelton Okot. I already have a Zeppelin instance in a kerberized cluster. Should I do extra configuration for Kerberos authentication? I couldn't log in to the Zeppelin UI with a Kerberos principal.
04-10-2018
08:26 AM
Hello, is it possible to do authentication only with a Kerberos principal in a kerberized cluster (without using AD or LDAP)? Regards.
Labels:
- Apache Zeppelin
12-20-2017
11:49 AM
Hello, I am trying to filter a JSON stream with NiFi. My JSON schema is something like this:
{
"jsonarray":[
{
"attribute1":"AAA",
"attribute2":"BBB",
"attribute3":"CCC",
"element1_1":[
"xyz",
"xyz1",
....
],
"element1_2":[
"abc",
"abc1",
...
],
"element1_3":[
"123",
"1234",
...
]
},
{
"attribute1":"XXX",
"attribute2":"YYY",
"attribute3":"ZZZ",
"element2_1":[
"xyz5",
"xyz6",
....
],
"element2_2":[
"abc5",
"abc6",
...
],
"element2_3":[
"333",
"334",
...
]
},
...
...
]
}
For example, I want to filter on objects that contain "attribute1":"XXX"; I would then have this as the result:
{
"attribute1":"XXX",
"attribute2":"YYY",
"attribute3":"ZZZ",
"element2_1":[
"xyz5",
"xyz6",
....
],
"element2_2":[
"abc5",
"abc6",
...
],
"element2_3":[
"333",
"334",
...
]
}
...
Is it possible?
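For reference, the intended selection can be sketched with jq outside NiFi; inside NiFi the same result can be achieved with SplitJson on $.jsonarray, EvaluateJsonPath to extract attribute1, and RouteOnAttribute. A minimal sketch, assuming the schema above (input.json is a placeholder):
# Emit only the array objects whose attribute1 equals "XXX":
jq '.jsonarray[] | select(.attribute1 == "XXX")' input.json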
Labels:
- Apache NiFi
10-11-2017
05:31 AM
1 Kudo
Hello, I tried a few things: regenerated keytabs, checked for ticket issues, checked read access to the keytabs... There were no problems. I also tried to delete the problematic service, remove its ZooKeeper folder (I faced a "no authentication" error and could remove it using the super digest as explained here: https://community.hortonworks.com/articles/29900/zookeeper-using-superdigest-to-gain-full-access-to.html) and add it again, but the problem persisted. I resolved the issue by adding auth_to_local rules to the ZooKeeper environment: I added rules for the problematic services to SERVER_JVMFLAGS in the zookeeper-env template like this, then restarted ZooKeeper and the other related services.
-Dzookeeper.security.auth_to_local=RULE:[2:\$1@\$0](hbase@MY_REALM)s/.*/hbase/RULE:[2:\$1@\$0](infra-solr@MY_REALM)s/.*/infra-solr/RULE:[2:\$1@\$0](rm@MY_REALM)s/.*/rm/
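In the zookeeper-env template the flag ends up appended to SERVER_JVMFLAGS; a sketch of the resulting line (MY_REALM is a placeholder):
export SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Dzookeeper.security.auth_to_local=RULE:[2:\$1@\$0](hbase@MY_REALM)s/.*/hbase/RULE:[2:\$1@\$0](infra-solr@MY_REALM)s/.*/infra-solr/RULE:[2:\$1@\$0](rm@MY_REALM)s/.*/rm/"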
10-09-2017
06:12 AM
Hello,
after Hadoop kerberization we are facing an issue where some services don't start: the YARN ResourceManager, HBase RegionServers, Ambari Infra, and Log Search. The problem seems to be the same for all of them: a "NoAuth" error on the related znodes. The Ambari Infra error:
KeeperErrorCode = NoAuth for /infra-solr
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /infra-solr
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.setACL(ZooKeeper.java:1399)
at org.apache.ambari.logsearch.solr.util.AclUtils.setRecursivelyOn(AclUtils.java:77)
at org.apache.ambari.logsearch.solr.commands.SecureSolrZNodeZkCommand.executeZkCommand(SecureSolrZNodeZkCommand.java:63)
at org.apache.ambari.logsearch.solr.commands.SecureSolrZNodeZkCommand.executeZkCommand(SecureSolrZNodeZkCommand.java:39)
at org.apache.ambari.logsearch.solr.commands.AbstractZookeeperRetryCommand.createAndProcessRequest(AbstractZookeeperRetryCommand.java:38)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.secureSolrZnode(AmbariSolrCloudClient.java:170)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:526)
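One way to confirm this is an ACL problem (a sketch; the ZooKeeper host is a placeholder, the znode path is from the error above):
# Connect with the ZooKeeper CLI shipped with HDP and inspect the znode's ACL:
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server my-zk-host:2181
# then, inside the shell:
#   getAcl /infra-solr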
Labels:
- Apache Ambari
- Apache Zookeeper
- Kerberos
08-17-2017
08:27 AM
I resolved it by adding a custom kafka-broker property, advertised.listeners, set to the FQDN of the broker host. I had also changed the listener address to 0.0.0.0; with both changes it worked.
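A sketch of the resulting broker settings (the FQDN is a placeholder; 6667 is the default HDP Kafka port):
listeners=PLAINTEXT://0.0.0.0:6667
advertised.listeners=PLAINTEXT://broker-host.example.com:6667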
08-08-2017
05:56 AM
It is 0.10.0.2.6 (the build that comes with HDP 2.6; it uses ZooKeeper). It uses port 6667 and is not secured.
08-07-2017
04:54 PM
Thanks @Abdelkrim Hadjidj. Do you mean soft delete, i.e. logical deletion (not actually deleting, just updating some fields)? Is there any native solution for MongoDB or Cassandra? What do you think about using triggers (writing insert/update/delete records to a table)? Regards.
08-07-2017
01:57 PM
Hello, the QueryDatabaseTable processor makes it possible to capture inserts. Is it possible to capture updates and deletes without a trigger or a PostgreSQL-specific solution? Regards.
Labels:
- Apache NiFi
08-04-2017
02:25 PM
Hello,
I have a Hadoop cluster whose nodes have multiple network interfaces: one interface is for public access, the other for cluster-internal traffic. By default the Kafka broker was listening on the internal IP, so a server outside the cluster (something like NiFi) couldn't reach it and couldn't even telnet to the broker.
I changed the listener address to 0.0.0.0 and restarted Kafka. Now I can telnet to the Kafka broker, but when I try to produce a message it fails with these errors (even when the producer is executed inside the cluster):
[2017-08-04 17:18:05,688] ERROR Error when sending message to topic test with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0 due to 1515 ms has passed since batch creation plus linger time
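For reference, the checks described above look roughly like this (the broker host name is a placeholder):
# From outside the cluster: is the broker port reachable at all?
telnet broker-host.example.com 6667
# Produce a test message with the console producer shipped with HDP:
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list broker-host.example.com:6667 --topic test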
Tags:
- Hadoop Core
- Kafka
Labels:
- Apache Kafka
08-03-2017
06:18 AM
Hello, I installed the HDP 2.6 Power Linux edition. All components are working well except ambari-metrics-monitor. It seems it cannot build the python psutil package. The error is: "ImportError: No module named psutil
psutil binaries need to be built by running, psutil/build.py manually or by running a, mvn clean package, command.
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/resource_monitoring/main.py", line 27, in <module>
from core.controller import Controller
File "/usr/lib/python2.6/site-packages/resource_monitoring/core/controller.py", line 27, in <module>
from metric_collector import MetricsCollector
File "/usr/lib/python2.6/site-packages/resource_monitoring/core/metric_collector.py", line 23, in <module>
from host_info import HostInfo
File "/usr/lib/python2.6/site-packages/resource_monitoring/core/host_info.py", line 22, in <module>
import psutil
ImportError: No module named psutil" I tried to remove /usr/lib/python2.6/site-packages/resource_monitoring/psutil and rebuild psutil, but it fails due to " ImportError: cannot import name archive_util" error. cd /usr/lib/python2.6/site-packages/resource_monitoring mv psutil psutil_old yum reinstall ambari-metrics-monitor cd psutil make install .... ... python setup.py build Traceback (most recent call last): File "setup.py", line 17, in <module> from distutils.core import setup, Extension File "/usr/lib64/python2.7/distutils/core.py", line 21, in <module> from distutils.cmd import Command File "/usr/lib64/python2.7/distutils/cmd.py", line 11, in <module> from distutils import util, dir_util, file_util, archive_util, dep_util ImportError: cannot import name archive_util make: *** [build] Error 1
08-01-2017
01:16 PM
1 Kudo
I found the solution. It happens because "atlas_titan" is a zombie HBase table: it can't be created (HBase says the table exists) and it can't be dropped (HBase says the table does not exist). This happens when the table doesn't exist in HBase but still exists in ZooKeeper, so it should be deleted from ZooKeeper:
$ hbase zkcli
[zk: ...] ls /hbase-unsecure/table
[zk: ...] rmr /hbase-unsecure/table/ATLAS_ENTITY_AUDIT_EVENTS
[zk: ...] rmr /hbase-unsecure/table/atlas_titan
[zk: ...] quit
Then restart Atlas; it should recreate the HBase tables and the application should be up a few seconds later.
08-01-2017
06:44 AM
Hello, I installed HDP 2.6. The Atlas server seems to be working, but there is an error saying the Atlas UI does not work. An error like this occurs during the start of Atlas: "....
Caused by: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Temporary failure in storage backend
at com.thinkaurelius.titan.diskstorage.hbase.HBaseStoreManager.ensureTableExists(HBaseStoreManager.java:720)
at com.thinkaurelius.titan.diskstorage.hbase.HBaseStoreManager.ensureColumnFamilyExists(HBaseStoreManager.java:792)
at com.thinkaurelius.titan.diskstorage.hbase.HBaseStoreManager.openDatabase(HBaseStoreManager.java:474)
at com.thinkaurelius.titan.diskstorage.hbase.HBaseStoreManager.openDatabase(HBaseStoreManager.java:455)
at com.thinkaurelius.titan.diskstorage.Backend.getStandaloneGlobalConfiguration(Backend.java:387)
... 40 more
Caused by: org.apache.hadoop.hbase.TableExistsException: atlas_titan .... "
Labels:
- Apache Atlas
06-30-2017
06:30 AM
Hello, I am trying to install HDP 2.6 on RHEL 7.2 ppc64le. The installation via Ambari fails with this error:
Error: Package: hadoop_2_6_0_3_8-hdfs-2.7.3.2.6.0.3-8.ppc64le (HDP-2.6)
Requires: libtirpc-devel
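For what it's worth, libtirpc-devel typically lives in the RHEL "optional" repository, which is disabled by default. A sketch, assuming the repo id for RHEL 7 ppc64le (verify the exact id with "yum repolist all" first):
# Hypothetical repo id; check your system's repo list before enabling:
subscription-manager repos --enable=rhel-7-for-power-le-optional-rpms
yum install libtirpc-devel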
Labels:
- Apache Hadoop
04-24-2017
05:36 PM
Hello @Vipin Rathor, 1. It is Red Hat IdM. It has a "specialized" Kerberos configuration and Hadoop can't execute Kerberos commands with it. Red Hat support also says it is not supported by Ambari. There is an article about an Ambari FreeIPA (the free version of IdM) plugin, but it is an experimental method and doesn't work with HDP 2.5. 2. I am planning to use LDAP only for system logins. Hadoop admins will switch to local users which are linked to Kerberos.
04-21-2017
01:03 PM
Hello, I want to do the configurations below because of some restrictions in my environment (my LDAP software is not supported by Hadoop and I can't use AD). I tested it and everything seems OK, but I am curious whether I'm missing some points. Could there be any problem with this configuration? Is it sufficient?
- I will use LDAP (and its built-in Kerberos) for SSH logins to the nodes
- I will integrate Hadoop with MIT Kerberos
- I will integrate Ambari with MIT Kerberos
To sum up, I will not use LDAP for Hadoop and Ambari; I will create principals and manage roles via Ranger. Regards.
03-29-2017
11:49 AM
1 Kudo
The result is:
{ "href" : "http://AMBARI_IP:8080/api/v1/clusters/testcls/credentials/kdc.admin.credential", "Credential" : { "alias" : "kdc.admin.credential", "cluster_name" : "testcls", "type" : "persisted" } }
Same in Firefox.
03-29-2017
08:27 AM
Hello, when I check the ambari-server log, there is an error like this:
29 Mar 2017 10:54:59,309 ERROR [ambari-client-thread-63] BaseManagementHandler:67 - Bad request received: Missing KDC administrator credentials.
The KDC administrator credentials must be set as a persisted or temporary credential resource. This may be done by issuing a POST to the /api/v1/clusters/:clusterName/credentials/kdc.admin.credential API entry point with the following payload:
{
  "Credential" : {
    "principal" : "(PRINCIPAL)",
    "key" : "(PASSWORD)",
    "type" : "(persisted|temporary)"
  }
}
I did the settings according to this URL: https://community.hortonworks.com/articles/42927/adding-kdc-administrator-credentials-to-the-ambari.html
First I ran ambari-server setup-security and selected option 2. Then I ran these without any error:
curl -H "X-Requested-By:ambari" -u admin:admin -X POST -d '{ "Credential" : { "principal" : "hadoopadmin@REALM", "key" : "xxxxx", "type" : "persisted" } }' http://AMBARI_IP:8080/api/v1/clusters/testcls/credentials/kdc.admin.credential
curl -H "X-Requested-By:ambari" -u admin:admin -X PUT -d '{ "Credential" : { "principal" : "hadoopadmin@REALM", "key" : "xxxxx", "type" : "persisted" } }' http://AMBARI_IP:8080/api/v1/clusters/testcls/credentials/kdc.admin.credential
But I am still getting the above error about the credential resource. When I type a wrong password, the Ambari log says the password is wrong, so I am sure the password is correct.
03-28-2017
05:32 PM
Hi, I remember the admin principal's password and I am trying it, but I can't get past this popup. I could validate that it is correct with "kinit ...".
03-28-2017
02:28 PM
Hello, I am facing an issue while enabling Kerberos in the Hadoop cluster. It pops up an error at the test section, and I can't pass it even when I enter the correct credentials. Here is the error;
03-24-2017
09:06 PM
Hello, I am stuck at the test section (even if I skip it, I face the same problem during kerberizing). Ambari pops up a window, which is attached below. I could not pass it even when I typed hadoopadmin@REALM_NAME and the password.