Member since: 12-28-2015
Posts: 74
Kudos Received: 17
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
| 258 | 05-17-2017 03:15 PM
| 2127 | 03-21-2017 11:35 AM
| 4123 | 03-04-2017 09:51 AM
| 672 | 02-09-2017 04:03 PM
| 1007 | 01-19-2017 11:24 AM
09-22-2017
07:56 AM
Thanks for your answer, @Sonu Sahi. So, as far as I have read, one of the cons of having a separate cluster is that I would have to maintain the Hadoop configuration and Kerberos configuration in the HDF cluster manually, right?
09-20-2017
10:49 AM
Hello community! I'm a bit confused about HDF and HDP, so I would like to ask you. I'm currently testing HDF, mostly for NiFi, but I would also like to test SAM, and I have an HDP cluster currently working. In order to let NiFi processors access HDP, is it necessary to have both HDF and HDP under the same Ambari? I should say that my HDP cluster is Kerberized, but I wanted a login screen in NiFi, so I didn't secure NiFi with Kerberos and used LDAP instead. Thank you in advance, Best Regards.
09-14-2017
07:49 AM
Thanks @Matt Clarke. I added that entry because I had previous issues with the LDAP admin user; now I understand better how it works. I just removed the "Legacy Authorized Users File" value and it works.
09-13-2017
03:52 PM
Hello community, I'm trying to set up a NiFi cluster with external certificates (generated with tinycerts.org). After setting up SSL and LDAP authentication and adding my nodes' SSL CNs to authorizations.xml via Ambari, I get the following message when trying to access the NiFi console: Insufficient Permissions
Untrusted proxy CN=node04.nifi.int, OU=Laboratorio, O=Arq de Sistemas, L=Tres Cantos, ST=Madrid, C=ES
I have tried what is suggested in this link https://community.hortonworks.com/questions/80246/nifi-untrusted-proxy.html : reading the PKCS12 certificate with keytool and getting the CN from the owner part of the certificate: Alias name: 1
Creation date: Sep 13, 2017
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=node01.nifi.int, OU=Laboratorio, O=Arq de Sistemas, L=Tres Cantos, ST=Madrid, C=ES
Issuer: CN=Arq de Sistemas CA, OU=Secure Digital Certificate Signing, O=Arq de Sistemas, L=Tres Cantos, ST=Madrid, C=ES
Serial number: 2cbd
Valid from: Tue Sep 12 11:14:33 CEST 2017 until: Wed Sep 12 11:14:33 CEST 2018
Even with that I'm still having the same issue, so after a bit of research I found this post https://community.hortonworks.com/questions/110527/nifi-hdf30-untrusted-proxy.html . When I remove users.xml and authorizations.xml, NiFi is not able to recreate them from authorizers.xml (it creates empty ones), and after that the NiFi instances are unable to start and show this error: 2017-09-13 17:26:47,480 ERROR [NiFi logging handler] org.apache.nifi.StdErr Failed to start web server: Error creating bean with name 'niFiWebApiSecurityConfiguration': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire method: public void org.apache.nifi.web.NiFiWebApiSecurityConfiguration.setX509AuthenticationProvider(org.apache.nifi.web.security.x509.X509AuthenticationProvider); nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'x509AuthenticationProvider' defined in class path resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 'authorizer' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'authorizer': FactoryBean threw exception on object creation; nested exception is org.apache.nifi.authorization.exception.AuthorizerCreationException: org.apache.nifi.authorization.exception.AuthorizerCreationException: Cannot provide an Initial Admin Identity and a Legacy Authorized Users File
2017-09-13 17:26:47,491 ERROR [NiFi logging handler] org.apache.nifi.StdErr Shutting down...
SSL works fine with the certificates. My authorizers.xml is the following: <!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<!--
This file lists the authority providers to use when running securely. In order
to use a specific provider it must be configured here and it's identifier
must be specified in the nifi.properties file.
-->
<authorizers>
<!--
The FileAuthorizer is NiFi's provided authorizer and has the following properties:
- Authorizations File - The file where the FileAuthorizer will store policies.
- Users File - The file where the FileAuthorizer will store users and groups.
- Initial Admin Identity - The identity of an initial admin user that will be granted access to the UI and
given the ability to create additional users, groups, and policies. The value of this property could be
a DN when using certificates or LDAP, or a Kerberos principal. This property will only be used when there
are no other users, groups, and policies defined. If this property is specified then a Legacy Authorized
Users File can not be specified.
NOTE: Any identity mapping rules specified in nifi.properties will also be applied to the initial admin identity,
so the value should be the unmapped identity.
- Legacy Authorized Users File - The full path to an existing authorized-users.xml that will be automatically
converted to the new authorizations model. If this property is specified then an Initial Admin Identity can
not be specified, and this property will only be used when there are no other users, groups, and policies defined.
- Node Identity [unique key] - The identity of a NiFi cluster node. When clustered, a property for each node
should be defined, so that every node knows about every other node. If not clustered these properties can be ignored.
The name of each property must be unique, for example for a three node cluster:
"Node Identity A", "Node Identity B", "Node Identity C" or "Node Identity 1", "Node Identity 2", "Node Identity 3"
NOTE: Any identity mapping rules specified in nifi.properties will also be applied to the node identities,
so the values should be the unmapped identities (i.e. full DN from a certificate).
-->
<authorizer>
<identifier>file-provider</identifier>
<class>org.apache.nifi.authorization.FileAuthorizer</class>
<property name="Authorizations File">/var/lib/nifi/conf/authorizations.xml</property>
<property name="Users File">/var/lib/nifi/conf/users.xml</property>
<property name="Initial Admin Identity">cn=testuser,ou=Users,dc=nifi,dc=int</property>
<property name="Legacy Authorized Users File">/root/authorized-users.xml</property>
<!-- Provide the identity (typically a DN) of each node when clustered (see tool tip for detailed description of Node Identity). Must be specified when Ranger Nifi plugin will not be used for authorization. -->
<property name="Node Identity 1">CN=node01.nifi.int, OU=Laboratorio, O=Arq de Sistemas, L=Tres Cantos, ST=Madrid, C=ES</property>
<property name="Node Identity 2">CN=node03.nifi.int, OU=Laboratorio, O=Arq de Sistemas, L=Tres Cantos, ST=Madrid, C=ES</property>
<property name="Node Identity 3">CN=node04.nifi.int, OU=Laboratorio, O=Arq de Sistemas, L=Tres Cantos, ST=Madrid, C=ES</property>
</authorizer>
</authorizers>
Do you know what may be happening? Thank you in advance. Best regards.
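For reference, the AuthorizerCreationException at the end of that log spells out the conflict: the FileAuthorizer will not start when both an Initial Admin Identity and a Legacy Authorized Users File are configured. A minimal sketch of the likely fix, using the paths and identity from the authorizers.xml above (clear the legacy value in the Ambari NiFi configuration rather than editing the file by hand, since HDF regenerates authorizers.xml):
<property name="Initial Admin Identity">cn=testuser,ou=Users,dc=nifi,dc=int</property>
<property name="Legacy Authorized Users File"></property>
# then, on each node, remove the previously generated files so NiFi rebuilds them on startup
rm -f /var/lib/nifi/conf/users.xml /var/lib/nifi/conf/authorizations.xml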
09-12-2017
11:36 AM
Hello, I'm doing some tests with a NiFi cluster (HDF 3), and I wanted to configure LDAP as the authentication service. I haven't configured SSL yet, but I would like to test the LDAP authentication. However, when I try to access the cluster it directly logs me in as anonymous and I can see the flows without any login screen. My configuration is the following: login-identity-providers.xml <provider>
<identifier>ldap-provider</identifier>
<class>org.apache.nifi.ldap.LdapProvider</class>
<property name="Identity Strategy">USE_USERNAME</property>
<property name="Authentication Strategy">SIMPLE</property>
<property name="Manager DN">cn=Manager,dc=nifi,dc=int</property>
<property encryption="aes/gcm/256" name="Manager Password">mIV4TPuSpfOGzd3E||FZnVyewmvoWGEmf1sF5cCTCy4tztrwo</property>
<property name="TLS - Keystore"/>
<property name="TLS - Keystore Password"/>
<property name="TLS - Keystore Type"/>
<property name="TLS - Truststore"/>
<property name="TLS - Truststore Password"/>
<property name="TLS - Truststore Type"/>
<property name="TLS - Client Auth"/>
<property name="TLS - Protocol"/>
<property name="TLS - Shutdown Gracefully"/>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldap://node03.nifi.int:389</property>
<property name="User Search Base">ou=Users,dc=nifi,dc=int</property>
<property name="User Search Filter">uid={0}</property>
<property name="Authentication Expiration">12 hours</property>
</provider>
nifi.properties: # Generated by Apache Ambari. Tue Sep 12 12:27:33 2017
nifi.administrative.yield.duration=30 sec
nifi.authorizer.configuration.file=/usr/hdf/current/nifi/conf/authorizers.xml
nifi.bored.yield.duration=10 millis
nifi.cluster.flow.election.max.candidates=3
nifi.cluster.flow.election.max.wait.time=5 mins
nifi.cluster.is.node=true
nifi.cluster.node.address=node01.nifi.int
nifi.cluster.node.connection.timeout=5 sec
nifi.cluster.node.event.history.size=25
nifi.cluster.node.protocol.max.threads=
nifi.cluster.node.protocol.port=9088
nifi.cluster.node.protocol.threads=10
nifi.cluster.node.read.timeout=5 sec
nifi.cluster.protocol.heartbeat.interval=5 sec
nifi.cluster.protocol.is.secure=False
nifi.components.status.repository.buffer.size=1440
nifi.components.status.repository.implementation=org.apache.nifi.controller.status.history.VolatileComponentStatusRepository
nifi.components.status.snapshot.frequency=1 min
nifi.content.claim.max.appendable.size=10 MB
nifi.content.claim.max.flow.files=100
nifi.content.repository.always.sync=false
nifi.content.repository.archive.enabled=true
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
nifi.content.repository.directory.default=/var/lib/nifi/content_repository
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.viewer.url=/nifi-content-viewer/
nifi.database.directory=/var/lib/nifi/database_repository
nifi.documentation.working.directory=/var/lib/nifi/work/docs/components
nifi.flow.configuration.archive.dir=/var/lib/nifi/archive/
nifi.flow.configuration.archive.enabled=true
nifi.flow.configuration.archive.max.count=
nifi.flow.configuration.archive.max.storage=500 MB
nifi.flow.configuration.archive.max.time=30 days
nifi.flow.configuration.file=/var/lib/nifi/conf/flow.xml.gz
nifi.flowcontroller.autoResumeState=true
nifi.flowcontroller.graceful.shutdown.period=10 sec
nifi.flowfile.repository.always.sync=false
nifi.flowfile.repository.checkpoint.interval=2 mins
nifi.flowfile.repository.directory=/var/lib/nifi/flowfile_repository
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.partitions=256
nifi.flowservice.writedelay.interval=500 ms
nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
nifi.kerberos.krb5.file=
nifi.kerberos.service.keytab.location=
nifi.kerberos.service.principal=
nifi.kerberos.spnego.authentication.expiration=12 hours
nifi.kerberos.spnego.keytab.location=
nifi.kerberos.spnego.principal=
nifi.login.identity.provider.configuration.file=/usr/hdf/current/nifi/conf/login-identity-providers.xml
nifi.nar.library.directory=/usr/hdf/current/nifi/lib
nifi.nar.working.directory=/var/lib/nifi/work/nar
nifi.provenance.repository.always.sync=false
nifi.provenance.repository.buffer.size=100000
nifi.provenance.repository.compress.on.rollover=true
nifi.provenance.repository.debug.frequency=1_000_000
nifi.provenance.repository.directory.default=/var/lib/nifi/provenance_repository
nifi.provenance.repository.encryption.key=
nifi.provenance.repository.encryption.key.id=
nifi.provenance.repository.encryption.key.provider.implementation=
nifi.provenance.repository.encryption.key.provider.location=
nifi.provenance.repository.implementation=org.apache.nifi.provenance.PersistentProvenanceRepository
nifi.provenance.repository.index.shard.size=500 MB
nifi.provenance.repository.index.threads=1
nifi.provenance.repository.indexed.attributes=
nifi.provenance.repository.indexed.fields=EventType, FlowFileUUID, Filename, ProcessorID, Relationship
nifi.provenance.repository.journal.count=16
nifi.provenance.repository.max.attribute.length=65536
nifi.provenance.repository.max.storage.size=1 GB
nifi.provenance.repository.max.storage.time=24 hours
nifi.provenance.repository.query.threads=2
nifi.provenance.repository.rollover.size=100 MB
nifi.provenance.repository.rollover.time=30 secs
nifi.queue.swap.threshold=20000
nifi.remote.input.host=
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.input.secure=False
nifi.remote.input.socket.port=
nifi.security.identity.mapping.pattern.dn=
nifi.security.identity.mapping.pattern.kerb=
nifi.security.identity.mapping.value.dn=
nifi.security.identity.mapping.value.kerb=
nifi.security.keyPasswd=
nifi.security.keystore=/usr/hdf/current/nifi/conf/keystore.jks
nifi.security.keystorePasswd=
nifi.security.keystoreType=jks
nifi.security.needClientAuth=False
nifi.security.ocsp.responder.certificate=
nifi.security.ocsp.responder.url=
nifi.security.truststore=/usr/hdf/current/nifi/conf/truststore.jks
nifi.security.truststorePasswd=
nifi.security.truststoreType=jks
nifi.security.user.authorizer=file-provider
nifi.security.user.login.identity.provider=ldap-provider
nifi.sensitive.props.additional.keys=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.key=wSdxEcJ0QRZGwFfr||CVtSGQsYIUSOXzAQEQBvu+IQFiwFpM/ZldwZgA
nifi.sensitive.props.key.protected=aes/gcm/256
nifi.sensitive.props.provider=BC
nifi.state.management.configuration.file=/usr/hdf/current/nifi/conf/state-management.xml
nifi.state.management.embedded.zookeeper.properties=/usr/hdf/current/nifi/conf/zookeeper.properties
nifi.state.management.embedded.zookeeper.start=false
nifi.state.management.provider.cluster=zk-provider
nifi.state.management.provider.local=local-provider
nifi.swap.in.period=5 sec
nifi.swap.in.threads=1
nifi.swap.manager.implementation=org.apache.nifi.controller.FileSystemSwapManager
nifi.swap.out.period=5 sec
nifi.swap.out.threads=4
nifi.templates.directory=/var/lib/nifi/templates
nifi.ui.autorefresh.interval=30 sec
nifi.ui.banner.text=
nifi.variable.registry.properties=
nifi.version=1.2.0.3.0.1.0-43
nifi.web.http.host=node01.nifi.int
nifi.web.http.network.interface.default=
nifi.web.http.port=9090
nifi.web.https.host=
nifi.web.https.network.interface.default=
nifi.web.https.port=
nifi.web.jetty.threads=200
nifi.web.jetty.working.directory=/var/lib/nifi/work/jetty
nifi.web.war.directory=/usr/hdf/current/nifi/lib
nifi.zookeeper.connect.string=node02.nifi.int:2181,node01.nifi.int:2181,node03.nifi.int:2181
nifi.zookeeper.connect.timeout=3 secs
nifi.zookeeper.root.node=/nifi
nifi.zookeeper.session.timeout=3 secs
Do you have any idea about what is happening? Thank you in advance.
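For reference, a likely explanation given the nifi.properties above: NiFi only applies login identity providers (the LDAP provider included) when the UI is served over HTTPS, so with nifi.web.http.port=9090 set and the HTTPS properties empty, every request is treated as anonymous. A minimal sketch of the properties that would have to change before a login screen appears (the 9091 port and the password placeholders are assumptions, not values from this cluster):
nifi.web.http.port=
nifi.web.https.host=node01.nifi.int
nifi.web.https.port=9091
nifi.security.keystorePasswd=<keystore password>
nifi.security.keyPasswd=<key password>
nifi.security.truststorePasswd=<truststore password>
nifi.cluster.protocol.is.secure=true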
08-16-2017
10:58 AM
1 Kudo
Hello @Timothy Spann, thanks for your answer. I didn't find the Kafka and syslog processors because I didn't notice that the "source" tag was selected. It was filtering my processors, so when I tried to find anything with kafka or syslog it was searching with that tag applied.
08-14-2017
04:15 PM
1 Kudo
Hello, I have 2 NiFi clusters, one from HDF 3.0 and the other built from the Apache binaries, both on version 1.2. Can anyone tell me why all the Kafka and syslog processors are removed from HDF? Is there any way to install them? Thank you in advance.
07-04-2017
01:12 PM
Hello, I'm testing the NFS Gateway service, and when I mount HDFS I've noticed that Ranger ACLs don't apply. Is this expected, or is something wrong with my configuration and it should be working? Thank you in advance. Best regards.
06-05-2017
05:06 PM
Hello, after enabling security I have a little issue with Hue: for some reason it's translating the hostname of the ATS to localhost, when in the config file it's node01.int. So when I try to check an application log, it fails to handle the 401 because it is looking for an HTTP/localhost@REALM service ticket, which doesn't exist. I should say that Hue and the ATS share the same machine. The error is the following: ==> runcpserver.log <==
[03/Jun/2017 17:25:12 +0000] http_client DEBUG REST invocation: curl -X GET --negotiate -u : -H Accept: application/json 'http://localhost:8188/ws/v1/applicationhistory/apps/application_1490961840939_1270'
==> error.log <==
[03/Jun/2017 17:25:12 +0000] http_client DEBUG REST invocation: curl -X GET --negotiate -u : -H Accept: application/json 'http://localhost:8188/ws/v1/applicationhistory/apps/application_1490961840939_1270'
==> runcpserver.log <==
[03/Jun/2017 17:25:12 +0000] kerberos_ DEBUG handle_401(): Handling: 401
==> error.log <==
[03/Jun/2017 17:25:12 +0000] kerberos_ DEBUG handle_401(): Handling: 401
==> runcpserver.log <==
[03/Jun/2017 17:25:12 +0000] kerberos_ ERROR generate_request_header(): authGSSClientStep() failed:
==> error.log <==
[03/Jun/2017 17:25:12 +0000] kerberos_ ERROR generate_request_header(): authGSSClientStep() failed:
==> runcpserver.log <==
[03/Jun/2017 17:25:12 +0000] kerberos_ ERROR (('Unspecified GSS failure. Minor code may provide more information', 851968), ('Server HTTP/localhost@HADOOP.INT not found in Kerberos database', -1765328377))
Traceback (most recent call last):
File "/usr/lib/hue/build/env/lib/python2.6/site-packages/requests_kerberos-0.4-py2.6.egg/requests_kerberos/kerberos_.py", line 112, in generate_request_header
_negotiate_value(response))
Can you guys help me with this issue?
Thank you in advance. Best regards
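For reference, a quick check that the hostname is the only problem, reusing the exact REST call Hue logs above with localhost swapped for the real ATS host from this post (it assumes the ATS SPNEGO principal HTTP/node01.int@HADOOP.INT was created for that host): after a kinit with any valid principal, the request should negotiate successfully.
curl -X GET --negotiate -u : -H 'Accept: application/json' 'http://node01.int:8188/ws/v1/applicationhistory/apps/application_1490961840939_1270'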
06-05-2017
09:22 AM
Thanks @yvora, I had seen that before; I just didn't know why it isn't documented in https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_reference/content/hdfs-ports.html
06-03-2017
06:03 PM
Hello, for some reason after enabling Kerberos security, Ambari has changed my DataNode ports to 1019. Does anyone know why this is happening? Because of this, my NameNode is now detecting the blocks of a full DataNode as under-replicated and is re-replicating them.
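For reference, this is expected on a Kerberized cluster: unless SASL data-transfer protection is configured, a secure DataNode has to bind to privileged ports (below 1024), which is why Ambari switches the defaults. The usual secure-cluster values are shown below; these are the standard defaults, not values read from this cluster's hdfs-site.xml:
dfs.datanode.address=0.0.0.0:1019
dfs.datanode.http.address=0.0.0.0:1022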
05-18-2017
10:30 AM
Hello community, I have a cluster secured with a one-way trust relationship with an AD. Before enabling security in the cluster I was able to execute Spark via Oozie using a shell action. Is there a way to keep doing that without having to propagate my keytab to every NodeManager?
I've seen that for Hive from a shell action you can pass the HADOOP_TOKEN_FILE_LOCATION variable; can I do something similar with Spark? If not, what alternatives do I have? The problem with the keytab is that I have to change the password every month, so I would have to copy the keytab every time it changes... Thank you in advance.
05-17-2017
03:15 PM
Hello @Vipin Rathor, apologies for the delay in answering. I finally solved it: as you said, the problem with the replay was that it was trying to authenticate multiple times in a very short time. This was caused by curl and the -L parameter; for some reason curl wasn't storing the session cookie. I fixed it using the -c <file path> and -b <file path> parameters to store the cookie. Thank you.
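In case it helps someone else, a minimal sketch of the working call (the URL is a placeholder for whatever SPNEGO-protected endpoint you are hitting; -c/-b are the cookie options mentioned above, so only the first request does the Kerberos handshake and the redirects followed by -L reuse the session cookie):
kinit <your principal>
curl --negotiate -u : -L -c /tmp/auth.cookie -b /tmp/auth.cookie 'http://<timeline-server-host>:8188/ws/v1/applicationhistory'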
05-17-2017
03:06 PM
Finally I solved it. It was a problem with the JRE unlimited strength encryption policy: the Java alternatives in the OS were pointing to another Oracle JDK without it. It looks like the Oozie client doesn't pick up JAVA_HOME from the configuration provided by Ambari.
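For anyone debugging the same thing, two quick checks (a sketch; the exact paths depend on which JDK the alternatives system resolves to on your host): confirm which java the shell actually picks up, and confirm that JDK ships the JCE unlimited strength policy files.
alternatives --display java | head -n 2
ls "$(dirname "$(readlink -f "$(which java)")")"/../lib/security/ | grep -E 'local_policy|US_export_policy'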
05-17-2017
12:15 PM
Hello community, I have the following issue with Oozie since I enabled Kerberos security. I can use the Oozie client on the server where the Oozie server is running, but I'm not able to use it from anywhere else. When I try, I get the following error: [admin@node02.int ~]$ kinit admin@TEST.INT
Password for admin@TEST.INT:
[admin@node02.int ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_1008
Default principal: admin@TEST.INT
Valid starting Expires Service principal
05/16/17 20:13:59 05/17/17 06:14:20 krbtgt/TEST.INT@TEST.INT
renew until 05/23/17 20:13:59
[admin@node02.int ~]$ oozie jobs -oozie http://node01.int:11000/oozie
Error: IO_ERROR : java.io.IOException: Error while connecting Oozie server. No of retries = 1. Exception = Could not authenticate, GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
[admin@node02.int ~]$
At first I thought it could be because core-site.xml had "hadoop.proxyuser.oozie.hosts=node01.int", but I changed it to "hadoop.proxyuser.oozie.hosts=*", fully restarted the cluster, and the issue persists. Any clue about what is happening? Thank you in advance!
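One check worth running from node02 before digging further into the proxyuser settings (a sketch; it assumes the Oozie server's SPNEGO principal is HTTP/node01.int@TEST.INT, the usual form for that host): if kvno cannot fetch a service ticket for it, the failure is on the Kerberos/DNS side rather than in core-site.xml.
kvno HTTP/node01.int@TEST.INT
klist    # the HTTP/node01.int service ticket should now show up in the cache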
04-27-2017
05:04 PM
Hello community! I have a cluster with 10 datanodes, with 128 GB of RAM each. Our devs use all the RAM assigned to YARN, around 100 GB per node. But when I check Grafana for the real memory consumption, I find that when YARN is using 100% of the cluster resources, the memory actually used on the servers barely reaches 70% on average.
YARN NodeManagers used memory: [graph]
System servers memory used: [graph]
In our development servers the gap is even greater.
YARN NodeManagers used memory: [graph]
System servers memory used: [graph]
Do you know of any way to improve the cluster utilization, or does it depend on the applications executed in YARN? Thank you in advance!
04-18-2017
05:57 PM
Hello community, I have a cluster with Kerberos, and after a restart I'm getting the following error when trying to reach the ATS:
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 403 GSSException: Failure unspecified at GSS-API level (Mechanism level: Request is a replay (34))</title>
</head>
<body><h2>HTTP ERROR 403</h2>
<p>Problem accessing /applicationhistory. Reason:
<pre> GSSException: Failure unspecified at GSS-API level (Mechanism level: Request is a replay (34))</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>
</body>
</html>
I've tried with different principals but the issue persists. The ResourceManager and the rest of the SPNEGO-authenticated web consoles are still working properly. Any idea about what is going on? Thank you in advance.
04-12-2017
03:19 PM
@Predrag Minovic that's interesting, I'll keep it in mind, because we actually have local usernames in lowercase in Ambari, which are going to be synced in capital letters from AD. Thank you! I think you just saved me a future headache 😉
04-06-2017
07:53 AM
1 Kudo
Hello community! I'm using Ambari 2.4.1 and I'm trying to use the Ambari Hive View. The problem is that the username is in uppercase and the user's HDFS home folder is also in uppercase, so when I try to create the user with uppercase in Ambari, it converts it to lowercase, and then when accessing the Ambari Hive View it prompts an error because it doesn't find the lowercase user folder. I only have this problem in a non-secured cluster; in the secured cluster with a one-way trust relationship with my AD it works properly. Do you know how to fix this? Thank you in advance!
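In case it is useful as a stopgap (this is an assumption on my part, not a confirmed fix): since Ambari stores the username in lowercase, creating the lowercase home folder that the Hive View is looking for and handing it to that user should at least unblock the view; <user> below is the lowercase name as Ambari shows it.
sudo -u hdfs hdfs dfs -mkdir -p /user/<user>
sudo -u hdfs hdfs dfs -chown <user>:hdfs /user/<user>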
03-30-2017
02:27 PM
Hello community, one of my devs has executed some Oozie workflows with a wrong NameNode and now the workflows are frozen. I have tried to kill them in every possible way; it reports that they were successfully killed, but the workflows still show up in the console as RUNNING. [oozie@hadoop01 oozie]$ oozie jobs -kill -filter status=RUNNING
the following jobs have been killed
Job ID App Name Status User Group Started Ended
------------------------------------------------------------------------------------------------------------------------------------
0000006-170324203356317-oozie-oozi-W BIGDP46B - AppBigRexManClient RUNNING batch - 2017-03-29 14:31 GMT -
------------------------------------------------------------------------------------------------------------------------------------
0000004-170324203356317-oozie-oozi-W BIGDP46B - AppBigRexManClient RUNNING bigdata - 2017-03-29 14:23 GMT -
------------------------------------------------------------------------------------------------------------------------------------
0000003-170324203356317-oozie-oozi-W BIGDP46B - AppBigRexManClient RUNNING bigdata - 2017-03-29 14:21 GMT -
------------------------------------------------------------------------------------------------------------------------------------
0000002-170324203356317-oozie-oozi-W BIGDP46B - AppBigRexManClient RUNNING bigdata - 2017-03-29 13:54 GMT -
------------------------------------------------------------------------------------------------------------------------------------
[oozie@hadoop01 oozie]$ oozie jobs -filter status=RUNNING
Job ID App Name Status User Group Started Ended
------------------------------------------------------------------------------------------------------------------------------------
0000006-170324203356317-oozie-oozi-W BIGDP46B - AppBigRexManClient RUNNING batch - 2017-03-29 14:31 GMT -
------------------------------------------------------------------------------------------------------------------------------------
0000004-170324203356317-oozie-oozi-W BIGDP46B - AppBigRexManClient RUNNING bigdata - 2017-03-29 14:23 GMT -
------------------------------------------------------------------------------------------------------------------------------------
0000003-170324203356317-oozie-oozi-W BIGDP46B - AppBigRexManClient RUNNING bigdata - 2017-03-29 14:21 GMT -
------------------------------------------------------------------------------------------------------------------------------------
0000002-170324203356317-oozie-oozi-W BIGDP46B - AppBigRexManClient RUNNING bigdata - 2017-03-29 13:54 GMT -
------------------------------------------------------------------------------------------------------------------------------------
[oozie@LTBIG01 oozie]$
I don't know what is going on. I've tried restarting the server but the problem persists. I've also tried to change the status to KILLED directly in the DB, in the WF_JOBS and WF_ACTIONS tables, but it keeps showing them as RUNNING. I have checked the logs and they're clean. Do you know what may be going on? Thank you in advance!
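One more thing that may behave differently from the bulk kill (a sketch; the job ID is the first one from the listing above, and it assumes OOZIE_URL is already exported, as it appears to be in the session shown): killing each workflow individually by ID returns an explicit error if the server cannot transition the job, which at least narrows down where it is stuck.
oozie job -kill 0000006-170324203356317-oozie-oozi-W
oozie job -info 0000006-170324203356317-oozie-oozi-W    # check the status the server reports afterwards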
03-29-2017
12:10 PM
Hello @Jay SenSharma
My ambari version is 2.4.2
My LDAP properties are: [root@hadoop01 ~]# cat /etc/ambari-server/conf/ambari.properties | grep ldap
ambari.ldap.isConfigured=true
authentication.ldap.baseDn=DC=TEST,dc=INT
authentication.ldap.bindAnonymously=false
authentication.ldap.dnAttribute=distinguishedName
authentication.ldap.groupMembershipAttr=member
authentication.ldap.groupNamingAttr=cn
authentication.ldap.groupObjectClass=group
authentication.ldap.managerDn=CN=Administrador,CN=Users,DC=test,DC=int
authentication.ldap.managerPassword=/etc/ambari-server/conf/ldap-password.dat
authentication.ldap.primaryUrl=dns.test.int:389
authentication.ldap.referral=ignore
authentication.ldap.useSSL=false
authentication.ldap.userObjectClass=user
authentication.ldap.usernameAttribute=sAMAccountName
client.security=ldap
I have also tried with authentication.ldap.groupMembershipAttr=member:1.2.840.113556.1.4.1941: . I have a cron script for the LDAP sync which takes groups from a CSV containing the following: [root@hadoop01 ~]# cat /etc/ambari-server/ambari-groups.csv
bigdata_pro,bigdata_test
As I said, the users of the nested groups from bigdata_test are correctly imported, but they only get the non-nested membership. For example, user A is a member of bigdata_dept, so Ambari with the current config imports it fine and also imports the group, but it doesn't import user A's nested membership in bigdata_test. I think this is happening because Ambari is using the "member" attribute of the group to resolve membership instead of the "memberOf" attribute of the users, and the member attribute in AD is not the sAMAccountName; it's a DN using the name of the user instead of his username. Using ldapsearch I'm able to locate them. I have also configured this with Ranger without any problem, because Ranger uses memberOf; I just had to change the memberOf attribute to memberOf:1.2.840.113556.1.4.1941: in the configuration. Example of ldapsearch: [root@hadoop01 ~]# ldapsearch -H ldap://dns.test.int:389 -D "CN=Administrador,CN=Users,DC=test,DC=int" -w ******** -b "CN=Users,DC=test,DC=int" "(&(objectClass=group)(member:1.2.840.113556.1.4.1941:=CN=Test User,CN=Users,DC=test,DC=int))"
# extended LDIF
#
# LDAPv3
# base <CN=Users,DC=test,DC=int> with scope subtree
# filter: (&(objectClass=group)(member:1.2.840.113556.1.4.1941:=CN=Test User,CN=Users,DC=test,DC=int))
# requesting: ALL
#
# bigdata_test, Users, test.int
dn: CN=bigdata_test,CN=Users,DC=test,DC=int
objectClass: top
objectClass: group
cn: bigdata_test
member: CN=bigdata_dept,CN=Users,DC=test,DC=int
distinguishedName: CN=bigdata_test,CN=Users,DC=test,DC=int
instanceType: 4
whenCreated: 20170327155638.0Z
whenChanged: 20170327155801.0Z
uSNCreated: 114767
uSNChanged: 114775
name: bigdata_test
objectGUID:: N5RfdDd6BkehR11KV9R60g==
objectSid:: AQUAAAAAAAUVAAAAkdwG8YCru32euQq7bwQAAA==
sAMAccountName: bigdata_test
sAMAccountType: 268435456
groupType: -2147483646
objectCategory: CN=Group,CN=Schema,CN=Configuration,DC=test,DC=int
dSCorePropagationData: 16010101000000.0Z
# bigdata_dept, Users, test.int
dn: CN=bigdata_dept,CN=Users,DC=test,DC=int
objectClass: top
objectClass: group
cn: bigdata_dept
member: CN=Test User,CN=Users,DC=test,DC=int
distinguishedName: CN=bigdata_dept,CN=Users,DC=test,DC=int
instanceType: 4
whenCreated: 20170327155801.0Z
whenChanged: 20170329120729.0Z
uSNCreated: 114771
memberOf: CN=bigdata_test,CN=Users,DC=test,DC=int
uSNChanged: 117001
name: bigdata_dept
objectGUID:: s88OW4HJ1kuBxZljIlRSAw==
objectSid:: AQUAAAAAAAUVAAAAkdwG8YCru32euQq7cAQAAA==
sAMAccountName: bigdata_dept
sAMAccountType: 268435456
groupType: -2147483646
objectCategory: CN=Group,CN=Schema,CN=Configuration,DC=test,DC=int
dSCorePropagationData: 16010101000000.0Z
# search result
search: 2
result: 0 Success
# numResponses: 3
# numEntries: 2
03-29-2017
07:58 AM
Hello community! I have Ambari with LDAP authentication through Active Directory. I have users that are members of bigdata_dept, and this group is a member of bigdata_test. I've successfully configured Ambari to allow login with nested members of bigdata_test, but when you check their group membership Ambari only recognizes bigdata_dept; it doesn't sync the membership to bigdata_test. With this kind of configuration, if I have a new group that is a member of bigdata_test, I have to configure the roles for it, when what I want is to give every nested member of bigdata_test the same role. Do you know any way to accomplish this? Thank you in advance.
03-24-2017
06:31 PM
Hello @Balaji Badarla, I think that, in addition to @Vipin Rathor's analysis (which is correct), you also have mistakes in your krb5.conf.
To set up a cross-realm trust, Kerberos must be aware of the foreign realm's KDC, and this is accomplished by setting up your krb5.conf correctly. With your current configuration, the services (including the KDC) of the realm HORTONWORKS.COM don't know how to reach EXAMPLE.COM, and vice versa. As far as I can see, you could also have problems with SPNEGO using cross-realm identities because of [domain_realm]: you are mapping all subdomains of .hortonworks.com to HORTONWORKS.COM, but you are not mapping ambarinode.myhadoop.com to HORTONWORKS.COM nor ambaristandby.myhadoop.com to EXAMPLE.COM in the opposite realm, so when they try to get the HTTP principal of a service in the opposite realm they will take the one from their own realm instead. This is a tricky little detail that gave me trouble.
So first you must add the following entry to the [realms] section of the Cluster-DR krb5.conf:
EXAMPLE.COM = {
admin_server = ambaristandby.myhadoop.com
kdc = ambaristandby.myhadoop.com
}
Then add the following to the [realms] section of the Cluster-PRIMARY krb5.conf: HORTONWORKS.COM = {
kdc = ambarinode.myhadoop.com
admin_server = ambarinode.myhadoop.com
}
With these changes, the scp should not be a problem. In case you also want to test cross-realm SPNEGO authentication, you must also set the following in both krb5.conf files (this assumes the services are on ambarinode and ambaristandby; the syntax is <hostname or domain wildcard> = <REALM>): [domain_realm]
ambarinode.myhadoop.com = HORTONWORKS.COM
ambaristandby.myhadoop.com = EXAMPLE.COM
I hope this helps; in case of any doubt, please ask. 🙂
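A quick way to verify the trust once both krb5.conf files are in place (a sketch; run it from a host in HORTONWORKS.COM after a kinit with a principal of that realm, and it assumes the cross-realm principal krbtgt/EXAMPLE.COM@HORTONWORKS.COM was created in both KDCs):
kvno krbtgt/EXAMPLE.COM@HORTONWORKS.COM
klist    # the cross-realm ticket for krbtgt/EXAMPLE.COM@HORTONWORKS.COM should now appear in the cache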
03-22-2017
06:18 PM
Hello @Facundo Bianco, were you able to solve this?
I'm having the same issue.
03-21-2017
11:35 AM
1 Kudo
I finally solved this. Everything was fine, but I was hitting https://issues.apache.org/jira/browse/AMBARI-18898 . I solved it by doing the following:
1. Quoted all $ in Ambari Infra > Configs > Advanced > Infra Solr Kerberos name rules.
2. In Ambari Infra > Configs > Advanced > infra-solr-env template, quoted the value of SOLR_KERB_NAME_RULES: SOLR_KERB_NAME_RULES="{{infra_solr_kerberos_name_rules}}"
3. In Ambari Infra > Configs > Advanced > infra-solr-env template, removed the value -Dsolr.kerberos.name.rules="$SOLR_KERB_NAME_RULES" from SOLR_AUTHENTICATION_OPTS, resulting in: SOLR_AUTHENTICATION_OPTS=" -DauthenticationPlugin=org.apache.solr.security.KerberosPlugin -Djava.security.auth.login.config=$SOLR_JAAS_FILE -Dsolr.kerberos.principal=${SOLR_KERB_PRINCIPAL} -Dsolr.kerberos.keytab=${SOLR_KERB_KEYTAB} -Dsolr.kerberos.cookie.domain=${SOLR_HOST} "
4. On each Solr server, modify /usr/lib/ambari-infra-solr/bin/solr and add -Dsolr.kerberos.name.rules=$SOLR_KERB_NAME_RULES to the SOLR_START_OPTS variable: SOLR_START_OPTS=('-server' "${JAVA_MEM_OPTS[@]}" "${GC_TUNE[@]}" "${GC_LOG_OPTS[@]}" \
"${REMOTE_JMX_OPTS[@]}" "${CLOUD_MODE_OPTS[@]}" \
"-Djetty.port=$SOLR_PORT" "-DSTOP.PORT=$stop_port" "-DSTOP.KEY=$STOP_KEY" \
"${SOLR_HOST_ARG[@]}" "-Duser.timezone=$SOLR_TIMEZONE" \
"-Djetty.home=$SOLR_SERVER_DIR" "-Dsolr.solr.home=$SOLR_HOME" "-Dsolr.install.dir=$SOLR_TIP" \
"${LOG4J_CONFIG[@]}" "${SOLR_OPTS[@]}" -Dsolr.kerberos.name.rules="$SOLR_KERB_NAME_RULES") After all this modifications just start ambari infra and it will start properly :).
03-21-2017
10:17 AM
Hello @Anuja Leekha, I have regenerated the keytab and I'm still getting the same errors: errors-12524.txt Could this be happening because I had installed Ranger before installing Ambari Infra?
03-17-2017
10:32 AM
@Predrag Minovic
Hello, I don't have exactly that variable; I have the following in my Ambari: SOLR_KERB_PRINCIPAL={{infra_solr_web_kerberos_principal}}
....
SOLR_AUTHENTICATION_OPTS=" -DauthenticationPlugin=org.apache.solr.security.KerberosPlugin -Djava.security.auth.login.config=$SOLR_JAAS_FILE -Dsolr.kerberos.principal=${SOLR_KERB_PRINCIPAL} -Dsolr.kerberos.keytab=${SOLR_KERB_KEYTAB} -Dsolr.kerberos.cookie.domain=${SOLR_HOST} -Dsolr.kerberos.name.rules=${SOLR_KERB_NAME_RULES}" The {{infra_solr_web_kerberos_principal}} it's HTTP/_HOST@HADOOP.INT And the infra_solr_jaas.conf generated by ambari is correctly pointing in each server it's principal hadoop01: [root@hadoop01 ~]# cat /etc/ambari-infra-solr/conf/infra_solr_jaas.conf
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=false
keyTab="/etc/security/keytabs/ambari-infra-solr.service.keytab"
principal="infra-solr/hadoop01.int@HADOOP.INT";
hadoop02: [root@hadoop02 ~]# cat /etc/ambari-infra-solr/conf/infra_solr_jaas.conf
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=false
keyTab="/etc/security/keytabs/ambari-infra-solr.service.keytab"
principal="infra-solr/hadoop02.int@HADOOP.INT";
So I don't know what the problem may be. Should I declare SOLR_KERBEROS_PRINCIPAL as well?
03-16-2017
08:23 PM
Hello, I've upgraded a cluster from 2.4.0 to 2.5.3. This cluster used Audit to DB for auditing; now that it is deprecated, I have to use Ambari Infra. I've added 2 instances of Ambari Infra to use SolrCloud and enabled Ranger for SolrCloud. Ranger seems to connect to the ZooKeeper servers and get the Ambari Infra URLs, but then it fails to connect on Ranger Admin start: Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 208, in <module>
RangerAdmin().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 720, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/ranger_admin.py", line 100, in start
setup_ranger_audit_solr()
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py", line 590, in setup_ranger_audit_solr
jaas_file = params.solr_jaas_file)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/solr_cloud_util.py", line 116, in create_collection
Execute(create_collection_cmd)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 273, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh JAVA_HOME=/usr/jdk64/jdk1.8.0_60 /usr/lib/ambari-infra-solr-client/solrCloudCli.sh --zookeeper-connect-string hadoop04.int:2181,hadoop03.int:2181,hadoop02.int:2181/infra-solr --create-collection --collection ranger_audits --config-set ranger_audits --shards 1 --replication 1 --max-shards 1 --retry 5 --interval 10 --no-sharding --jaas-file /usr/hdp/current/ranger-admin/conf/ranger_solr_jaas.conf' returned 1. Using default ZkCredentialsProvider
Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
Client environment:host.name=hadoop01.int
Client environment:java.version=1.8.0_60
Client environment:java.vendor=Oracle Corporation
Client environment:java.home=/usr/jdk64/jdk1.8.0_60/jre
Client environment:java.class.path=/usr/lib/ambari-infra-solr-client:/usr/lib/ambari-infra-solr-client/libs/httpclient-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-lang-2.5.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-api-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/commons-io-2.1.jar:/usr/lib/ambari-infra-solr-client/libs/easymock-3.4.jar:/usr/lib/ambari-infra-solr-client/libs/junit-4.10.jar:/usr/lib/ambari-infra-solr-client/libs/jcl-over-slf4j-1.7.7.jar:/usr/lib/ambari-infra-solr-client/libs/stax2-api-3.1.4.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-core-asl-1.9.9.jar:/usr/lib/ambari-infra-solr-client/libs/log4j-1.2.17.jar:/usr/lib/ambari-infra-solr-client/libs/ambari-logsearch-solr-client-2.4.1.0.22.jar:/usr/lib/ambari-infra-solr-client/libs/hamcrest-core-1.1.jar:/usr/lib/ambari-infra-solr-client/libs/noggit-0.6.jar:/usr/lib/ambari-infra-solr-client/libs/objenesis-2.2.jar:/usr/lib/ambari-infra-solr-client/libs/slf4j-log4j12-1.7.2.jar:/usr/lib/ambari-infra-solr-client/libs/woodstox-core-asl-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/httpmime-4.4.1.jar:/usr/lib/ambari-infra-solr-client/libs/commons-codec-1.8.jar:/usr/lib/ambari-infra-solr-client/libs/commons-cli-1.3.1.jar:/usr/lib/ambari-infra-solr-client/libs/solr-solrj-5.5.2.jar:/usr/lib/ambari-infra-solr-client/libs/jackson-mapper-asl-1.9.13.jar:/usr/lib/ambari-infra-solr-client/libs/zookeeper-3.4.6.jar:/usr/lib/ambari-infra-solr-client/libs/httpcore-4.4.1.jar
Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
Client environment:java.io.tmpdir=/tmp
Client environment:java.compiler=<NA>
Client environment:os.name=Linux
Client environment:os.arch=amd64
Client environment:os.version=2.6.32-573.el6.x86_64
Client environment:user.name=root
Client environment:user.home=/root
Client environment:user.dir=/var/lib/ambari-agent
Initiating client connection, connectString=hadoop04.int:2181,hadoop03.int:2181,hadoop02.int:2181/infra-solr sessionTimeout=15000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@3fb4f649
Waiting for client to connect to ZooKeeper
successfully logged in.
TGT refresh thread started.
Client will use GSSAPI as SASL mechanism.
TGT valid starting at: Thu Mar 16 18:39:23 CET 2017
TGT expires: Fri Mar 17 18:39:23 CET 2017
TGT refresh sleeping until: Fri Mar 17 14:48:54 CET 2017
Opening socket connection to server hadoop04.int/198.18.0.4:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
Socket connection established to hadoop04.int/198.18.0.4:2181, initiating session
Session establishment complete on server hadoop04.int/198.18.0.4:2181, sessionid = 0x35ad7374f89001f, negotiated timeout = 15000
Watcher org.apache.solr.common.cloud.ConnectionManager@bd9131d name:ZooKeeperConnection Watcher:hadoop04.int:2181,hadoop03.int:2181,hadoop02.int:2181/infra-solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
Client is connected to ZooKeeper
Using default ZkACLProvider
Watcher org.apache.solr.common.cloud.ConnectionManager@bd9131d name:ZooKeeperConnection Watcher:hadoop04.int:2181,hadoop03.int:2181,hadoop02.int:2181/infra-solr got event WatchedEvent state:SaslAuthenticated type:None path:null path:null type:None
Setting up SPNego auth with config: /usr/hdp/current/ranger-admin/conf/ranger_solr_jaas.conf
Using default ZkCredentialsProvider
Initiating client connection, connectString=hadoop04.int:2181,hadoop03.int:2181,hadoop02.int:2181/infra-solr sessionTimeout=10000 watcher=org.apache.solr.common.cloud.SolrZkClient$3@783e6358
Waiting for client to connect to ZooKeeper
Client will use GSSAPI as SASL mechanism.
Opening socket connection to server hadoop02.int/198.18.0.2:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
Socket connection established to hadoop02.int/198.18.0.2:2181, initiating session
Session establishment complete on server hadoop02.int/198.18.0.2:2181, sessionid = 0x15ad73888170015, negotiated timeout = 10000
Watcher org.apache.solr.common.cloud.ConnectionManager@f9ff59e name:ZooKeeperConnection Watcher:hadoop04.int:2181,hadoop03.int:2181,hadoop02.int:2181/infra-solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
Client is connected to ZooKeeper
Using default ZkACLProvider
Updating cluster state from ZooKeeper...
Watcher org.apache.solr.common.cloud.ConnectionManager@f9ff59e name:ZooKeeperConnection Watcher:hadoop04.int:2181,hadoop03.int:2181,hadoop02.int:2181/infra-solr got event WatchedEvent state:SaslAuthenticated type:None path:null path:null type:None
No live SolrServers available to handle this request:[http://hadoop02.int:8886/solr, http://hadoop01.int:8886/solr]
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://hadoop02.int:8886/solr, http://hadoop01.int:8886/solr]
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:352)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hadoop01.int:8886/solr: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/admin/collections. Reason:
<pre> Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:545)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
... 11 more
No live SolrServers available to handle this request:[http://hadoop02.int:8886/solr, http://hadoop01.int:8886/solr]
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://hadoop02.int:8886/solr, http://hadoop01.int:8886/solr]
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:352)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hadoop01.int:8886/solr: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/admin/collections. Reason:
<pre> Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:545)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
... 11 more
Command failed, tries again (tries: 1)
No live SolrServers available to handle this request:[http://hadoop02.int:8886/solr, http://hadoop01.int:8886/solr]
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://hadoop02.int:8886/solr, http://hadoop01.int:8886/solr]
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:352)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hadoop01.int:8886/solr: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/admin/collections. Reason:
<pre> Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:545)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:341)
... 12 more
No live SolrServers available to handle this request:[http://hadoop02.int:8886/solr, http://hadoop01.int:8886/solr]
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://hadoop02.int:8886/solr, http://hadoop01.int:8886/solr]
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:352)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at org.apache.ambari.logsearch.solr.commands.AbstractSolrRetryCommand.createAndProcessRequest(AbstractSolrRetryCommand.java:43)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:45)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.retry(AbstractRetryCommand.java:54)
at org.apache.ambari.logsearch.solr.commands.AbstractRetryCommand.run(AbstractRetryCommand.java:40)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.listCollections(AmbariSolrCloudClient.java:107)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudClient.createCollection(AmbariSolrCloudClient.java:114)
at org.apache.ambari.logsearch.solr.AmbariSolrCloudCLI.main(AmbariSolrCloudCLI.java:463)
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://hadoop01.int:8886/solr: Expected mime type application/octet-stream but got text/html. <html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<title>Error 404 Not Found</title>
</head>
<body><h2>HTTP ERROR 404</h2>
<p>Problem accessing /solr/admin/collections. Reason:
<pre> Not Found</pre></p><hr><i><small>Powered by Jetty://</small></i><hr/>
</body>
</html>
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:545)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:341)
... 12 more
Command failed, tries again (tries: 2)
[... same "No live SolrServers available to handle this request" stack trace, caused by the HTTP 404 on /solr/admin/collections, repeated ...]
Command failed, tries again (tries: 3)
[... same stack trace repeated ...]
Command failed, tries again (tries: 4)
[... same stack trace repeated ...]
Command failed, tries again (tries: 5)
usage:
./solrCloudCli.sh --create-collection -z host1:2181,host2:2181/ambari-solr -c collection -cs conf_set
./solrCloudCli.sh --upload-config -z host1:2181,host2:2181/ambari-solr -d /tmp/myconfig_dir -cs config_set
./solrCloudCli.sh --download-config -z host1:2181,host2:2181/ambari-solr -cs config_set -d /tmp/myonfig_dir
./solrCloudCli.sh --check-config -z host1:2181,host2:2181/ambari-solr -cs config_set
./solrCloudCli.sh --create-shard -z host1:2181,host2:2181/ambari-solr -c collection -sn myshard
./solrCloudCli.sh --create-znode -z host1:2181,host2:2181 -zn /ambari-solr
./solrCloudCli.sh --check-znode -z host1:2181,host2:2181 -zn /ambari-solr
./solrCloudCli.sh --cluster-prop -z host1:2181,host2:2181/ambari-solr -cpn urlScheme -cpn http
./solrCloudCli.sh --create-sasl-users -z host1:2181,host2:2181 -zn /ambari-solr -csu logsearch,atlas,ranger
./solrCloudCli.sh --setup-kerberos -z host1:2181,host2:2181 --secure -zn /ambari-solr-secure -cfz /ambari-solr-unsecure -jf /etc/path/my_jaas.conf
./solrCloudCli.sh --setup-kerberos-plugin -z host1:2181,host2:2181 -zn /ambari-solr
-c,--collection <collection name> Collection name
-cc,--create-collection Create collection in Solr (command)
-cfz,--copy-from-znode </ambari-solr-secure> Copy-from-znode
-chc,--check-config Check configuration exists in Zookeeper (command)
-chz,--check-znode Check znode exists in Zookeeper (command)
-cp,--cluster-prop Set cluster property (command)
-cpn,--property-name <cluster prop name> Cluster property name
-cpv,--property-value <cluster prop value> Cluster property value
-cs,--config-set <config_set> Configuration set
-csh,--create-shard Create shard in Solr (command)
-csu,--create-sasl-users Create sasl users
-cz,--create-znode Create Znode (command)
-d,--config-dir <config_dir> Configuration directory
-dc,--download-config Download configuration set from Zookeeper (command)
-h,--help Print commands
-i,--interval <interval> Interval for retry logic in sec [default:5]
-jf,--jaas-file <jaas_file> Location of the jaas-file to communicate with kerberized Solr
-ksl,--key-store-location <key store location> Location of the key store used to communicate with Solr using SSL
-ksp,--key-store-password <key store password> Key store password used to communicate with Solr using SSL
-kst,--key-store-type <key store type> Type of the key store used to communicate with Solr using SSL
-m,--max-shards <max number of shards> Max number of shards per node (default: replication * shards)
-ns,--no-sharding Sharding not used when creating collection
-r,--replication <replication factor> Replication factor
-rf,--router-field <router_field> Router field for collection [default:_router_field_]
-rn,--router-name <router_name> Router name for collection [default:implicit]
-rt,--retry <number of retries> Number of retries for access Solr [default:10]
-s,--shards <shard number> Number of shards
-sec,--secure Flag for enable/disable kerberos (with --setup-kerberos or --setup-kerberos-plugin)
-sk,--setup-kerberos Setup kerberos (command)
-skp,--setup-kerberos-plugin Setup kerberos plugin in security.json (command)
-sn,--shard-name <my_new_shard> Name of the shard for create-shard command
-su,--sasl-users <atlas,ranger,logsearch-solr> Sasl users (comma separated list)
-tsl,--trust-store-location <trust store location> Location of the trust store used to communicate with Solr using SSL
-tsp,--trust-store-password <trust store password> Trust store password used to communicate with Solr using SSL
-tst,--trust-store-type <trust store type> Type of the trust store used to communicate with Solr using SSL
-uc,--upload-config Upload configuration set to Zookeeper (command)
-z,--zookeeper-connect-string <host:port,host:port[/ambari-solr]> Zookeeper quorum [and Znode (optional)]
-zn,--znode </ambari-solr> Zookeeper ZNode
Maximum retries exceeded: 5
Maximum retries exceeded: 5
Return code: 1
Ambari shows both Ambari Infra Solr instances as running, but solr.log contains the following:
[root@hadoop01 ambari-infra-solr]# cat solr.log
2017-03-16 18:23:42,576 [main] WARN [ ] org.eclipse.jetty.server.handler.RequestLogHandler (RequestLogHandler.java:137) - !RequestLog
2017-03-16 18:23:43,310 [main] WARN [ ] org.eclipse.jetty.security.ConstraintSecurityHandler (ConstraintSecurityHandler.java:807) - ServletContext@o.e.j.w.WebAppContext@544fe44c{/solr,file:/usr/lib/ambari-infra-solr/server/solr-webapp/webapp/,STARTING}{/usr/lib/ambari-infra-solr/server/solr-webapp/webapp} has uncovered http methods for path: /
2017-03-16 18:23:43,646 [main] WARN [ ] org.apache.solr.core.CoreContainer (CoreContainer.java:398) - Couldn't add files from /opt/ambari_infra_solr/data/lib to classpath: /opt/ambari_infra_solr/data/lib
2017-03-16 18:23:44,264 [main] ERROR [ ] org.apache.solr.servlet.SolrDispatchFilter (SolrDispatchFilter.java:141) - Could not start Solr. Check solr/home property and the logs
2017-03-16 18:23:44,297 [main] ERROR [ ] org.apache.solr.common.SolrException (SolrException.java:159) - null:org.apache.solr.common.SolrException: Missing required parameter 'solr.kerberos.principal'.
at org.apache.solr.security.KerberosPlugin.putParam(KerberosPlugin.java:135)
at org.apache.solr.security.KerberosPlugin.init(KerberosPlugin.java:82)
at org.apache.solr.core.CoreContainer.initializeAuthenticationPlugin(CoreContainer.java:292)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:418)
at org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:158)
at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:134)
at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138)
at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:41)
at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:186)
at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:498)
at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:146)
at org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:180)
at org.eclipse.jetty.deploy.providers.WebAppProvider.fileAdded(WebAppProvider.java:461)
at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:64)
at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:609)
at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:528)
at org.eclipse.jetty.util.Scanner.scan(Scanner.java:391)
at org.eclipse.jetty.util.Scanner.doStart(Scanner.java:313)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:150)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:560)
at org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:235)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.server.Server.start(Server.java:387)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.server.Server.doStart(Server.java:354)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:1255)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:1174)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.eclipse.jetty.start.Main.invokeMain(Main.java:321)
at org.eclipse.jetty.start.Main.start(Main.java:817)
at org.eclipse.jetty.start.Main.main(Main.java:112)
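Given that startup failure, the 404 on /solr/admin/collections in the Ambari output makes sense to me: if Solr never finishes loading, the Collections API simply isn't there. For what it's worth, this is the kind of manual check I've been running against each node (host and port are taken from the logs above; the --negotiate option assumes I already have a valid Kerberos ticket from kinit):
# Is the Solr web app answering at all?
curl --negotiate -u : "http://hadoop01.int:8886/solr/admin/info/system?wt=json"
# Does the Collections API respond? (this is the call that comes back 404 for me)
curl --negotiate -u : "http://hadoop01.int:8886/solr/admin/collections?action=LIST&wt=json"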
I've checked the configuration and it seems to be OK. I did find that Solr was using the SPNEGO keytabs, so I changed the variables to the solr-infra keytabs, but the error persists. If I try to access the web UI I can get in without any problem using the SPNEGO ticket from my AD, but the console keeps loading.
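For reference, my understanding is that the KerberosPlugin reads its settings from Java system properties, which are normally set in the Infra Solr environment script (solr.in.sh / the infra-solr-env template in Ambari). This is only a sketch of what I believe that section should contain; the realm, host, JAAS path and keytab values below are placeholders, and which keytab (SPNEGO vs. infra-solr) is the right one is exactly what I'm unsure about:
# Placeholder values - property names come from the KerberosPlugin error above
SOLR_AUTHENTICATION_OPTS="-Djava.security.auth.login.config=/etc/ambari-infra-solr/conf/infra_solr_jaas.conf \
 -Dsolr.kerberos.principal=HTTP/hadoop01.int@MY.REALM \
 -Dsolr.kerberos.keytab=/etc/security/keytabs/spnego.service.keytab \
 -Dsolr.kerberos.cookie.domain=hadoop01.int \
 -Dsolr.kerberos.name.rules=DEFAULT"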
Any clue about what is going on? Thank you in advance
... View more
Labels:
03-14-2017
04:36 PM
Hi @Deepesh, that was my problem: by default it takes /etc/hive/conf/. When I tried to use the conf.server directory I also forgot the export and just declared a local variable, which is why it failed when I tried it.
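Just to spell out what I mean (I'm using HIVE_CONF_DIR here as the variable name for illustration; adjust it to whatever your setup actually reads):
# What I had - a plain shell variable, so the child process never saw it:
HIVE_CONF_DIR=/etc/hive/conf.server
# What works - the variable has to be exported into the environment:
export HIVE_CONF_DIR=/etc/hive/conf.server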
Thank you.
... View more