Member since: 09-17-2015
Posts: 436
Kudos Received: 736
Solutions: 81
My Accepted Solutions
Views | Posted
---|---
3605 | 01-14-2017 01:52 AM
5611 | 12-07-2016 06:41 PM
6423 | 11-02-2016 06:56 PM
2111 | 10-19-2016 08:10 PM
5548 | 10-19-2016 08:05 AM
09-30-2015
11:36 PM
2 Kudos
Have a few examples here from a lab we built. We are looking to get the existing Solr tutorial updated:
- https://gist.github.com/abajwa-hw/675b01c152e9fac8d3c2
- https://gist.github.com/abajwa-hw/86ec17a6f0b3542fd4a9

Collection1 should no longer be used in Solr 5.x; it's a bug in the sandbox. Also make sure to change the owner of all files under the Solr log dir to solr on the sandbox: it has some files owned by root, which causes problems. More details are available in the docs: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_search/index.html

Sample steps. On a non-sandbox environment, you can install and set up Solr this way (not needed on the sandbox):

yum install -y lucidworks-hdpsearch
sudo -u hdfs hadoop fs -mkdir /user/solr
sudo -u hdfs hadoop fs -chown solr /user/solr

Then run the sample steps below to start Solr in cloud mode and create a collection:

# only needed on the current sandbox
chown -R solr:solr /opt/lucidworks-hdpsearch/solr
su solr
/opt/lucidworks-hdpsearch/solr/bin/solr start -c -z localhost:2181
/opt/lucidworks-hdpsearch/solr/bin/solr create -c tweets \
  -d data_driven_schema_configs \
  -s 1 \
  -rf 1
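To sanity-check the new collection, you could index a test document and query it back. A minimal sketch, assuming Solr is on the default port 8983 and the tweets collection was created as above; the field name text_t is a made-up example that the data-driven schema should type automatically:

# index one test document and commit
curl "http://localhost:8983/solr/tweets/update?commit=true" \
  -H "Content-Type: application/json" \
  -d '[{"id": "1", "text_t": "hello solr"}]'

# query everything back as JSON
curl "http://localhost:8983/solr/tweets/select?q=*:*&wt=json"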
09-30-2015
10:40 PM
1 Kudo
This link may be useful: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.1.0/bk_Ambari_Security_Guide/content/_configuring_ambari_for_non-root.html
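If it helps, the gist of that doc is re-running setup with a non-root daemon account. A rough sketch, with the account name ambari being a hypothetical choice:

# create a service account (hypothetical name) and re-run setup
useradd -m ambari
ambari-server setup
# at the prompt asking whether to customize the user account for the
# ambari-server daemon, answer y and enter the non-root account (ambari)
ambari-server restart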
09-30-2015
08:46 PM
1 Kudo
Is the Ambari Metrics service up?
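A quick way to check from the command line; a minimal sketch, assuming default admin credentials and placeholder host/cluster names:

# returns "STARTED" in ServiceInfo/state if the service is up
curl -u admin:admin -H "X-Requested-By: ambari" \
  "http://<ambari-host>:8080/api/v1/clusters/<cluster>/services/AMBARI_METRICS?fields=ServiceInfo/state"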
09-30-2015
05:04 AM
Thanks @smohanty@hortonworks.com. Is it correct to say that we need to pass the previously configured properties along with the new properties together in the POST, so the existing properties are not removed?
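For illustration, here is roughly what such a full update looks like against the API. A sketch, assuming default credentials and placeholder host/cluster/property names; note that the new desired_config replaces the old one wholesale, so the properties map must carry the complete set (existing plus new), and the tag must be unique:

curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  "http://<ambari-host>:8080/api/v1/clusters/<cluster>" \
  -d '{"Clusters": {"desired_config": {
        "type": "core-site",
        "tag": "version2",
        "properties": {
          "fs.defaultFS": "hdfs://<namenode>:8020",
          "hadoop.proxyuser.oozie.hosts": "host1,host2"
        }}}}'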
09-30-2015
04:01 AM
/var/lib/ambari-server/resources/scripts/configs.sh is a great way to automate config changes, but it seems to only support the 'Default' config group. Is there a clean way to make changes to other config groups via the API?
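A sketch of what a config-group update could look like via the config_groups resource, assuming default credentials; the group name, host, and property are hypothetical, and the exact payload shape may vary by Ambari version (the PUT body carries the group's full definition, with desired_configs pointing at a new tag):

# list the groups to find the id
curl -u admin:admin -H "X-Requested-By: ambari" \
  "http://<ambari-host>:8080/api/v1/clusters/<cluster>/config_groups"

# push an updated definition for group <group-id>
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  "http://<ambari-host>:8080/api/v1/clusters/<cluster>/config_groups/<group-id>" \
  -d '[{"ConfigGroup": {
        "cluster_name": "<cluster>",
        "group_name": "hdfs-heavy-nodes",
        "tag": "HDFS",
        "description": "override for a subset of hosts",
        "hosts": [{"host_name": "<host1>"}],
        "desired_configs": [{
          "type": "hdfs-site",
          "tag": "version1443600000000",
          "properties": {"dfs.datanode.du.reserved": "2147483648"}
        }]}}]'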
Labels:
- Apache Ambari
09-30-2015
12:58 AM
While developing the Zeppelin Ambari service, I was going through the cycle of updating the code, restarting Ambari, and installing the service. When the install failed (e.g. due to an error in my code), I would delete the failed install via the REST API, restart Ambari, and start over.

Initially this worked fine (the first 3 or 4 times), but then it gets into a weird state where the failed-install icon comes back after posting the DELETE request and running ambari restart. Usually with failed installs, one can go under the Services tab of Ambari, see the failed service listed, and attempt to re-install; but in this weird state the service does not appear under Services any more. So basically I'm stuck with a cluster I can't remove this service from. I have been able to reproduce this on 2 envs, one on CentOS 7 and one on RHEL 7, but I have never seen the problem on CentOS 6.

Questions:
1. Is this a bug?
2. How do I manually remove all traces of this service from my cluster before attempting to re-install the service?

The Ambari log shows that it's unable to delete the service due to an FK constraint:

Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: update or delete on table "servicecomponentdesiredstate" violates foreign key constraint "hstcomponentstatecomponentname" on table "hostcomponentstate"
Detail: Key (component_name, cluster_id, service_name)=(ZEPPELIN_MASTER, 2, ZEPPELIN) is still referenced from table "hostcomponentstate".
Error Code: 0
Call: DELETE FROM servicecomponentdesiredstate WHERE (((cluster_id = ?) AND (component_name = ?)) AND (service_name = ?))
bind => [3 parameters bound]
Query: DeleteObjectQuery(org.apache.ambari.server.orm.entities.ClusterServiceEntity@3426596b)
at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1611)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:898)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:962)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:631)
at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.appendCall(ParameterizedSQLBatchWritingMechanism.java:82)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:603)
at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:558)
at org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:2002)
at org.eclipse.persistence.sessions.server.ClientSession.executeCall(ClientSession.java:298)
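For question 2, a manual cleanup that can work as a last resort is deleting the service's rows directly from the Ambari Postgres database, children first so the FK constraints are satisfied. A rough sketch, assuming the default ambari database; the two table names from the log are real, the remaining ones are assumptions based on the Ambari schema. Stop ambari-server and back up the database before touching it:

ambari-server stop
sudo -u postgres psql ambari <<'EOF'
-- delete child rows before their parents to satisfy the FK constraints
DELETE FROM hostcomponentstate           WHERE service_name = 'ZEPPELIN';
DELETE FROM hostcomponentdesiredstate    WHERE service_name = 'ZEPPELIN';
DELETE FROM servicecomponentdesiredstate WHERE service_name = 'ZEPPELIN';
DELETE FROM servicedesiredstate          WHERE service_name = 'ZEPPELIN';
DELETE FROM clusterservices              WHERE service_name = 'ZEPPELIN';
EOF
ambari-server start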
Labels:
- Apache Ambari
09-29-2015
03:10 PM
2 Kudos
Here is a publicly accessible link with step-by-step screenshots on how to set up LDAPS: generating a certificate on AD and then importing it on the Ambari node. http://gregtechnobabble.blogspot.com/2012/11/enabling-ldap-ssl-in-windows-2012-part-1.html Once this is complete, you can run through the Ambari security wizard, select the AD option, and provide your details to enable Kerberos.
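For the import step on the Ambari node, a minimal sketch; the certificate path, alias, and truststore location are placeholders, not the article's exact values:

# import the AD CA certificate into a JKS truststore on the Ambari node
keytool -importcert -trustcacerts -alias ad-ldaps \
  -file /tmp/ad-ca.cer \
  -keystore /etc/ambari-server/keys/ldaps-truststore.jks \
  -storepass <truststore-password>

# then point Ambari's LDAP setup at ldaps://<ad-host>:636 and the truststore above
ambari-server setup-ldap
ambari-server restart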
09-29-2015
02:16 AM
1 Kudo
This is fixed in Ambari 2.1.1. Details and a patch are available in AMBARI-12133.
09-29-2015
01:32 AM
6 Kudos
Ambari does not manage HA for Oozie yet. Here is a list of manual steps which I recently dug out for someone (AMBARI-6683 is the related JIRA, but BUG-13082 has the relevant details you are looking for). Pasting here:

1) Add the oozie-server component using the +Add button on the host page.
2) Using Apache httpd (with mod_proxy and mod_proxy_balancer), configure load balancing with a URL liveness check. This means the URL returned for Oozie is first checked for availability: one of the Oozie servers can be unavailable, so the load balancer should not return its URL. (A sketch of such a config follows after these steps.)
3) In the oozie-site.xml config:
– add oozie.zookeeper.connection.string = <list of zookeeper hosts with ports> (example: c6401.ambari.apache.org:2181,c6402.ambari.apache.org:2181,c6403.ambari.apache.org:2181)
– add the classes "org.apache.oozie.service.ZKLocksService,org.apache.oozie.service.ZKXLogStreamingService,org.apache.oozie.service.ZKJobsConcurrencyService" to the property oozie.services.ext
– change oozie.base.url to http://<loadbalancer_hostname>:11000/oozie
4) In the oozie-env.sh config:
– uncomment the OOZIE_BASE_URL property and change its value to point to the load balancer (example: http://<loadbalancer_hostname>:11000/oozie)
5) In core-site.xml:
– add the host with the newly added oozie-server to the hadoop.proxyuser.oozie.hosts property. Hosts should be comma separated.
6) Restart all needed services.

Note: Oozie HA will only work with an existing external database because, as far as I know, Derby doesn't support concurrent connections.
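For step 2, a rough sketch of what the httpd balancer config could look like; the file path, Oozie hostnames, and retry parameter are assumptions for illustration:

cat > /etc/httpd/conf.d/oozie-balancer.conf <<'EOF'
# requires mod_proxy and mod_proxy_balancer to be loaded
<Proxy balancer://oozie>
    # retry=30 keeps a failed member out of rotation for 30s
    BalancerMember http://oozie1.example.com:11000 retry=30
    BalancerMember http://oozie2.example.com:11000 retry=30
</Proxy>
ProxyPass /oozie balancer://oozie/oozie
ProxyPassReverse /oozie balancer://oozie/oozie
EOF
service httpd restart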