Member since: 05-15-2017
Posts: 31
Kudos Received: 3
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1135 | 08-03-2017 01:58 PM
| 1200 | 07-26-2017 03:16 PM
07-19-2018
01:48 PM
Thanks for your answer. So the first three numbers refer to HBase itself, and the remaining ones to the HDP version (in your example, HDP 3.0.0.0)? Is this correct?
07-19-2018
11:49 AM
Hi, I'm looking for a list of artifact versions for HDP 3.0 to use with Maven (Java), so that I pull the correct version of e.g. hbase-client. I can't seem to find such a list. Thanks in advance
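For later readers: HDP artifact versions follow the pattern discussed in the reply to this thread (component version followed by the HDP version and a build number). A hypothetical pom.xml fragment for hbase-client on HDP 3.0 might look like the one below; the repository URL, the HBase base version and the BUILD placeholder are assumptions, and the actual build number has to be read from the cluster (e.g. via 'hdp-select versions'):

```xml
<!-- hypothetical coordinates: the HDP build number must be taken
     from the cluster itself, e.g. via 'hdp-select versions' -->
<repositories>
  <repository>
    <id>hortonworks</id>
    <url>https://repo.hortonworks.com/content/repositories/releases/</url>
  </repository>
</repositories>
<dependencies>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <!-- pattern: <hbase version>.<HDP version>-<build> -->
    <version>2.0.0.3.0.0.0-BUILD</version>
  </dependency>
</dependencies>
```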
Tags: maven
12-15-2017
12:01 PM
Hello, I have a service that connects to Hive via JDBC. When I start it on one machine (A), it writes data to Hive. When I start the same service on another machine (B), the call pstmt.executeUpdate(); does not return: there is no exception or anything, even after letting it run for more than 10 minutes. Why does it work on one machine but not the other? It's the same connection string. Thanks and BR
Labels: Apache Hive
12-13-2017
07:34 AM
Thanks for your response @ghagleitner . Short answer: unfortunately, no. Long answer: we use HBase as primary storage, but at the same time we want to write the data to Hive, where it resides longer than in HBase (we set a TTL on the HBase tables). Within Hive we do not want the same number of columns, only a few of them, some of them "aggregated", so that we do not have to adapt the Hive table schema every time a new column is added in HBase. At the moment we use AMQ for messaging to the Hive-write service, but we plan on introducing Kafka. Is there a better way when using Kafka?
Any other ideas that might pop into your head?
12-12-2017
09:04 AM
Hello, we have Hive (version 1.2.1000) running in our cluster. I'm using JDBC to write data to a Hive table, and the insert performance isn't great: at times a single insert takes 10s or more. The table has four columns: one String, one BIGINT, and two Binary. I tried using batches, but batch operations aren't supported by the JDBC driver. Next I tried connection pooling (tomcat-jdbc), but that doesn't seem to work either; I receive the following error: org.apache.thrift.transport.TTransportException: java.net.SocketException: Socket closed. Any ideas on what I can use to improve write performance? Thanks
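For readers hitting the same limitation: since executeBatch is rejected by that driver, one common workaround is to collapse many rows into a single multi-row INSERT (Hive supports INSERT ... VALUES with multiple rows since 0.14), so the per-statement overhead is paid once per chunk instead of once per row. A rough sketch of the idea in Python; the function name and the string-assembled literals are illustrative only, and real code should use parameter binding rather than string concatenation:

```python
def build_multirow_insert(table, rows):
    """Collapse many single-row inserts into one statement.

    Each row is a tuple of already-quoted SQL literals; this is a
    sketch of the chunking idea, not production SQL assembly.
    """
    if not rows:
        raise ValueError("no rows to insert")
    values = ", ".join("(" + ", ".join(row) + ")" for row in rows)
    return f"INSERT INTO {table} VALUES {values}"

# one round trip instead of two:
sql = build_multirow_insert("events", [("'a'", "1"), ("'b'", "2")])
# INSERT INTO events VALUES ('a', 1), ('b', 2)
```

Chunking a few hundred rows per statement usually amortizes the query-compilation cost that dominates single-row inserts into Hive.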
Labels: Apache Hive
12-12-2017
07:49 AM
OK, thanks for the info.
09-20-2017
01:30 PM
Hi @amagyar , thanks for your explanations! My remaining question: where can I find the log lines that I added to my 'status' method, and the ones in the service_check.py file? I assume on the host the service itself runs on, but I can't find the file. Thanks
08-09-2017
12:24 PM
I don't know of any best practice, but I referred to an existing service, e.g. HBase. That example shows that dependencies are only defined for the Master component. In the end it depends on your components: if your slave depends on having, e.g., a RegionServer on the same host, add that dependency to your slave component; if you just require a RegionServer somewhere on the cluster, I'd define the dependency on the Master component. What's also important is the '<requiredServices>' section in the sample linked above. I hope this information helps.
08-09-2017
11:57 AM
Thanks for your answer @Akhil S Naik. It really was the issue with the config: I changed my service to contain a config (although it's not used) and the issue is gone. I didn't test the ambari-server configuration change, though, as it would only have been a workaround for me.
08-09-2017
05:14 AM
Hello, I'm curious about the status functions implemented for a service ('service_check') and its component(s). As seen with the HBase service, there's a 'commandScript' defined in 'metainfo.xml' that references 'scripts/service_check.py' and sits on the same level as '<components>'. On the other hand, a component's class (subclass of 'Script') has def status(self, env): I see that 'service_check' is executed upon installation, and I can see its log entries in the Ambari UI, but I haven't seen the log entries anywhere else yet. So my questions are: 1. When is which function called? 2. Each function uses the 'Logger'; where can I find the log lines for both the service_check and the component's status? Thanks
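For context on question 1: as far as I understand, the agent calls a component's status() periodically via heartbeat STATUS commands, while service_check.py only runs on an explicit 'Service Check' action (and during install/upgrade). The bundled stacks' status() implementations generally just verify a pid file. A self-contained approximation of that check (not the real resource_management.check_process_status helper, which is only available on Ambari agents):

```python
import os
import tempfile

def component_is_live(pid_file):
    """Rough stand-in for Ambari's check_process_status: read the pid
    file and probe the process with signal 0 (no signal is sent)."""
    try:
        with open(pid_file) as fh:
            pid = int(fh.read().strip())
        os.kill(pid, 0)  # raises OSError if no such process
        return True
    except (OSError, ValueError):
        return False

# demo: our own pid counts as "live", a missing pid file as "not running"
pid_file = os.path.join(tempfile.mkdtemp(), "demo.pid")
with open(pid_file, "w") as fh:
    fh.write(str(os.getpid()))
alive = component_is_live(pid_file)        # True
gone = component_is_live(pid_file + ".x")  # False
```

Inside a real component script, status() would raise ComponentIsNotRunning instead of returning False, which is how the agent reports the state back to the server.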
Labels: Apache Ambari
08-07-2017
01:38 PM
Thanks, but as the site states: "Note: These API calls do not uninstall the packages associated with the service, nor do they remove the config or temp folders associated with the service components." I'm looking for a way to execute some code when a component gets uninstalled. Thanks
08-07-2017
01:32 PM
When I try to test an (express) upgrade of a stack that includes my custom component, I get the following error: "Last Service Check should be more recent than the last configuration change for the given service Reason: Unexpected server error happened. Failed on:" As you can see, it doesn't tell me which component, but since my own component is installed with the stack I suppose it's mine. My colleague pointed out that it may be because my component has no "Config" tab. Could this be the issue? To briefly explain what my service does: I want to be able to run 2+ RegionServers on one host, so I copy the configuration from the original RegionServer and make some changes in two configuration files before the component is started. That's why I don't need any configuration within Ambari itself. Hopefully somebody can shed some light on this. Thanks
Labels: Apache Ambari
08-03-2017
01:58 PM
1 Kudo
Hi @Sayed Anisul Hoque. Regarding 1: you already linked to the resources covering custom services; maybe the sub-menu sheds more light. Regarding 2, dependency.name: this is <Service>/<Component>; have a look at the other metainfo.xml definitions. As another example, if you depend on an HBase Master you would write HBASE/HBASE_MASTER. Regarding 2, dependency.scope: this is the scope the dependency has to have. In your example, 'cluster' means that a ZooKeeper Server has to be installed somewhere on your cluster; with scope 'host' the ZooKeeper Server would have to be on the same host as your custom service. Hope that helps.
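To make the two fields concrete, a small hypothetical metainfo.xml fragment combining both scopes might look like this (component names here are just examples):

```xml
<dependencies>
  <!-- scope 'cluster': an HBase Master must exist somewhere in the cluster -->
  <dependency>
    <name>HBASE/HBASE_MASTER</name>
    <scope>cluster</scope>
  </dependency>
  <!-- scope 'host': a ZooKeeper Server must be co-located on the same host -->
  <dependency>
    <name>ZOOKEEPER/ZOOKEEPER_SERVER</name>
    <scope>host</scope>
  </dependency>
</dependencies>
```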
08-01-2017
02:16 PM
@Jay SenSharma yes, it's my custom component, and a directory with that name is present within /usr/hdp/current. Any idea why I get this error? Anything else I need to be aware of?
08-01-2017
01:42 PM
Another warning I get when restarting the ambari-server is in ambari-server-check-database.log: "WARN - Service(s): RS_02, from cluster testc has no config(s) in serviceconfig table!" What does this warning mean exactly, and what do I have to do to get rid of it? Thanks
07-28-2017
11:32 AM
I want to execute some code when I delete a service, basically clean-up of my custom service. At https://cwiki.apache.org/confluence/display/AMBARI/How-To+Define+Stacks+and+Services#How-ToDefineStacksandServices-Commands I found the supported commands, so to hook into uninstallation I'd guess I have to override the 'uninstall' method. Is this correct? I looked into the HBase definitions, but there isn't any definition for uninstall; can you explain why? As I just realized, when I delete my service it disappears from the Ambari UI, but the service is still running. When I just 'delete' a service via Ambari, what method gets triggered? Thanks. Edit/Update: I implemented the 'uninstall' method in my service, but it doesn't get called. I added log lines, but I can't see them anywhere, and nothing that's supposed to happen (deleting files etc.) happens. Can you please tell me what I have to do to have something executed on deletion of my service? Thanks. Any ideas?
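For what it's worth, deleting a service through the UI goes through the REST API (a DELETE on the service resource), and as far as I can tell no stack script is invoked for it, which would explain why the 'uninstall' method never fires. A sketch of how such a request could be built with the Python standard library; the host, cluster and service names below are just placeholders taken from this thread, and the credentials are assumptions:

```python
import base64
import urllib.request

def build_delete_service_request(base_url, cluster, service, user, password):
    """Build (but do not send) the Ambari REST call that removes a
    service; all parameters here are placeholders."""
    url = f"{base_url}/api/v1/clusters/{cluster}/services/{service}"
    req = urllib.request.Request(url, method="DELETE")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    # Ambari typically rejects modifying calls without this header
    req.add_header("X-Requested-By", "ambari")
    return req

req = build_delete_service_request(
    "http://c6401.ambari.apache.org:8080", "testc", "RS_02", "admin", "admin")
```

The service has to be stopped first, and sending the request would be urllib.request.urlopen(req). Since no script seems to run on deletion, any clean-up code would have to be triggered separately, e.g. as a custom action executed before the DELETE.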
Tags: ambari-server
Labels: Apache Ambari
07-26-2017
03:18 PM
Thanks for the hint. The log files provided further details, but in the end I ran into other problems; see my other questions. Thanks
07-26-2017
03:16 PM
I (kind of) managed to change the files when they get copied during installation: I'm modifying them after the copy via Python. However, I'm now facing another issue, see my question: https://community.hortonworks.com/questions/115875/unable-to-determine-stack-version-for-custom-servi.html Thanks
07-26-2017
03:11 PM
I'm trying to add a custom service, an additional RegionServer (see my other questions), but I'm getting the following error message: "Could not determine stack version for component hbase-regionserver-01 by calling '/usr/bin/hdp-select status hbase-regionserver-01 > /tmp/tmplhosIC'. Return Code: 1, Output: ERROR: Invalid package - hbase-regionserver-01" What do I have to provide to make this call succeed? Edit: I just tried to test the upgrade, and the issue is critical for the upgrade as well. I get the following message: "The following components were found to have version mismatches. Finalize will not complete successfully: c6403.ambari.apache.org: RS_HBASE_02/RS_HBASE_02 reports UNKNOWN" Thanks in advance for any hints.
Labels: Apache Ambari, Apache HBase
06-30-2017
05:28 AM
Thanks for the hint, but I'm looking for a way to change settings in the copied hbase-site.xml and hbase-env.sh files, via Python, before the component gets started.
06-29-2017
02:47 PM
Hi, I'm trying to add an additional RegionServer to an (HBase) cluster. I have the following metainfo.xml:
<metainfo>
<schemaVersion>2.0</schemaVersion>
<services>
<service>
<name>RS_HBASE_01</name>
<displayName>RS-HBASE-01</displayName>
<comment>Additional Regionserver on a host</comment>
<version>1.1.2</version>
<components>
<component>
<name>RS_HBASE_01</name>
<displayName>RS-01</displayName>
<category>SLAVE</category>
<cardinality>1+</cardinality>
<versionAdvertised>true</versionAdvertised>
<decommissionAllowed>true</decommissionAllowed>
<!-- the default category used to store generated metrics data -->
<timelineAppid>HBASE</timelineAppid>
<!-- the list of components that this component depends on -->
<dependencies>
<dependency>
<name>HBASE/HBASE_MASTER</name>
<scope>cluster</scope>
<auto-deploy>
<enabled>true</enabled>
<!--<co-locate>HBASE/HBASE_MASTER</co-locate>-->
</auto-deploy>
</dependency>
<dependency>
<name>HBASE/HBASE_REGIONSERVER</name>
<scope>host</scope>
<auto-deploy>
<enabled>true</enabled>
</auto-deploy>
</dependency>
</dependencies>
<commandScript>
<script>scripts/hbase_regionserver.py</script>
<scriptType>PYTHON</scriptType>
<timeout>1200</timeout>
</commandScript>
<bulkCommands>
<displayName>RegionServers</displayName>
<!-- Used by decommission and recommission -->
<masterComponent>HBASE/HBASE_MASTER</masterComponent>
</bulkCommands>
<logs>
<log>
<logId>hbase_regionserver01</logId>
<primary>true</primary>
</log>
</logs>
</component>
</components>
<commandScript>
<script>scripts/service_check.py</script>
<scriptType>PYTHON</scriptType>
<timeout>300</timeout>
</commandScript>
<!-- what other services that should be present on the cluster -->
<requiredServices>
<service>ZOOKEEPER</service>
<service>HDFS</service>
<service>HBASE</service>
</requiredServices>
<!-- configuration files that are expected by the service (config files owned by other services are specified in this list) -->
<configuration-dependencies>
<config-type>core-site</config-type> <!-- hbase puts core-site in its folder -->
<config-type>hbase-policy</config-type>
<config-type>hbase-site</config-type>
<config-type>hbase-env</config-type>
<config-type>hbase-log4j</config-type>
</configuration-dependencies>
</service>
</services>
</metainfo>
Here I define my required services, and also the configuration files I need from the other services. The installation itself already works: all files/symlinks are created and put into their directory. But in order to start up successfully I need some configuration values changed, e.g. hbase.regionserver.port and hbase.regionserver.info.port (within hbase-site.xml), and the PID file location, log-file name etc. (within hbase-env.sh). How can I change the values in the configuration files? I suppose there is some helper method with which I can read the configuration file before starting and change the values. Thanks for your suggestions.
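Within a real Ambari script the values would normally come from the config model (params.py and the XmlConfig resource), but for directly rewriting an already-copied hbase-site.xml before start, plain xml.etree is enough. A minimal sketch; the demo works on a throwaway file, with the real target being the copied conf folder, and set_hbase_property is a made-up helper, not an Ambari API:

```python
import os
import tempfile
import xml.etree.ElementTree as ET

def set_hbase_property(path, name, value):
    """Update (or add) a <property> entry in an hbase-site.xml-style file."""
    tree = ET.parse(path)
    root = tree.getroot()
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            prop.find("value").text = value
            break
    else:  # property not present yet, append it
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    tree.write(path)

# demo on a throwaway copy; the real target would be the copied conf
# folder, e.g. /etc/hbase/<version>/1/hbase-site.xml
demo = os.path.join(tempfile.mkdtemp(), "hbase-site.xml")
with open(demo, "w") as fh:
    fh.write("<configuration><property><name>hbase.regionserver.port</name>"
             "<value>16020</value></property></configuration>")
set_hbase_property(demo, "hbase.regionserver.port", "26020")
new_value = ET.parse(demo).getroot().findtext("property/value")  # "26020"
```

Note that Ambari will happily overwrite such direct edits on the next config push, so inside a managed component the XmlConfig route is the safer choice.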
Labels: Apache Ambari, Apache HBase
06-29-2017
02:36 PM
Hi @Riccardo Iacomini, I found these two examples quite helpful:
- https://github.com/geniuszhe/ambari-mongodb-cluster
- https://github.com/abajwa-hw/ntpd-stack
I suppose you have your own services; if so, I'd create my own repo (an RPM repo for CentOS, e.g., as in the mongodb example) with your services packaged and ready to install.
06-27-2017
08:06 AM
Thanks for the info @jaimin. I just re-created my virtual machines, now with Ambari 2.4.0 instead of 2.2.2.0. My custom service is again added to the stack definition, and now I'm able to assign masters and slaves, including my custom service; so maybe it was a problem with the 'old' version. The error I now get is: org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Configuration with tag 'version1' exists for 'hbase-env'
ambari-server.log
ClusterImpl:2523 - Looking for service for config types [hbase-env]
27 Jun 2017 07:27:07,652 ERROR [ambari-client-thread-24] AbstractResourceProvider:343 - Caught AmbariException when modifying a resource
org.apache.ambari.server.AmbariException: Configuration with tag 'version1' exists for 'hbase-env'
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.createConfiguration(AmbariManagementControllerImpl.java:855)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.updateCluster(AmbariManagementControllerImpl.java:1653)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.updateClusters(AmbariManagementControllerImpl.java:1473)
at org.apache.ambari.server.controller.internal.ClusterResourceProvider$2.invoke(ClusterResourceProvider.java:311)
at org.apache.ambari.server.controller.internal.ClusterResourceProvider$2.invoke(ClusterResourceProvider.java:308)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.invokeWithRetry(AbstractResourceProvider.java:455)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.modifyResources(AbstractResourceProvider.java:336)
at org.apache.ambari.server.controller.internal.ClusterResourceProvider.updateResourcesAuthorized(ClusterResourceProvider.java:308)
at org.apache.ambari.server.controller.internal.AbstractAuthorizedResourceProvider.updateResources(AbstractAuthorizedResourceProvider.java:301)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl.updateResources(ClusterControllerImpl.java:319)
at org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.update(PersistenceManagerImpl.java:125)
at org.apache.ambari.server.api.handlers.UpdateHandler.persist(UpdateHandler.java:45)
at org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:73)
at org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:145)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:126)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:90)
at org.apache.ambari.server.api.services.ClusterService.updateCluster(ClusterService.java:142)
at org.apache.ambari.server.api.services.ClusterService.updateCluster(ClusterService.java:142)
Attached are the configuration and package folders of my service (ambari-hbase-rs.zip); the metainfo didn't change (except the version, which is 1.1.2 now). Basically I only changed some port numbers and the locations of the log and PID files, and I removed any code regarding Windows OS. Thanks for your help.
Edit: I had a problem in my params_linux.py script, as some of the imports aren't available. Right now I can create the intended symlink, but it points to a wrong location. The original RS looks like this: hbase-regionserver -> /usr/hdp/2.5.6.0-40/hbase and mine is hbase-regionserver-01 -> /usr/hdp/2.5.0.0/hbase The path for the target is created within status_params.py (at the moment), looking like this: regionserver_root_dir_specific = format("{stack_root}/{stack_version_formatted}/hbase")
So my question is: is {stack_version_formatted} wrong (no change of the code, just re-using it), or do I have to use another variable? Thanks for your time
06-27-2017
05:50 AM
Thanks for the hint @jaimin . Attached are the files from the folder (stack-recommendations.zip); unfortunately the stackadvisor.err file is empty, so I'm not sure that helps. Maybe you see a problem? A question for my understanding: do I have to define a MASTER component for a service, or is it OK if I only have SLAVEs? In my case, the MASTER is the one from HBASE.
06-26-2017
02:44 PM
I already posted the manual steps for adding an additional RS to an existing HBase cluster (see here).
Due to the limitation that I can't manage it via Ambari, I looked at the sources and at other custom service definitions for a stack; the metainfo.xml file follows below. When I add this to my ambari-server, the server starts and I see the new service in the components list. When I select only my new service, Ambari tells me that other services are required for this service to run, so that part works fine. But then I get this exception: 500 status code received on GET method for API: /api/v1/stacks/HDP/versions/2.4/recommendations
Error message: Server Error
and the ambari-server.log shows this:
26 Jun 2017 14:22:49,119 WARN [qtp-ambari-client-22] ServletHandler:563 - /api/v1/stacks/HDP/versions/2.4/recommendations
java.lang.NullPointerException
at org.apache.ambari.server.state.PropertyInfo.getAttributesMap(PropertyInfo.java:145)
at org.apache.ambari.server.state.PropertyInfo.convertToResponse(PropertyInfo.java:128)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.getStackConfigurations(AmbariManagementControllerImpl.java:3898)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.getStackConfigurations(AmbariManagementControllerImpl.java:3867)
at org.apache.ambari.server.controller.internal.StackConfigurationResourceProvider$1.invoke(StackConfigurationResourceProvider.java:114)
at org.apache.ambari.server.controller.internal.StackConfigurationResourceProvider$1.invoke(StackConfigurationResourceProvider.java:111)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.getResources(AbstractResourceProvider.java:302)
at org.apache.ambari.server.controller.internal.StackConfigurationResourceProvider.getResources(StackConfigurationResourceProvider.java:111)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl$ExtendedResourceProviderWrapper.queryForResources(ClusterControllerImpl.java:945)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl.getResources(ClusterControllerImpl.java:132)
at org.apache.ambari.server.api.query.QueryImpl.doQuery(QueryImpl.java:508)
at org.apache.ambari.server.api.query.QueryImpl.queryForSubResources(QueryImpl.java:463)
at org.apache.ambari.server.api.query.QueryImpl.queryForSubResources(QueryImpl.java:482)
at org.apache.ambari.server.api.query.QueryImpl.queryForResources(QueryImpl.java:436)
at org.apache.ambari.server.api.query.QueryImpl.execute(QueryImpl.java:216)
at org.apache.ambari.server.api.handlers.ReadHandler.handleRequest(ReadHandler.java:68)
at org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:135)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:106)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:75)
at org.apache.ambari.server.api.services.stackadvisor.commands.StackAdvisorCommand.getServicesInformation(StackAdvisorCommand.java:356)
at org.apache.ambari.server.api.services.stackadvisor.commands.StackAdvisorCommand.invoke(StackAdvisorCommand.java:247)
at org.apache.ambari.server.api.services.stackadvisor.StackAdvisorHelper.recommend(StackAdvisorHelper.java:109)
at org.apache.ambari.server.controller.internal.RecommendationResourceProvider.createResources(RecommendationResourceProvider.java:92)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl.createResources(ClusterControllerImpl.java:289)
at org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.create(PersistenceManagerImpl.java:76)
at org.apache.ambari.server.api.handlers.CreateHandler.persist(CreateHandler.java:36)
at org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:72)
at org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:135)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:106)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:75)
at org.apache.ambari.server.api.services.RecommendationService.getRecommendation(RecommendationService.java:59)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
The metainfo.xml file:
<metainfo>
<schemaVersion>2.0</schemaVersion>
<services>
<service>
<name>HBASE-RS</name>
<displayName>HBase-RS</displayName>
<comment>Additional Regionservers on one host
</comment>
<version>1.1.2.2.4</version>
<components>
<component>
<name>HBASE_REGIONSERVER_01</name>
<displayName>RegionServer01</displayName>
<category>MASTER</category>
<cardinality>1+</cardinality>
<versionAdvertised>true</versionAdvertised>
<decommissionAllowed>true</decommissionAllowed>
<timelineAppid>HBASE</timelineAppid>
<dependencies>
<dependency>
<name>HBASE/HBASE_MASTER</name>
<scope>cluster</scope>
<auto-deploy>
<enabled>false</enabled>
<co-locate>HBASE/HBASE_MASTER</co-locate>
</auto-deploy>
</dependency>
<dependency>
<name>HBASE/HBASE_REGIONSERVER</name>
<scope>host</scope>
<auto-deploy>
<enabled>false</enabled>
</auto-deploy>
</dependency>
</dependencies>
<commandScript>
<script>scripts/hbase_regionserver.py</script>
<scriptType>PYTHON</scriptType>
<timeout>1200</timeout>
</commandScript>
<bulkCommands>
<displayName>RegionServers</displayName>
<!-- Used by decommission and recommission -->
<masterComponent>HBASE/HBASE_MASTER</masterComponent>
</bulkCommands>
<logs>
<log>
<logId>hbase_regionserver01</logId>
<primary>true</primary>
</log>
</logs>
</component>
</components>
<osSpecifics>
<osSpecific>
<osFamily>redhat6,redhat7</osFamily>
<packages>
<package>
<name>hbase</name>
</package>
</packages>
</osSpecific>
</osSpecifics>
<commandScript>
<script>scripts/service_check.py</script>
<scriptType>PYTHON</scriptType>
<timeout>300</timeout>
</commandScript>
<requiredServices>
<service>ZOOKEEPER</service>
<service>HDFS</service>
<service>HBASE</service>
</requiredServices>
<!-- configuration files that are expected by the service (config files owned by other services are specified in this list) -->
<configuration-dependencies>
<config-type>core-site</config-type> <!-- hbase puts core-site in its folder -->
<config-type>hbase-policy</config-type>
<!--<config-type>hbase-site</config-type>-->
<!--<config-type>hbase-env</config-type>-->
<!--<config-type>hbase-log4j</config-type>-->
</configuration-dependencies>
</service>
</services>
</metainfo>
I tried to set the category to SLAVE, as it basically is a SLAVE and the master component is an HBASE_MASTER, but I get the same error. Any ideas what I'm doing wrong?
Thanks.
Labels: Apache Ambari, Apache HBase
06-08-2017
06:19 AM
Thanks, that resolved the issue. It would be nice to include this in the Quick Start Guide.
06-07-2017
03:31 PM
I was following the instructions at https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide with CentOS 6.4 and installed ambari-server. The logs show that the ambari-server is up and running, but from my host machine I can't access the UI, neither via the FQDN nor via the IP. I tried adding a forwarded port within Vagrant: c6401.vm.network :forwarded_port, guest: 8080, host: 8080 but still no luck. Any idea what I missed?
Labels: Apache Ambari
06-06-2017
01:45 PM
Well, I finally got some time to dig a little deeper. I followed the instructions from the wiki.
First, https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Development : I installed everything on my Ubuntu machine and did a Maven build. The rpm build was successful; the one with jdeb failed for some projects because the jdeb plugin was missing (ambari-funtest, ambari-logsearch, ambari-metrics-grafana, ambari-metrics-storm-sink). After fixing those issues the Maven build ran fine on my machine.
Next I tried https://cwiki.apache.org/confluence/display/AMBARI/Development+in+Docker . With this file the set-up of the container failed, first because of Oracle's JDK download issue (I had to place the JDK in the directory and transfer it to the docker image with the COPY command), and second because Maven 3.0.5 wasn't sufficient, so I changed to the latest, 3.5.0. The Maven build executed at the end of the Dockerfile then failed several times for ambari-server with: Caused by: java.lang.OutOfMemoryError: PermGen space I tried both the trunk and the 2.5.0 branch, with the same errors both times. If there was some error on my side, please let me know. In any case, doing anything with Docker is moot for now, as it won't compile. So this is just some feedback for you.
Finally, my findings and questions: installing HBase is managed by the HDP stack, but I'm not quite sure where the scripts are located. The only scripts I see are those in ambari-server/src/main/resources/common-services; are those the correct ones? Since I want to add multiple RegionServers on one host, would I just add another component referencing new scripts that handle the changes I have to make? Where would I add this component: in the common-services folder, or do I need to adapt anything in the stack? If the latter, there is no definition of components, or at least I can't find it. Thanks for any hints.
05-16-2017
08:29 AM
2 Kudos
I managed to manually get a second RS running on a host with the following changes:

1) Configuration
The configuration of our first RS on this host lies within /usr/hdp/current/hbase-regionserver/conf, which actually links to /etc/hbase/2.5.3.0-37/0 . In order to have a second configuration I copied the '0' folder to '1' in the same parent folder (same permissions as the '0' folder) and adapted the following parameters:
within hbase-site.xml:
hbase.regionserver.info.port from 16030 to 26130
hbase.regionserver.port from 16020 to 26020
within hbase-env.sh:
export HBASE_CONF_DIR=${HBASE_CONF_DIR:-/etc/hbase/2.5.3.0-37/1}
export HBASE_LOG_DIR=/var/log/hbase/hbase1
export HBASE_PID_DIR=/var/run/hbase/hbase1
I changed the log and pid directory to a subfolder, as the write permissions there are already set correctly. Adapting the WAL directory the RS writes to is not necessary, as the port is contained in the folder name and thus it's unique (I got this information from https://amahanty.wordpress.com/2012/11/04/understanding-hbase-files-and-directories ).

2) Execution
Even though there's only one hbase-regionserver "folder" within /usr/hdp/current, this is sufficient, as we can pass another configuration folder at startup. Based on the original command that is logged when restarting a RS via Ambari:
Execute['/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh --config /usr/hdp/current/hbase-regionserver/conf start regionserver'] {'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hbase/hbase-hbase-regionserver.pid && ps -p `ambari-sudo.sh -H -E cat /var/run/hbase/hbase-hbase-regionserver.pid` >/dev/null 2>&1', 'user': 'hbase'}
I executed the following:
/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh --config /etc/hbase/2.5.3.0-37/1 start regionserver

3) Finally
Within the hbase-master web UI I saw the new RS, and after some time the regions were automatically distributed to it. Within the log file I see:
impl.MetricsSystemImpl: Error creating sink 'timeline' org.apache.hadoop.metrics2.impl.MetricsConfigException: Error creating plugin: org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
so I'm probably missing some metrics configuration (ports?), but I haven't looked any further at the moment. In any case, we can't manage the new RS with Ambari, which makes sense, as Ambari doesn't know anything about this service.

The obvious question now is: how can we extend Ambari in order to install further RSs on one host?

Open questions:
a) File /usr/hdp/current/hbase-regionserver/conf/regionservers - When adding an additional RS on a host, would it result in a second entry in the regionservers file? And what's the meaning of this file?
b) /etc/hbase/2.5.3.0-37/0 - what's the meaning behind this 'zero' folder?
c) Does the Ambari design support adding additional RSs on one host?

I cloned the source from https://git-wip-us.apache.org/repos/asf/ambari.git, browsed through the code, checked out the Ambari design document etc., and have a basic overview of the design - really basic (just to underline this). What I think right now about how it works (please correct me if I'm wrong): when I add a new host, the basic file structure for all services is put on this host; only when assigning a component to this newly added host does some magic happen. With 'magic' I mean all the operations needed in order to be able to monitor the component with Ambari. At this point I'm not sure where to add code in order to define a different configuration folder for the RS. Of course, I also want to be able to restart/stop/start etc. the component via Ambari, and when I make changes to the configuration of a RS or HBase itself, the additional RS should also be considered. Can you give me any hints? Thanks.
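If the manual steps were ever scripted (e.g. from a custom component's start method), the hbase-env.sh edits from step 1 boil down to rewriting three export lines on the cloned copy. A minimal sketch, assuming the file layout described above; patch_hbase_env is a made-up helper, not an Ambari API:

```python
def patch_hbase_env(text, conf_dir, log_dir, pid_dir):
    """Rewrite the three exports for a cloned RegionServer config
    (the '1' folder from step 1). Pure string handling, no Ambari API."""
    replacements = {
        "export HBASE_CONF_DIR=":
            f"export HBASE_CONF_DIR=${{HBASE_CONF_DIR:-{conf_dir}}}",
        "export HBASE_LOG_DIR=": f"export HBASE_LOG_DIR={log_dir}",
        "export HBASE_PID_DIR=": f"export HBASE_PID_DIR={pid_dir}",
    }
    out = []
    for line in text.splitlines():
        for prefix, replacement in replacements.items():
            if line.startswith(prefix):
                line = replacement
                break
        out.append(line)
    return "\n".join(out)

original = ("export HBASE_LOG_DIR=/var/log/hbase\n"
            "export HBASE_PID_DIR=/var/run/hbase")
patched = patch_hbase_env(original, "/etc/hbase/2.5.3.0-37/1",
                          "/var/log/hbase/hbase1", "/var/run/hbase/hbase1")
```

The same pattern would cover the hbase-site.xml port changes, though those are better done with an XML parser than with line matching.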
Labels: Apache Ambari, Apache HBase