Member since: 10-22-2015
Posts: 55
Kudos Received: 41
Solutions: 16
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1214 | 01-11-2019 03:06 PM |
| | 897 | 08-27-2018 04:10 PM |
| | 1602 | 10-10-2017 05:38 PM |
| | 1476 | 06-22-2017 05:31 PM |
| | 1428 | 05-30-2017 07:59 PM |
12-15-2016
03:56 PM
1 Kudo
HBase can run on YARN or without YARN. MapReduce jobs always run on YARN, so whenever an HBase operation performs a MapReduce job, that job will run on YARN.
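For example, HBase ships a built-in MapReduce job that behaves this way; when MapReduce is configured to run on YARN, this is submitted to YARN like any other MR job (the table name below is a placeholder):

```shell
# RowCounter is a standard HBase MapReduce job; on a YARN cluster it is
# submitted to YARN like any other MapReduce job.
# 'mytable' is a placeholder table name.
hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'mytable'
```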
08-30-2016
12:23 AM
1 Kudo
Slider uses its own Hadoop jars in the Slider lib directory. I suspect that the Hadoop client jars Slider is using are not compatible with the version of Hadoop running on the cluster. If you built Slider yourself, you could try rebuilding it with a hadoop.version that matches the version running on your cluster (you may need to adjust some other dependency versions in this case as well). Alternatively, you could try replacing the hadoop jars in the Slider lib directory, but if the versions are too far apart this will not work and you'll have to rebuild Slider.
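A sketch of the rebuild, assuming a Maven-based Slider checkout; 2.7.3 is an example version — substitute whatever `hadoop version` reports on your cluster:

```shell
# Rebuild Slider against the cluster's Hadoop version (example version shown;
# other dependency versions in the pom may need adjusting too).
mvn clean install -DskipTests -Dhadoop.version=2.7.3
```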
08-03-2016
03:06 PM
Thanks for trying it out! It does sound like a Slider bug if it isn't working for multiple containers on the same host. You're welcome to open a ticket at https://issues.apache.org/jira/browse/SLIDER, or I can do that for you.
08-01-2016
08:42 PM
1 Kudo
I don't think the default hbase app package will allow you to get the region server ports through the REST API, because Slider is not allocating those ports. The port is set to 0 in the hbase-site config, so hbase will pick a random port itself. If you wanted Slider to allocate the port instead, you might be able to change the hbase.regionserver.port property to the following in the appConfig:

```
"site.hbase-site.hbase.regionserver.port": "${HBASE_REGIONSERVER.ALLOCATED_PORT}{PER_CONTAINER}",
```

You would also have to add componentExports to the HBASE_REGIONSERVER component in the metainfo.xml file:

```
<component>
  <name>HBASE_REGIONSERVER</name>
  <componentExports>
    <componentExport>
      <name>regionservers</name>
      <value>${THIS_HOST}:${site.hbase-site.hbase.regionserver.port}</value>
    </componentExport>
  </componentExports>
  ...
</component>
```

You should then be able to access this information through /ws/v1/slider/publisher/slider/componentinstancedata
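A sketch of querying that endpoint; the AM host and port are placeholders you would find via the Slider application's tracking URL in YARN:

```shell
# Query the Slider AM publisher for per-container export values, such as the
# allocated region server ports. Host and port below are placeholders.
curl http://<slider-am-host>:<port>/ws/v1/slider/publisher/slider/componentinstancedata
```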
06-22-2016
06:41 PM
As described in the answer to this question: https://community.hortonworks.com/questions/35138/target-replicas-is-5-but-found-3-live-replicas.html

Start the accumulo shell on a host where the Accumulo client is installed and run the following:

```
config -t accumulo.root -s table.file.replication=3
config -t accumulo.metadata -s table.file.replication=3
```

You'll have to change these properties AND run the hdfs setrep command so that both new and existing files have their settings changed.
05-26-2016
03:12 PM
Client installations are not the same as server installations. The Accumulo server components (Master, TServer, etc.) do not need a client installation of Accumulo, so clients are installed only on the nodes you request in the wizard. The server components use a separate ACCUMULO_CONF_DIR with different permissions, since the server configuration files contain sensitive information. To run the Accumulo shell from a server installation, you would run `ACCUMULO_CONF_DIR=/etc/accumulo/conf/server accumulo shell`, but for normal client operations you should not do this; run the Accumulo shell from a client installation instead. In any case, I'm glad you were able to fix the problem!
05-25-2016
01:41 PM
An Accumulo client will be set up on nodes you selected during the "Assign Slaves and Clients" step of the Ambari Install Wizard. You should be able to find out where clients are installed by going to the Ambari Dashboard, clicking on Accumulo in the service list on the left, then clicking on Accumulo Client.
05-24-2016
02:28 PM
4 Kudos
Accumulo sets the replication for its metadata tables to 5 by default; the hdfs setrep command will change the existing files, but Accumulo will still use 5 for new files it creates. To change this setting, start the accumulo shell and run the following:

```
config -t accumulo.root -s table.file.replication=3
config -t accumulo.metadata -s table.file.replication=3
```

(Note that older versions of Accumulo have just one metadata table, named !METADATA, instead of the two tables listed above.) You'll have to change these properties AND run the hdfs setrep command so that both new and existing files have their settings changed. You could set table.file.replication to 3 directly as in the commands above, or you could set it to 0 if you want Accumulo to use the HDFS default replication value. To see what value Accumulo has for the table.file.replication property and verify that you have set it properly, you can run:

```
config -t accumulo.root -f table.file.replication
config -t accumulo.metadata -f table.file.replication
```
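The setrep step might look like the following sketch; the path /apps/accumulo is an assumption, so check where your instance actually stores its files (e.g. instance.volumes) before running it:

```shell
# Re-replicate existing Accumulo files down to 3; -w waits until the new
# replication factor is achieved. The path is an assumed example.
hdfs dfs -setrep -w 3 /apps/accumulo
```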
05-02-2016
03:40 PM
2 Kudos
I think you are correct that it is just seeing if the PID still exists. It should be related to this code in the app package regionserver script: https://github.com/apache/incubator-slider/blob/develop/app-packages/hbase/package/scripts/hbase_regionserver.py#L55

The actual heartbeat is done in the agent Controller: https://github.com/apache/incubator-slider/blob/develop/slider-agent/src/main/python/agent/Controller.py#L264

Also, you can specify the heartbeat.monitor.interval in the appConfig.json (in milliseconds):

```
{
  "schema": "http://example.org/specification/v2.0.0",
  "metadata": {
  },
  "global": {
    "heartbeat.monitor.interval": "60000",
    ...
  }
}
```
01-27-2016
02:57 PM
7 Kudos
The X-Requested-By HTTP header is required. Try this:

```
curl -u admin:admin -X DELETE -H "X-Requested-By: ambari" http://lnxbig05:8080/api/v1/clusters/rsicluster01/services/ACCUMULO
```
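To confirm the service is gone afterwards, you could issue a GET against the same resource (sketch; the same credentials, host, and cluster name from the DELETE are assumed):

```shell
# After a successful DELETE, fetching the service resource should fail,
# indicating it no longer exists in the cluster.
curl -u admin:admin http://lnxbig05:8080/api/v1/clusters/rsicluster01/services/ACCUMULO
```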