Member since 12-15-2015
39 Posts
21 Kudos Received
4 Solutions

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1705 | 07-15-2016 03:41 AM
 | 921 | 07-15-2016 01:31 AM
 | 1727 | 02-24-2016 11:39 PM
 | 3158 | 02-19-2016 05:16 PM
06-12-2017
02:44 PM
The HBase shell also includes an alternative to `truncate` called `truncate_preserve`, which drops the data in a table while preserving the region split points recorded for it in hbase:meta. You can also use the `status 'detailed'` command from `hbase shell` to see the split/presplit regions. For example:

$ /opt/bin/hbshell.bash <<< "truncate_preserve 'dev06ostgarh:contact'"
$ /opt/bin/hbshell.bash <<< "status 'detailed'" | grep contact
"dev06ostgarh:contact,600,1497277520170.a17b52eec2a9df78bbc175a5238a13f8."
"dev06ostgarh:contact,800,1497277520170.275ff20d4c4036163ee90a94092b47e6."
"dev06ostgarh:contact,A00,1497277520170.e94b5202f2cbb100dd74e858f772573e."
"dev06ostgarh:contact,E00,1497277520170.7d9f1b140431d2a975ee35e716b9337e."
"dev06ostgarh:contact,,1497277520170.ed4ad8c3fb40598bb51d2ad91dee6973."
"dev06ostgarh:contact,100,1497277520170.43378ea32c0b3bf64dd697b62b87b23c."
"dev06ostgarh:contact,200,1497277520170.be10a9e051b0f9a4f7c460c1b36969d7."
"dev06ostgarh:contact,F00,1497277520170.64f57d24d29058f238b1addb08c603a8."
"dev06ostgarh:contact,300,1497277520170.6244d798c326984e47a577f579645487."
"dev06ostgarh:contact,500,1497277520170.c9b1ed0633850eb5086a32049a5597d5."
"dev06ostgarh:contact,C00,1497277520170.bb446488db4bb788248f41551db878f5."
"dev06ostgarh:contact,D00,1497277520170.b2d2bc1febc9206df27e05a976066027."
"dev06ostgarh:contact,400,1497277520170.b61ffc5cdfda59f96b47d83f8aa46134."
"dev06ostgarh:contact,700,1497277520170.e4e1cab56f430344c71e24fa6673b55f."
"dev06ostgarh:contact,900,1497277520170.1cb1067053fe203426e0e9e2c00bb37f."
"dev06ostgarh:contact,B00,1497277520170.5867477ee49a19f6dafd0ff60f9b8f6f."
02-27-2017
09:53 PM
@artem Thanks for the details. I'm looking for these features in the REST server included with HBase, and it apparently lacks this support. See the upstream HBase JIRA I referenced.
02-27-2017
09:08 PM
According to this JIRA: https://issues.apache.org/jira/browse/HBASE-14147, namespace support was added to the HBase REST server in what looks to be versions 2.0.0, 1.2.0, 1.3.0, and 0.98.15. Is there any way to get access to this in HDP 2.3 - 2.5? All of these HDP versions ship HBase 1.1.2.
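For reference, this is roughly the kind of access I'm after. The endpoint paths below are my reading of HBASE-14147, so treat them as assumptions rather than a verified API:

# list all namespaces via the REST server (assumed endpoint from HBASE-14147)
$ curl -s http://<rest-host>:8080/namespaces

# list the tables within one namespace
$ curl -s http://<rest-host>:8080/namespaces/<namespace>/tables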
Labels:
- Apache HBase
02-13-2017
06:28 PM
Background
I'm currently investigating how to make Ambari Server run in multiple regions within AWS. I realize that Ambari does not provide this feature out of the box, so I was thinking I could use something like the following:
- Set up 2 EC2 instances as Ambari Servers (in 2 different AWS regions)
- Assign a VIP (Virtual IP) and designate 1 of the above servers as the initial owner
- Use keepalived to fail the VIP over between the 2 EC2 instances that both run Ambari Server (a notify-script sketch follows the references below)
- Use DynamoDB to externalize the PostgreSQL DB
- A script/command that keepalived would trigger to start up the secondary Ambari Server and stop the primary

My questions
- Does this seem feasible?
- What about the hostname/IP info assigned to the 2 Ambari Servers?
- What about any filesystem details associated with Ambari Server; do these need to be shared across nodes too?

References
- Chapter 5. Moving the Ambari Server
- How to setup High Availability for Ambari server?
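To make the failover step concrete, below is a minimal sketch of the kind of keepalived notify script I have in mind. This is untested: keepalived invokes notify scripts with the instance type, name, and new state as arguments, and `ambari-server start`/`stop` are the standard service commands.

#!/usr/bin/env bash
# Hypothetical keepalived notify script: invoked as notify.sh <TYPE> <NAME> <STATE>
STATE="$3"

case "$STATE" in
  MASTER)
    # This node now owns the VIP: bring Ambari Server up here.
    ambari-server start
    ;;
  BACKUP|FAULT)
    # This node lost the VIP: ensure only the new master runs Ambari Server.
    ambari-server stop
    ;;
esac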
Labels:
- Apache Ambari
07-27-2016
07:28 PM
http://hortonworks.com/blog/odpi-core-hdp-hadoop-core/. It looks, per the above, like Phoenix would be part of the extended HDP services:

> Extended HDP services: Extended HDP services relates to the services that access the data stored in HDP including Hive, HBase, Storm, Spark and more. Extended services provide customers with the flexibility to uptake the innovation that is delivered on top of the common core, based on their own requirements and schedule. With Extended services, customers would be able to easily uptake the new functionality without causing any disruption to their core platform.
07-27-2016
06:31 PM
As we've been evaluating the use of Phoenix + HBase in HDP 2.3, we came to the conclusion that the currently bundled version of Phoenix, 4.4, does not support namespaces. The upstream JIRAs are confusing, but it looks like Phoenix will not support namespaces + permissions until 4.7-4.8. Does this sound like a correct conclusion? If so, what are my options in terms of Phoenix?
- Upgrade it within HDP 2.3 (assuming this is not an option)?
- Work around it somehow? Move the Phoenix tables manually out of the default namespace into our application's? How?

This is what we see when enabling Phoenix on our system. The SYSTEM.* tables are Phoenix's; ours are in the ns: namespace.

Version 1.1.2.2.3.6.0-3796, r2873b074585fce900c3f9592ae16fdd2d4d3a446, Thu Jun 23 16:29:31 UTC 2016
hbase(main):001:0> list
TABLE
SYSTEM.CATALOG
SYSTEM.FUNCTION
SYSTEM.SEQUENCE
SYSTEM.STATS
ambarismoketest
ns:contact
ns:counters
ns:counters-backup
ns:lists
ns:logins
ns:modelLogs
ns:models
ns:reportLogs
ns:users
References
- HBase namespaces surfaced in phoenix
- Support HBase non-default Namespace
- Phoenix Schema should mapping to HBase Namespace
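For anyone landing here later: my reading of the upstream JIRAs referenced above is that once a Phoenix release with namespace mapping ships (4.8 looks to be the target), it gets enabled through hbase-site.xml properties along these lines. The property names are taken from the JIRA discussion, so treat them as assumptions until the release docs confirm them:

<!-- assumed from the upstream namespace-mapping discussion: map Phoenix schemas to HBase namespaces -->
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
<!-- assumed: move the SYSTEM.* tables into a SYSTEM namespace -->
<property>
  <name>phoenix.schema.mapSystemTablesToNamespace</name>
  <value>true</value>
</property>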
Labels:
- Apache HBase
- Apache Phoenix
07-23-2016
07:57 PM
I've seen it as far back as 2.1; that's why I was surprised it was missing from my install.
07-22-2016
07:58 PM
Yeah, perhaps it's a bug. I saw no sign of the Phoenix JAR files when I did a `find` on a 2.3 system, so I did a `yum install phoenix` and they got installed. This box was pretty basic, with just HDFS, ZooKeeper, & HBase. I'll spin up another env and see if I encounter the issue again. Thanks for the help.
07-22-2016
01:00 PM
It seemed a little odd to me that I had to run an actual `yum install phoenix` command to install Phoenix on HDP 2.3. In HDP 2.2 this was included by default (from what I remember) when I installed the HBase role. I suspect it was done because of the size of Phoenix (~250MB when I installed it), but it still felt unusual given that I can do literally everything else through the Ambari dashboard. Why this second-class treatment for Phoenix? Are there plans to install Phoenix as an actual role, or will the installation be like this going forward for this particular Hadoop project?
Labels:
- Apache Ambari
- Apache HBase
- Apache Phoenix
07-21-2016
01:52 PM
We recently started implementing HBase namespaces + ACLs and have run into an issue. According to the docs (http://hbase.apache.org/0.94/book/ops.snapshots.html):

> 14.8.7. Snapshots operations and ACLs
> If you are using security with the AccessController Coprocessor (See Section 8.2, “Access Control”), only a global administrator can take, clone, or restore a snapshot, and these actions do not capture the ACL rights. This means that restoring a table preserves the ACL rights of the existing table, while cloning a table creates a new table that has no ACL rights until the administrator adds them.

Our application requires the ability to take a snapshot of a specific table, clone it, and then

Questions
- Why does the snapshot mechanism require this high-level access to function?
- Is this something that will change over time, or is this the design, done this way for a specific purpose?
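For context, this is the shape of the workflow we're after, which today has to run as a global administrator (the table and snapshot names here are hypothetical):

hbase(main):001:0> snapshot 'ns:contact', 'contact_snap'              # take a snapshot of the table
hbase(main):002:0> clone_snapshot 'contact_snap', 'ns:contact_copy'   # clone it to a new table
hbase(main):003:0> grant 'our_apps_user', 'RWXCA', 'ns:contact_copy'  # re-add ACLs, since the clone has none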
Labels:
- Apache HBase
07-15-2016
03:41 AM
It appears as though adding permissions on just the namespace is not sufficient to allow a user access to the tables within it. I had to cascade the permissions to the tables themselves, like so in an hbase shell:

list.each {|t| grant 'our_apps_user','RWCXA',t}

I did the above using the hbase SPN included in hbase.headless.keytab:

$ kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-<servername>@<REALM>
$ hbase shell
...above command...
07-15-2016
03:33 AM
Sorry, that was a mistake I made when sanitizing the output to take work-specific details out. The name is our_apps_user and it's not a typo.
07-15-2016
01:31 AM
1 Kudo
Found my own answer after Googling a bit more, in Community Connections here: https://community.hortonworks.com/articles/14463/auth-to-local-rules-syntax.html.

> [n:string]
> Indicates a matching rule where n declares the number of expected components in the principal. Components are separated by a /, where a user account has one component (ambari-qa) and a service account has two components (nn/fqdn). The string value declares how to reformat the value to be used in the rest of the expression. The placeholders are as follows:
> $0 - realm
> $1 - 1st component
> $2 - 2nd component
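So, applied to my original question: the "[1:" and "[2:" are not rule ordering; they declare how many components the principal must have for the rule to apply. Two worked examples, assuming a realm of YOUR.REALM:

RULE:[1:$1@$0](.*@YOUR.REALM)s/@.*//
# 1 component: ambari-qa@YOUR.REALM -> base "ambari-qa@YOUR.REALM" -> "ambari-qa"

RULE:[2:$1@$0](.*@YOUR.REALM)s/@.*//
# 2 components: nn/fqdn@YOUR.REALM -> base "nn@YOUR.REALM" -> "nn"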
07-15-2016
01:27 AM
NOTE: My question is in regard to this HW doc: http://hortonworks.com/blog/fine-tune-your-apache-hadoop-security-settings/

I'm very familiar with regexes, so I understand the `s/@.*//` portion of rules like this:

RULE:[1:$1@$0](.*@YOUR.REALM)s/@.*//
And I've re-read this paragraph multiple times:

> The translations rules have 3 sections: base, filter, and substitution. The base is the number of components in the principal name excluding the realm and the pattern for building the name from the sections of the principal name. The base uses $0 to mean the realm, $1 to mean the first component and $2 to mean the second component.

But it's unclear to me what the "[1:" means in the rule above. Additionally, what does the "[2:" mean in this rule?

[2:$1%$2] translates “username/admin@APACHE.ORG” to “username%admin”
I'm guessing it's a rule # that tells what order to apply the rules, but that's a total guess.
Labels:
- Apache Hadoop
07-14-2016
10:37 PM
1 Kudo
In a previous question I inquired about namespaces and how to utilize them: https://community.hortonworks.com/questions/18552/introduction-of-hbase-namespaces-into-a-pre-existi.html. Since then we've enabled our application to use them, and we're now working on getting our application to work with namespaces + Kerberos. I understand that the service principal (SPN) our application uses gets parsed down to just the base portion of the name; for example, <username>/<hostname>@REALM would result in having to grant <username> permissions on the namespace. I went ahead and did this:

hbase(main):001:0> user_permission '@dev01osth'
User Namespace,Table,Family,Qualifier:Permission
our_apps_user dev01osth,,,: [Permission: actions=READ,WRITE,CREATE,EXEC,ADMIN]
1 row(s) in 0.4360 seconds

So it would appear that I have a proper user with RWCEA permissions on this namespace. However, when I then bring our application's .keytab file over and do a `kinit` with it on our HBase node, I cannot perform any actions in an hbase shell as this user. I would expect this user to be able to `list` the tables in this namespace and also `scan` tables within it. The errors in the hbase shell are as follows:

hbase(main):009:0> user_permission
User Namespace,Table,Family,Qualifier:Permission
ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user 'our_apps_user' (global, action=ADMIN)
at org.apache.hadoop.hbase.security.access.AccessController.requireGlobalPermission(AccessController.java:531)
at org.apache.hadoop.hbase.security.access.AccessController.requirePermission(AccessController.java:507)
at org.apache.hadoop.hbase.security.access.AccessController.getUserPermissions(AccessController.java:2273)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.getUserPermissions(AccessControlProtos.java:9949)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10107)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7459)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1876)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1858)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Here is some help for this command:
and this:

hbase(main):008:0> user_permission '@dev01osth'
User Namespace,Table,Family,Qualifier:Permission
ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions (user=our_apps_user/<hostname>@<REALM>, scope=dev01osth, params=[namespace=dev01osth],action=ADMIN)
at org.apache.hadoop.hbase.security.access.AccessController.requireNamespacePermission(AccessController.java:588)
at org.apache.hadoop.hbase.security.access.AccessController.getUserPermissions(AccessController.java:2264)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.getUserPermissions(AccessControlProtos.java:9949)
at org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10107)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7459)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1876)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1858)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
What am I missing here?
Labels:
- Apache HBase
02-24-2016
11:39 PM
2 Kudos
I was able to work around my issue by explicitly setting my hostname in /etc/hosts in addition to hostnamectl. I think that when Ambari constructs the Kerberos principals, it uses the hostname that reverse-resolves from the IP address assigned to my box. The output of `hostname -A` led me to a solution, along with this snippet in the Ambari Agent's log file:

java.io.IOException: Login failure for dn/host-192-168-114-49.td.local@<REDACTED KERBEROS REALM> from keytab /etc/security/keytabs/dn.service.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user

Notice the hostname is thought to be host-192-168-114-49.td.local; however, in hostnamectl it's set to dev09-ost-hivetest-h-hb02.td.local. These being out of sync was ultimately my issue. I created this JIRA in the Ambari project about it as well: https://issues.apache.org/jira/browse/AMBARI-15165
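Concretely, the workaround was an /etc/hosts entry along these lines (the IP shown is inferred from the generated host-192-168-114-49 name, so treat it as illustrative):

192.168.114.49   dev09-ost-hivetest-h-hb02.td.local dev09-ost-hivetest-h-hb02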
02-22-2016
03:44 AM
1 Kudo
@Artem Ervits Thank you, this is a much more substantial answer, closer to what I was looking for. Reviewing this material, and from my previous research, it does not look like there's any method, via the hbase-site.xml file or a top-level command, to specify a default namespace for client calls. So our only recourse looks to be to modify our application so that it explicitly calls out <ns>:<table> instead of what it's doing now. I was hopeful there was something along the lines of a `use <ns>` type of operation that I could use to "pin" our application's HBase table calls to a specific namespace, but that doesn't seem to be the case. At any rate, I appreciate your time and look forward to some more thorough docs around namespaces.
02-21-2016
05:25 PM
2 Kudos
I've inherited the support role for an application that writes all of its tables to the default namespace in HBase. I'd like to leverage namespaces so that I can manage a centralized HBase instance that many instances of our application can be configured/reconfigured to use.

Questions
- I understand how to use namespaces, but is there an easy way to facilitate this through HBase/Hadoop on the backend?
- What's required to use namespaces from our application's standpoint?
- Can our application simply do a "use <namespace>" and from that point on reference a specific namespace's tables?
Labels:
- Apache HBase
02-21-2016
03:55 PM
@Robert Levas Yes, there's an Ambari server & agent on the same host. When doing the installation, the hostname was found without issue during the host search.
02-19-2016
07:49 PM
1 Kudo
The output of `hostname -f` and hostnamectl match, so I don't think this is the issue.
[root@dev09-ost-hivetest-h-hb02 ~]# hostname -f
dev09-ost-hivetest-h-hb02.td.local
[root@dev09-ost-hivetest-h-hb02 ~]# hostnamectl
Static hostname: dev09-ost-hivetest-h-hb02.td.local
Icon name: computer-vm
Chassis: vm
Machine ID: 61aaddd051a8fb40b29e47fd1b6c7084
Boot ID: af96bd95fae147b8abb044cc7a95f78d
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-327.10.1.el7.x86_64
Architecture: x86-64
[root@dev09-ost-hivetest-h-hb02 ~]# hostname
dev09-ost-hivetest-h-hb02.td.local
02-19-2016
05:16 PM
3 Kudos
You can use a script like this to perform the shutdown of a cluster. The script below first determines the cluster's name and then uses it in subsequent calls to the API.

USER=admin
PASSWORD=admin
AMBARI_HOST=localhost
#detect name of cluster
CLUSTER=$(curl -s -u $USER:$PASSWORD -i -H 'X-Requested-By: ambari' \
http://$AMBARI_HOST:8080/api/v1/clusters | \
sed -n 's/.*"cluster_name" : "\([^\"]*\)".*/\1/p')
#stop all services
curl -u $USER:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT \
-d '{"RequestInfo":{"context":"_PARSE_.STOP.ALL_SERVICES","operation_level":{"level":"CLUSTER","cluster_name":"'$CLUSTER'"}},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services
#start all services
curl -u $USER:$PASSWORD -i -H 'X-Requested-By: ambari' -X PUT \
-d '{"RequestInfo":{"context":"_PARSE_.START.ALL_SERVICES","operation_level":{"level":"CLUSTER","cluster_name":"'$CLUSTER'"}},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/services

When you run the above you'll see a response from the API like so (NOTE: here my cluster is called "dev09_ost_hivetest_h"):

HTTP/1.1 202 Accepted
User: admin
Set-Cookie: AMBARISESSIONID=1t9w52ud2xali10ofo3r9uw2t4;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 150
Server: Jetty(8.1.17.v20150415)
{
"href" : "http://localhost:8080/api/v1/clusters/dev09_ost_hivetest_h/requests/42",
"Requests" : {
"id" : 42,
"status" : "Accepted"
  }
}

You can take the request ID ("id" : 42) and monitor it to see if it's completed:

$ curl -s -u admin:admin -i -H 'X-Requested-By: ambari' http://localhost:8080/api/v1/clusters/dev09_ost_hivetest_h/requests/42
HTTP/1.1 200 OK
User: admin
Set-Cookie: AMBARISESSIONID=ypsi94kpge383tn0n1aosf9p;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 5593
Server: Jetty(8.1.17.v20150415)
{
"href" : "http://localhost:8080/api/v1/clusters/dev09_ost_hivetest_h/requests/42",
"Requests" : {
"aborted_task_count" : 0,
"cluster_name" : "dev09_ost_hivetest_h",
"completed_task_count" : 16,
"create_time" : 1455901657726,
"end_time" : 1455901773056,
"exclusive" : false,
"failed_task_count" : 0,
"id" : 42,
"inputs" : null,
"operation_level" : null,
"progress_percent" : 100.0,
"queued_task_count" : 0,
"request_context" : "_PARSE_.STOP.ALL_SERVICES",
"request_schedule" : null,
"request_status" : "COMPLETED",
"resource_filters" : [ ],
"start_time" : 1455901657749,
"task_count" : 16,
"timed_out_task_count" : 0,
"type" : "INTERNAL_REQUEST"
},
...
...

When it says COMPLETED, the shutdown is done.
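If you want to script the wait as well, here's a minimal polling sketch in the same style (the sed-based scrape mirrors the cluster-name detection above; it's illustrative, not robust JSON parsing):

#poll a request until it completes
REQUEST=42
while true; do
  STATUS=$(curl -s -u $USER:$PASSWORD -H 'X-Requested-By: ambari' \
    http://$AMBARI_HOST:8080/api/v1/clusters/$CLUSTER/requests/$REQUEST | \
    sed -n 's/.*"request_status" : "\([^\"]*\)".*/\1/p')
  echo "request $REQUEST: $STATUS"
  [ "$STATUS" = "COMPLETED" ] && break
  sleep 10
done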
02-19-2016
04:53 PM
2 Kudos
I'm in the process of trying to enable Kerberos on the following version of HDP 2.3 (HDP-2.3.4.0-3485). I have the following components selected/installed:
HDFS, MapReduce2, YARN, Tez, Hive, HBase, Pig, ZooKeeper, Ambari Metrics, Kerberos

I encountered an error message similar to this one when trying to enable Kerberos. NOTE: This dialog comes up when I attempt to regenerate my Kerberos keys. I also see the following exception in the ambari-server.log file:

19 Feb 2016 11:45:15,118 INFO [qtp-client-3081] AmbariManagementControllerImpl:1324 - Received a updateCluster request, clusterId=2, clusterName=dev09_ost_hivetest_h, securityType=KERBEROS, request={ clusterName=dev09_ost_hivetest_h, clusterId=2, provisioningState=null, securityType=KERBEROS, stackVersion=HDP-2.3, desired_scv=null, hosts=[] }
19 Feb 2016 11:45:15,157 WARN [qtp-client-3081] ServletHandler:563 - /api/v1/clusters/dev09_ost_hivetest_h
java.lang.NullPointerException
at org.apache.ambari.server.actionmanager.ActionDBAccessorImpl.persistActions(ActionDBAccessorImpl.java:300)
at org.apache.ambari.server.orm.AmbariJpaLocalTxnInterceptor.invoke(AmbariJpaLocalTxnInterceptor.java:68)
at org.apache.ambari.server.actionmanager.ActionManager.sendActions(ActionManager.java:99)
at org.apache.ambari.server.controller.internal.RequestStageContainer.persist(RequestStageContainer.java:216)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.updateCluster(AmbariManagementControllerImpl.java:1567)
at org.apache.ambari.server.controller.AmbariManagementControllerImpl.updateClusters(AmbariManagementControllerImpl.java:1308)
at org.apache.ambari.server.controller.internal.ClusterResourceProvider$2.invoke(ClusterResourceProvider.java:241)
at org.apache.ambari.server.controller.internal.ClusterResourceProvider$2.invoke(ClusterResourceProvider.java:238)
at org.apache.ambari.server.controller.internal.AbstractResourceProvider.modifyResources(AbstractResourceProvider.java:330)
at org.apache.ambari.server.controller.internal.ClusterResourceProvider.updateResources(ClusterResourceProvider.java:238)
at org.apache.ambari.server.controller.internal.ClusterControllerImpl.updateResources(ClusterControllerImpl.java:310)
at org.apache.ambari.server.api.services.persistence.PersistenceManagerImpl.update(PersistenceManagerImpl.java:104)
at org.apache.ambari.server.api.handlers.UpdateHandler.persist(UpdateHandler.java:42)
at org.apache.ambari.server.api.handlers.BaseManagementHandler.handleRequest(BaseManagementHandler.java:72)
at org.apache.ambari.server.api.services.BaseRequest.process(BaseRequest.java:135)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:105)
at org.apache.ambari.server.api.services.BaseService.handleRequest(BaseService.java:74)
at org.apache.ambari.server.api.services.ClusterService.updateCluster(ClusterService.java:151)
at sun.reflect.GeneratedMethodAccessor192.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:540)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:715)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1496)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.AmbariAuthorizationFilter.doFilter(AmbariAuthorizationFilter.java:182)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:150)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:47)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:429)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:209)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:198)
at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:132)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:982)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1043)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)

Searching for this issue, I've come across others who are encountering the exact same thing: https://mail-archives.apache.org/mod_mbox/ambari-user/201602.mbox/%3C56B0CE2C.9050902@roo.ee%3E. No resolution that I've seen as of yet, though.
02-05-2016
06:39 AM
See my answer, which shows the screen where I made this modification.
02-05-2016
06:38 AM
FYI, the method I used to remove the 2 offending parameters was through Ambari. If you navigate to YARN's Configs tab, you can go to the Scheduler section and delete the 2 options in the Capacity Scheduler textbox. That textbox shows the 2 options like so:
01-11-2016
09:56 PM
You have to create links, in the directory where `sqlline.py` lives, to 2 .xml files that are provided by HBase/Hadoop:

$ pwd
/usr/hdp/2.2.8.0-3150/phoenix/bin
$ ll | grep xml
lrwxrwxrwx 1 root root 29 Dec 16 13:34 core-site.xml -> /etc/hbase/conf/core-site.xml
lrwxrwxrwx 1 root root 30 Dec 16 13:34 hbase-site.xml -> /etc/hbase/conf/hbase-site.xml

With those links in place, and with `$JAVA_HOME` set and `java` on your `$PATH`, you can now run `sqlline.py`:

$ ./sqlline.py localhost:2181/hbase-unsecure
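For completeness, the commands that created those links (using the same HDP 2.2.8 paths shown above):

$ cd /usr/hdp/2.2.8.0-3150/phoenix/bin
$ ln -s /etc/hbase/conf/core-site.xml core-site.xml
$ ln -s /etc/hbase/conf/hbase-site.xml hbase-site.xml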
12-28-2015
07:39 PM
1 Kudo
BTW, this is a single Hortonworks node, so it seems odd that it would require so much space. I'm going with the default options when I do a server install, too.
12-28-2015
07:36 PM
1 Kudo
@Scott Shaw I'm using Ambari 2.0.1. I never thought of that; so I can remove AMS and then re-install it to recreate its data whenever I hit the out-of-HDD-space issue? I'll look through the links to see how to dial down the TTLs for AMS. Thanks for the info!
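For anyone else hitting this: the TTL tuning ends up being ams-site properties. As an example of the shape (this particular property name comes from the AMS aggregator settings, but verify it against the linked docs before relying on it):

<!-- assumed example: keep raw per-host metrics for 12 hours -->
<property>
  <name>timeline.metrics.host.aggregator.ttl</name>
  <value>43200</value>
</property>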
12-28-2015
07:08 PM
3 Kudos
How can I stop the entire HDP stack of services from the CLI? I seem to recall there was a command to accomplish this, but I cannot find it. It would seem like this facility would be associated with the ambari-agent service, but I did not see any such method/action in the usage for this script/service.
Labels:
- Apache Ambari
12-28-2015
04:41 PM
1 Kudo
I recently set up HDP (HBase) on a single VM which had ~15GB of space. The installation went fine, but after ~2 months the system ran out of HDD space. I'd like to come up with a method for clearing out the metrics or truncating them. While researching this, I drilled down to the directory where the bulk of the space is being used:

$ du -sh /var/lib/ambari-metrics-collector/hbase/data/default/* | sort -rh | head -5
7.1G /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE
403M /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_MINUTE
209M /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD
76M /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_AGGREGATE_HOURLY
45M /var/lib/ambari-metrics-collector/hbase/data/default/METRIC_RECORD_HOURLY
I've toyed with truncating these files using the `truncate -s 0 <file>` command, but this trashes them so that they're no longer usable by AMS.
Questions
- Is there a simple way to reset the metrics?
- Is there a safe way to periodically delete the collected data, say from a cron job?
NOTE: This is a small installation and I don't have the ability to throw more HDD space at the problem. I'd like to keep AMS enabled if possible.
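One approach I'm considering, sketched below, is to stop the collector and wipe its embedded HBase data so AMS recreates its tables on restart. This is my assumption of a safe sequence, not a documented procedure, so corrections are welcome:

# stop AMS first (via Ambari, or the collector's service script) so nothing holds the files open
$ ambari-metrics-collector stop
# wipe the embedded HBase data; all metric history is lost
$ rm -rf /var/lib/ambari-metrics-collector/hbase/*
# start AMS again; it should recreate its tables on startup
$ ambari-metrics-collector start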
Labels:
- Apache Ambari