Member since: 09-20-2017
Posts: 50
Kudos Received: 2
Solutions: 4

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 959 | 08-09-2018 06:47 PM |
| | 3297 | 01-05-2018 02:34 PM |
| | 669 | 12-05-2017 02:29 PM |
| | 411 | 10-18-2017 06:10 PM |
08-09-2018
07:08 PM
I used Ambari to uninstall and reinstall those services.
08-09-2018
06:47 PM
For old versions of Ambari, see https://issues.apache.org/jira/browse/AMBARI-20875. For 2.6.1:
ambari-server start --auto-fix-database
Alternatively:
1. Back up the Ambari database.
2. Run these queries:
SELECT cc.config_id, cc.type_name, cc.version_tag
FROM ambari.clusterconfig cc, ambari.clusterconfig ccm
WHERE cc.config_id NOT IN (SELECT scm.config_id FROM ambari.serviceconfigmapping scm)
  AND cc.type_name != 'cluster-env'
  AND cc.type_name = ccm.type_name
  AND cc.version_tag = ccm.version_tag;

CREATE TEMPORARY TABLE orphaned_configs AS
  (SELECT cc.config_id
   FROM ambari.clusterconfig cc
   WHERE cc.config_id NOT IN (SELECT scm.config_id FROM ambari.serviceconfigmapping scm)
     AND cc.type_name != 'cluster-env');

DELETE FROM ambari.clusterconfig
WHERE config_id IN (SELECT config_id FROM orphaned_configs);
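For step 1, a hedged example of backing up the Ambari database first (this assumes the default PostgreSQL setup with a database and user both named ambari; adjust for MySQL or Oracle):

```bash
# Stop Ambari so the dump is consistent, then dump the schema
ambari-server stop
pg_dump -U ambari ambari > /tmp/ambari_backup_$(date +%F).sql
# Restart with the auto-fix flag once the backup completes
ambari-server start --auto-fix-database
```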
08-09-2018
06:41 PM
When you restart the Ambari server you will see a warning like this (from Ambari 2.6.1):
2018-05-31 13:52:47,897 WARN - You have config(s): hive-interactive-site-version1526669268436,spark2-thrift-sparkconf-version1523470551917,hive-env-version1523468253282,ranger-hive-security-version1523468253282,hive-interactive-site-version1523468253282,hive-site-version1523470551910,hive-env-version1523470551912,webhcat-site-version1523476281613,spark2-defaults-version1523546169843,ranger-hive-security-version1523541830065,webhcat-site-version1523468253282,hive-interactive-env-version1526669268434,oozie-site-version1523476282031,ranger-hive-security-version1523470551914,hive-site-version1523475239563,livy2-conf-version1523470551917,oozie-site-version1523470551917,hive-env-version1524144296621,ranger-hive-policymgr-ssl-version1523470551915,oozie-site-version1523548496113,hive-env-version1523475386015,hive-interactive-site-version1526679191562,tez-interactive-site-version1523468253282,hive-interactive-env-version1526679191561,ranger-hive-audit-version1523473225964,oozie-env-version1523470551917,hive-site-version1523544593764,oozie-site-version1523631139320,hive-site-version1526676861358,hive-site-version1523476281402,oozie-env-version1523548262140,hive-interactive-site-version1526676861357,ranger-hive-audit-version1523468253282,ranger-hive-audit-version1523470551913,spark2-defaults-version1523470551917,spark2-defaults-version1523476281670,hive-interactive-site-version1526672227348,hive-site-version1523468253282,hive-interactive-env-version1523468253282,hive-env-version1524243680992,tez-interactive-site-version1526669268437,oozie-site-version1523554724976,tez-interactive-site-version1526674777606,ranger-hive-plugin-properties-version1523468253282,hive-interactive-env-version1526672227349,hive-atlas-application.properties-version1523468253282,hive-site-version1526670928225,hive-site-version1523475386016,ranger-hive-policymgr-ssl-version1523468253282,hive-interactive-site-version1523476282107,hiveserver2-site-version1523468253282 that is(are) not mapped (in serviceconfigmapping table) to any service!
Labels:
- Apache Ambari
08-09-2018
04:57 PM
Enabling Kerberos authentication for Ambari resolved the issue. Thank you @Robert Levas
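For anyone hitting the same 403, a hedged sketch of what this involves (my assumption: on Ambari 2.4+ the ambari-server setup-kerberos wizard enables SPNEGO/Kerberos authentication for the REST API; the keytab and principal below are placeholders):

```bash
# Enable Kerberos authentication for the Ambari REST API (interactive wizard)
ambari-server setup-kerberos
ambari-server restart

# Obtain a ticket first, so curl --negotiate has credentials to present
kinit -kt /path/to/user.keytab <user-principal>
curl -i -k --negotiate -u : -H 'X-Requested-By: ambari' \
  "https://<host>:8442/api/v1/ldap_sync_events"
```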
08-09-2018
03:40 PM
I am trying to sync groups to the Ambari server using a keytab through curl. I am using the command below:
curl -i -k --negotiate -u : -H 'X-Requested-By: ambari' -X POST -d '[{"Event": {"specs":[{"principal_type":"groups","sync_type":"specific","names": "group_name"}]}}]' https://<host>:8442/api/v1/ldap_sync_events
Error:
HTTP/1.1 403 Missing authentication token
Strict-Transport-Security: max-age=31536000
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Pragma: no-cache
Content-Type: text/plain;charset=ISO-8859-1
Content-Length: 64
Labels:
- Apache Ambari
07-26-2018
02:26 PM
My YARN UI is Kerberos-enabled, and GetHTTP is complaining about a 401 authentication error. Is there any workaround for this?
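A hedged workaround sketch, not a confirmed fix: GetHTTP in NiFi 1.5 has no SPNEGO support, so one option is to shell out to curl with --negotiate (for example from an ExecuteStreamCommand processor); the keytab, principal, and ResourceManager endpoint below are placeholders:

```bash
# Get a ticket for a principal that is allowed to read the YARN UI/REST API
kinit -kt /etc/security/keytabs/nifi.service.keytab <nifi-principal>
# SPNEGO-authenticated fetch from the ResourceManager REST API
curl -s --negotiate -u : "http://<rm-host>:8088/ws/v1/cluster/apps"
```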
07-25-2018
07:28 PM
We would like to set useTicketCache=true in the hbase_queryserver_jaas.conf file. I changed it manually on the servers where HBase & Phoenix are running, but once I restart those services it switches back to the default useTicketCache=false. Is there any way we can make this change stick?
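For context, a sketch of the stanza we are trying to keep (the Krb5LoginModule options are standard JAAS; the file path, keytab, and principal are placeholders, and since Ambari regenerates this file from its stack template on restart, the change presumably has to be made in the Ambari-managed configuration rather than on disk):

```bash
# Hypothetical: write the desired JAAS stanza (Ambari will overwrite this on restart)
cat > /etc/hbase/conf/hbase_queryserver_jaas.conf <<'EOF'
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=true
  keyTab="/etc/security/keytabs/phoenix.service.keytab"
  principal="<pqs-principal>";
};
EOF
```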
Labels:
- Apache HBase
- Apache Phoenix
07-02-2018
03:06 PM
Where can I find the Version Definition File for HDF?
Labels:
- Apache Ambari
- Cloudera DataFlow (CDF)
06-14-2018
09:51 PM
Exactly...
06-14-2018
09:20 PM
Yes, it is NiFi 1.5.
06-14-2018
08:36 PM
It runs fine sometimes and fails at some point with the error below. The PutHiveQL NiFi processor intermittently fails with a Hive connection pool error: Could not establish connection to jdbc:hive2://example.host.com:10001/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;: org.apache.http.client.ClientProtocolException (state=08S01,code=0)
Caused by: org.apache.thrift.transport.TTransportException: org.apache.http.client.ClientProtocolException
at org.apache.thrift.transport.THttpClient.flushUsingHttpClient(THttpClient.java:297)
at org.apache.thrift.transport.THttpClient.flush(THttpClient.java:313)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
at org.apache.hive.service.cli.thrift.TCLIService$Client.send_OpenSession(TCLIService.java:158)
at org.apache.hive.service.cli.thrift.TCLIService$Client.OpenSession(TCLIService.java:150)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:563)
... 35 common frames omitted
Caused by: org.apache.http.client.ClientProtocolException: null
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:187)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:118)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at org.apache.thrift.transport.THttpClient.flushUsingHttpClient(THttpClient.java:251)
... 41 common frames omitted
Caused by: org.apache.http.HttpException: null
at org.apache.hive.jdbc.HttpRequestInterceptorBase.process(HttpRequestInterceptorBase.java:86)
at org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:132)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:183)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.ServiceUnavailableRetryExec.execute(ServiceUnavailableRetryExec.java:85)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
... 44 common frames omitted
Caused by: org.apache.http.HttpException: null
at org.apache.hive.jdbc.HttpKerberosRequestInterceptor.addHttpAuthHeader(HttpKerberosRequestInterceptor.java:68)
at org.apache.hive.jdbc.HttpRequestInterceptorBase.process(HttpRequestInterceptorBase.java:74)
... 50 common frames omitted
Caused by: java.lang.reflect.UndeclaredThrowableException: null
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1887)
at org.apache.hive.service.auth.HttpAuthUtils.getKerberosServiceTicket(HttpAuthUtils.java:83)
at org.apache.hive.jdbc.HttpKerberosRequestInterceptor.addHttpAuthHeader(HttpKerberosRequestInterceptor.java:62)
... 51 common frames omitted
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at org.apache.hive.service.auth.HttpAuthUtils$HttpKerberosClientAction.run(HttpAuthUtils.java:183)
at org.apache.hive.service.auth.HttpAuthUtils$HttpKerberosClientAction.run(HttpAuthUtils.java:151)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.ja
Labels:
- Apache Hive
- Apache NiFi
05-15-2018
05:25 PM
Sudo to the infra-solr user and authenticate:
su - infra-solr
kinit -kt <solr-keytab>
Then run:
source /etc/ambari-infra-solr/conf/infra-solr-env.sh
/usr/lib/ambari-infra-solr/server/scripts/cloud-scripts/zkcli.sh --zkhost "${ZK_HOST}" -cmd list
04-26-2018
01:31 PM
This is what I found in the Ranger admin access log:
[26/Apr/2018:13:28:36 +0000] "GET /service/plugins/secure/policies/download/HDPCLUSTER_hbase?lastKnownVersion=172&lastActivationTime=1524514591769&pluginId=hbaseRegional@hadoop.cluster.com-HDPCLUSTER_hbase&clusterName=HDPCLUSTER HTTP/1.1" 401 - "-" "Java/1.8.0_161"
04-25-2018
05:36 PM
Any pointers for masking the fields of HBase tables using tag-based Ranger policies?
Labels:
- Apache Atlas
- Apache HBase
- Apache Ranger
04-20-2018
08:29 PM
Ranger Admin is running on node01 & node02 behind an external load balancer. I added the SPN for the load balancer on node01 & node02. Ranger tagsync is running on node02; it is using the node02 keytab for the rangertagsync user to update the tagstore and is getting denied:
20 Apr 2018 13:28:55 DEBUG TagAdminRESTSink [Thread-7] - 143 Using Principal = rangertagsync/node02-priv.cluster.com@CLUSTER.COM
20 Apr 2018 13:28:55 DEBUG TagAdminRESTSink [Thread-7] - 173 ==> doUpload()
20 Apr 2018 13:28:55 ERROR TagAdminRESTSink [Thread-7] - 183 Upload of service-tags failed with message HTTP 401
20 Apr 2018 13:28:55 ERROR TagAdminRESTSink [Thread-7] - 152 Upload of service-tags failed with message java.lang.Exception: Upload of service-tags failed with response: PUT https://loadblancer.cluster.com:6182/service/tags/importservicetags/ returned a response status of 401 Unauthorized
at org.apache.ranger.tagsync.sink.tagadmin.TagAdminRESTSink.uploadServiceTags(TagAdminRESTSink.java:187)
at org.apache.ranger.tagsync.sink.tagadmin.TagAdminRESTSink.access$000(TagAdminRESTSink.java:46)
at org.apache.ranger.tagsync.sink.tagadmin.TagAdminRESTSink$1.run(TagAdminRESTSink.java:150)
at org.apache.ranger.tagsync.sink.tagadmin.TagAdminRESTSink$1.run(TagAdminRESTSink.java:146)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1849)
at org.apache.ranger.tagsync.sink.tagadmin.TagAdminRESTSink.doUpload(TagAdminRESTSink.java:146)
at org.apache.ranger.tagsync.sink.tagadmin.TagAdminRESTSink.run(TagAdminRESTSink.java:255)
at java.lang.Thread.run(Thread.java:748)
04-19-2018
02:39 PM
Here is the other way: Backing Up and Restoring HDFS Metadata.

Backing Up HDFS Metadata Using Cloudera Manager

HDFS metadata backups can be used to restore a NameNode when both NameNode roles have failed. In addition, Cloudera recommends backing up HDFS metadata before a major upgrade. Minimum Required Role: Cluster Administrator (also provided by Full Administrator). This backup method requires you to shut down the cluster.

1. Note the active NameNode.
2. Stop the cluster. It is particularly important that the NameNode role process is not running, so that you can make a consistent backup.
3. Go to the HDFS service.
4. Click the Configuration tab.
5. In the Search field, search for "NameNode Data Directories" and note the value.
6. On the active NameNode host, back up the directory listed in the NameNode Data Directories property. If more than one is listed, make a backup of one directory, since each directory is a complete copy. For example, if the NameNode data directory is /data/dfs/nn, do the following as root:
   # cd /data/dfs/nn
   # tar -cvf /root/nn_backup_data.tar .
   You should see output like this:
   ./
   ./current/
   ./current/fsimage
   ./current/fstime
   ./current/VERSION
   ./current/edits
   ./image/
   ./image/fsimage
   If there is a file with the extension lock in the NameNode data directory, the NameNode is most likely still running. Repeat the steps, starting by shutting down the NameNode role.

Restoring HDFS Metadata From a Backup

The following process assumes a scenario where both NameNode hosts have failed and you must restore from a backup.

1. Remove the NameNode, JournalNode, and Failover Controller roles from the HDFS service.
2. Add the host on which the NameNode role will run.
3. Create the NameNode data directory, ensuring that the permissions, ownership, and group are set correctly.
4. Copy the backed-up files to the NameNode data directory.
5. Add the NameNode role to the host.
6. Add the Secondary NameNode role to another host.
7. Enable high availability. If not all roles are started after the wizard completes, restart the HDFS service.

Upon startup, the NameNode reads the fsimage file and loads it into memory. If the JournalNodes are up and running and there are edit files present, any edits newer than the fsimage are applied.
04-19-2018
02:34 PM
# Put the NameNode into safe mode
hdfs dfsadmin -safemode enter
# Save all transactions to the namespace
hdfs dfsadmin -saveNamespace
# Download the fsimage from the NameNode
hdfs dfsadmin -fetchImage <path-for-image>
# Bring the NameNode out of safe mode
hdfs dfsadmin -safemode leave
# This step is critical
# Navigate to the metadata directory
cd /data/dfs/nn
# Archive it to wherever you want
tar -cvf /root/nn_backup_data.tar .
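A hedged sanity check after the fetch (hdfs dfsadmin -fetchImage saves the image as fsimage_<txid>; the path is the placeholder from above):

```bash
# The downloaded image should be a recent, non-empty fsimage_<txid> file
ls -lh <path-for-image>/fsimage_*
```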
04-19-2018
02:24 AM
When I update the Ranger tag-based repository with new policies, I get an error in the tagsync log saying the upload of service-tags failed with a 401:
java.lang.Exception: Upload of service tags failed with response: PUT https://<ranger-host>:6182/service/tags/importservicetags/ returned a response status of 401 Unauthorized
Ambari 2.6.1 & HDP 2.6.4, Kerberos & SSL enabled.
Labels:
- Apache Ranger
04-13-2018
04:11 PM
We can change the password in Advanced atlas-env in the Atlas configurations in Ambari.
01-08-2018
04:46 PM
How can we minimize the verbose output on the console when we run a query on the LLAP engine? Example of the verbose output: capture.png
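A hedged sketch of settings that usually cut this noise (the flag and property are standard Beeline/HiveServer2 options; whether they fit your LLAP setup, and the port, are assumptions):

```bash
# Suppress Beeline's extra output entirely
beeline -u "jdbc:hive2://<llap-host>:10500/" --silent=true

# Or, inside the session, lower the operation log level
# (default is EXECUTION; VERBOSE is the noisiest, NONE disables it):
#   set hive.server2.logging.operation.level=NONE;
```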
Labels:
- Apache Hive
01-05-2018
02:34 PM
Found some custom jar files in the Phoenix lib folder. Deleting those jar files from there fixed the issue. Thanks @jay
01-05-2018
01:59 PM
It looks like they are both the same version, and we haven't upgraded anything. I got this error just after restarting HiveServer2.
rpm -qa | grep ambari-metrics-hadoop-sink
ambari-metrics-hadoop-sink-2.5.1.0-159.x86_64
rpm -qa | grep ambari
ambari-agent-2.5.1.0-159.x86_64
ambari-metrics-monitor-2.5.1.0-159.x86_64
ambari-infra-solr-client-2.5.1.0-159.noarch
ambari-metrics-hadoop-sink-2.5.1.0-159.x86_64
01-04-2018
09:42 PM
ERROR [main]: metastore.HiveMetaStore (HiveMetaStore.java:init(535)) - error in Metrics init: java.lang.reflect.InvocationTargetException null
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.common.metrics.common.MetricsFactory.init(MetricsFactory.java:42)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:532)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:91)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6364)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:205)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1564)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:92)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:138)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:110)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3488)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3520)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:528)
at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:130)
at org.apache.hive.service.cli.CLIService.init(CLIService.java:115)
at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:122)
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:474)
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:87)
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:720)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:593)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Caused by: java.lang.AbstractMethodError: org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.init(Lorg/apache/phoenix/shaded/org/apache/commons/configuration/SubsetConfiguration;)V
at org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:199)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:529)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:501)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:480)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:189)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:164)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:54)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.initReporting(CodahaleMetrics.java:377)
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.<init>(CodahaleMetrics.java:199)
... 42 more
WARN [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(508)) - Error starting HiveServer2 on attempt 2, will retry in 60 seconds
java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/webapp/YarnJacksonJaxbJsonProvider
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceInit(TimelineClientImpl.java:268)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:169)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.tez.client.TezYarnClient.init(TezYarnClient.java:46)
at org.apache.tez.client.TezClient.start(TezClient.java:325)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:197)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:116)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.startPool(TezSessionPoolManager.java:76)
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:488)
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:87)
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:720)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:593)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Labels:
- Apache Hive
12-14-2017
03:37 PM
It will consider the node part of the rack you set until you reset it to the correct rack.
12-13-2017
02:47 PM
There are many things to take into consideration here. Rack awareness is helpful for two purposes:
1) Rack awareness for the nodes is configured based on the physical rack each node sits in. Based on that, components are distributed between racks to survive the failure of a whole rack. Suppose the active NameNode is on Rack1 and the standby on Rack2: if Rack1 goes down, the NameNode on Rack2 becomes active. Whereas if both NameNodes are installed on the same rack and that whole rack goes down, your whole cluster goes down.
2) In the case of DataNodes, reads and writes to HDFS pick the closest node; if a node on the same rack is free/available, it serves the request. The same applies to job execution: tasks are scheduled on the same rack if any node there is available, and on another rack otherwise. (This also reduces network traffic between the nodes.)
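To make the first point concrete: rack awareness is normally fed to Hadoop through a topology script referenced by net.topology.script.file.name in core-site.xml. A minimal sketch, with invented subnets and rack names:

```bash
#!/bin/bash
# topology.sh - maps each host/IP argument Hadoop passes in to a rack path
for node in "$@"; do
  case "$node" in
    10.0.1.*) echo "/rack1" ;;          # hosts in the 10.0.1.x subnet
    10.0.2.*) echo "/rack2" ;;          # hosts in the 10.0.2.x subnet
    *)        echo "/default-rack" ;;   # anything unrecognized
  esac
done
```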
12-12-2017
07:30 PM
I am able to create a policy with the command below:
curl -iv -u username:password -H "content-type:application/json" -X POST http://hostname:6080/service/public/api/policy/ -d '{ "policyName": "api-Test", "resourceName": "/data", "description": "Testing", "repositoryName": "HDPPRD01_hadoop", "repositoryType": "hdfs", "isEnabled": "true", "isRecursive": "true", "isAuditEnabled": "true", "permMapList": [{ "userList": ["sudheer"], "groupList": ["hadoop_group"], "permList": ["Read", "Execute", "Write", "Admin"] }] }'
But I want to update that policy with the REST API. I tried the command below, but it throws an error ("The specified HTTP method is not allowed for the requested resource."):
curl -iv -u username:password -H "content-type:application/json" -X PUT http://hostname:6080/service/public/api/policy/ -d '{ "policyName": "api-Test", "resourceName": "/tmp", "description": "Testing", "repositoryName": "HDPPRD01_hadoop", "repositoryType": "hdfs", "isEnabled": "true", "isRecursive": "true", "isAuditEnabled": "true", "permMapList": [{ "userList": ["velagapudi"], "groupList": ["hadoop_user"], "permList": ["Read", "Execute", "Write", "Admin"] }] }'
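If I remember the public API correctly, updates have to target the policy id rather than the collection URL, which would explain the "method not allowed" error. A hedged sketch (the id 42 is a placeholder; look up the real one first):

```bash
# Find the policy id (the JSON response includes an "id" field)
curl -u username:password \
  "http://hostname:6080/service/public/api/policy?policyName=api-Test&repositoryName=HDPPRD01_hadoop"

# Then PUT to /policy/<id> with the updated body (42 is assumed here)
curl -iv -u username:password -H "content-type:application/json" -X PUT \
  "http://hostname:6080/service/public/api/policy/42" \
  -d '{ "id": 42, "policyName": "api-Test", "resourceName": "/tmp", "description": "Testing", "repositoryName": "HDPPRD01_hadoop", "repositoryType": "hdfs", "isEnabled": "true", "isRecursive": "true", "isAuditEnabled": "true", "permMapList": [{ "userList": ["velagapudi"], "groupList": ["hadoop_user"], "permList": ["Read", "Execute", "Write", "Admin"] }] }'
```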
Labels:
- Apache Ranger
12-05-2017
02:29 PM
Run the command below to manually replicate ranger_audits to the other Solr instance (the URL must be quoted, otherwise the shell treats each & as a background operator):
curl -i -k -v --negotiate -u : "http://<from-node>:8886/solr/admin/collections?action=ADDREPLICA&collection=ranger_audits&shard=shard1&node=<target-host>:8886_solr"
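To verify the new replica, a hedged follow-up using the standard Collections API (same host/port placeholders as above):

```bash
# The added replica for shard1 should show state "active"
curl -i -k --negotiate -u : \
  "http://<from-node>:8886/solr/admin/collections?action=CLUSTERSTATUS&collection=ranger_audits"
```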
11-21-2017
02:57 PM
My Solr instance is getting killed about once a week with an OOM. I tried to tune the parameters below and need some recommendations. What are recommended values for these parameters, based on the log provided?
GC_TUNE="-XX:NewRatio=3 \
-XX:SurvivorRatio=4 \
-XX:TargetSurvivorRatio=90 \
-XX:MaxTenuringThreshold=8 \
-XX:+UseConcMarkSweepGC \
-XX:+UseParNewGC \
-XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
-XX:+CMSScavengeBeforeRemark \
-XX:PretenureSizeThreshold=64m \
-XX:+UseCMSInitiatingOccupancyOnly \
-XX:CMSInitiatingOccupancyFraction=50 \
-XX:CMSMaxAbortablePrecleanTime=6000 \
-XX:+CMSParallelRemarkEnabled \
-XX:+ParallelRefProcEnabled"
And also: recommended Minimum Heap Size & Maximum Heap Size.
Log:
Heap after GC invocations=19 (full 1):
par new generation total 218496K, used 7215K [0x00000006c0000000, 0x00000006d0000000, 0x0000000700000000)
eden space 174848K, 0% used [0x00000006c0000000, 0x00000006c0000000, 0x00000006caac0000)
from space 43648K, 16% used [0x00000006cd560000, 0x00000006cdc6bc78, 0x00000006d0000000)
to space 43648K, 0% used [0x00000006caac0000, 0x00000006caac0000, 0x00000006cd560000)
concurrent mark-sweep generation total 786432K, used 92764K [0x0000000700000000, 0x0000000730000000, 0x00000007c0000000)
Metaspace used 39260K, capacity 39748K, committed 40076K, reserved 1085440K
class space used 4267K, capacity 4422K, committed 4528K, reserved 1048576K
}
2017-11-21T14:30:19.916-0500: 117.112: Total time for which application threads were stopped: 0.0098877 seconds, Stopping threads took: 0.0000449 seconds
2017-11-21T14:30:20.916-0500: 118.112: Total time for which application threads were stopped: 0.0003058 seconds, Stopping threads took: 0.0001195 seconds
{Heap before GC invocations=19 (full 1):
par new generation total 218496K, used 182063K [0x00000006c0000000, 0x00000006d0000000, 0x0000000700000000)
eden space 174848K, 100% used [0x00000006c0000000, 0x00000006caac0000, 0x00000006caac0000)
from space 43648K, 16% used [0x00000006cd560000, 0x00000006cdc6bc78, 0x00000006d0000000)
to space 43648K, 0% used [0x00000006caac0000, 0x00000006caac0000, 0x00000006cd560000)
concurrent mark-sweep generation total 786432K, used 92764K [0x0000000700000000, 0x0000000730000000, 0x00000007c0000000)
Metaspace used 39262K, capacity 39748K, committed 40076K, reserved 1085440K
class space used 4267K, capacity 4422K, committed 4528K, reserved 1048576K
2017-11-21T14:30:28.151-0500: 125.346: [GC (Allocation Failure) 2017-11-21T14:30:28.151-0500: 125.346: [ParNew
Desired survivor size 40225992 bytes, new threshold 8 (max 8)
- age 1: 136184 bytes, 136184 total
- age 2: 1010336 bytes, 1146520 total
- age 3: 40472 bytes, 1186992 total
- age 4: 73744 bytes, 1260736 total
- age 5: 2934424 bytes, 4195160 total
- age 6: 24424 bytes, 4219584 total
- age 7: 111680 bytes, 4331264 total
- age 8: 56160 bytes, 4387424 total
: 182063K->5404K(218496K), 0.0065729 secs] 274827K->98648K(1004928K), 0.0066500 secs] [Times: user=0.02 sys=0.00, real=0.01 secs]
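From this log the live set looks small (the CMS old generation uses about 92 MB of 768 MB), so as a hedged sketch rather than a recommendation: Ambari Infra Solr heap is usually pinned via SOLR_JAVA_MEM in infra-solr-env.sh (edit it through Ambari so it persists across restarts); the 2g figure is an assumption, not derived from a full analysis:

```bash
# In the Ambari-managed infra-solr-env template:
SOLR_JAVA_MEM="-Xms2g -Xmx2g"   # equal -Xms/-Xmx avoids heap-resize pauses
```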
Labels:
- Apache Ambari
- Apache Solr
11-17-2017
06:50 PM
1 Kudo
Enabling PAM authentication is causing too much load on the base Linux machine where Knox is running, and Knox is getting killed when we make too many concurrent connections to Hive through Knox.