Member since: 09-20-2017
Posts: 50
Kudos Received: 2
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
| 1725 | 08-09-2018 06:47 PM
| 5457 | 01-05-2018 02:34 PM
| 1298 | 12-05-2017 02:29 PM
| 790 | 10-18-2017 06:10 PM
04-26-2018
01:31 PM
This is what I found in the Ranger Admin access log:

[26/Apr/2018:13:28:36 +0000] "GET /service/plugins/secure/policies/download/HDPCLUSTER_hbase?lastKnownVersion=172&lastActivationTime=1524514591769&pluginId=hbaseRegional@hadoop.cluster.com-HDPCLUSTER_hbase&clusterName=HDPCLUSTER HTTP/1.1" 401 - "-" "Java/1.8.0_161"
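For what it's worth, a hedged way to reproduce that request outside the plugin is kinit plus curl's SPNEGO support; the keytab path, principal, and Ranger Admin URL below are illustrative, not taken from the cluster:

```bash
# Illustrative principal/keytab; substitute the ones the HBase plugin actually uses.
kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/node01.cluster.com@CLUSTER.COM

# --negotiate turns on SPNEGO; "-u :" tells curl to take credentials from the ticket cache.
curl -k --negotiate -u : \
  "https://<ranger-admin-host>:6182/service/plugins/secure/policies/download/HDPCLUSTER_hbase?clusterName=HDPCLUSTER" \
  -o /tmp/hbase_policies.json -w "%{http_code}\n"
```

If this returns 200 while the plugin still gets 401, the problem is more likely in the plugin's own credentials than on the Ranger Admin side.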
04-20-2018
08:29 PM
Ranger Admin is running on node01 and node02 behind an external load balancer, and the SPN for the load balancer was added on node01 and node02. Ranger Tagsync is running on node02; it uses the node02 keytab for the rangertagsync user to update the tag store and is getting denied:

20 Apr 2018 13:28:55 DEBUG TagAdminRESTSink [Thread-7] - 143 Using Principal = rangertagsync/node02-priv.cluster.com@CLUSTER.COM
20 Apr 2018 13:28:55 DEBUG TagAdminRESTSink [Thread-7] - 173 ==> doUpload()
20 Apr 2018 13:28:55 ERROR TagAdminRESTSink [Thread-7] - 183 Upload of service-tags failed with message HTTP 401
20 Apr 2018 13:28:55 ERROR TagAdminRESTSink [Thread-7] - 152 Upload of service-tags failed with message java.lang.Exception: Upload of service-tags failed with response: PUT https://loadblancer.cluster.com:6182/service/tags/importservicetags/ returned a response status of 401 Unauthorized
    at org.apache.ranger.tagsync.sink.tagadmin.TagAdminRESTSink.uploadServiceTags(TagAdminRESTSink.java:187)
    at org.apache.ranger.tagsync.sink.tagadmin.TagAdminRESTSink.access$000(TagAdminRESTSink.java:46)
    at org.apache.ranger.tagsync.sink.tagadmin.TagAdminRESTSink$1.run(TagAdminRESTSink.java:150)
    at org.apache.ranger.tagsync.sink.tagadmin.TagAdminRESTSink$1.run(TagAdminRESTSink.java:146)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:360)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1849)
    at org.apache.ranger.tagsync.sink.tagadmin.TagAdminRESTSink.doUpload(TagAdminRESTSink.java:146)
    at org.apache.ranger.tagsync.sink.tagadmin.TagAdminRESTSink.run(TagAdminRESTSink.java:255)
    at java.lang.Thread.run(Thread.java:748)
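A hedged set of checks for this layout; the keytab path is an assumption based on a typical HDP install, and the load-balancer host is a placeholder:

```bash
# Confirm the keytab on node02 really contains the principal logged by tagsync.
klist -kt /etc/security/keytabs/rangertagsync.service.keytab

# Get a ticket as that principal, then verify a service ticket can be obtained
# for the load balancer's HTTP SPN (it must exist in the KDC for SPNEGO to work).
kinit -kt /etc/security/keytabs/rangertagsync.service.keytab rangertagsync/node02-priv.cluster.com@CLUSTER.COM
kvno HTTP/<load-balancer-host>@CLUSTER.COM
```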
04-19-2018
02:39 PM
Here is the other way: Backing Up and Restoring HDFS Metadata.

Backing Up HDFS Metadata Using Cloudera Manager

HDFS metadata backups can be used to restore a NameNode when both NameNode roles have failed. In addition, Cloudera recommends backing up HDFS metadata before a major upgrade. Minimum Required Role: (also provided by Full Administrator). This backup method requires you to shut down the cluster.

1. Note the active NameNode.
2. Stop the cluster. It is particularly important that the NameNode role process is not running so that you can make a consistent backup.
3. Go to the HDFS service.
4. Click the Configuration tab.
5. In the Search field, search for "NameNode Data Directories" and note the value.
6. On the active NameNode host, back up the directory listed in the NameNode Data Directories property. If more than one is listed, make a backup of one directory, since each directory is a complete copy. For example, if the NameNode data directory is /data/dfs/nn, do the following as root:

# cd /data/dfs/nn
# tar -cvf /root/nn_backup_data.tar .

You should see output like this:

./
./current/
./current/fsimage
./current/fstime
./current/VERSION
./current/edits
./image/
./image/fsimage

If there is a file with the extension lock in the NameNode data directory, the NameNode is most likely still running. Repeat the steps, starting by shutting down the NameNode role.

Restoring HDFS Metadata From a Backup

The following process assumes a scenario where both NameNode hosts have failed and you must restore from a backup.

1. Remove the NameNode, JournalNode, and Failover Controller roles from the HDFS service.
2. Add the host on which the NameNode role will run.
3. Create the NameNode data directory, ensuring that the permissions, ownership, and group are set correctly.
4. Copy the backed up files to the NameNode data directory (a sketch of steps 3 and 4 follows after these steps).
5. Add the NameNode role to the host.
6. Add the Secondary NameNode role to another host.
7. Enable high availability. If not all roles are started after the wizard completes, restart the HDFS service. Upon startup, the NameNode reads the fsimage file and loads it into memory. If the JournalNodes are up and running and there are edit files present, any edits newer than the fsimage are applied.
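A minimal shell sketch of steps 3 and 4 of the restore, assuming the same /data/dfs/nn data directory and the backup taken above; the owner, group, and mode are typical defaults and should be verified against an existing NameNode:

```bash
# Assumed directory, ownership, and mode; match them to your NameNode Data Directories value.
mkdir -p /data/dfs/nn
chown -R hdfs:hadoop /data/dfs/nn
chmod 700 /data/dfs/nn

# Unpack the metadata archived earlier with tar -cvf.
cd /data/dfs/nn
tar -xvf /root/nn_backup_data.tar
```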
04-19-2018
02:34 PM
# Put the NameNode into safe mode
hdfs dfsadmin -safemode enter
# Save all the transactions to the namespace
hdfs dfsadmin -saveNamespace
# Download the fsimage from the NameNode
hdfs dfsadmin -fetchImage <path-for-image>
# Bring the NameNode out of safe mode
hdfs dfsadmin -safemode leave
# This step is critical
# Navigate to the metadata directory
cd /data/dfs/nn
# Archive it to wherever you want
tar -cvf /root/nn_backup_data.tar .
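As a sanity check (not part of the original steps), the archive can be listed to confirm the metadata files made it in; the path matches the tar command above:

```bash
# Expect a current/ directory containing fsimage, edits, and VERSION entries.
tar -tvf /root/nn_backup_data.tar
```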
04-19-2018
02:24 AM
When I update the Ranger tag-based repository with new policies, I get an error in the tagsync log which says the upload of service-tags failed with a 401:

java.lang.Exception: Upload of service tags failed with response: PUT https://<ranger-host>:6182/service/tags/importservicetags/ returned a response status of 401 Unauthorized

Ambari 2.6.1 and HDP 2.6.4, Kerberos and SSL enabled.
Labels: Apache Ranger
04-13-2018
04:11 PM
We can change the password in Advanced atlas-env under the Atlas configurations in Ambari.
01-08-2018
04:46 PM
How can we minimize the verbose output on the console when we run a query on the LLAP engine? Example output: capture.png
Labels: Apache Hive
01-05-2018
02:34 PM
Found some custom jar files in the Phoenix lib folder. Deleting those jar files from there fixed the issue. Thanks @jay
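For anyone hitting the same AbstractMethodError, a hedged sketch of how such jars could be spotted; the lib path is an assumption for an HDP layout, so point it at whichever directory held the custom jars:

```bash
# Assumed HDP path; adjust to the directory containing the suspect jars.
for j in /usr/hdp/current/phoenix-server/lib/*.jar; do
  # Flag jars that bundle the shaded commons-configuration package named in the stack trace.
  if unzip -l "$j" 2>/dev/null | grep -q 'org/apache/phoenix/shaded/org/apache/commons/configuration/'; then
    echo "bundles shaded commons-configuration: $j"
  fi
done
```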
01-05-2018
01:59 PM
It looks like they are both the same version, and we haven't upgraded anything. I got this error just after restarting HiveServer2.

rpm -qa | grep ambari-metrics-hadoop-sink
ambari-metrics-hadoop-sink-2.5.1.0-159.x86_64
rpm -qa | grep ambari
ambari-agent-2.5.1.0-159.x86_64
ambari-metrics-monitor-2.5.1.0-159.x86_64
ambari-infra-solr-client-2.5.1.0-159.noarch
ambari-metrics-hadoop-sink-2.5.1.0-159.x86_64
01-04-2018
09:42 PM
ERROR [main]: metastore.HiveMetaStore (HiveMetaStore.java:init(535)) - error in Metrics init: java.lang.reflect.InvocationTargetException null
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.common.metrics.common.MetricsFactory.init(MetricsFactory.java:42)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:532)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:91)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6364)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:205)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:76)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1564)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:92)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:138)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:110)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3488)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3520)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:528)
at org.apache.hive.service.cli.CLIService.applyAuthorizationConfigPolicy(CLIService.java:130)
at org.apache.hive.service.cli.CLIService.init(CLIService.java:115)
at org.apache.hive.service.CompositeService.init(CompositeService.java:59)
at org.apache.hive.service.server.HiveServer2.init(HiveServer2.java:122)
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:474)
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:87)
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:720)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:593)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Caused by: java.lang.AbstractMethodError: org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.init(Lorg/apache/phoenix/shaded/org/apache/commons/configuration/SubsetConfiguration;)V
at org.apache.hadoop.metrics2.impl.MetricsConfig.getPlugin(MetricsConfig.java:199)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.newSink(MetricsSystemImpl.java:529)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configureSinks(MetricsSystemImpl.java:501)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.configure(MetricsSystemImpl.java:480)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.start(MetricsSystemImpl.java:189)
at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:164)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:54)
at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.initReporting(CodahaleMetrics.java:377)
at org.apache.hadoop.hive.common.metrics.metrics2.CodahaleMetrics.<init>(CodahaleMetrics.java:199)
... 42 more
WARN [main]: server.HiveServer2 (HiveServer2.java:startHiveServer2(508)) - Error starting HiveServer2 on attempt 2, will retry in 60 seconds
java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/webapp/YarnJacksonJaxbJsonProvider
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceInit(TimelineClientImpl.java:268)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:169)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.tez.client.TezYarnClient.init(TezYarnClient.java:46)
at org.apache.tez.client.TezClient.start(TezClient.java:325)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:197)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:116)
at org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.startPool(TezSessionPoolManager.java:76)
at org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:488)
at org.apache.hive.service.server.HiveServer2.access$700(HiveServer2.java:87)
at org.apache.hive.service.server.HiveServer2$StartOptionExecutor.execute(HiveServer2.java:720)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:593)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Labels: Apache Hive