Member since: 05-23-2019
Posts: 19
Kudos Received: 6
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
How to configure HiveServer2 in Superset | 2426 | 05-28-2019 07:39 AM
06-17-2019
12:43 AM
I also have this requirement; could you share a complete configuration? Thank you.
06-11-2019
08:52 AM
2019-06-11 15:59:06,314 [pool-21-thread-13] ERROR org.apache.thrift.transport.TSaslTransport - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:426)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:242)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3021)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3040)
at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1234)
at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:174)
at org.apache.hadoop.hive.ql.metadata.Hive.<clinit>(Hive.java:166)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
at org.apache.spark.sql.hive.client.HiveClientImpl.newState(HiveClientImpl.scala:186)
at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:120)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:274)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:390)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:287)
at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:195)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:194)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:114)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:102)
at org.apache.spark.sql.internal.SharedState.globalTempViewManager$lzycompute(SharedState.scala:138)
at org.apache.spark.sql.internal.SharedState.globalTempViewManager(SharedState.scala:133)
at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:54)
at org.apache.spark.sql.hive.HiveSessionStateBuilder$$anonfun$2.apply(HiveSessionStateBuilder.scala:54)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager$lzycompute(SessionCatalog.scala:91)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.globalTempViewManager(SessionCatalog.scala:91)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:203)
at org.apache.spark.sql.execution.command.CreateDatabaseCommand.run(ddl.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
at com.wumii.analysis.data.sync.InitDataJob$.initPartition(InitDataJob.scala:40)
at com.wumii.analysis.data.sync.InitDataJob$.doInitCrement(InitDataJob.scala:121)
at com.wumii.analysis.data.sync.InitDataJob$$anonfun$initIncrement$1$$anon$1.call(InitDataJob.scala:92)
at com.wumii.analysis.data.sync.InitDataJob$$anonfun$initIncrement$1$$anon$1.call(InitDataJob.scala:89)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

workflow.xml:

<workflow-app name="Init-Data-Increment-Job" xmlns="uri:oozie:workflow:0.5">
<credentials>
<credential name='hcatauth' type='hcat'>
<property>
<name>hcat.metastore.uri</name>
<value>thrift://ip:9083</value>
</property>
<property>
<name>hcat.metastore.principal</name>
<value>hive/_HOST@WUMII.NET</value>
</property>
</credential>
</credentials>
<start to="spark-a61f"/>
<kill name="Kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<action name="spark-a61f">
<spark xmlns="uri:oozie:spark-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<master>yarn</master>
<mode>cluster</mode>
<name>Init-Data-Increment</name>
<class>com.wumii.analysis.data.sync.InitDataJob</class>
<jar>data-inbound-1.0-SNAPSHOT.jar</jar>
<spark-opts>--master yarn --driver-memory 16g --executor-memory 8g --queue online --keytab hive.service.keytab </spark-opts>
<arg>2</arg>
<file>/user/oozie/job/data-inbound/data-inbound-1.0-SNAPSHOT.jar#data-inbound-1.0-SNAPSHOT.jar</file>
<file>/user/oozie/job/data-inbound/platform-common-1.0-SNAPSHOT.jar#platform-common-1.0-SNAPSHOT.jar</file>
<file>/user/oozie/job/data-inbound/hive.service.keytab#hive.service.keytab</file>
</spark>
<ok to="End"/>
<error to="Kill"/>
</action>
<end name="End"/>
</workflow-app>
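
One thing worth checking, offered as a sketch rather than a confirmed fix: the hcatauth credential defined at the top of this workflow is never attached to the Spark action, and Oozie only injects a credential into an action that references it through the cred attribute. Pairing --keytab with an explicit --principal may also be needed (the host below is a placeholder; use whatever principal `klist -kt hive.service.keytab` actually shows):

<!-- Hypothetical revision of the action above, not a verified fix -->
<action name="spark-a61f" cred="hcatauth">
    ...
    <spark-opts>--master yarn --driver-memory 16g --executor-memory 8g --queue online --principal hive/host.example.com@WUMII.NET --keytab hive.service.keytab</spark-opts>
    ...
</action>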
Labels:
- Apache Oozie
- Apache Spark
06-10-2019
08:30 AM
java.io.IOException: Not a data file.

org.apache.nifi.processor.exception.ProcessException: IOException thrown from ConvertAvroToJSON[id=4004c598-016b-1000-ffff-ffff850df8b7]: java.io.IOException: Not a data file.
at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2946)
at org.apache.nifi.processors.avro.ConvertAvroToJSON.onTrigger(ConvertAvroToJSON.java:148)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1162)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:205)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Not a data file.
at org.apache.avro.file.DataFileStream.initialize(DataFileStream.java:105)
at org.apache.avro.file.DataFileStream.<init>(DataFileStream.java:84)
at org.apache.nifi.processors.avro.ConvertAvroToJSON$1.process(ConvertAvroToJSON.java:179)
at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2925)
... 12 common frames omitted

data:
{"City": "Athens", "Edition": 1896, "Sport": "Aquatics", "sub_sport": "Swimming", "Athlete": "HAJOS, Alfred", "country": "HUN", "Gender": "Men", "Event": "100m freestyle", "Event_gender": "M", "Medal": "Gold"}
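
The "Not a data file" error means the FlowFile content is not an Avro object container: ConvertAvroToJSON opens it with Avro's DataFileStream, which requires the container header (the magic bytes Obj\x01 plus an embedded schema), while the payload shown above is plain JSON. A minimal sketch of the distinction, using the third-party fastavro package and an invented, abbreviated schema:

import io
from fastavro import writer, reader

# Hypothetical schema covering a few fields of the record above.
schema = {
    "type": "record",
    "name": "MedalRecord",
    "fields": [
        {"name": "City", "type": "string"},
        {"name": "Athlete", "type": "string"},
        {"name": "Medal", "type": "string"},
    ],
}

record = {"City": "Athens", "Athlete": "HAJOS, Alfred", "Medal": "Gold"}

buf = io.BytesIO()
writer(buf, schema, [record])   # produces a real Avro data file
print(buf.getvalue()[:4])       # b'Obj\x01' -- the container magic bytes

buf.seek(0)
for rec in reader(buf):         # the format the processor expects to receive
    print(rec)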
Labels:
- Apache NiFi
06-03-2019
01:38 AM
HDP 3.1, Ambari 2.7, Debian 9

2019-05-31 07:36:53,863 INFO zookeeper.ReadOnlyZKClient (ReadOnlyZKClient.java:run(315)) - 0x4d5650ae no activities for 60000 ms, close active connection. Will reconnect next time when there are new requests.
2019-05-31 07:37:53,542 INFO storage.HBaseTimelineReaderImpl (HBaseTimelineReaderImpl.java:run(170)) - Running HBase liveness monitor
2019-05-31 07:37:53,544 WARN storage.HBaseTimelineReaderImpl (HBaseTimelineReaderImpl.java:run(183)) - Got failure attempting to read from timeline storage, assuming HBase down
java.io.UncheckedIOException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0
at org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:55)
at org.apache.hadoop.yarn.server.timelineservice.storage.reader.TimelineEntityReader.readEntities(TimelineEntityReader.java:283)
at org.apache.hadoop.yarn.server.timelineservice.storage.HBaseTimelineReaderImpl$HBaseMonitor.run(HBaseTimelineReaderImpl.java:174)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:332)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:269)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:437)
at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:312)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:597)
at org.apache.hadoop.hbase.client.ResultScanner$1.hasNext(ResultScanner.java:53)
... 9 more
Caused by: java.io.IOException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /atsv2-hbase-secure/meta-region-server
at org.apache.hadoop.hbase.client.ConnectionImplementation.get(ConnectionImplementation.java:2002)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateMeta(ConnectionImplementation.java:762)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:729)
at org.apache.hadoop.hbase.client.ConnectionImplementation.relocateRegion(ConnectionImplementation.java:707)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:911)
at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:732)
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:325)
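
The root cause is the last Caused by: the znode /atsv2-hbase-secure/meta-region-server does not exist, meaning the embedded ats-hbase service never registered its meta region in ZooKeeper. One way to confirm this from Python, as a sketch using the third-party kazoo package (the quorum string is a placeholder, and on a Kerberized cluster the read may additionally require SASL authentication):

from kazoo.client import KazooClient

# Placeholder ZooKeeper quorum; substitute the cluster's actual hosts.
zk = KazooClient(hosts="zk-host1:2181,zk-host2:2181")
zk.start()

# None here means ats-hbase never published its meta region location,
# which is exactly what the timeline reader is complaining about.
print(zk.exists("/atsv2-hbase-secure/meta-region-server"))
print(zk.get_children("/atsv2-hbase-secure"))

zk.stop()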
05-31-2019
06:50 AM
Ambari 2.7, HDP 3.1
05-30-2019
11:10 AM
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/alerts/alert_ats_hbase.py", line 183, in execute
ats_hbase_app_info = make_valid_json(output)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/alerts/alert_ats_hbase.py", line 226, in make_valid_json
raise Fail("Couldn't validate the received output for JSON parsing.")
Fail: Couldn't validate the received output for JSON parsing.
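
For context, this alert script parses the output of a `yarn app -status ats-hbase` style command, which can mix log lines with the JSON status document. A rough approximation of the cleanup step that is failing here (names and logic assumed for illustration; this is not the Ambari source):

import json

def make_valid_json_sketch(output):
    # Drop any log preamble before the first '{' and parse the remainder.
    start = output.find("{")
    if start == -1:
        raise ValueError("Couldn't validate the received output for JSON parsing.")
    return json.loads(output[start:])

# Parses when a JSON document follows the log noise; raises the same kind
# of failure as the alert when no JSON is present at all, e.g. when the
# ats-hbase YARN service is not running.
print(make_valid_json_sketch('INFO client.ApiServiceClient: ...\n{"state": "STABLE"}'))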
05-30-2019
10:32 AM
I have the same problem. Have you solved it?
05-30-2019
10:02 AM
I have the same problem. Have you solved it?
05-30-2019
07:54 AM
Thank you, but the result is the same after making that change.
05-30-2019
05:24 AM
I have the same problem. Can you help me?
05-29-2019
05:51 AM
1 Kudo
05-28-2019
07:39 AM
3 Kudos
This has already been solved: hive://ip:10500/default?auth=KERBEROS&kerberos_service_name=hive
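
For reference, the same connection from Python, as a minimal sketch assuming the third-party PyHive package and a valid Kerberos ticket (kinit) on the client; the host is a placeholder:

from pyhive import hive

conn = hive.connect(
    host="ip",                     # HiveServer2 Interactive (LLAP) host
    port=10500,                    # matches the URI above
    database="default",
    auth="KERBEROS",
    kerberos_service_name="hive",  # the service part of hive/_HOST@REALM
)
cursor = conn.cursor()
cursor.execute("SHOW DATABASES")
print(cursor.fetchall())

Superset's SQLAlchemy URI passes the query-string parameters (auth, kerberos_service_name) through to these same PyHive connection arguments.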
05-28-2019
06:02 AM
1 Kudo
How do I configure HiveServer2 in Superset?
Labels:
- Apache Hive
05-28-2019
05:49 AM
Have you made any progress on this?
05-28-2019
05:48 AM
HDP 3.1 + Superset 0.23.0; I don't know how to configure it.
05-23-2019
01:16 PM
I also need it. Here is my configuration:

# Requires the AUTH_LDAP constant from Flask-AppBuilder in superset_config.py
from flask_appbuilder.security.manager import AUTH_LDAP

AUTH_TYPE = AUTH_LDAP
AUTH_USER_REGISTRATION = True
AUTH_LDAP_SERVER = "ldap://XXX"
AUTH_LDAP_SEARCH = "dc=XXX,dc=com"
AUTH_LDAP_APPEND_DOMAIN = "XXX.com"
AUTH_LDAP_UID_FIELD = "userPrincipalName"
AUTH_LDAP_FIRSTNAME_FIELD = "givenName"
AUTH_LDAP_LASTNAME_FIELD = "sn"
AUTH_LDAP_USE_TLS = False