Member since: 12-10-2015
Posts: 76
Kudos Received: 30
Solutions: 4
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 922 | 03-10-2021 08:35 AM |
 | 963 | 07-25-2019 06:34 AM |
 | 1696 | 04-20-2016 10:03 AM |
 | 1170 | 04-11-2016 03:07 PM |
09-15-2022
12:57 AM
Hi @rki_, thanks for the explanation. I had hoped to have found the reason for the Solr "index writer closed" error. Thank you anyway.
09-13-2022
12:42 AM
Hello to all, I have many connections in TIME_WAIT on the IPC port 1019 of the DataNode: more than 600 in TIME_WAIT and about 250 ESTABLISHED. Is that normal? I'm afraid that's why I get "index writer closed" errors on Solr (the index is on HDFS). The servers are under light load and the DataNode does not saturate the JVM heap. I couldn't find any max connection configuration for port 1019. Any ideas? Environment: HDP 3.1.5.0-152 with HDFS 3.1.1. Thanks in advance
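For reference, a quick way to count those connections by TCP state (just a sketch, assuming the ss or netstat utility is available on the DataNode host and that 1019 is the DataNode IPC port):
# Count connections on port 1019 grouped by TCP state
ss -tan '( sport = :1019 or dport = :1019 )' | awk 'NR>1 {print $1}' | sort | uniq -c
# Equivalent with netstat on older hosts
netstat -tan | grep ':1019 ' | awk '{print $6}' | sort | uniq -c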
Labels:
- HDFS
06-14-2022
01:58 AM
Hi all, I have an issue with compaction of a Hive ACID table.
Env: HDP 3.1.5.0-152 with Hive 3.1.0.
All compaction jobs fail with this stack trace:
2022-06-14 10:46:02,236 INFO [IPC Server handler 2 on 40882] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1653525342115_29428_m_157230162771970 asked for a task
2022-06-14 10:46:02,236 INFO [IPC Server handler 2 on 40882] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1653525342115_29428_m_157230162771970 given task: attempt_1653525342115_29428_m_000000_0
2022-06-14 10:46:03,989 INFO [IPC Server handler 2 on 40882] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1653525342115_29428_m_000000_0 is : 0.0
2022-06-14 10:46:03,994 ERROR [IPC Server handler 5 on 40882] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1653525342115_29428_m_000000_0 - exited : java.lang.NullPointerException
at java.lang.System.arraycopy(Native Method)
at org.apache.hadoop.io.Text.set(Text.java:225)
at org.apache.orc.impl.StringRedBlackTree.add(StringRedBlackTree.java:59)
at org.apache.orc.impl.writer.StringTreeWriter.writeBatch(StringTreeWriter.java:70)
at org.apache.orc.impl.writer.StructTreeWriter.writeFields(StructTreeWriter.java:64)
at org.apache.orc.impl.writer.StructTreeWriter.writeBatch(StructTreeWriter.java:78)
at org.apache.orc.impl.writer.StructTreeWriter.writeRootBatch(StructTreeWriter.java:56)
at org.apache.orc.impl.WriterImpl.addRowBatch(WriterImpl.java:557)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.flushInternalBatch(WriterImpl.java:297)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:334)
at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$1.close(OrcOutputFormat.java:316)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.close(CompactorMR.java:1002)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Below in the log file I see this error:
2022-06-14 10:46:08,699 INFO [IPC Server handler 2 on 40882] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1653525342115_29428_m_000000_1 is : 0.0
2022-06-14 10:46:08,702 ERROR [IPC Server handler 5 on 40882] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1653525342115_29428_m_000000_1 - exited : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to CREATE_FILE /<hdfs>/<path>/<database_name>.db/<tablename>/_tmp_5b5a4f18-76ef-42c3-acb0-64b175679d54/base_0000005/bucket_00000 for DFSClient_attempt_1653525342115_29428_m_000000_1_-740576932_1 on 10.102.190.206 because this file lease is currently owned by DFSClient_attempt_1653525342115_29428_m_000000_0_-14754452_1 on 10.102.xxx.xxx
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2604)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:378)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2453)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2351)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:774)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:462)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1498)
at org.apache.hadoop.ipc.Client.call(Client.java:1444)
at org.apache.hadoop.ipc.Client.call(Client.java:1354)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy13.create(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:362)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy14.create(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:273)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1211)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1190)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1128)
at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:531)
at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:528)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:542)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:469)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
at org.apache.orc.impl.PhysicalFsWriter.<init>(PhysicalFsWriter.java:95)
at org.apache.orc.impl.WriterImpl.<init>(WriterImpl.java:177)
at org.apache.hadoop.hive.ql.io.orc.WriterImpl.<init>(WriterImpl.java:94)
at org.apache.hadoop.hive.ql.io.orc.OrcFile.createWriter(OrcFile.java:378)
at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat.getRawRecordWriter(OrcOutputFormat.java:299)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.getWriter(CompactorMR.java:1029)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:966)
at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:939)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
But if I try to list the file, it does not exist on HDFS (I obfuscated the path in the logs). Any idea how to fix this issue? It's critical for me.
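For context, a sketch of the checks one could run here (the JDBC URL is a placeholder, and the paths reuse the obfuscated placeholders from the log above):
# Check the state of compactions from Beeline
beeline -u "jdbc:hive2://<hiveserver2_host>:10000/default" -e "SHOW COMPACTIONS;"
# Look for the leftover temporary base directory of the failed compaction
hdfs dfs -ls /<hdfs>/<path>/<database_name>.db/<tablename>/_tmp_5b5a4f18-76ef-42c3-acb0-64b175679d54/base_0000005/
# If the bucket file does exist and still has an open lease, lease recovery can be attempted
hdfs debug recoverLease -path /<hdfs>/<path>/<database_name>.db/<tablename>/_tmp_5b5a4f18-76ef-42c3-acb0-64b175679d54/base_0000005/bucket_00000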
Labels:
- Apache Hive
04-07-2021
06:04 AM
1 Kudo
Hi, you can try disabling the network adapter from VM configs.
04-07-2021
02:25 AM
Description:
Simple repo to build a Docker with cdp-cli
Repo Info:
Docker build repository
Repo URL:
https://github.com/disoardi/cdp-cli
Account Name:
disoardi
Repo Name:
cdp-cli
03-23-2021
03:52 AM
Hi, aren't there any network problems?
03-19-2021
06:49 AM
1 Kudo
Hi, yes, that's right. I don't recommend using the sandbox for testing any upgrades of a production environment. Many times the sandbox environment has been very different from the standard installed environments. The sandbox is designed for understanding the technology, but in your case I recommend having a test environment at scale compared to the production one, with the same versions and topology.
03-19-2021
01:03 AM
Hi, I don't know which version of Hive you are running, but the Hive CLI has been deprecated. In HDP 3.0 and later, Hive does not support the following features:
- Apache Hadoop Distributed Copy (DistCp)
- WebHCat
- Hcat CLI
- Hive CLI (replaced by Beeline)
- SQL Standard Authorization
- MapReduce execution engine (replaced by Tez)
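If a job still relies on the Hive CLI, a hedged sketch of the equivalent Beeline calls (host, port and credentials are placeholders):
# Interactive session against HiveServer2
beeline -u "jdbc:hive2://<hiveserver2_host>:10000/default" -n <user> -p <password>
# Non-interactive, e.g. from a shell script
beeline -u "jdbc:hive2://<hiveserver2_host>:10000/default" -e "SHOW DATABASES;"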
03-19-2021
12:52 AM
1 Kudo
Hi, here https://www.cloudera.com/downloads/cdp-private-cloud-trial.html you can find the form to fill in to download the CDP Private Cloud trial edition. If you have a subscription you can proceed from your personal area on https://my.cloudera.com; among your applications you should have Downloads.
03-18-2021
02:49 AM
1 Kudo
Correct approach.
03-17-2021
03:40 AM
2 Kudos
Hi, I guess you can't edit the extraction query and maybe use a join, correct? The problem is that for NiFi the two JSONs in the array are potentially two representations of the same entity, so it is difficult to find a reliable method to achieve the goal. I would restart from the data extraction...
03-17-2021
03:32 AM
2 Kudos
Hi, I have never encountered problems on CentOS 7 and JDK 8 with Zeppelin 0.7.3. But the requirement in the doc could also depend on some specific interpreters; we mostly used the Hive interpreter. I would try with a dedicated virtual node for testing.
03-12-2021
12:07 AM
Hi, I don't think there is a way to retrieve this information via the REST API. You could write a Python script; from there you should be able to retrieve the quota (https://pyhdfs.readthedocs.io/en/latest/pyhdfs.html). You can try to configure a custom Ambari alert with the script. Let me know if it works, because it could be useful to many people.
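As an alternative to the pyhdfs approach, a sketch that reads the quota with the plain HDFS shell (the path is a placeholder); its output could be parsed by the custom Ambari alert script:
# Columns with -q: QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME
hdfs dfs -count -q -h /path/to/directory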
03-11-2021
11:46 PM
Hi, can you try to force a Hive compaction?
ALTER TABLE <db_name>.<table_name> PARTITION (<partition_name>='<partition_value>') COMPACT 'major';
With the SHOW COMPACTIONS command you can check when it has finished and the result (failed or succeeded). After that, try to execute the count again.
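Put together, a sketch of the same sequence run from Beeline (table, partition and JDBC URL are placeholders):
# Trigger a major compaction on the partition
beeline -u "jdbc:hive2://<hiveserver2_host>:10000/default" -e "ALTER TABLE <db_name>.<table_name> PARTITION (<partition_name>='<partition_value>') COMPACT 'major';"
# Check the compaction state until it reports succeeded (or failed)
beeline -u "jdbc:hive2://<hiveserver2_host>:10000/default" -e "SHOW COMPACTIONS;"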
03-11-2021
05:46 AM
Hi, the tables you are counting in your example are both on the new cluster, correct? Are the tables ACID? What version of Hive are you using?
03-10-2021
08:35 AM
OK, I solved it using the CDP CLI. The problem was that from the web UI of the Cloudera Management Console it is not possible to set the Ranger identity, while from the CLI it is possible. Below are the scripts for creating the data lake environment:
cdp environments create-azure-environment \
--environment-name <ENV_NAME> \
--credential-name <CREDENTIAL_NAME> \
--region "AZURE_REGIONE_NAME" \
--security-access cidr=0.0.0.0/0 \
--no-enable-tunnel \
--public-key "ssh-rsa ..." \
--log-storage storageLocationBase=abfs://logs@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net,managedIdentity=/subscriptions/xxx/resourcegroups/<RG_NAME>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<ENV_NAME>-LoggerIdentity \
--use-public-ip \
--existing-network-params networkId=<ENV_NAME>-Vnet,resourceGroupName=<ENV_NAME>,subnetIds=CDP \
--free-ipa instanceCountByGroup=1
cdp environments set-id-broker-mappings \
--environment-name <ENV_NAME> \
--data-access-role /subscriptions/xxx/resourceGroups/<RG_NAME>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<ENV_NAME>-DataAccessIdentity \
--ranger-audit-role /subscriptions/xxx/resourceGroups/<RG_NAME>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<ENV_NAME>-RangerIdentity \
--set-empty-mappings
cdp datalake create-azure-datalake \
--datalake-name <ENV_NAME> \
--environment-name <ENV_NAME> \
--cloud-provider-configuration managedIdentity=/subscriptions/xxx/resourcegroups/<RG_NAME>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<ENV_NAME>-AssumerIdentity,storageLocation=abfs://data@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net \
--scale LIGHT_DUTY \
--runtime 7.2.7
Here instead is the Dockerfile, for those wishing to have the cdp-cli in a container:
FROM python
RUN apt update \
&& apt upgrade -y \
&& apt install -y \
groff \
less
RUN git clone https://github.com/cloudera/cdpcli.git \
&& cd cdpcli \
&& pip install .
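For reference, a possible way to build and use the image (the image tag cdp-cli is arbitrary, and cdp configure will prompt for the access key credentials):
docker build -t cdp-cli .
docker run -it --rm cdp-cli bash
# inside the container
cdp configure
cdp environments list-environments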
03-10-2021
03:10 AM
The CDP platform is great if your use cases require it. I am noticing the issue, however, in the CDP Public Cloud implementation. Have you tried it?
03-09-2021
05:09 AM
Uhm, I don't think there is a "simple" method in the Ambari configurations. But I would try to look in the web app configuration path, then the web.xml, ssl.xm ... and see if you can intervene at that level. In other contexts these kinds of limitations were handled "before" they reached the Ambari interfaces.
03-09-2021
12:03 AM
This is the error log from the "slave" node when knox is started. I have no idea why it is used knox to access to abfs, but it is consistent with the symptoms: 2021-03-08 16:15:28,657 ERROR idbroker.azure (KnoxMSICredentials.java:httpPatchRequest(416)) - Request to attach identities to VM failed with response code 400, message: {"error":{"code":"FailedIdentityOperation","message":"Identity operation for resource '/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Compute/virtualMachines/xxx' failed with error 'Failed to perform resource identity operation. Status: 'BadRequest'. Response: '{\"error\":{\"code\":\"BadRequest\",\"message\":\"Resource '/subscriptions/xxx/resourcegroups/msi/providers/Microsoft.ManagedIdentity/userAssignedIdentities/mock-idbroker-admin-identity' was not found.\"}}'.'."}} 2021-03-08 16:15:28,658 ERROR idbroker.azure (KnoxAzureClient.java:addIdentitiesToVM(288)) - Error attaching identities to VM: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden 2021-03-08 16:15:28,658 ERROR idbroker.azure (KnoxAzureClient.java:generateAccessToken(425)) - Azure ADLS2, error obtaining access token, cause : java.lang.RuntimeException: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden 2021-03-08 16:15:28,659 ERROR idbroker.azure (KnoxAzureClient.java:getCredentialsForRole(163)) - Azure ADLS2, error obtaining access token, cause : java.lang.RuntimeException: java.lang.RuntimeException: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden 2021-03-08 16:15:28,661 ERROR idbroker.azure (KnoxAzureClient.java:getCredentialsForRole(164)) - StackTrace: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2051) at com.google.common.cache.LocalCache.get(LocalCache.java:3953) at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4873) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.getCachedAccessToken(KnoxAzureClient.java:346) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.getCredentialsForRole(KnoxAzureClient.java:127) at org.apache.knox.gateway.service.idbroker.AbstractKnoxCloudCredentialsClient.getCredentialsForRole(AbstractKnoxCloudCredentialsClient.java:119) at org.apache.knox.gateway.service.idbroker.KnoxCloudCredentialsClientManager.getCredentialsForRole(KnoxCloudCredentialsClientManager.java:43) at org.apache.knox.gateway.service.idbroker.IdentityBrokerResource.getRoleCredentialsResponse(IdentityBrokerResource.java:198) at org.apache.knox.gateway.service.idbroker.IdentityBrokerResource.getCredentialsResponse(IdentityBrokerResource.java:180) at org.apache.knox.gateway.service.idbroker.IdentityBrokerResource.getCredentialsResponse(IdentityBrokerResource.java:173) at org.apache.knox.gateway.service.idbroker.IdentityBrokerResource.getCredentialsResponse(IdentityBrokerResource.java:169) at org.apache.knox.gateway.service.idbroker.IdentityBrokerResource.getCredentials(IdentityBrokerResource.java:137) at sun.reflect.GeneratedMethodAccessor90.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81) at 
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:151) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:171) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:152) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:104) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:406) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:350) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:106) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:259) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) at org.glassfish.jersey.internal.Errors.process(Errors.java:315) at org.glassfish.jersey.internal.Errors.process(Errors.java:297) at org.glassfish.jersey.internal.Errors.process(Errors.java:267) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:319) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:236) at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1028) at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:373) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:381) at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:534) at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:482) at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:419) at org.apache.knox.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:349) at org.apache.knox.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:263) at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter.doFilterInternal(AbstractIdentityAssertionFilter.java:193) at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter.access$000(AbstractIdentityAssertionFilter.java:53) at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter$1.run(AbstractIdentityAssertionFilter.java:161) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter.doAs(AbstractIdentityAssertionFilter.java:156) at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter.continueChainAsPrincipal(AbstractIdentityAssertionFilter.java:146) at org.apache.knox.gateway.identityasserter.common.filter.CommonIdentityAssertionFilter.doFilter(CommonIdentityAssertionFilter.java:94) at org.apache.knox.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:349) at org.apache.knox.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:263) at org.apache.knox.gateway.provider.federation.jwt.filter.AbstractJWTFilter$1.run(AbstractJWTFilter.java:207) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.knox.gateway.provider.federation.jwt.filter.AbstractJWTFilter.continueWithEstablishedSecurityContext(AbstractJWTFilter.java:202) at org.apache.knox.gateway.provider.federation.jwt.filter.JWTFederationFilter.doFilter(JWTFederationFilter.java:93) at org.apache.knox.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:349) at org.apache.knox.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:263) at org.apache.knox.gateway.filter.XForwardedHeaderFilter.doFilter(XForwardedHeaderFilter.java:50) at org.apache.knox.gateway.filter.AbstractGatewayFilter.doFilter(AbstractGatewayFilter.java:58) at org.apache.knox.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:349) at org.apache.knox.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:263) at org.apache.knox.gateway.GatewayFilter.doFilter(GatewayFilter.java:167) at org.apache.knox.gateway.GatewayFilter.doFilter(GatewayFilter.java:92) at org.apache.knox.gateway.GatewayServlet.service(GatewayServlet.java:135) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:865) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1623) at org.eclipse.jetty.websocket.server.WebSocketUpgradeFilter.doFilter(WebSocketUpgradeFilter.java:214) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1701) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1668) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.apache.knox.gateway.trace.TraceHandler.handle(TraceHandler.java:51) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.apache.knox.gateway.filter.CorrelationHandler.handle(CorrelationHandler.java:41) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.apache.knox.gateway.filter.PortMappingHelperHandler.handle(PortMappingHelperHandler.java:106) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.server.Server.handle(Server.java:502) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267) at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:427) at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:321) at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.RuntimeException: java.lang.RuntimeException: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.generateAccessToken(KnoxAzureClient.java:426) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.lambda$getCachedAccessToken$0(KnoxAzureClient.java:350) at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4878) at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529) at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278) at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155) at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045) ... 107 more Caused by: java.lang.RuntimeException: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.addIdentitiesToVM(KnoxAzureClient.java:289) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.loadUserIdentities(KnoxAzureClient.java:186) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.getAccessTokenUsingMSI(KnoxAzureClient.java:476) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.generateAccessToken(KnoxAzureClient.java:416) ... 113 more Caused by: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden at org.apache.knox.gateway.service.idbroker.azure.KnoxMSICredentials.httpPatchRequest(KnoxMSICredentials.java:429) at org.apache.knox.gateway.service.idbroker.azure.KnoxMSICredentials.attachIdentities(KnoxMSICredentials.java:188) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.addIdentitiesToVM(KnoxAzureClient.java:256) ... 116 more
03-08-2021
11:53 PM
Hi all, I'm having trouble during the provisioning of an environment via the Cloudera Management Console. I followed the quick start, https://docs.cloudera.com/management-console/cloud/azure-quickstart/topics/mc-azure-quickstart.html, and the guide in the repository https://github.com/cpv0310/cdp-azure-tools, but the problem remains the same: HDFS can't write to the storage abfs://data@xxx. I tried to create the managed identities both through the template and through the script provided, but nothing changed. The only different thing is that the guide, at step 6, says to assign both the assumer identity and the data identity, but in the form I only have the possibility to assign the assumer identity. Same thing when I go to assign the logger identity: I only have one slot and I can't assign the ranger identity. In the logs I see that the creation of the data lake stops while trying to create the first folder on HDFS (ABFS), and the error is on the "slave" node, which gets a 403 Forbidden through Knox. I will attach the logs as soon as possible. Thanks in advance
07-25-2019
06:34 AM
Hi, see this for the Ambari minor upgrade: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/bk_ambari-upgrade/content/checkpoint_hdfs.html Do you have HDFS with HA enabled?
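If I remember correctly, the checkpoint described in that page boils down to something like this (a sketch, run as the hdfs superuser on the active NameNode; adjust for Kerberos if enabled):
sudo -u hdfs hdfs dfsadmin -safemode enter
sudo -u hdfs hdfs dfsadmin -saveNamespace
# leave safemode once the checkpoint/backup step is done
sudo -u hdfs hdfs dfsadmin -safemode leave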
07-25-2019
06:29 AM
Hi @jessica moore, see this https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md
11-30-2018
02:46 PM
Yes, it works. Thanks
11-30-2018
12:29 PM
Hi all, HDP 2.6.3 with Hive 2. I have enabled ACID transactions; when compactions run I see at most 3 jobs in the YARN queue. I tried to launch more than 3 compactions manually, but they are executed 3 at a time. Is there a parameter to set the maximum number of concurrent compaction jobs? Thanks in advance
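As a note, the parameter that normally caps concurrent compactions is hive.compactor.worker.threads on the Metastore; a quick way to check its current value (the path assumes a standard HDP layout):
# Show the configured number of compactor worker threads (caps concurrent compaction jobs)
grep -A1 'hive.compactor.worker.threads' /etc/hive/conf/hive-site.xml
# In HDP this is changed from Ambari (Hive > Configs), followed by a Metastore restart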
Labels:
- Apache Hive
04-24-2018
11:50 AM
In your place I would:
- convert the record to JSON, not to CSV
- use EvaluateJsonPath to set attributes from the JSON in the FlowFile content
- use UpdateAttribute to transform datetime and user
- use ReplaceText to replace the whole FlowFile content with the altered attributes
- write the FlowFile content to the FS
04-23-2018
03:00 PM
sorry for the mistake, obviously the operation is performed on the attribute, not on the FlowFile. So you must first set the values you need in the attributes and then transform them. Thanks @Matt Clarke
04-23-2018
02:10 PM
Try to execute: tailf /proc/<pid_execution_script>/fd/2
pid_execution_script --> on the server where NiFi runs, look for the process id of the script itself, not the PID of the NiFi service. Sometimes ExecuteScript does not receive the termination of the script. I do not know how to solve this 😞
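To make the tip above concrete, a small sketch (the script name my_script.sh is hypothetical; tail -f can replace the deprecated tailf):
# Find the PID of the script launched by the ExecuteScript processor
pgrep -f my_script.sh
# Follow its stderr through procfs, substituting the PID found above
tail -f /proc/<pid>/fd/2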
04-23-2018
01:58 PM
You must change the attribute on which the transformation is performed. I have never used the GrokReader, but I believe you have to change ts to timestamp in the EL string.
04-23-2018
10:42 AM
1 Kudo
Hi @Wojtek,
I believe the problem is that the format() function works only with numbers. Your timestamp has a '.' (dot), which causes it to be interpreted as a string.
To solve the problem I adopted the following EL in an UpdateAttribute processor:
${ts:substringBefore('.'):append(${ts:substringAfter('.')})
:toNumber():format('MM/dd/yyyy HH:mm:ss.SSS')
}
If the ts attribute contains the value 1518442283.483, the result is 02/12/2018 13:31:23.483. I hope I have been of help.