Member since: 12-10-2015
Posts: 73
Kudos Received: 30
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 626 | 03-10-2021 08:35 AM
 | 790 | 07-25-2019 06:34 AM
 | 1381 | 04-20-2016 10:03 AM
 | 947 | 04-11-2016 03:07 PM
04-07-2021
06:04 AM
1 Kudo
Hi, you can try disabling the network adapter in the VM configuration.
04-07-2021
02:25 AM
Description:
Simple repo to build a Docker image with cdp-cli
Repo Info:
Docker build repository
Repo URL:
https://github.com/disoardi/cdp-cli
Account Name:
disoardi
Repo Name:
cdp-cli
03-23-2021
03:52 AM
Hi, are there any network problems?
03-19-2021
06:49 AM
1 Kudo
Hi, yes, that's right. I don't recommend using the sandbox for testing upgrades of a production environment. The sandbox environments have often diverged considerably from standard installed environments. The sandbox is designed for learning the technology; in your case I recommend having a test environment at scale relative to the production one, with the same versions and topology.
03-19-2021
01:03 AM
Hi, I don't know which version of Hive you are running, but the Hive CLI has been deprecated. In HDP 3.0 and later, Hive does not support the following features:
- Apache Hadoop Distributed Copy (DistCp)
- WebHCat
- HCat CLI
- Hive CLI (replaced by Beeline)
- SQL Standard Authorization
- MapReduce execution engine (replaced by Tez)
A minimal Beeline connection is sketched below.
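As a hedged example (host, port, and user are placeholders, not taken from this thread), connecting with Beeline instead of the Hive CLI looks roughly like this:

# Interactive Beeline session against HiveServer2:
beeline -u "jdbc:hive2://<HS2_HOST>:10000/default" -n <USER>
# Run a single statement non-interactively:
beeline -u "jdbc:hive2://<HS2_HOST>:10000/default" -n <USER> -e "SHOW TABLES;"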
03-19-2021
12:52 AM
1 Kudo
Hi, here https://www.cloudera.com/downloads/cdp-private-cloud-trial.html you can find the form to fill in to download the CDP Private Cloud trial edition. If you have a subscription, you can proceed from your personal area on https://my.cloudera.com; among your applications you should have a Downloads section.
03-18-2021
02:49 AM
1 Kudo
Correct approach.
03-17-2021
03:40 AM
2 Kudos
Hi, I guess you can't edit the extraction query and maybe use a join, correct? The problem is that for NiFi the two JSONs in the array are potentially two representations of the same entity, so it is difficult to find a reliable method to achieve the goal. I would restart from the data extraction ...
03-17-2021
03:32 AM
2 Kudos
Hi, I have never encountered problems with Zeppelin 0.7.3 on CentOS 7 and JDK 8. But the requirement in the docs could also depend on some specific interpreters; we mostly used the Hive interpreter. I would try with a dedicated virtual node for testing.
03-12-2021
12:07 AM
Hi, I don't think there is a way to retrieve this information via the REST API. Make a Python script; from there you should be able to retrieve the quota (https://pyhdfs.readthedocs.io/en/latest/pyhdfs.html). You can try to configure a custom Ambari alert with the script. Let me know if it works, because it could be useful to many people.
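As a hedged sketch (the NameNode address and path are placeholders, not taken from this thread), the same quota information is also reachable from the shell, either directly or via the WebHDFS endpoint that pyhdfs wraps:

# Directory and space quotas for a path:
hdfs dfs -count -q -h /user/example
# The same over WebHDFS (GETCONTENTSUMMARY returns quota and spaceQuota);
# <NN_HOST>:50070 is a hypothetical NameNode HTTP address:
curl -s "http://<NN_HOST>:50070/webhdfs/v1/user/example?op=GETCONTENTSUMMARY"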
03-11-2021
11:46 PM
Hi, can you try to force a Hive major compaction?

ALTER TABLE <db_name>.<table_name> PARTITION (<partition_name>='<partition_value>') COMPACT 'major';

With the SHOW COMPACTIONS command you can check when it has finished and its result (failed or succeeded). After that, try to execute the count again.
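As a hedged example of running this non-interactively (the connection URL, database, table, and partition values are placeholders):

beeline -u "jdbc:hive2://<HS2_HOST>:10000/default" \
  -e "ALTER TABLE mydb.mytable PARTITION (dt='2021-03-01') COMPACT 'major';"
beeline -u "jdbc:hive2://<HS2_HOST>:10000/default" -e "SHOW COMPACTIONS;"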
03-11-2021
05:46 AM
Hi, the tables where you count in your example are both on the new cluster, correct? Are the tables ACID? Which version of Hive are you using?
03-10-2021
08:35 AM
Ok, I solved it using the CDP CLI. The problem was that from the web UI of the Cloudera Management Console it is not possible to set the Ranger identity, while from the CLI it is possible. Below are the commands for creating the environment and the data lake:

cdp environments create-azure-environment \
--environment-name <ENV_NAME> \
--credential-name <CREDENTIAL_NAME> \
--region "<AZURE_REGION_NAME>" \
--security-access cidr=0.0.0.0/0 \
--no-enable-tunnel \
--public-key "ssh-rsa ..." \
--log-storage storageLocationBase=abfs://logs@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net,managedIdentity=/subscriptions/xxx/resourcegroups/<RG_NAME>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<ENV_NAME>-LoggerIdentity \
--use-public-ip \
--existing-network-params networkId=<ENV_NAME>-Vnet,resourceGroupName=<ENV_NAME>,subnetIds=CDP \
--free-ipa instanceCountByGroup=1
cdp environments set-id-broker-mappings \
--environment-name <ENV_NAME> \
--data-access-role /subscriptions/xxx/resourceGroups/<RG_NAME>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<ENV_NAME>-DataAccessIdentity \
--ranger-audit-role /subscriptions/xxx/resourceGroups/<RG_NAME>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<ENV_NAME>-RangerIdentity \
--set-empty-mappings
cdp datalake create-azure-datalake \
--datalake-name <ENV_NAME> \
--environment-name <ENV_NAME> \
--cloud-provider-configuration managedIdentity=/subscriptions/xxx/resourcegroups/<RG_NAME>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<ENV_NAME>-AssumerIdentity,storageLocation=abfs://data@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net \
--scale LIGHT_DUTY \
--runtime 7.2.7

And here is the Dockerfile for those who want cdp-cli in a container:

FROM python
RUN apt update \
&& apt upgrade -y \
&& apt install -y \
groff \
less
RUN git clone https://github.com/cloudera/cdpcli.git \
&& cd cdpcli \
&& pip install .
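As a hedged usage example (the image tag is arbitrary, and cdp --version is assumed to print the CLI version):

# Build the image and sanity-check the installed CLI:
docker build -t cdp-cli .
docker run -it --rm cdp-cli cdp --version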
03-10-2021
03:10 AM
The CDP platform is great if your use cases require it. I am noticing the issue, however, in the CDP Public Cloud implementation. Have you tried it?
03-09-2021
05:09 AM
Uhm, I don't think there is a "simple" method in the Ambari configurations. But I would try to look in the web app configuration path, then web.xml, ssl.xml ... and see if you can intervene at that level. In other contexts these kinds of limitations were handled "before" they reached the Ambari interfaces.
03-09-2021
12:03 AM
This is the error log from the "slave" node when Knox is started. I have no idea why Knox is used to access ABFS, but it is consistent with the symptoms:

2021-03-08 16:15:28,657 ERROR idbroker.azure (KnoxMSICredentials.java:httpPatchRequest(416)) - Request to attach identities to VM failed with response code 400, message: {"error":{"code":"FailedIdentityOperation","message":"Identity operation for resource '/subscriptions/xxx/resourceGroups/xxx/providers/Microsoft.Compute/virtualMachines/xxx' failed with error 'Failed to perform resource identity operation. Status: 'BadRequest'. Response: '{\"error\":{\"code\":\"BadRequest\",\"message\":\"Resource '/subscriptions/xxx/resourcegroups/msi/providers/Microsoft.ManagedIdentity/userAssignedIdentities/mock-idbroker-admin-identity' was not found.\"}}'.'."}}
2021-03-08 16:15:28,658 ERROR idbroker.azure (KnoxAzureClient.java:addIdentitiesToVM(288)) - Error attaching identities to VM: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden
2021-03-08 16:15:28,658 ERROR idbroker.azure (KnoxAzureClient.java:generateAccessToken(425)) - Azure ADLS2, error obtaining access token, cause : java.lang.RuntimeException: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden
2021-03-08 16:15:28,659 ERROR idbroker.azure (KnoxAzureClient.java:getCredentialsForRole(163)) - Azure ADLS2, error obtaining access token, cause : java.lang.RuntimeException: java.lang.RuntimeException: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden
2021-03-08 16:15:28,661 ERROR idbroker.azure (KnoxAzureClient.java:getCredentialsForRole(164)) - StackTrace: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2051) at com.google.common.cache.LocalCache.get(LocalCache.java:3953) at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4873) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.getCachedAccessToken(KnoxAzureClient.java:346) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.getCredentialsForRole(KnoxAzureClient.java:127) at org.apache.knox.gateway.service.idbroker.AbstractKnoxCloudCredentialsClient.getCredentialsForRole(AbstractKnoxCloudCredentialsClient.java:119) at org.apache.knox.gateway.service.idbroker.KnoxCloudCredentialsClientManager.getCredentialsForRole(KnoxCloudCredentialsClientManager.java:43) at org.apache.knox.gateway.service.idbroker.IdentityBrokerResource.getRoleCredentialsResponse(IdentityBrokerResource.java:198) at org.apache.knox.gateway.service.idbroker.IdentityBrokerResource.getCredentialsResponse(IdentityBrokerResource.java:180) at org.apache.knox.gateway.service.idbroker.IdentityBrokerResource.getCredentialsResponse(IdentityBrokerResource.java:173) at org.apache.knox.gateway.service.idbroker.IdentityBrokerResource.getCredentialsResponse(IdentityBrokerResource.java:169) at org.apache.knox.gateway.service.idbroker.IdentityBrokerResource.getCredentials(IdentityBrokerResource.java:137) at sun.reflect.GeneratedMethodAccessor90.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81) at
org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:151) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:171) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:152) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:104) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:406) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:350) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:106) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:259) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) at org.glassfish.jersey.internal.Errors.process(Errors.java:315) at org.glassfish.jersey.internal.Errors.process(Errors.java:297) at org.glassfish.jersey.internal.Errors.process(Errors.java:267) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:319) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:236) at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1028) at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:373) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:381) at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:534) at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:482) at org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:419) at org.apache.knox.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:349) at org.apache.knox.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:263) at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter.doFilterInternal(AbstractIdentityAssertionFilter.java:193) at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter.access$000(AbstractIdentityAssertionFilter.java:53) at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter$1.run(AbstractIdentityAssertionFilter.java:161) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter.doAs(AbstractIdentityAssertionFilter.java:156) at org.apache.knox.gateway.identityasserter.common.filter.AbstractIdentityAssertionFilter.continueChainAsPrincipal(AbstractIdentityAssertionFilter.java:146) at org.apache.knox.gateway.identityasserter.common.filter.CommonIdentityAssertionFilter.doFilter(CommonIdentityAssertionFilter.java:94) at org.apache.knox.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:349) at org.apache.knox.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:263) at org.apache.knox.gateway.provider.federation.jwt.filter.AbstractJWTFilter$1.run(AbstractJWTFilter.java:207) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.knox.gateway.provider.federation.jwt.filter.AbstractJWTFilter.continueWithEstablishedSecurityContext(AbstractJWTFilter.java:202) at org.apache.knox.gateway.provider.federation.jwt.filter.JWTFederationFilter.doFilter(JWTFederationFilter.java:93) at org.apache.knox.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:349) at org.apache.knox.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:263) at org.apache.knox.gateway.filter.XForwardedHeaderFilter.doFilter(XForwardedHeaderFilter.java:50) at org.apache.knox.gateway.filter.AbstractGatewayFilter.doFilter(AbstractGatewayFilter.java:58) at org.apache.knox.gateway.GatewayFilter$Holder.doFilter(GatewayFilter.java:349) at org.apache.knox.gateway.GatewayFilter$Chain.doFilter(GatewayFilter.java:263) at org.apache.knox.gateway.GatewayFilter.doFilter(GatewayFilter.java:167) at org.apache.knox.gateway.GatewayFilter.doFilter(GatewayFilter.java:92) at org.apache.knox.gateway.GatewayServlet.service(GatewayServlet.java:135) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:865) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1623) at org.eclipse.jetty.websocket.server.WebSocketUpgradeFilter.doFilter(WebSocketUpgradeFilter.java:214) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1701) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1668) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.apache.knox.gateway.trace.TraceHandler.handle(TraceHandler.java:51) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.apache.knox.gateway.filter.CorrelationHandler.handle(CorrelationHandler.java:41) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.apache.knox.gateway.filter.PortMappingHelperHandler.handle(PortMappingHelperHandler.java:106) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.server.Server.handle(Server.java:502) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267) at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:427) at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:321) at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.RuntimeException: java.lang.RuntimeException: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.generateAccessToken(KnoxAzureClient.java:426) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.lambda$getCachedAccessToken$0(KnoxAzureClient.java:350) at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4878) at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529) at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278) at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155) at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045) ... 107 more Caused by: java.lang.RuntimeException: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.addIdentitiesToVM(KnoxAzureClient.java:289) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.loadUserIdentities(KnoxAzureClient.java:186) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.getAccessTokenUsingMSI(KnoxAzureClient.java:476) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.generateAccessToken(KnoxAzureClient.java:416) ... 113 more Caused by: javax.ws.rs.WebApplicationException: HTTP 403 Forbidden at org.apache.knox.gateway.service.idbroker.azure.KnoxMSICredentials.httpPatchRequest(KnoxMSICredentials.java:429) at org.apache.knox.gateway.service.idbroker.azure.KnoxMSICredentials.attachIdentities(KnoxMSICredentials.java:188) at org.apache.knox.gateway.service.idbroker.azure.KnoxAzureClient.addIdentitiesToVM(KnoxAzureClient.java:256) ... 116 more
03-08-2021
11:53 PM
Hi all, I'm having trouble during the provisioning of an environment via the Cloudera Management Console. I followed the quick start, https://docs.cloudera.com/management-console/cloud/azure-quickstart/topics/mc-azure-quickstart.html, and the guide in the repository https://github.com/cpv0310/cdp-azure-tools, but the problem remains the same: HDFS can't write to the storage abfs://data@xxx. I tried to create the managed identities both through the template and through the script provided, but nothing changed. The only different thing is that the guide, at step 6, says to assign both the Assumer identity and the Data identity, but in the form I only have the possibility to assign the Assumer identity. The same happens when I go to assign the Logger identity: I only have one slot and I can't assign the Ranger identity. In the logs I see that the creation of the data lake stops while trying to create the first folder on HDFS (ABFS), and the error is on the "slave" node, which through Knox gets a 403 Forbidden. I will attach the logs as soon as possible. Thanks in advance
Labels:
- Cloudera Altus
07-25-2019
06:34 AM
Hi, see this from the Ambari minor upgrade docs: https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/bk_ambari-upgrade/content/checkpoint_hdfs.html Do you have HDFS with HA enabled? The checkpoint step is sketched below.
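As a hedged sketch of the HDFS checkpoint the linked page covers (run as the hdfs superuser; the exact procedure may differ by version):

# Put HDFS in safe mode and persist a namespace checkpoint before upgrading:
sudo -u hdfs hdfs dfsadmin -safemode enter
sudo -u hdfs hdfs dfsadmin -saveNamespace
# After verifying, leave safe mode:
sudo -u hdfs hdfs dfsadmin -safemode leave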
07-25-2019
06:29 AM
Hi @jessica moore, see this https://github.com/apache/ambari/blob/trunk/ambari-server/docs/api/v1/index.md
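As a hedged example of calling that API from the shell (host, credentials, and cluster name are placeholders):

# List the clusters managed by this Ambari server:
curl -u admin:<PASSWORD> -H 'X-Requested-By: ambari' http://<AMBARI_HOST>:8080/api/v1/clusters
# List the services of one cluster:
curl -u admin:<PASSWORD> -H 'X-Requested-By: ambari' http://<AMBARI_HOST>:8080/api/v1/clusters/<CLUSTER_NAME>/services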
11-30-2018
02:46 PM
Yes, it works. Thanks
11-30-2018
12:29 PM
Hi all, HDP 2.6.3 with Hive 2. I have enabled ACID transactions; when compaction runs, I see at most 3 jobs in the YARN queue. I tried to launch more than 3 compactions manually, but they execute 3 at a time. Is there a parameter to set the maximum number of concurrent compaction jobs? Thanks in advance
Labels:
- Apache Hive
04-24-2018
11:50 AM
In your place I would:
- convert the record to JSON, not to CSV
- use EvaluateJsonPath to set attributes from the JSON in the FlowFile content
- use UpdateAttribute to transform datetime and user
- use ReplaceText to replace the whole FlowFile content with the altered attributes
- write the FlowFile content to the filesystem
04-23-2018
03:00 PM
Sorry for the mistake; obviously the operation is performed on the attribute, not on the FlowFile. So you must first set the values you need in the attributes and then transform them. Thanks @Matt Clarke
04-23-2018
02:10 PM
Try to execute: tailf /proc/<pid_execution_script>/fd/2 where <pid_execution_script> is the process id of the script on the server where NiFi runs, not the pid of the NiFi service. Sometimes ExecuteScript does not receive the termination of the script. I do not know how to solve this 😞
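A hedged sketch of the same idea (the script name is a placeholder, and it assumes exactly one matching process; tail -f is the modern replacement for tailf):

# Find the pid of the running script, not of the NiFi service:
pgrep -f my_script.sh
# Follow its stderr through procfs (fd/2 is standard error):
tail -f /proc/$(pgrep -f my_script.sh)/fd/2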
04-23-2018
01:58 PM
You must change the attribute on which to perform the transformation. I have never used the GrokReader, but I believe you have to change ts to timestamp in the EL string.
04-23-2018
10:42 AM
1 Kudo
Hi @Wojtek,
I believe the problem is that the format() function works only with numbers. Your timestamp contains a '.' (dot), which makes it be interpreted as a string.
To solve the problem I adopted the following EL in an UpdateAttribute processor:

${ts:substringBefore('.'):append(${ts:substringAfter('.')}):toNumber():format('MM/dd/yyyy HH:mm:ss.SSS')}

If the ts attribute contains the value 1518442283.483, the result is 02/12/2018 13:31:23.483. I hope I have been of help.
12-22-2017
04:02 PM
I have a cluster with 7 NiFi nodes. After a node crash, on restart NiFi could not find the files /usr/hdf/current/nifi/conf/keystore.jks and truststore.jks. I re-created the files with:

tls-toolkit.sh client -c tp-hostname.domain.com -t passwordPassword -p 10443

In the Ambari config the keystore and truststore passwords are empty. When I start the NiFi service I get:

Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'protocolSocketConfiguration': FactoryBean threw exception on object creation; nested exception is java.io.IOException: Keystore was tampered with, or password was incorrect
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:175)
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103)
at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1585)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:317)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:351)
... 78 common frames omitted
Caused by: java.io.IOException: Keystore was tampered with, or password was incorrect
at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:780)
at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56)
at sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224)
at sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70)
at java.security.KeyStore.load(KeyStore.java:1445)
at org.apache.nifi.io.socket.SSLContextFactory.<init>(SSLContextFactory.java:65)
at org.apache.nifi.cluster.protocol.spring.SocketConfigurationFactoryBean.getObject(SocketConfigurationFactoryBean.java:45)
at org.apache.nifi.cluster.protocol.spring.SocketConfigurationFactoryBean.getObject(SocketConfigurationFactoryBean.java:30)
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:168)
... 83 common frames omitted
Caused by: java.security.UnrecoverableKeyException: Password verification failed
at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:778)
... 91 common frames omitted
In nifi.properties I have:

nifi.security.keyPasswd=
nifi.security.keystore=/usr/hdf/current/nifi/conf/keystore.jks
nifi.security.keystorePasswd=
nifi.security.keystoreType=jks
nifi.security.needClientAuth=False
nifi.security.ocsp.responder.certificate=
nifi.security.ocsp.responder.url=
nifi.security.truststore=/usr/hdf/current/nifi/conf/truststore.jks
nifi.security.truststorePasswd=
nifi.security.truststoreType=jks
nifi.security.user.authorizer=ranger-provider
nifi.security.user.login.identity.provider=
nifi.sensitive.props.additional.keys=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.key=sdlkjdslkjsdlkjdjjd||xyGZZ+R3FO04BxcUHSL5U6+OGqtQQevXbFfecQ
nifi.sensitive.props.key.protected=aes/gcm/256
nifi.sensitive.props.provider=BC

On the other NiFi nodes I have an encrypted password in nifi.properties, but the truststore and the keystore have an empty string as a password. Do you have any idea about this issue? Thanks in advance
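A hedged way to check which password a store actually accepts (paths are from above; <PASSWORD> is a placeholder):

# keytool -list succeeds only if the store password matches:
keytool -list -keystore /usr/hdf/current/nifi/conf/keystore.jks -storepass <PASSWORD>
keytool -list -keystore /usr/hdf/current/nifi/conf/truststore.jks -storepass <PASSWORD>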
Labels:
- Apache Ambari
- Apache NiFi
11-29-2016
11:59 AM
Hi guys, surfing the internet I see that the maximum number of files stored in HDFS equals the JVM's Integer.MAX_VALUE. Can anyone confirm that the maximum number of files is this (2,147,483,647)?
Labels:
- Apache Hadoop
04-26-2016
10:23 AM
1 Kudo
Ok, in Advanced zookeeper-log4j set:

log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/zookeeper.log

${zookeeper.log.dir} is missing in the default configuration.