Member since: 03-08-2016
Posts: 84
Kudos Received: 12
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1609 | 12-18-2017 07:42 PM |
| | 408 | 04-03-2017 01:28 PM |
| | 829 | 02-16-2017 12:40 PM |
| | 624 | 02-13-2017 11:58 AM |
| | 2494 | 01-03-2017 08:49 PM |
12-18-2017
07:42 PM
The problem occurs because the AWS processor builds the S3 client in a different manner and supports only AWS regions. For example, here is how Oracle recommends creating a client for OCI S3 (https://docs.us-phoenix-1.oraclecloud.com/Content/Object/Tasks/s3compatibleapi.htm#supportedClients):

// Get S3 credentials from the console and put them here
AWSCredentialsProvider credentials = new AWSStaticCredentialsProvider(new BasicAWSCredentials(
    "ocid1.credential.oc1..anEXAMPLE",
    "anEXAMPLE="));
// The name of your tenancy
String tenancy = "tenancy";
// The region to connect to
String region = "us-ashburn-1";
// Create an S3 client pointing at the region
String endpoint = String.format("%s.compat.objectstorage.%s.oraclecloud.com", tenancy, region);
AwsClientBuilder.EndpointConfiguration endpointConfiguration =
    new AwsClientBuilder.EndpointConfiguration(endpoint, region);
AmazonS3 client = AmazonS3Client.builder()
    .standard()
    .withCredentials(credentials)
    .withEndpointConfiguration(endpointConfiguration)
    .disableChunkedEncoding()
    .enablePathStyleAccess()
    .build();

The most interesting part is the endpoint configuration:

AwsClientBuilder.EndpointConfiguration endpointConfiguration =
    new AwsClientBuilder.EndpointConfiguration(endpoint, region);

However, in nifi-aws-processors the client offers no way to set the region this way, so the region defaults to 'us-east-1'. To support storage other than Amazon's, one should change nifi-aws-abstract-processors/src/main/java/org/apache/nifi/processors/aws/s3/AbstractS3Processor.java:132:

/**
 * Create client using credentials provider. This is the preferred way for creating clients
 */
@Override
protected AmazonS3Client createClient(final ProcessContext context, final AWSCredentialsProvider credentialsProvider, final ClientConfiguration config) {
    getLogger().info("Creating client with credentials provider");
    initializeSignerOverride(context, config);
    AmazonS3Client s3;
    final String endpoint = StringUtils.trimToEmpty(context.getProperty(ENDPOINT_OVERRIDE).evaluateAttributeExpressions().getValue());
    final String region = StringUtils.trimToEmpty(context.getProperty(REGION).evaluateAttributeExpressions().getValue());
    if (!endpoint.isEmpty() && !region.isEmpty()) {
        final AwsClientBuilder.EndpointConfiguration endpointConfiguration =
                new AwsClientBuilder.EndpointConfiguration(endpoint, region);
        // build() returns AmazonS3; the cast is safe here because the default implementation is AmazonS3Client
        s3 = (AmazonS3Client) AmazonS3ClientBuilder.standard()
                .withClientConfiguration(config)
                .withCredentials(credentialsProvider)
                .withEndpointConfiguration(endpointConfiguration)
                .disableChunkedEncoding()
                .enablePathStyleAccess()
                .build();
    } else {
        s3 = new AmazonS3Client(credentialsProvider, config);
        initalizeEndpointOverride(context, s3);
    }
    return s3;
}
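The key detail above is that the OCI endpoint host is composed from the tenancy namespace and region, and the same region string must be passed to the EndpointConfiguration so requests are signed for it. A minimal, dependency-free sketch of the composition (the tenancy and region values are example placeholders):

```java
public class OciEndpoint {
    // Builds the OCI S3-compatible endpoint host from a tenancy namespace and a region,
    // mirroring the String.format call in the snippet above.
    static String endpointFor(String tenancy, String region) {
        return String.format("%s.compat.objectstorage.%s.oraclecloud.com", tenancy, region);
    }

    public static void main(String[] args) {
        System.out.println(endpointFor("mytenancy", "us-ashburn-1"));
        // -> mytenancy.compat.objectstorage.us-ashburn-1.oraclecloud.com
    }
}
```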
12-16-2017
08:21 AM
I can confirm that the credentials are correct, because I used the same ones for s3cmd and everything worked correctly.
12-15-2017
11:24 PM
Dear community,
I am trying to forward data from Apache NiFi to OCI (Oracle Cloud Infrastructure) S3 storage. However, I receive the following error:
com.amazonaws.services.s3.model.AmazonS3Exception: com.oracle.pic.casper.integration.auth.dataplane.exceptions.NotAuthenticatedException: Authentication region name'us-east-1' is incorrect. (Service: Amazon S3; Status Code: 403; Error Code: SignatureDoesNotMatch; Request ID: null)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1545)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1183)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:964)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:676)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:650)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:633)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$300(AmazonHttpClient.java:601)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:583)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:447)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4137)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1685)
at org.apache.nifi.processors.aws.s3.PutS3Object$1.process(PutS3Object.java:474)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2133)
at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2103)
at org.apache.nifi.processors.aws.s3.PutS3Object.onTrigger(PutS3Object.java:417)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1118)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
It seems the cloud provider checks the region. However, I had specified an S3.properties file as mentioned here: https://community.hortonworks.com/articles/86801/working-with-s3-compatible-data-stores-via-apache.html with all the needed properties, which should override the region, access key, secret key, and others.
Labels:
- Apache NiFi
10-02-2017
11:29 AM
Dear community,
Recently I looked into the possibility of parsing logs with Hive. However, it is a little complex to port existing grok rules. Is there a way to use grok patterns in Hive like Amazon Athena does?
Labels:
- Apache Hive
08-08-2017
10:10 PM
Thanks for the help. Finally got some data. There were several problems: 1) Sometimes the metrics collector was failing. 2) Time was not synchronized: the host which requested the metrics had a different time than the Ambari cluster.
08-05-2017
08:44 PM
@Jay SenSharma Escaping the characters helped, but I still receive an empty response:
curl --verbose --user admin:admin -H 'X-Requested-By:ambari' -H 'X-Forwarded-To:foo-controller1.local:31080' -X GET "http://192.168.1.14/api/v1/clusters/foo/?fields=metrics/load/CPUs\[1501965010,1501965087,15\]"
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 192.168.1.14...
* Connected to 192.168.1.14 (192.168.1.14) port 80 (#0)
* Server auth using Basic with user 'admin'
> GET /api/v1/clusters/foo/?fields=metrics/load/CPUs[1501965010,1501965087,15] HTTP/1.1
> Host: 192.168.1.14
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.47.0
> Accept: */*
> X-Requested-By:ambari
> X-Forwarded-To:foo-controller1.local:31080
>
< HTTP/1.1 200 OK
< Server: openresty/1.11.2.4
< Date: Fri, 04 Aug 2017 10:42:10 GMT
< Content-Type: text/plain
< Content-Length: 184
< Connection: keep-alive
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< Cache-Control: no-store
< Pragma: no-cache
< Set-Cookie: AMBARISESSIONID=ao8jm5334pchda97v8yafnhx;Path=/;HttpOnly
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< User: admin
< Vary: Accept-Encoding, User-Agent
<
{
"href" : "http://192.168.1.14/api/v1/clusters/foo/?fields=metrics/load/CPUs[1501965010,1501965087,15]",
"Clusters" : {
"cluster_name" : "foo",
"version" : "HDP-2.6"
}
* Connection #0 to host 192.168.1.14 left intact
I also tried the following, but the result is the same:
curl --verbose --user admin:admin -H 'X-Requested-By:ambari' -H 'X-Forwarded-To:foo-controller1.local:31080' -X GET "http://192.168.1.14/api/v1/clusters/foo/?fields=metrics/cpu/Idle\[1501965010,1501965087,15\]"
08-05-2017
11:54 AM
Dear @Jay SenSharma, thanks for the fast reply. I tried to run the queries against different clusters, but receive strange errors:
curl -k --user admin:admin -H 'X-Requested-By:ambari' -X GET "https://mycluster:8443/api/v1/clusters/dev_sap/?fields=metrcs/load/CPUs._avg[1501902776,1501906376,15]"
curl: (3) [globbing] bad range in column 101
curl -k --user admin:admin -H 'X-Requested-By:ambari' -X GET "https://mycluster:8443/api/v1/clusters/dev_sap/?fields=metrics/load/Nodes._avg[1501902776,1501906376,15]"
curl: (3) [globbing] bad range in column 103
curl -k --user admin:admin -H 'X-Requested-By:ambari' -X GET "https://mycluster:8443/api/v1/clusters/dev_sap/?fields=metrics/load/1-min._avg[1501902776,1501906376,15]"
curl: (3) [globbing] bad range in column 103
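The `bad range` errors come from curl's URL globbing interpreting the square brackets as a character range. Besides backslash-escaping them (or passing curl's -g/--globoff flag), the brackets can be percent-encoded as %5B and %5D. A small sketch of that encoding, using the field path from the requests above:

```java
public class BracketEncode {
    // Percent-encodes the square brackets that curl's URL globbing
    // would otherwise try to interpret as a character range.
    static String encodeBrackets(String query) {
        return query.replace("[", "%5B").replace("]", "%5D");
    }

    public static void main(String[] args) {
        System.out.println(encodeBrackets("metrics/load/CPUs._avg[1501902776,1501906376,15]"));
        // -> metrics/load/CPUs._avg%5B1501902776,1501906376,15%5D
    }
}
```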
08-04-2017
09:58 PM
Dear community,
I am trying to get cluster metrics on HDP 2.6 on CentOS 7, but when executing this example:
http://<ambari-server>:8080/api/v1/clusters/<cluster-name>?fields=metrics/load
I receive an empty response. curl output:
HTTP/1.1 200 OK
Server: openresty/1.11.2.4
Date: Fri, 04 Aug 2017 04:49:50 GMT
Content-Type: text/plain
Content-Length: 155
Connection: keep-alive
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Cache-Control: no-store
Pragma: no-cache
Set-Cookie: AMBARISESSIONID=1p21o8xq7zkhq1o695y5wqcpgp;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Vary: Accept-Encoding, User-Agent
Best regards, Vladislav
Labels:
- Apache Ambari
04-25-2017
12:22 PM
I found out that Ambari uses Jinja2 templates, so the syntax should be the same as in Jinja2: https://github.com/apache/ambari/tree/trunk/ambari-common/src/main/python/ambari_jinja2/docs
However, I am still not sure about the variables. In stack definitions, variables are in the format "nifi.install.dir", but in blueprints they are in the format nifi_install_dir. Which variant is correct?
04-25-2017
10:36 AM
I also noticed some conditional expressions like "{% if security_enabled %}". How does this work? Which other constructs do blueprints support?
04-25-2017
09:12 AM
Dear community,
I am a little confused by variables in Ambari blueprints. I have not found any manual about the syntax in Ambari blueprints. I just found something like {{nifi_install_dir}}, which I guess refers to the nifi.install.dir option, but I am not sure because I have not found any description of how it works. I have also noticed variables like ${hbase.tmp.dir}; are they the same as {{hbase_tmp_dir}}? I also found something like {hbase_tmp_dir}. Which usage of blueprint variables is correct?
Labels:
- Apache Ambari
04-10-2017
10:51 AM
@Timothy Spann I was able to run Apache NiFi (package downloaded from nifi.apache.org) on my local machine in cluster mode with LDAP and HTTPS. The problem was with the user identity mapping. The configuration only supports one identity mapping in nifi.properties:
nifi.security.identity.mapping.pattern.dn=^cn=(.*?),ou=(.*?)$
nifi.security.identity.mapping.value.dn=$1
Also, the DN attributes should be lowercase for AD. However, when I tried that same configuration on my HDF cluster, the authorization failed. I noticed here https://community.hortonworks.com/articles/61729/nifi-identity-conversion.html that there are some rules in HDF for host identity mapping. The link shows an example where the NiFi internal CA is used. However, I am using Let's Encrypt for host certificates. Can the NiFi internal CA parameters cause the problem? I have looked into the sources of the NiFi mpack for HDF and it is tightly integrated with the internal CA. I have not found any option to disable it.
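The lowercase requirement can be checked with plain java.util.regex, independently of NiFi; a minimal sketch of the mapping (the DN strings are made-up examples, and `map` is a hypothetical helper mimicking how a pattern/value pair is applied):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DnCase {
    // Applies a NiFi-style identity mapping: if the pattern matches the whole DN,
    // the value template (here "$1") is expanded; otherwise the DN passes through unchanged.
    static String map(String dn, String pattern, String value) {
        Matcher m = Pattern.compile(pattern).matcher(dn);
        return m.matches() ? m.replaceAll(value) : dn;
    }

    public static void main(String[] args) {
        String pattern = "^cn=(.*?),ou=(.*?)$";
        // Lowercase attribute names match and map down to the CN value...
        System.out.println(map("cn=my User,ou=mydepartment,dc=company,dc=local", pattern, "$1"));
        // ...while an uppercase DN falls through unchanged, since the pattern is case-sensitive.
        System.out.println(map("CN=my User,OU=mydepartment,DC=company,DC=local", pattern, "$1"));
    }
}
```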
04-06-2017
12:52 PM
Thanks for the reply. I am using Debian 7 with Ambari 2.4. The cluster is deployed from scratch via blueprints, without any modifications or custom configuration. The file permissions are the defaults, and the configuration files belong to the nifi user as specified by default. Firewall: all ports between cluster nodes are open. I cannot see any other logs regarding NiFi except in /var/log/nifi/. Every node is running the same version of NiFi, nothing is upgraded, and Ambari has no issues in either the GUI or the logs. I have looked into this: http://bryanbende.com/development/2016/08/17/apache-nifi-1-0-0-authorization-and-multi-tenancy and will try to look into the other recommendations. Many thanks for the help. Is there a way to get more verbosity about user authorization?
04-06-2017
08:00 AM
I am trying to use the Workflow Manager view in Ambari 2.4. I created a workflow but cannot delete it. Is there a way to delete a workflow? Will this view see all workflows created in Oozie?
04-05-2017
01:41 PM
Added more verbosity using logback.xml.
logback.xml:
<configuration scan="true" scanPeriod="30 seconds">
<contextListener>
<resetJUL>true</resetJUL>
</contextListener>
<appender name="APP_FILE">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'app_%d.log'.
For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log.gz</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy>
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- keep 30 log files worth of history -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
<immediateFlush>true</immediateFlush>
</encoder>
</appender>
<appender name="USER_FILE">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-user.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-user_%d.log.gz</fileNamePattern>
<!-- keep 30 log files worth of history -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="BOOTSTRAP_FILE">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-bootstrap.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-bootstrap_%d.log.gz</fileNamePattern>
<!-- keep 5 log files worth of history -->
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="CONSOLE">
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<!-- valid logging levels: TRACE, DEBUG, INFO, WARN, ERROR -->
<logger name="org.apache.nifi" level="INFO"/>
<logger name="org.apache.nifi.processors" level="WARN"/>
<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO"/>
<logger name="org.apache.nifi.controller.repository.StandardProcessSession" level="WARN" />
<logger name="org.apache.zookeeper.ClientCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxnFactory" level="ERROR" />
<logger name="org.apache.zookeeper.server.quorum" level="ERROR" />
<logger name="org.apache.zookeeper.ZooKeeper" level="ERROR" />
<!-- Logger for managing logging statements for nifi clusters. -->
<logger name="org.apache.nifi.cluster" level="DEBUG"/>
<!-- Logger for logging HTTP requests received by the web server. -->
<logger name="org.apache.nifi.server.JettyServer" level="INFO"/>
<!-- Logger for managing logging statements for jetty -->
<logger name="org.eclipse.jetty" level="INFO"/>
<!-- Suppress non-error messages due to excessive logging by class or library -->
<logger name="com.sun.jersey.spi.container.servlet.WebComponent" level="ERROR"/>
<logger name="com.sun.jersey.spi.spring" level="ERROR"/>
<logger name="org.springframework" level="ERROR"/>
<!-- Suppress non-error messages due to known warning about redundant path annotation (NIFI-574) -->
<logger name="com.sun.jersey.spi.inject.Errors" level="ERROR"/>
<!--
Logger for capturing user events. We do not want to propagate these
log events to the root logger. These messages are only sent to the
user-log appender.
-->
<logger name="org.apache.nifi.web.security" level="DEBUG" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.api.config" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.authorization" level="DEBUG" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.cluster.authorization" level="DEBUG" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.springframework.security.ldap.authentication" level="DEBUG" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.filter.RequestLogger" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<!--
Logger for capturing Bootstrap logs and NiFi's standard error and standard out.
-->
<logger name="org.apache.nifi.bootstrap" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<logger name="org.apache.nifi.bootstrap.Command" level="INFO" additivity="false">
<appender-ref ref="CONSOLE" />
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Out will be logged with the logger org.apache.nifi.StdOut at INFO level -->
<logger name="org.apache.nifi.StdOut" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Error will be logged with the logger org.apache.nifi.StdErr at ERROR level -->
<logger name="org.apache.nifi.StdErr" level="ERROR" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<root level="INFO">
<appender-ref ref="APP_FILE"/>
</root>
</configuration>
Also added some dn mappings:
nifi.security.identity.mapping.pattern.dn3 : ^cn=(.*?),ou=(.*?)$
nifi.security.identity.mapping.value.dn3 : $1
nifi.security.identity.mapping.pattern.dn4 : ^cn=(.*-[^ ]+?)$
nifi.security.identity.mapping.value.dn4 : $1
However, I still get the same error when trying to log in to the web interface, and the following errors in nifi-user.log:
2017-04-05 15:17:26,891 DEBUG [NiFi Web Server-85] o.s.s.l.a.LdapAuthenticationProvider Processing authentication request for user: my.User
2017-04-05 15:17:27,229 DEBUG [NiFi Web Server-85] o.s.s.l.authentication.BindAuthenticator Attempting to bind as cn=my User,ou=mydepartment,ou=com,dc=company,dc=local
2017-04-05 15:17:27,312 DEBUG [NiFi Web Server-85] o.s.s.l.authentication.BindAuthenticator Retrieving attributes...
2017-04-05 15:17:27,499 DEBUG [NiFi Web Server-87] o.a.n.w.s.x509.X509CertificateExtractor No client certificate found in request.
2017-04-05 15:17:28,183 DEBUG [NiFi Web Server-18] o.a.n.w.s.NiFiAuthenticationFilter Checking secure context token: null
2017-04-05 15:17:28,183 DEBUG [NiFi Web Server-18] o.a.n.w.s.x509.X509CertificateExtractor No client certificate found in request.
2017-04-05 15:17:28,183 DEBUG [NiFi Web Server-18] o.a.n.w.s.NiFiAuthenticationFilter Checking secure context token: null
2017-04-05 15:17:28,184 INFO [NiFi Web Server-18] o.a.n.w.s.NiFiAuthenticationFilter Attempting request for (eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ2LmZhbGZ1c2h5bnNreWkiLCJpc3MiOiJMZGFwUHJvdmlkZXIiLCJhdWQiOiJMZGFwUHJvdmlkZXIiLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJ2LmZhbGZ1c2h5bnNreWkiLCJraWQiOjEsImV4cCI6MTQ5MTQ0MTQ0NywiaWF0IjoxNDkxMzk4MjQ3fQ.Okgl4l6P8U0vyK5jdcQmo12CkE39p0SDDVaTPWsyf-8) GET https://dev-hdf01.test.company.com:9091/nifi-api/flow/current-user (source ip: 10.1.1.2)
2017-04-05 15:17:28,187 INFO [NiFi Web Server-18] o.a.n.w.s.NiFiAuthenticationFilter Authentication success for my.User
2017-04-05 15:17:28,188 DEBUG [NiFi Web Server-18] o.a.n.w.s.NiFiAuthenticationFilter Checking secure context token: my.User
2017-04-05 15:17:28,188 DEBUG [NiFi Web Server-18] o.a.n.w.s.a.NiFiAnonymousUserFilter SecurityContextHolder not populated with anonymous token, as it already contained: 'my.User'
2017-04-05 15:17:28,216 INFO [NiFi Web Server-18] o.a.n.w.a.c.AccessDeniedExceptionMapper my.User does not have permission to access the requested resource. Returning Forbidden response.
Out of ideas...
04-05-2017
10:40 AM
I tried not using "Legacy Authorized Users File" and instead specified "Initial Admin Identity", but I receive the same error:
2017-04-05 12:29:41,203 INFO [NiFi Web Server-78] o.a.n.w.s.NiFiAuthenticationFilter Authentication success for my.User
2017-04-05 12:29:41,208 INFO [NiFi Web Server-78] o.a.n.w.a.c.AccessDeniedExceptionMapper my.User does not have permission to access the requested resource. Returning Forbidden response.
authorizers.xml:
<authorizers>
<authorizer>
<identifier>file-provider</identifier>
<class>org.apache.nifi.authorization.FileAuthorizer</class>
<property name="Authorizations File">/mnt/hadoop/nifi/conf/authorizations.xml</property>
<property name="Users File">/mnt/hadoop/nifi/conf/users.xml</property>
<property name="Legacy Authorized Users File"></property>
<property name="Node Identity 1">dev-hdf01.test.company.com</property>
<property name="Node Identity 2">dev-hdf02.test.company.com</property>
<property name="Node Identity 3">dev-hdf03.test.company.com</property>
</authorizer>
</authorizers>
users.xml and authorizations.xml look the same as before.
users.xml:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<tenants>
<groups/>
<users>
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def" identity="my User"/>
<user identifier="41ccf709-0a95-33e2-bfcb-1788ba3ef254" identity="dev-hdf01.test.company.com"/>
<user identifier="41221a52-e9b8-31d9-bdfc-1ef56cece319" identity="dev-hdf02.test.company.com"/>
<user identifier="5c60bd68-9ac9-37c1-9e7a-a95ec78af721" identity="dev-hdf03.test.company.com"/>
</users>
</tenants>
authorizations.xml:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizations>
<policies>
<policy identifier="5e3cde8a-cc45-3895-93ab-d915c4b25b41" resource="/flow" action="R">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="4c56214e-a731-39c8-a55b-01edd488a4c7" resource="/restricted-components" action="W">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="8a742337-b4c1-3cfd-a30e-ebbbaaa5d0bb" resource="/tenants" action="R">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="ffde3c8e-f9f4-36e3-9495-8c4e0b001633" resource="/tenants" action="W">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="56478c43-0cce-335a-9657-4169455eee9c" resource="/policies" action="R">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="7786999f-46a8-364d-a5e1-ef8cd56e50c9" resource="/policies" action="W">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="68c086a6-c723-3b1f-9598-80524d1daf4a" resource="/controller" action="R">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="e376da0f-4cbc-359a-b014-b504d4a34e9c" resource="/controller" action="W">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="8a19b211-a11c-3d17-8bbe-c58934620154" resource="/proxy" action="R">
<user identifier="41ccf709-0a95-33e2-bfcb-1788ba3ef254"/>
<user identifier="41221a52-e9b8-31d9-bdfc-1ef56cece319"/>
<user identifier="5c60bd68-9ac9-37c1-9e7a-a95ec78af721"/>
</policy>
<policy identifier="dce9877c-7055-3750-adf3-5aac66a03d17" resource="/proxy" action="W">
<user identifier="41ccf709-0a95-33e2-bfcb-1788ba3ef254"/>
<user identifier="41221a52-e9b8-31d9-bdfc-1ef56cece319"/>
<user identifier="5c60bd68-9ac9-37c1-9e7a-a95ec78af721"/>
</policy>
</policies>
</authorizations>
04-05-2017
08:25 AM
users.xml:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<tenants>
<groups/>
<users>
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def" identity="my User"/>
<user identifier="41ccf709-0a95-33e2-bfcb-1788ba3ef254" identity="dev-hdf01.test.company.com"/>
<user identifier="41221a52-e9b8-31d9-bdfc-1ef56cece319" identity="dev-hdf02.test.company.com"/>
<user identifier="5c60bd68-9ac9-37c1-9e7a-a95ec78af721" identity="dev-hdf03.test.company.com"/>
</users>
</tenants>
authorizations.xml:
<authorizations>
<policies>
<policy identifier="a59757a6-adc8-3d06-8606-6a799634b2bd" resource="/controller" action="R">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="7aefc265-1e47-3c26-853a-8ae8edad0bd4" resource="/policies" action="R">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="84729226-45e2-324c-999a-d87bd37d7d41" resource="/policies" action="W">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="adc44004-0829-3e7b-ba73-6f02abc95e17" resource="/tenants" action="W">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="b536bfa5-ab43-38a1-a683-21d209169150" resource="/flow" action="R">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="2353d6ee-5514-3c71-b526-bbd4b8184112" resource="/tenants" action="R">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="5092ba54-9c82-3261-8e28-456a702a6f46" resource="/process-groups/7c84501d-d10c-407c-b9f3-1d80e38fe36a" action="R">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="30c09a19-304a-316f-9848-198c1bd51205" resource="/process-groups/7c84501d-d10c-407c-b9f3-1d80e38fe36a" action="W">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="d1feaf04-fdb8-3d9b-857d-e683253b0642" resource="/restricted-components" action="W">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="6a4151df-47d0-355c-b330-bff39bbd5c50" resource="/controller" action="W">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="835a87d3-48af-347d-974a-667a4c62f0e5" resource="/system" action="R">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
</policy>
<policy identifier="68776817-426d-3c54-8397-e99bbd017800" resource="/data/process-groups/7c84501d-d10c-407c-b9f3-1d80e38fe36a" action="W">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
<user identifier="41ccf709-0a95-33e2-bfcb-1788ba3ef254"/>
<user identifier="41221a52-e9b8-31d9-bdfc-1ef56cece319"/>
<user identifier="5c60bd68-9ac9-37c1-9e7a-a95ec78af721"/>
</policy>
<policy identifier="b8ec3441-5417-30a6-89a7-c466803975bd" resource="/data/process-groups/7c84501d-d10c-407c-b9f3-1d80e38fe36a" action="R">
<user identifier="737d2997-486f-3ae2-bf23-ef4686e63def"/>
<user identifier="41ccf709-0a95-33e2-bfcb-1788ba3ef254"/>
<user identifier="41221a52-e9b8-31d9-bdfc-1ef56cece319"/>
<user identifier="5c60bd68-9ac9-37c1-9e7a-a95ec78af721"/>
</policy>
<policy identifier="8a19b211-a11c-3d17-8bbe-c58934620154" resource="/proxy" action="R">
<user identifier="41ccf709-0a95-33e2-bfcb-1788ba3ef254"/>
<user identifier="41221a52-e9b8-31d9-bdfc-1ef56cece319"/>
<user identifier="5c60bd68-9ac9-37c1-9e7a-a95ec78af721"/>
</policy>
<policy identifier="dce9877c-7055-3750-adf3-5aac66a03d17" resource="/proxy" action="W">
<user identifier="41ccf709-0a95-33e2-bfcb-1788ba3ef254"/>
<user identifier="41221a52-e9b8-31d9-bdfc-1ef56cece319"/>
<user identifier="5c60bd68-9ac9-37c1-9e7a-a95ec78af721"/>
</policy>
</policies>
</authorizations>
04-05-2017
08:14 AM
Still seeing the same picture.
04-05-2017
08:09 AM
Checked the HTTPS certificates on the machines in the cluster. Everything seems OK:
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Request CERT (13):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using ECDHE-RSA-AES128-SHA256
* Server certificate:
* subject: CN=dev-hdf01.test.company.com
* start date: 2017-03-31 08:22:00 GMT
* expire date: 2017-06-29 08:22:00 GMT
* issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
* SSL certificate verify ok.
> GET / HTTP/1.1
> User-Agent: curl/7.26.0
> Host: dev-hdf01.test.company.com:9091
> Accept: */*
04-05-2017
08:03 AM
According to https://community.hortonworks.com/articles/61729/nifi-identity-conversion.html I tried to add:
"nifi.toolkit.dn.prefix": "CN=",
"nifi.toolkit.dn.suffix": "",
nifi.toolkit.dn.suffix is empty because the DN from Let's Encrypt is CN=dev-hdf01.test.company.com.
04-03-2017
01:28 PM
Oracle Java jdk1.8.0_7 does not have the trusted certificate in cacerts. To fix this, use Oracle Java >= 1.8.0_121 or import the Let's Encrypt certificate manually (http://stackoverflow.com/questions/34110426/does-java-support-lets-encrypt-certificates/35454903#35454903).
04-03-2017
01:21 PM
1 Kudo
Dear community,
I am trying to configure a NiFi cluster with authorization and authentication via LDAP. Authentication works correctly, but authorization does not. I am not using the internal CA; instead I use certificates generated by Let's Encrypt. Each machine has its own keystore with one certificate. On login I receive the message "Unable to perform the desired action due to insufficient permissions. Contact the system administrator.". In nifi-user.log:
2017-04-04 23:27:01,060 INFO [NiFi Web Server-23] o.a.n.w.s.NiFiAuthenticationFilter Authentication success for my.User
2017-04-04 23:27:01,085 INFO [NiFi Web Server-23] o.a.n.w.a.c.AccessDeniedExceptionMapper my.User does not have permission to access the requested resource. Returning Forbidden response.
I tried the standalone variant according to http://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.1.2/bk_dataflow-administration/bk_dataflow-administration-20170224.pdf and it works fine. However, when going to a cluster, I get the error. As mentioned in the documentation, all the nodes have the same authorizers.xml and authorized-users.xml. On start, authorizations.xml and users.xml are created successfully on all nodes. For configuration I used this manual: https://community.hortonworks.com/articles/81184/understanding-the-initial-admin-identity-access-po.html
authorized-users.xml:
<users>
<user dn="CN=my User,OU=mydepartment,OU=com,DC=company,DC=local">
<role name="ROLE_ADMIN"/>
<role name="ROLE_DFM"/>
</user>
</users> authorizers.xml: <authorizers>
<authorizer>
<identifier>file-provider</identifier>
<class>org.apache.nifi.authorization.FileAuthorizer</class>
<property name="Authorizations File">/mnt/hadoop/nifi/conf/authorizations.xml</property>
<property name="Users File">/mnt/hadoop/nifi/conf/users.xml</property>
<property name="Legacy Authorized Users File">/mnt/hadoop/nifi/conf/authorized-users.xml</property>
<property name="Initial Admin Identity"></property>
<property name="Node Identity 1">dev-hdf01.test.company.com</property>
<property name="Node Identity 2">dev-hdf02.test.company.com</property>
<property name="Node Identity 3">dev-hdf03.test.company.com</property>
</authorizer>
</authorizers> The reason I do not set an Initial Admin Identity is that I am trying to control users via authorized-users.xml. nifi.properties: nifi.security.identity.mapping.pattern.dn=^CN=(.*?),OU=(.*?)$
nifi.security.identity.mapping.pattern.dn2=^CN=(.*-[^ ]+?)$
nifi.security.identity.mapping.pattern.kerb=
nifi.security.identity.mapping.value.dn=$1
nifi.security.identity.mapping.value.dn2=$1 I checked the patterns with a regex tester and they seem to be working.
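The mapping can also be sanity-checked with the same regex engine NiFi uses (java.util.regex). A small sketch, assuming the two patterns above and the sample DNs from this post:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DnMappingCheck {
    // Patterns copied from nifi.properties above; the mapped value is $1 for both
    private static final Pattern DN  = Pattern.compile("^CN=(.*?),OU=(.*?)$");
    private static final Pattern DN2 = Pattern.compile("^CN=(.*-[^ ]+?)$");

    static String map(String dn) {
        Matcher m = DN.matcher(dn);
        if (m.matches()) return m.group(1);   // user DNs from LDAP
        m = DN2.matcher(dn);
        if (m.matches()) return m.group(1);   // node certificates from Let's Encrypt
        return dn;                            // no mapping pattern applied
    }

    public static void main(String[] args) {
        // user DN -> "my User"
        System.out.println(map("CN=my User,OU=mydepartment,OU=com,DC=company,DC=local"));
        // node DN -> "dev-hdf01.test.company.com"
        System.out.println(map("CN=dev-hdf01.test.company.com"));
    }
}
```

Note that the mapped node identity must then match the Node Identity entries in authorizers.xml character for character.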
Labels:
- Apache Ambari
- Apache NiFi
03-29-2017
07:20 PM
Sorry, the page hung and I had not noticed that the question had been created successfully.
03-29-2017
11:19 AM
1 Kudo
Dear community, Is there a way of managing file contents via Blueprints? I plan to use the authorized-users option in NiFi with a predefined set of users, and it would be good to have this set of users in the cluster blueprint.
Labels:
- Apache Ambari
- Apache NiFi
03-27-2017
12:34 PM
Dear community, I am using Let's Encrypt certificates for my Ambari hosts. Since the YARN Ambari view requires a trust store, I decided to point it at the default Java trust store with a blank password, but received an exception. It seems blank passwords are not supported. Ambari config changes: ssl.trustStore.password=
ssl.trustStore.path=/usr/jdk64/jdk1.8.0_77/jre/lib/security/cacerts
ssl.trustStore.type=jks Exception received when accessing the YARN view: RA040 I/O error while requesting Ambari
org.apache.ambari.view.utils.ambari.AmbariApiException: RA040 I/O error while requesting Ambari
at org.apache.ambari.view.utils.ambari.AmbariApi.requestClusterAPI(AmbariApi.java:106)
at org.apache.ambari.view.utils.ambari.AmbariApi.requestClusterAPI(AmbariApi.java:77)
at org.apache.ambari.view.capacityscheduler.ConfigurationService.readFromCluster(ConfigurationService.java:315)
at org.apache.ambari.view.capacityscheduler.ConfigurationService.readClusterInfo(ConfigurationService.java:155)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.SubLocatorRule.accept(SubLocatorRule.java:137)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1507)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:118)
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:84)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.AmbariAuthorizationFilter.doFilter(AmbariAuthorizationFilter.java:257)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:103)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:113)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.jwt.JwtAuthenticationFilter.doFilter(JwtAuthenticationFilter.java:96)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:150)
at org.apache.ambari.server.security.authentication.AmbariAuthenticationFilter.doFilter(AmbariAuthenticationFilter.java:88)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ambari.server.security.authorization.AmbariUserAuthorizationFilter.doFilter(AmbariUserAuthorizationFilter.java:91)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.api.MethodOverrideFilter.doFilter(MethodOverrideFilter.java:72)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.api.AmbariPersistFilter.doFilter(AmbariPersistFilter.java:47)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.view.AmbariViewsMDCLoggingFilter.doFilter(AmbariViewsMDCLoggingFilter.java:54)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.view.ViewThrottleFilter.doFilter(ViewThrottleFilter.java:161)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:109)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.apache.ambari.server.security.AbstractSecurityHeaderFilter.doFilter(AbstractSecurityHeaderFilter.java:109)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:82)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:294)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1478)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:427)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:212)
at org.apache.ambari.server.controller.AmbariHandlerList.processHandlers(AmbariHandlerList.java:201)
at org.apache.ambari.server.controller.AmbariHandlerList.handle(AmbariHandlerList.java:150)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:370)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:973)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1035)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:641)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:231)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SslConnection.handle(SslConnection.java:196)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:696)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:53)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Can't get connection.
at org.apache.ambari.server.controller.internal.URLStreamProvider.getSSLConnection(URLStreamProvider.java:310)
at org.apache.ambari.server.controller.internal.URLStreamProvider.processURL(URLStreamProvider.java:181)
at org.apache.ambari.server.view.ViewAmbariStreamProvider.getInputStream(ViewAmbariStreamProvider.java:123)
at org.apache.ambari.server.view.ViewAmbariStreamProvider.readFrom(ViewAmbariStreamProvider.java:85)
at org.apache.ambari.view.utils.ambari.AmbariApi.requestClusterAPI(AmbariApi.java:103)
... 106 more
Caused by: java.io.IOException: Keystore was tampered with, or password was incorrect
at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:780)
at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56)
at sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224)
at sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70)
at java.security.KeyStore.load(KeyStore.java:1445)
at org.apache.ambari.server.controller.internal.URLStreamProvider.getSSLConnection(URLStreamProvider.java:302)
... 110 more
Caused by: java.security.UnrecoverableKeyException: Password verification failed
at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:778)
... 115 more
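For reference, the default password of the JDK cacerts store is "changeit", not blank, which is consistent with the "Password verification failed" at the bottom of the trace: a JKS load fails this way only when a non-null wrong password is supplied, whereas a null password skips the integrity check entirely. A minimal sketch illustrating this, assuming the default cacerts path:

```java
import java.io.FileInputStream;
import java.security.KeyStore;

public class TrustStoreCheck {
    public static void main(String[] args) throws Exception {
        String path = System.getProperty("java.home") + "/lib/security/cacerts";
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(path)) {
            // null skips the keystore integrity check; a wrong non-null
            // password (e.g. an empty one) throws "Keystore was tampered
            // with, or password was incorrect" -- the exception in the trace
            ks.load(in, null);
        }
        System.out.println("trusted entries: " + ks.size());
    }
}
```

Replacing `null` with `"".toCharArray()` reproduces the exception from the stack trace; `"changeit".toCharArray()` loads cleanly.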
Labels:
- Apache Ambari
- Apache YARN
03-27-2017
10:27 AM
I tried to configure external authentication for Log Search, but I still receive the same error you mentioned. I added the needed values to logsearch.properties: # Custom properties
logsearch.auth.external_auth.enabled=true
logsearch.auth.external_auth.host_url=https://my-auth-node.com:8443
logsearch.roles.allowed=CLUSTER.ADMINISTRATOR,CLUSTER.OPERATOR,SERVICE.ADMINISTRATOR,SERVICE.OPERATOR,CLUSTER.USER,AMBARI.ADMINISTRATOR I can successfully log in with a local Ambari user.
03-24-2017
12:00 PM
That is what I was looking for. Many thanks! According to the above reply: 1) To delete a privilege: curl -H "X-Requested-By: ambari" -X DELETE -u admin:admin "https://yourcluster.com:8443/api/v1/clusters/yourclustername/privileges/1" 2) To add one: curl -H "X-Requested-By: ambari" -X POST --data-binary "@your_privileges_file.json" -u admin:admin "https://yourcluster.com:8443/api/v1/clusters/yourclustername/privileges/" Privilege example: {
"PrivilegeInfo" : {
"permission_name" : "CLUSTER.USER",
"principal_name" : "your-group",
"principal_type" : "GROUP"
}
}
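The same POST can also be issued from Java when curl is not at hand. A sketch assuming the same hypothetical host, cluster name, and admin credentials as the curl calls above; with no argument it only prints the payload, with an endpoint URL as the argument it sends the request:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class GrantPrivilege {
    // Builds the PrivilegeInfo payload shown above
    static String privilegeJson(String permission, String principal, String type) {
        return "{ \"PrivilegeInfo\" : { \"permission_name\" : \"" + permission
                + "\", \"principal_name\" : \"" + principal
                + "\", \"principal_type\" : \"" + type + "\" } }";
    }

    public static void main(String[] args) throws Exception {
        String body = privilegeJson("CLUSTER.USER", "your-group", "GROUP");
        if (args.length < 1) {          // no endpoint given: just print the payload
            System.out.println(body);
            return;
        }
        // e.g. https://yourcluster.com:8443/api/v1/clusters/yourclustername/privileges/
        HttpURLConnection conn = (HttpURLConnection) new URL(args[0]).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("X-Requested-By", "ambari"); // Ambari rejects requests without it
        conn.setRequestProperty("Authorization", "Basic "
                + Base64.getEncoder().encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8)));
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
```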
03-24-2017
10:04 AM
Thanks for the answers. I will try the API. However, I have not found any way to manage cluster roles with that tool.
03-24-2017
09:51 AM
Thanks @Jay SenSharma. The Ambari API is also fine. Is there a way to use Blueprints or the ambari-server setup utility for this? I looked at both but did not find a suitable option.
03-24-2017
09:32 AM
Dear community, Is it possible to manage user roles other than through the Ambari GUI? Blueprints? Some configs?
Labels:
- Apache Ambari