Member since: 06-16-2017
Posts: 21
Kudos Received: 3
Solutions: 0
08-25-2017
08:24 PM
@Sriharsha Chintalapani I have performed one additional test. I changed the URL entry in the quicklinks.json file to use the IP address, as in the example below, and it worked! I was able to reach the Schema Registry web page for the first time. "url":"http://IPAddress:7788/", I have a workaround in place for now. Please let me know if you find any additional information on whether this might be a bug. Thanks, Kirk
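For anyone who wants to script the same workaround, here is a minimal Python sketch of the edit. The IP address and the trimmed-down JSON are placeholders, not the full stock file; restart ambari-server after writing the real file back.

```python
import json

def patch_quicklinks(cfg: dict, new_url: str) -> dict:
    """Point the registry_ui quicklink at an explicit URL."""
    for link in cfg["configuration"]["links"]:
        if link["name"] == "registry_ui":
            link["url"] = new_url
    return cfg

# Trimmed-down stand-in for the REGISTRY service's quicklinks.json
sample = json.loads("""
{
  "configuration": {
    "links": [
      {"name": "registry_ui", "url": "%@://%@:%@/"}
    ]
  }
}
""")
patched = patch_quicklinks(sample, "http://10.0.0.5:7788/")
```

In practice you would load the file at /var/lib/ambari-server/resources/common-services/REGISTRY/0.3.0/quicklinks/quicklinks.json, patch it, and dump it back with json.dump before restarting Ambari.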
08-25-2017
08:05 PM
@Sriharsha Chintalapani Hello, I wanted to give you an update. I also tried changing the entire URL in the quicklinks.json file from the default of "url":"%@://%@:%@/" to the complete URL that opens Schema Registry: "url":"http://servername.domain.com:7788/". This fails as well. The URL that shows up in the browser is https://servername.domain.com:7788. Somehow, when the link opens in a new browser tab, it is being changed to the https address. Have you been able to reproduce this issue?
08-24-2017
04:33 PM
@Sriharsha Chintalapani I could not get the link you attached to your response above to work. I receive a 404 error, so I am not sure what that link was for. I found the quicklinks.json file at /var/lib/ambari-server/resources/common-services/REGISTRY/0.3.0/quicklinks, modified it, and restarted Ambari Server. In the URL section I replaced the first %@ with http as instructed. Below is what is in my quicklinks.json:

{
  "name": "default",
  "description": "default quick links configuration",
  "configuration": {
    "protocol": {
      "type": "HTTP_ONLY"
    },
    "links": [
      {
        "name": "registry_ui",
        "label": "Registry UI",
        "requires_user_name": "false",
        "component_name": "REGISTRY_SERVER",
        "url": "http://%@:%@/",
        "port": {
          "http_property": "port",
          "http_default_port": "8080",
          "regex": "^(\\d+)$",
          "site": "registry-common"
        }
      }
    ]
  }
}

After restarting, the URL is blank; what is returned is about:blank. I see no URL in the lower-left corner of my browser the way the other quicklinks show one.
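My working assumption (not confirmed in any documentation I can find) is that Ambari substitutes the %@ placeholders positionally: protocol, then host, then port. If that is right, hard-coding only the first %@ leaves a template Ambari may no longer recognize, which would explain the about:blank result. A toy illustration of the assumed positional substitution:

```python
def expand_quicklink(template: str, values: list) -> str:
    """Replace %@ placeholders left to right (assumed Ambari behavior,
    not actual Ambari code)."""
    for value in values:
        template = template.replace("%@", value, 1)
    return template

# Full template: protocol, host, and port all get filled in
full = expand_quicklink("%@://%@:%@/", ["http", "servername.domain.com", "7788"])
# Template with the protocol hard-coded: only host and port remain to fill
partial = expand_quicklink("http://%@:%@/", ["servername.domain.com", "7788"])
```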
08-24-2017
03:59 PM
Thanks for the update, Sam. I appreciate it.
08-24-2017
02:12 PM
@Satish Duggana Hello. Thanks for the additional information. Currently I cannot access Schema Registry in any way. The service is up and running, but the UI link is overwritten by Ambari. My goal is to get it working, with or without SSL, because right now it is not functional for us. So I see two courses of action. First, what do I need to change to get it to work with HTTP? Which files (please include their locations) and values would I need to set? Second, if SSL is supported and works properly, I am not opposed to configuring it for Schema Registry. However, under the /var/lib/ambari-server/resources/common-services folder there are both a REGISTRY and a STREAMLINE folder. Which file or files do I need to add the SSL information to? Also, what is the format of the entries in those files? Can you give an example? The article I pointed to in my first post, http://registry-project.readthedocs.io/en/latest/security.html?highlight=https, indicates the following in the SSL section but gives no details on where these settings are to be placed:

server:
  applicationConnectors:
    - type: https
      port: 8443
      keyStorePath: ./conf/keystore.jks
      keyStorePassword: test12
      validateCerts: false
      validatePeers: false
  adminConnectors:
    - type: https
      port: 8444
      keyStorePath: ./conf/keystore.jks
      keyStorePassword: test12
      validateCerts: false
      validatePeers: false

Thanks, Kirk
08-23-2017
03:08 PM
Hello. We recently upgraded from HDF 2.1.4 to HDF 3.0.1.0 with Ambari 2.5.1. The Ranger version is 0.7.0. We followed the upgrade instructions in this document: https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.1/bk_ambari-upgrade/bk_ambari-upgrade.pdf While trying to restart Ambari, the server failed to start due to the database check:

Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start.............
DB configs consistency check failed. Run "ambari-server start --skip-database-check" to skip. You may try --auto-fix-database flag to attempt to fix issues automatically. If you use this "--skip-database-check" option, do not make any changes to your cluster topology or perform a cluster upgrade until you correct the database consistency issues. See /var/log/ambari-server/ambari-server-check-database.log for more details on the consistency issues.
ERROR: Exiting with exit code -1.
REASON: Ambari Server java process has stopped. Please check the logs for more information.

This error was in /var/log/ambari-server/ambari-server-check-database.log:

ERROR - Required config(s): atlas-tagsync-ssl is(are) not available for service RANGER with service config version XX in cluster CLUSTERNAME

The default value in Ranger for ranger.tagsync.source.atlasrest.ssl.config.filename is /etc/ranger/tagsync/conf/atlas-tagsync-ssl.xml. It is a required value in Ambari, but the file does not exist in the location specified. I tried copying an atlas-tagsync-ssl.xml file from a different location on the server; it did not affect the outcome. I managed to resolve the problem with the steps below:

1. Start Ambari and skip the DB check: ambari-server start --skip-database-check
2. After Ambari starts, add this configuration parameter for Ranger under Ranger > Advanced > Custom atlas-tagsync-ssl: ranger.tagsync.source.atlas = false
3. Restart Ranger.
4. Restart Ambari without the --skip-database-check option.

Ambari started without incident. I found the setting above in this documentation: https://cwiki.apache.org/confluence/display/RANGER/Tag+Synchronizer+Installation+and+Configuration The documentation indicates the ranger.tagsync.source.atlas value is true by default. We are not using Atlas, nor are we using tags in this environment, and we did not have this value set when we were running HDF 2.1.4. It is possible this setting was introduced with HDF 3.0 and Ambari 2.5.1 during the upgrade. I wanted to report this in case others are experiencing the same issue, or in case it might be a bug. Thanks, Kirk DeMumbrane
Labels:
- Apache Ambari
- Apache Atlas
- Apache Ranger
08-22-2017
06:21 PM
I have attached the registry.yaml file. I renamed it so it could be uploaded. The file was located at /usr/hdf/3.0.1.0-43/etc/registry/conf.dist/
08-18-2017
04:11 PM
1 Kudo
Hello, We have upgraded from HDF 2.1.4.0 to HDF 3.0.1.0, and the upgrade was successful. We upgraded to use Schema Registry, and we successfully added Schema Registry 0.3.0 using Ambari. The existing cluster components were configured to use SSL before we upgraded to 3.0.1.0 (the Ambari, Ranger, NiFi, Ambari Infra, and Ambari Metrics UIs are all using SSL successfully). When I try to use the Schema Registry UI from Ambari, I am not able to bring up the web page. It looks like the UI link points to https even though we have not configured Schema Registry with SSL. Example of the URL Ambari points me to: https://servername.domain.com:7788/ If I try http, it reverts back to an https page. In the registry.log file I see the following errors:

WARN [08:42:25.288] [dw-26] o.e.j.h.HttpParser - Illegal character 0x16 in state=START for buffer HeapByteBuffer@5d7b223b[p=1,l=212,c=8192,r=211]={\x16<<<\x03\x01\x00\xCf\x01\x00\x00\xCb\x03\x03K\x8f\xD6\xA5\x9e~\x99...\x00\x08\x8a\x8a\x00\x1d\x00\x17\x00\x18\xAa\xAa\x00\x01\x00>>>\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00...\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00}
WARN [08:42:25.288] [dw-26] o.e.j.h.HttpParser - bad HTTP parsed: 400 Illegal character 0x16 for HttpChannelOverHttp@24809a38{r=0,c=false,a=IDLE,uri=null}

I have found this document: http://registry-project.readthedocs.io/en/latest/security.html?highlight=https Changes made to the registry.yaml file are overwritten when Ambari starts Schema Registry. Any suggestions on what should be added in Ambari for the items below?

server:
  applicationConnectors:
    - type: https
      port: 8443
      keyStorePath: ./conf/keystore.jks
      keyStorePassword: test12
      validateCerts: false
      validatePeers: false
  adminConnectors:
    - type: https
      port: 8444
      keyStorePath: ./conf/keystore.jks
      keyStorePassword: test12
      validateCerts: false
      validatePeers: false

Any help would be greatly appreciated. Thanks, Kirk
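A side note on the registry.log warning (my interpretation, not from the Registry docs): 0x16 is the TLS record-layer content type for a handshake (RFC 5246), so "Illegal character 0x16" means a TLS ClientHello reached a port that is serving plain HTTP. In other words, the server side is plain HTTP and the browser is the one insisting on https. A small illustrative sketch:

```python
# TLS record-layer content types (RFC 5246); the first byte of every TLS record
TLS_CONTENT_TYPES = {
    0x14: "change_cipher_spec",
    0x15: "alert",
    0x16: "handshake",  # a ClientHello starts with this byte
    0x17: "application_data",
}

def classify_first_byte(b: int) -> str:
    """Guess what protocol hit the socket from its first byte."""
    if b in TLS_CONTENT_TYPES:
        return "tls:" + TLS_CONTENT_TYPES[b]
    return "not-tls"

# The buffer in the Jetty warning begins with \x16: a TLS handshake.
verdict = classify_first_byte(0x16)
# A plain HTTP request would begin with an ASCII method byte, e.g. 'G' in GET:
http_verdict = classify_first_byte(ord("G"))
```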
Labels:
- Labels:
-
Apache Ambari
08-16-2017
04:12 PM
Hello Sriharsha. Has any progress been made on the post that I submitted yesterday? Thanks in advance.
08-14-2017
06:27 PM
Thanks. I gave it a try, and the SQL script now runs correctly. However, when I try to add a new schema, I see this error in the registry.log file:

ERROR [13:21:06.461] [dw-27 - POST /api/v1/schemaregistry/schemas] c.h.r.s.w.SchemaRegistryResource - Error encountered while adding schema info [SchemaMetadata{type='avro', schemaGroup='sales-nxt-email', name='test', description='test', compatibility=BACKWARD, evolve=true}]
com.hortonworks.registries.storage.exception.StorageException: org.postgresql.util.PSQLException: ERROR: null value in column "validationLevel" violates not-null constraint
  Detail: Failing row contains (2, avro, sales-nxt-email, test, BACKWARD, null, test, t, 1502734866442).
    at com.hortonworks.registries.storage.impl.jdbc.provider.sql.factory.AbstractQueryExecutor$QueryExecution.executeUpdate(AbstractQueryExecutor.java:225)
    at com.hortonworks.registries.storage.impl.jdbc.provider.sql.factory.AbstractQueryExecutor.executeUpdate(AbstractQueryExecutor.java:182)
    at com.hortonworks.registries.storage.impl.jdbc.provider.postgresql.factory.PostgresqlExecutor.insertOrUpdateWithUniqueId(PostgresqlExecutor.java:182)
    at com.hortonworks.registries.storage.impl.jdbc.provider.postgresql.factory.PostgresqlExecutor.insert(PostgresqlExecutor.java:80)
    at com.hortonworks.registries.storage.impl.jdbc.JdbcStorageManager.add(JdbcStorageManager.java:66)
    at com.hortonworks.registries.schemaregistry.DefaultSchemaRegistry.addSchemaMetadata(DefaultSchemaRegistry.java:168)
    at com.hortonworks.registries.schemaregistry.webservice.SchemaRegistryResource.lambda$addSchemaInfo$1(SchemaRegistryResource.java:380)
    at com.hortonworks.registries.schemaregistry.webservice.SchemaRegistryResource.handleLeaderAction(SchemaRegistryResource.java:158)
    at com.hortonworks.registries.schemaregistry.webservice.SchemaRegistryResource.addSchemaInfo(SchemaRegistryResource.java:371)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)

Looking at the create_tables.sql script for the schema_metadata_info table, there is a new column called "validationLevel" that was not in the original script used to install Schema Registry. Should this column allow nulls?

-- New script
CREATE TABLE IF NOT EXISTS schema_metadata_info (
  "id" SERIAL UNIQUE NOT NULL,
  "type" VARCHAR(255) NOT NULL,
  "schemaGroup" VARCHAR(255) NOT NULL,
  "name" VARCHAR(255) NOT NULL,
  "compatibility" VARCHAR(255) NOT NULL,
  "validationLevel" VARCHAR(255) NOT NULL, -- added in 0.3.1, table should be altered to add this column from earlier versions.
  "description" TEXT,
  "evolve" BOOLEAN NOT NULL,
  "timestamp" BIGINT NOT NULL,
  PRIMARY KEY ("name"),
  UNIQUE ("id")
);

-- Script originally used to install Schema Registry
CREATE TABLE IF NOT EXISTS schema_metadata_info (
  "id" SERIAL PRIMARY KEY,
  "type" VARCHAR(256) NOT NULL,
  "schemaGroup" VARCHAR(256) NOT NULL,
  "name" VARCHAR(256) NOT NULL,
  "compatibility" VARCHAR(256) NOT NULL,
  "description" TEXT,
  "evolve" BOOLEAN NOT NULL,
  "timestamp" BIGINT NOT NULL,
  UNIQUE("id","name")
);
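As a quick cross-check of the constraint behavior, here is a self-contained sketch using Python's built-in sqlite3 module (a stand-in for PostgreSQL, with simplified types; "ALL" is just a hypothetical validation-level value). It reproduces the failure: inserting a row that omits a NOT NULL validationLevel column is rejected, while supplying a value succeeds.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified stand-in for the 0.3.1 schema_metadata_info table
conn.execute("""
    CREATE TABLE schema_metadata_info (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        validationLevel TEXT NOT NULL  -- new NOT NULL column added in 0.3.1
    )
""")

# An insert that omits validationLevel (the pre-0.3.1 code path) is rejected
try:
    conn.execute("INSERT INTO schema_metadata_info (name) VALUES ('test')")
    insert_failed = False
except sqlite3.IntegrityError:
    insert_failed = True

# Supplying a value avoids the violation ('ALL' is a hypothetical value)
conn.execute(
    "INSERT INTO schema_metadata_info (name, validationLevel) VALUES (?, ?)",
    ("test", "ALL"),
)
row_count = conn.execute("SELECT COUNT(*) FROM schema_metadata_info").fetchone()[0]
```

So either the server needs to populate validationLevel on insert, or the column needs a DEFAULT (or to allow NULL) for clients written against the old schema.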
08-11-2017
07:40 PM
Hello Sriharsha, I am trying to run the drop-create command and there appears to be a problem with the create_tables.sql script.
Error:

Exception in thread "main" org.postgresql.util.PSQLException: ERROR: multiple primary keys for table "schema_metadata_info" are not allowed
  Position: 555
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2455)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2155)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:288)
    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:430)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:356)
    at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:303)
    at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:289)
    at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:266)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:262)
    at com.hortonworks.registries.storage.tool.SQLScriptRunner.runScript(SQLScriptRunner.java:98)
    at com.hortonworks.registries.storage.tool.TablesInitializer.doExecute(TablesInitializer.java:198)
    at com.hortonworks.registries.storage.tool.TablesInitializer.doExecuteCreate(TablesInitializer.java:175)
    at com.hortonworks.registries.storage.tool.TablesInitializer.main(TablesInitializer.java:162)

The CREATE TABLE statement below declares "id" as a SERIAL PRIMARY KEY and also has a PRIMARY KEY clause at the bottom declaring "name" a primary key. Can you correct the script with the proper primary key and repost it on GitHub?

CREATE TABLE IF NOT EXISTS schema_metadata_info (
  "id" SERIAL PRIMARY KEY,
  "type" VARCHAR(255) NOT NULL,
  "schemaGroup" VARCHAR(255) NOT NULL,
  "name" VARCHAR(255) NOT NULL,
  "compatibility" VARCHAR(255) NOT NULL,
  "validationLevel" VARCHAR(255) NOT NULL, -- added in 0.3.1, table should be altered to add this column from earlier versions.
  "description" TEXT,
  "evolve" BOOLEAN NOT NULL,
  "timestamp" BIGINT NOT NULL,
  UNIQUE ("id"),
  PRIMARY KEY ("name")
);
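The conflict is easy to reproduce outside PostgreSQL too. A minimal sketch with Python's built-in sqlite3 module (simplified types; not the Registry's actual DDL) shows the double primary key being rejected, and one possible fix that keeps "name" as the single primary key with "id" merely UNIQUE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The posted DDL declares two primary keys ("id" inline and "name" at the end)
try:
    conn.execute("""
        CREATE TABLE schema_metadata_info (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            PRIMARY KEY (name)
        )
    """)
    ddl_rejected = False
except sqlite3.OperationalError:
    ddl_rejected = True  # "table ... has more than one primary key"

# One possible fix: a single primary key on "name", with "id" only UNIQUE
conn.execute("""
    CREATE TABLE schema_metadata_info_fixed (
        id INTEGER UNIQUE NOT NULL,
        name TEXT PRIMARY KEY
    )
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
```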
Thanks, Kirk
08-10-2017
02:05 PM
Thanks, this does work for us. Our port is different, but I was able to see the API reference.
08-10-2017
01:53 PM
Hello. I work with Dave Holtzhouser. I am the system admin who has access to the PostgreSQL database. Here are the results of the SQL query you requested above:

id | type | schemaGroup         | name                        | compatibility | description                                                                                               | evolve | timestamp
---+------+---------------------+-----------------------------+---------------+-----------------------------------------------------------------------------------------------------------+--------+--------------
 1 | avro | Kafka               | Test                        | BACKWARD      | TEst                                                                                                      | t      | 1498680663869
 2 | avro | Kafka               | RoutingSlip                 | BACKWARD      | An implementation of the Routing Slip EIP (http://www.dummyurl.com/patterns/messaging/Routing Table.html) | t      | 1498756496668
 3 | avro | Kafka               | RoutingSlip                 | BACKWARD      | An implementation of the Routing Slip EIP (http://www.dummyurl.com/patterns/messaging/Routing Table.html) | t      | 1498756511480
 4 | avro | Kafka               | EmailAddressMsg             | BACKWARD      | An email address                                                                                          | t      | 1500669984537
 5 | avro | Kafka               | EmailMessageMsg             | BACKWARD      | An email message                                                                                          | t      | 1500670028183
 6 | avro | truck-sensors-kafka | raw-truck_events_avro       | BACKWARD      | Raw Geo events from trucks in Kafka Topic                                                                 | t      | 1501266679367
 7 | avro | Kafka               | MMS_Sales_email_dev         | BACKWARD      | Email Test Schema                                                                                         | t      | 1501269422220
 8 | avro | Kafka               | MMS_Sales_CarCompany_Emails | BACKWARD      | CarCompany Email Topic                                                                                    | t      | 1501281176150
08-09-2017
03:44 PM
Hello Sarah. Any updates on when the link will be working? Thanks
08-09-2017
03:39 PM
Hello. I am trying to use the tls-toolkit.sh utility to create some client certificates. We are running HDF 2.1.4.0, which is NiFi version 1.1.0. We have a two-node cluster with the Certificate Authority installed on one of the two servers, and we are running the commands below as root. I am using this as a reference: https://docs.hortonworks.com/HDPDocuments/HDF2/HDF-2.1.4/bk_administration/content/client.html

I am running the command from /var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/files/nifi-toolkit-$version:

tls-toolkit.sh client -c servername.domain.com -D "CN=admin, OU=NIFI" -t nifi -p 10443 -T pkcs12

When I run this command I get an error like this:

tls-toolkit.sh: JAVA_HOME not set; results may vary
2017/08/09 10:08:18 INFO [main] org.apache.nifi.toolkit.tls.commandLine.BaseCommandLine: Command line argument --keyStoreType=pkcs12 only applies to keystore, recommended truststore type of JKS unaffected.
2017/08/09 10:08:19 INFO [main] org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: Requesting new certificate from servername.domain.com:10443
2017/08/09 10:08:19 INFO [main] org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer: Requesting certificate with dn CN=admin,OU=NIFI.maritz.com from servername.domain.com:10443
Service client error: Received response code 500 with payload
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/>
<title>Error 500 </title>
</head>
<body>
<h2>HTTP ERROR: 500</h2>
<p>Problem accessing /. Reason:
<pre> javax.servlet.ServletException: Server error</pre></p>
<hr /><a href="http://eclipse.org/jetty">Powered by Jetty:// 9.3.9.v20160517</a><hr/>
</body>
</html>

In the /var/log/nifi/nifi-ca.std.out file I see this:

2017/08/09 13:29:31 WARN [qtp1653844940-8] org.eclipse.jetty.server.HttpChannel: https://servername.domain.com:10443/
javax.servlet.ServletException: Server error
    at org.apache.nifi.toolkit.tls.service.server.TlsCertificateAuthorityServiceHandler.handle(TlsCertificateAuthorityServiceHandler.java:99)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:524)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:319)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:253)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
    at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.lang.Thread.run(Thread.java:745)

Any suggestions on what it might be looking for? Thanks in advance, Kirk
Labels:
- Apache Ambari
- Apache NiFi
07-10-2017
07:45 PM
Thank you very much. I was able to get your suggestions to work. I have one other question. I have set up permissions at the /data/Process-Group/{uuid} level, and the developer has created multiple process groups under the one where I applied the permissions. Will these permissions propagate to the additional process groups, or will I have to configure those as well? I am asking because they have not run the flow completely yet.
07-10-2017
04:11 PM
Hello. We have a one-node NiFi cluster configured with Ranger for NiFi security. We have set up the standard policies for /flow and /proxy, plus a policy for all NiFi resources. When an admin user is signed in to NiFi, we can view provenance data. If I sign in as a non-admin user, such as one of our developers, no provenance data is visible. I set up a /provenance policy that allows developers to see the NiFi Data Provenance screen, but it shows no data for the non-admin user. I found this article and followed the steps in it, including the section below: https://community.hortonworks.com/articles/58769/hdf-20-enable-ranger-authorization-for-hdf-compone.html We have set up these two policies, which allow the non-admin user to manage the process group:

- Grant user/group access to modify the NiFi flow with a policy for /process-groups/<root-group-id> with RW
- Create a separate policy for /provenance/process-groups/<root-group-id> (with each of the cluster node DNs) for read access

The first policy allows them to manage the process group properly, but the second does not seem to work. I see no errors in nifi-app.log or nifi-user.log; in nifi-user.log I see "Authentication Success" messages when the attempt is made. In Ranger I see no denied messages on the Audit > Access screen. If I add the developer to the /* policy it works fine, so I must be missing a NiFi resource identifier in one of my policies. I cannot find documentation on what I might be missing. Any help would be appreciated.
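For reference, the two resource identifiers from the article can be sketched as a tiny helper; the root group id is a placeholder, and this is purely illustrative (not a Ranger or NiFi API):

```python
def nifi_policy_resources(root_group_id: str) -> dict:
    """Build the NiFi resource identifiers used by the two Ranger policies
    described above (paths per the linked HCC article; illustrative only)."""
    return {
        # RW policy that lets developers modify the flow
        "flow_modify": f"/process-groups/{root_group_id}",
        # Read policy (plus each cluster node DN as a user) for provenance data
        "provenance": f"/provenance/process-groups/{root_group_id}",
    }

resources = nifi_policy_resources("<root-group-id>")
```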
Labels:
- Apache NiFi
- Apache Ranger
06-28-2017
08:46 PM
2 Kudos
Hello. I am referencing the Schema Registry overview article at https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.0.0/bk_overview/content/ch04s01.html In the section called "Schema Registry API" there is a link to the Schema Registry REST API Reference document, and that link does not work. The section reads: "You can view the full API details in the Schema Registry REST API Reference documentation included in this Technical Preview release." When you click the link you receive this error:

<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
  <Key>HDPDocuments/HDF3/HDF-3.0.0/bk_schema-registry-rest-api-reference/content/index.html</Key>
  <RequestId>44BD194720EA0DED</RequestId>
  <HostId>Mmz9LphnoNedL/By94JBQoE2RedYdeljwKq44kyQq3dPKO85mglKhPgZ2WhBRYXGStC5UQyId5I=</HostId>
</Error>

Has anyone found an updated link to the API reference?
06-23-2017
02:45 PM
Hello Bob, Thanks for your reply. The Ambari version installed is 2.4.2.0, and the bug explains the issue I am having. I will check out your suggestion on the Solr data store. Thanks, Kirk
06-22-2017
06:40 PM
Hello. I have HDF 2.1.2.0 installed with both Infra and Log Search configured. I am missing data both in Log Search and in Ambari when I try to view logs for services on a host. This seems to have appeared after configuring the cluster for SSL; LDAP is configured for both Ambari and Log Search. On other clusters I have set up in the past few days, data does appear on these screens. The errors included below come from the Service Logs screen in Log Search, which does appear to return current data. I see this error roughly every 6 minutes in the Ambari server log:

2017-06-22 13:22:15,000 ERROR ambari-client-thread-3773 LoggingSearchPropertyProvider - Error occurred while making request to LogSearch service, unable to populate logging properties on this resource

This error appears in logsearch_app every time I sign in to Log Search, although I am able to connect successfully using my AD account:

2017-06-22 13:24:50,286 ERROR qtp1545087375-14 org.apache.ambari.logsearch.web.security.LogsearchFileAuthenticationProvider LogsearchFileAuthenticationProvider.java:81 - Wrong password for user=<MyAccount>

I do not see any errors in the logsearch_feeder logs. (Screenshots attached: Log Search; Ambari > Hosts > Logs screen)
Labels:
- Apache Ambari
- Cloudera DataFlow (CDF)