Member since 02-06-2017 | Posts: 17 | Kudos Received: 1 | Solutions: 0
04-18-2018 09:14 AM
Some more questions. It seems that the OU=NIFI is hardcoded; I see this when I look at the certificate. Even if I set the FQDN to something else, the certificate still comes in with OU=NIFI. Do you need a certificate for the user and the server, i.e. are there 2 certificates to be imported? When you run a clustered NiFi operation, is the UI you use one specific server? In other words, at the moment I have 4 NiFi Quick Links where I can open the GUI from, but seeing that all the flows etc. should be the same, should there only be one? So here you would only specify the "master" or entry GUI server? Any document links that work would be appreciated, please!
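For what it's worth, the way I've been checking which DN a node actually presents is with openssl (the hostname and port below are placeholders for one of my nodes; adjust to yours):
echo | openssl s_client -connect digitata66.digitata.com:9091 2>/dev/null | openssl x509 -noout -subject -issuer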
04-17-2018 05:23 AM
Hi all,
Been struggling with this problem for days, any hints please?
I'm using 4 desktop servers and NiFi is installed on all of them.
I configured NiFi for SSL authentication and I'm using the NiFi Certificate Authority to generate the certificates.
Problem 1: when I set nifi.web.https.host to the default value, i.e. {{nifi_node_ssl_host}}, and try to open the NiFi UI, the webpage does not load or does not respond.
When I set it to 0.0.0.0, the UI webpage responds, but I get the error below.
My NiFi CA is on host digitata69.digitata.com and I generated the certificates with the following commands:
[root@digitata66 temp]# export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64
[root@digitata66 temp]# ./files/nifi-toolkit-*/bin/tls-toolkit.sh client -c digitata69.digitata.com -D 'CN=nifiadmin, OU=digitata.com' -p 10443 -t 1digitata23 -T pkcs12
2018/04/17 07:19:50 INFO [main] org.apache.nifi.toolkit.tls.commandLine.BaseTlsToolkitCommandLine: Command line argument --keyStoreType=pkcs12 only applies to keystore, recommended truststore type of JKS unaffected.
2018/04/17 07:19:50 INFO [main] org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: Requesting new certificate from digitata69.digitata.com:10443
2018/04/17 07:19:51 INFO [main] org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer: Requesting certificate with dn CN=nifiadmin,OU=digitata.com from digitata69.digitata.com:10443
2018/04/17 07:19:51 INFO [main] org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer: Got certificate with dn CN=nifiadmin, OU=digitata.com
[root@digitata66 temp]
My users.xml file seems to be correct? See below.
Also, what is with the space between the "," and "OU"? Is it needed or not? Some of the tutorials say yes, it is needed, some say no. In this case I generated it with the space.
[root@digitata66 temp]# cat /var/lib/nifi/conf/users.xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<tenants>
<groups/>
<users>
<user identifier="7b8918a1-c807-3c82-825c-45a9ed044b4a" identity="CN=nifiadmin, OU=digitata.com"/>
<user identifier="ad1dcdc4-8e55-3cac-af34-57bcd85f8d11" identity="CN=digitata67, OU=digitata.com"/>
<user identifier="ff183b49-fd1d-3588-90d3-1cfb8067a277" identity="CN=digitata66, OU=digitata.com"/>
<user identifier="a412e41e-66d0-3dc3-8e86-ea6dcb6d6e28" identity="CN=digitata68, OU=digitata.com"/>
<user identifier="d814e1d2-2d9f-31ac-8d73-9a18ff282ed2" identity="CN=digitata69, OU=digitata.com"/>
</users>
</tenants>
[root@digitata66 temp]
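If the space does turn out to matter, my assumption is that the identity mapping properties (the same nifi.security.identity.mapping.* ones that show up as WARNs in my log below) could normalize it; a rough, untested sketch in nifi.properties:
nifi.security.identity.mapping.pattern.dn=^CN=(.*?), ?OU=(.*)$
nifi.security.identity.mapping.value.dn=CN=$1, OU=$2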
I copy the certificate to my local machine and import it.
If I look at the logfiles, I can see the authentication is successful:
2018-04-17 07:42:27,249 WARN [main] o.a.n.a.util.IdentityMappingUtil Identity Mapping property nifi.security.identity.mapping.pattern.kerb was found, but was empty
2018-04-17 07:42:27,250 WARN [main] o.a.n.a.util.IdentityMappingUtil Identity Mapping property nifi.security.identity.mapping.pattern.dn was found, but was empty
2018-04-17 07:46:03,888 INFO [NiFi Web Server-120] o.a.n.w.a.c.IllegalStateExceptionMapper java.lang.IllegalStateException: Kerberos ticket login not supported by this NiFi.. Returning Conflict response.
2018-04-17 07:46:03,934 INFO [NiFi Web Server-120] o.a.n.w.a.c.IllegalStateExceptionMapper java.lang.IllegalStateException: OpenId Connect is not configured.. Returning Conflict response.
2018-04-17 07:46:03,952 INFO [NiFi Web Server-24] o.a.n.w.s.NiFiAuthenticationFilter Attempting request for (CN=nifiadmin, OU=digitata.com) GET https://digitata66.digitata.com:9091/nifi-api/flow/current-user (source ip: 172.28.103.205)
2018-04-17 07:46:03,960 INFO [NiFi Web Server-24] o.a.n.w.s.NiFiAuthenticationFilter Authentication success for CN=nifiadmin, OU=digitata.com
Any ideas please?
My NiFi configuration is as follows:
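(As a sketch of the relevant part, the host value here is illustrative and the port matches the log above; the conf path is the same one my users.xml lives in:)
[root@digitata66 ~]# grep -E '^nifi.web.https' /var/lib/nifi/conf/nifi.properties
nifi.web.https.host=digitata66.digitata.com
nifi.web.https.port=9091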
Bigger picture: am I not understanding the setup correctly? I saw some posts saying that NiFi 1.5 does have problems, but I have not seen this particular problem reported.
03-29-2018 04:47 AM
Hi @Thomas Williams, the error seems to be caused by incorrect paths to the keystore.jks and/or incorrect generation of the keystore.jks. I have my NiFi Registry switched off now; I'm trying to get Ranger and NiFi to talk to each other so that I can get Ranger to authenticate the NiFi users. It has been an uphill battle. Cheers
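The sanity check I'd suggest for the keystore path/contents is plain keytool (the path and password below are placeholders, not real values):
keytool -list -v -keystore /path/to/keystore.jks -storepass <yourpass> | head -20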
03-22-2018 10:56 AM
Hi all,
I'm trying to start NiFi Registry but it keeps failing with the error below:
2018-03-22 12:50:49,102 INFO [main] o.apache.nifi.registry.bootstrap.Command Starting Apache NiFi Registry...
2018-03-22 12:50:49,104 INFO [main] o.apache.nifi.registry.bootstrap.Command Working Directory: /usr/hdf/current/nifi-registry
2018-03-22 12:50:49,105 INFO [main] o.apache.nifi.registry.bootstrap.Command Command: /usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/bin/java -classpath /usr/hdf/current/nifi-registry/conf:/usr/hdf/current/nifi-registry/lib/shared/commons-lang3-3.5.jar:/usr/hdf/current/nifi-registry/lib/shared/nifi-registry-utils-0.1.0.3.1.1.0-35.jar:/usr/hdf/current/nifi-registry/lib/apache-el-8.5.9.1.jar:/usr/hdf/current/nifi-registry/lib/apache-jsp-8.5.9.1.jar:/usr/hdf/current/nifi-registry/lib/taglibs-standard-spec-1.2.5.jar:/usr/hdf/current/nifi-registry/lib/apache-jsp-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/apache-jstl-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/asm-5.1.jar:/usr/hdf/current/nifi-registry/lib/asm-commons-5.1.jar:/usr/hdf/current/nifi-registry/lib/asm-tree-5.1.jar:/usr/hdf/current/nifi-registry/lib/bcprov-jdk15on-1.55.jar:/usr/hdf/current/nifi-registry/lib/commons-lang3-3.5.jar:/usr/hdf/current/nifi-registry/lib/ecj-4.4.2.jar:/usr/hdf/current/nifi-registry/lib/javax.annotation-api-1.2.jar:/usr/hdf/current/nifi-registry/lib/javax.servlet-api-3.1.0.jar:/usr/hdf/current/nifi-registry/lib/jcl-over-slf4j-1.7.12.jar:/usr/hdf/current/nifi-registry/lib/jetty-annotations-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-continuation-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-http-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-io-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-jndi-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-plus-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-schemas-3.1.jar:/usr/hdf/current/nifi-registry/lib/jetty-security-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-server-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-servlet-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-servlets-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-util-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-webapp-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-xml-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jul-to-slf4j-1.7.12.jar:/usr/hdf/current/nifi-registry/lib/log4j-over-slf4j-1.7.12.jar:/usr/hdf/current/nifi-registry/lib/logback-classic-1.1.3.jar:/usr/hdf/current/nifi-registry/lib/logback-core-1.1.3.jar:/usr/hdf/current/nifi-registry/lib/slf4j-api-1.7.12.jar:/usr/hdf/current/nifi-registry/lib/nifi-registry-jetty-0.1.0.3.1.1.0-35.jar:/usr/hdf/current/nifi-registry/lib/nifi-registry-properties-0.1.0.3.1.1.0-35.jar:/usr/hdf/current/nifi-registry/lib/nifi-registry-provider-api-0.1.0.3.1.1.0-35.jar:/usr/hdf/current/nifi-registry/lib/taglibs-standard-impl-1.2.5.jar:/usr/hdf/current/nifi-registry/lib/nifi-registry-runtime-0.1.0.3.1.1.0-35.jar:/usr/hdf/current/nifi-registry/lib/nifi-registry-security-api-0.1.0.3.1.1.0-35.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx512m -Xms512m -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.registry.properties.file.path=/usr/hdf/current/nifi-registry/conf/nifi-registry.properties -Dnifi.registry.bootstrap.listen.port=29952 -Dapp=NiFiRegistry -Dorg.apache.nifi.registry.bootstrap.config.log.dir= org.apache.nifi.registry.NiFiRegistry
2018-03-22 12:50:49,116 INFO [main] o.apache.nifi.registry.bootstrap.Command Launched Apache NiFi Registry with Process ID 13082
2018-03-22 12:50:49,468 INFO [NiFi Registry Bootstrap Command Listener] o.a.n.registry.bootstrap.RunNiFiRegistry Apache NiFi Registry now running and listening for Bootstrap requests on port 19605
2018-03-22 12:50:51,604 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut Apache NiFi _ _
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut _ __ ___ __ _(_)___| |_ _ __ _ _
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut | '__/ _ \/ _` | / __| __| '__| | | |
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut | | | __/ (_| | \__ \ |_| | | |_| |
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut |_| \___|\__, |_|___/\__|_| \__, |
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut ==========|___/================|___/=
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut v0.1.0.3.1.1.0-35
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut
2018-03-22 12:50:56,380 ERROR [NiFi logging handler] org.apache.nifi.registry.StdErr Failed to start web server: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]: Invocation of init method failed; nested exception is org.flywaydb.core.api.FlywayException: Validate failed: Detected failed migration to version 1.3 (DropBucketItemNameUniqueness)
2018-03-22 12:50:56,380 ERROR [NiFi logging handler] org.apache.nifi.registry.StdErr Shutting down...
2018-03-22 12:50:57,118 INFO [main] o.a.n.registry.bootstrap.RunNiFiRegistry NiFi Registry never started. Will not restart NiFi Registry
Any ideas on this, please? "Error creating bean with name 'flywayInitializer'"
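One thing I'm considering, assuming the Registry metadata lives in the default H2 database file and assuming there is nothing in it worth keeping yet, is to stop the Registry and move the H2 file aside so the Flyway migrations run from scratch; a sketch only, the database directory and file name may differ per install:
/usr/hdf/current/nifi-registry/bin/nifi-registry.sh stop
mv /var/lib/nifi-registry/database/nifi-registry-primary.mv.db /tmp/   # path assumed, check nifi.registry.db.directory in nifi-registry.properties
/usr/hdf/current/nifi-registry/bin/nifi-registry.sh start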
Labels: Apache NiFi, Cloudera DataFlow (CDF)
01-25-2018 10:02 AM
I get exactly the same:
ambari-server upgrade
Using python /usr/bin/python
Upgrading ambari-server
INFO: Upgrade Ambari Server
INFO: Updating Ambari Server properties in ambari.properties ...
WARNING: Can not find ambari.properties.rpmsave file from previous version, skipping import of settings
INFO: Updating Ambari Server properties in ambari-env.sh ...
INFO: Can not find ambari-env.sh.rpmsave file from previous version, skipping restore of environment settings. ambari-env.sh may not include any user customization.
INFO: Fixing database objects owner
Ambari Server configured for Postgres. Confirm you have made a backup of the Ambari Server database [y/n] (y)? y
INFO: Upgrading database schema
INFO: Return code from schema upgrade command, retcode = 1
ERROR: Error executing schema upgrade, please check the server logs.
ERROR: Error output from schema upgrade command:
ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: Unable to find any CURRENT repositories.
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
Caused by: org.apache.ambari.server.AmbariException: Unable to find any CURRENT repositories.
at org.apache.ambari.server.upgrade.UpgradeCatalog260.getCurrentVersionID(UpgradeCatalog260.java:510)
at org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:194)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
... 1 more
ERROR: Ambari server upgrade failed. Please look at /var/log/ambari-server/ambari-server.log, for more details.
ERROR: Exiting with exit code 11.
REASON: Schema upgrade failed.
The patch mentioned here ("Looks like you are hitting the issue reported in the JIRA: https://issues.apache.org/jira/browse/AMBARI-22469 which is expected to be addressed in Ambari 2.6.1") seems NOT to be able to apply:
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12898511/AMBARI-22469.patch
against trunk revision .
-1 patch. The patch command could not apply the patch.
Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/12699//console
This message is automatically generated.
I tried looking for the involved files, but I cannot find them. Does anyone know how I would be able to get past this, please?
BTW, it's not the yum repos problem; mine seem to be OK:
yum repolist
Loaded plugins: fastestmirror
HDP-2.6.3.0 | 2.9 kB 00:00:00
HDP-UTILS-1.1.0.21 | 2.9 kB 00:00:00
Supporting-repo | 2.9 kB 00:00:00
ambari-ambari-2.6.0.0 | 2.9 kB 00:00:00
(1/4): HDP-2.6.3.0/primary_db | 99 kB 00:00:00
(2/4): HDP-UTILS-1.1.0.21/primary_db | 37 kB 00:00:00
(3/4): Supporting-repo/primary_db | 91 kB 00:00:00
(4/4): ambari-ambari-2.6.0.0/primary_db | 12 kB 00:00:00
Determining fastest mirrors
repo id repo name status
HDP-2.6.3.0 HDP Version - HDP-2.6.3.0 236
HDP-UTILS-1.1.0.21 HDP-UTILS Version - HDP-UTILS-1.1.0.21 64
Supporting-repo Supporting-repo 121
ambari-ambari-2.6.0.0 ambari Version - ambari-ambari-2.6.0.0 15
repolist: 436
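"Unable to find any CURRENT repositories" points at the repo/cluster version tables, so here is a quick look at what the upgrade code would find; table and column names are taken from the stack trace, and the DB name/user are assumed to be the Ambari defaults:
psql -U ambari ambari -c "SELECT * FROM repo_version;"
psql -U ambari ambari -c "SELECT * FROM cluster_version;"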
01-24-2018 09:45 AM
Thanks so much, I did that and then hit the next problem, this is like hitting your head against a solid brick wall...
ambari=# CREATE TABLE cluster_version (
ambari(# id BIGINT NOT NULL,
ambari(# repo_version_id BIGINT NOT NULL,
ambari(# cluster_id BIGINT NOT NULL,
ambari(# state VARCHAR(32) NOT NULL,
ambari(# start_time BIGINT NOT NULL,
ambari(# end_time BIGINT,
ambari(# user_name VARCHAR(32));
CREATE TABLE
ambari=#
ambari-server upgrade
Using python /usr/bin/python
Upgrading ambari-server
INFO: Upgrade Ambari Server
INFO: Updating Ambari Server properties in ambari.properties ...
WARNING: Can not find ambari.properties.rpmsave file from previous version, skipping import of settings
INFO: Updating Ambari Server properties in ambari-env.sh ...
INFO: Can not find ambari-env.sh.rpmsave file from previous version, skipping restore of environment settings. ambari-env.sh may not include any user customization.
INFO: Fixing database objects owner
Ambari Server configured for Postgres. Confirm you have made a backup of the Ambari Server database [y/n] (y)? y
INFO: Upgrading database schema
INFO: Return code from schema upgrade command, retcode = 1
ERROR: Error executing schema upgrade, please check the server logs.
ERROR: Error output from schema upgrade command:
ERROR: Exception in thread "main" java.lang.Exception: Unexpected error, upgrade failed
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:441)
Caused by: java.lang.RuntimeException: Unable to read database version
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.readSourceVersion(SchemaUpgradeHelper.java:97)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:402)
Caused by: org.postgresql.util.PSQLException: ERROR: relation "metainfo" does not exist
Position: 30
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:283)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.readSourceVersion(SchemaUpgradeHelper.java:90)
... 1 more
ERROR: Ambari server upgrade failed. Please look at /var/log/ambari-server/ambari-server.log, for more details.
ERROR: Exiting with exit code 11.
REASON: Schema upgrade failed.
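As I understand it, metainfo is the small key/value table Ambari reads its schema version from, so a quick check whether it exists at all in the database being upgraded (DB name/user assumed to be the defaults):
psql -U ambari ambari -c "\dt metainfo"
psql -U ambari ambari -c "SELECT * FROM metainfo;"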
01-24-2018 09:04 AM
I've now installed Postgres 9.4 with Ambari 2.5.0.3 and upgraded Ambari to 2.6.0.0, but when I do the upgrade I get another Postgres issue: java.lang.IllegalArgumentException: cluster_version table does not contain repo_version_id column
ambari-server upgrade
Using python /usr/bin/python
Upgrading ambari-server
INFO: Upgrade Ambari Server
INFO: Updating Ambari Server properties in ambari.properties ...
INFO: Updating Ambari Server properties in ambari-env.sh ...
WARNING: Original file ambari-env.sh kept
INFO: Fixing database objects owner
Ambari Server configured for Postgres. Confirm you have made a backup of the Ambari Server database [y/n] (y)? y
INFO: Upgrading database schema
INFO: Return code from schema upgrade command, retcode = 1
ERROR: Error executing schema upgrade, please check the server logs.
ERROR: Error output from schema upgrade command:
ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: cluster_version table does not contain repo_version_id column
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
Caused by: java.lang.IllegalArgumentException: cluster_version table does not contain repo_version_id column
at org.apache.ambari.server.orm.DBAccessorImpl.getIntColumnValues(DBAccessorImpl.java:1536)
at org.apache.ambari.server.upgrade.UpgradeCatalog260.getCurrentVersionID(UpgradeCatalog260.java:507)
at org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:194)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
... 1 more
ERROR: Ambari server upgrade failed. Please look at /var/log/ambari-server/ambari-server.log, for more details.
ERROR: Exiting with exit code 11.
REASON: Schema upgrade failed.
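For what it's worth, the presence of the column can be checked (and, with a DB backup in hand, added) by hand; the BIGINT type here is a guess based on the other id columns:
psql -U ambari ambari -c "\d cluster_version"
psql -U ambari ambari -c "ALTER TABLE cluster_version ADD COLUMN repo_version_id BIGINT;"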
01-23-2018 11:22 AM
Hi all, I've upgraded Postgres 9.5 to Postgres 10.1 on one of my lab servers. Ambari works fine with Postgres 10, but I hit a problem when I do the upgrade from version 2.5.0.3 to version 2.6.0.0:
yum info ambari-server
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Installed Packages
Name : ambari-server
Arch : x86_64
Version : 2.5.0.3
Release : 7
Size : 722 M
Repo : installed
Summary : Ambari Server
URL : http://www.apache.org
License : (c) Apache Software Foundation
Description : Maven Recipe: RPM Package.
Available Packages
Name : ambari-server
Arch : x86_64
Version : 2.6.0.0
Release : 267
Size : 712 M
Repo : ambari-ambari-2.6.0.0
Summary : Ambari Server
URL : http://www.apache.org
License : (c) Apache Software Foundation
Description : Maven Recipe: RPM Package.
When I try to do the upgrade I get a "Column t1.tgconstrname does not exist" error; it seems like it might be a Postgres 10 problem? https://liquibase.jira.com/browse/CORE-3135
See the logfile below:
23 Jan 2018 12:35:00,848 INFO [main] DBAccessorImpl:874 - Executing query: DELETE FROM upgrade_group
23 Jan 2018 12:35:00,851 INFO [main] DBAccessorImpl:874 - Executing query: DELETE FROM upgrade
23 Jan 2018 12:35:00,855 INFO [main] DBAccessorImpl:874 - Executing query: ALTER TABLE upgrade DROP COLUMN to_version
23 Jan 2018 12:35:00,863 INFO [main] DBAccessorImpl:874 - Executing query: ALTER TABLE upgrade DROP COLUMN from_version
23 Jan 2018 12:35:00,872 INFO [main] DBAccessorImpl:874 - Executing query: ALTER TABLE upgrade ADD from_repo_version_id BIGINT NOT NULL
23 Jan 2018 12:35:00,888 ERROR [main] SchemaUpgradeHelper:202 - Upgrade failed.
org.postgresql.util.PSQLException: ERROR: column t1.tgconstrname does not exist
Position: 113
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:283)
at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getImportedExportedKeys(AbstractJdbc2DatabaseMetaData.java:3580)
at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getCrossReference(AbstractJdbc2DatabaseMetaData.java:3894)
at org.apache.ambari.server.orm.DBAccessorImpl.tableHasForeignKey(DBAccessorImpl.java:404)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:509)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:482)
at org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:181)
at org.apache.ambari.server.upgrade.UpgradeCatalog252.executeDDLUpdates(UpgradeCatalog252.java:122)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
23 Jan 2018 12:35:00,893 ERROR [main] SchemaUpgradeHelper:437 - Exception occurred during upgrade, failed
org.apache.ambari.server.AmbariException: ERROR: column t1.tgconstrname does not exist
Position: 113
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
Caused by: org.postgresql.util.PSQLException: ERROR: column t1.tgconstrname does not exist
Position: 113
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:283)
at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getImportedExportedKeys(AbstractJdbc2DatabaseMetaData.java:3580)
at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getCrossReference(AbstractJdbc2DatabaseMetaData.java:3894)
at org.apache.ambari.server.orm.DBAccessorImpl.tableHasForeignKey(DBAccessorImpl.java:404)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:509)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:482)
Please, I need some help; any suggestions?
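My working theory is the old bundled JDBC driver, since pg_trigger.tgconstrname was dropped in newer PostgreSQL releases and the stack trace shows the driver's metadata query failing. A hedged sketch of pointing Ambari at a newer driver (the jar version is just an example, and I'm not certain the schema-upgrade path picks this up):
wget -O /tmp/postgresql-42.2.5.jar https://jdbc.postgresql.org/download/postgresql-42.2.5.jar
ambari-server setup --jdbc-db=postgres --jdbc-driver=/tmp/postgresql-42.2.5.jar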
Labels: Apache Ambari
10-18-2017 07:56 AM
Hi all, I get an error when trying to start LLAP / HiveServer2; see the output log below:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 681, in <module>
HiveServerInteractive().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 314, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 229, in restart_llap
self._llap_start(env)
File "/var/lib/ambari-agent/cache/common-services/HIVE/0.12.0.2.0/package/scripts/hive_server_interactive.py", line 332, in _llap_start
code, output, error = shell.checked_call(cmd, user=params.hive_user, quiet = True, stderr=subprocess.PIPE, logoutput=True)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/hdp/current/hive-server2-hive2/bin/hive --service llap --slider-am-container-mb 512 --size 13824m --cache 1536m --xmx 9830m --loglevel INFO --output /var/lib/ambari-agent/tmp/llap-slider2017-10-18_07-47-53 --slider-placement 0 --skiphadoopversion --skiphbasecp --instances 4 --logger query-routing --args " -XX:+AlwaysPreTouch -Xss512k -XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:InitiatingHeapOccupancyPercent=40 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=200 -XX:MetaspaceSize=1024m"' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.0.3-8/hive2/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.0.3-8/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
WARN cli.LlapServiceDriver: Ignoring unknown llap server parameter: [hive.aux.jars.path]
WARN cli.LlapServiceDriver: Java versions might not match : JAVA_HOME=[/usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.141-1.b16.el7_3.x86_64],process jre=[/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.141-1.b16.el7_3.x86_64/jre]
WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
WARN conf.HiveConf: HiveConf hive.llap.daemon.vcpus.per.instance expects INT type value
Traceback (most recent call last):
File "/usr/hdp/2.6.0.3-8/hive2/scripts/llap/slider/package.py", line 201, in <module>
main(sys.argv[1:])
File "/usr/hdp/2.6.0.3-8/hive2/scripts/llap/slider/package.py", line 153, in main
os.makedirs(output)
File "/usr/lib64/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/var/lib/ambari-agent/tmp/llap-slider2017-10-18_07-47-53'
The permissions seem to be fine; Ambari is run as a non-root user.
[root@xxx scripts]# ls -ltr /var/lib/ambari-agent
total 76
-rwxr-xr-x. 1 ambari ambari 2326 Apr 3 2017 upgrade_agent_configs.py
-rwxr-xr-x. 1 ambari ambari 6012 Apr 3 2017 install-helper.sh
-rwxr-xr-x. 1 ambari ambari 1345 Apr 3 2017 ambari-sudo.sh
-rwxr-xr-x. 1 ambari ambari 1123 Apr 3 2017 ambari-env.sh
drwxr-xr-x. 2 ambari ambari 6 Apr 3 2017 lib
drwxr-xr-x. 2 ambari ambari 6 Apr 3 2017 keys
-rwxr-xr-x. 1 ambari ambari 2235 Apr 3 2017 upgrade_agent_configs.pyo
drwxr-xr-x. 2 ambari ambari 51 Aug 7 09:24 tools
drwxr-xr-x. 9 ambari ambari 4096 Aug 7 13:15 cache
drwxr-xr-x. 4 ambari ambari 27 Aug 7 13:51 cred
drwxrwxr-x. 3 ambari ambari 4096 Oct 18 09:19 tmp
drwxr-xr-x. 4 ambari ambari 24576 Oct 18 11:23 data
[root@xxx scripts]# ls -ltr /var/lib/ambari-agent/tmp
total 468
-r-xr-xr-x. 1 ambari ambari 1556 Aug 7 13:49 changeUid.sh
-rw-r--r--. 1 ambari ambari 1931 Aug 7 13:51 DBConnectionVerification.jar
-rw-r--r--. 1 ambari ambari 11902 Aug 7 13:55 CredentialUtil.jar
-rw-r--r--. 1 ambari ambari 446067 Aug 7 13:55 postgresql-jdbc.jar
-rwxr-xr-x. 1 ambari ambari 1026 Aug 7 13:55 start_hiveserver2_script
drwxrwxrwt. 3 hdfs hadoop 4096 Oct 13 12:38 hadoop_java_io_tmpdir
-rwxr-xr-x. 1 ambari ambari 1079 Oct 18 09:19 start_hiveserver2_interactive_script
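Actually, looking at that listing again: tmp is mode 775 and owned by ambari:ambari, and the LLAP package step runs os.makedirs as the hive user (params.hive_user in the traceback above), so unless hive is in the ambari group the mkdir would be denied even though the listing "looks" fine. A quick way to test that theory:
sudo -u hive mkdir /var/lib/ambari-agent/tmp/perm-test && echo OK
sudo -u hive rmdir /var/lib/ambari-agent/tmp/perm-test
id hive    # is hive in the ambari group?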
[root@xxx scripts]# cat /var/lib/ambari-agent/tmp/start_hiveserver2_
[root@xxx scripts]# cat /var/lib/ambari-agent/tmp/start_hiveserver2_interactive_script
#
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
#
HIVE_SERVER2_INTERACTIVE_OPTS=" -hiveconf hive.log.file=hiveserver2Interactive.log -hiveconf hive.log.dir=$5"
HIVE_INTERACTIVE_CONF_DIR=$4 /usr/hdp/current/hive-server2-hive2/bin/hiveserver2 -hiveconf hive.metastore.uris=" " ${HIVE_SERVER2_INTERACTIVE_OPTS} > $1 2> $2 &
echo $!|cat>$3
[root@xxx scripts]#
Does anyone have any ideas of what I can look at, please?
Labels: Apache Hive
02-08-2017 08:31 AM
Hi @Josh Elser, I made the changes on the OS disabling IPv6 and that seems to have done the trick, thanks so much for the suggestion.
[root@server02 ~]# vi /etc/sysctl.conf
[root@server02 ~]# sysctl -p
net.ipv4.tcp_keepalive_time = 300
net.ipv4.ip_local_port_range = 1024 65000
fs.file-max = 64000
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
[root@server02 ~]# systemctl restart network
[root@server02 ~]#
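For reference, the JVM-side alternative that was suggested, in case disabling IPv6 OS-wide is not an option for someone else; something like this in hbase-env (a sketch, not what I ended up needing):
export HBASE_OPTS="$HBASE_OPTS -Djava.net.preferIPv4Stack=true"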
02-08-2017 05:36 AM
Thanks @Josh Elser, I'm going to disable IPv6 at the OS level and I'll try the -Djava.net.preferIPv4Stack=true as well. One thing though: you say e.g. for "10.0.0.1", generate "1.0.0.10.in-addr.arpa". The way I understand the rDNS lookup is that it will swap the first two and the last two octets, i.e. 192.168.1.101 will be 168.192.101.1. See my DNS entries below; I'm thinking that if the resolve is like "10.0.0.1" generating "1.0.0.10.in-addr.arpa", my entries will not be "hit" and that might be the problem?
zone "168.192.in-addr.arpa" {
type master;
file "/etc/named/zones/db.192.168"; # 192.168.1 subnet
};
[root@server01 zones]# cat db.192.168
$TTL 604800
@ IN SOA server01int.xxxx.com. admin.xxxx.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS server01int.xxxx.com.
IN NS server02int.xxxx.com.
; PTR Records
101.1 IN PTR server01int.xxxx.com. ; 192.168.1.101
102.1 IN PTR server02int.xxxx.com. ; 192.168.1.102
103.1 IN PTR server03int.xxxx.com. ; 192.168.1.103
104.1 IN PTR server04int.xxxx.com. ; 192.168.1.104
105.1 IN PTR server05int.xxxx.com. ; 192.168.1.105
106.1 IN PTR server06int.xxxx.com. ; 192.168.1.106
[root@server01 zones]#
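The PTR records themselves can also be double-checked directly with dig (it ships with bind-utils):
dig -x 192.168.1.101 +short    # should return server01int.xxxx.com.
dig -x 192.168.1.102 +short    # should return server02int.xxxx.com.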
02-07-2017 10:27 AM
Hi @Josh Elser, I had a look at the document, but I just cannot seem to find the problem. I have gone so far as to set up my own bind (DNS) server on one of the servers in the cluster. When I do nslookup with the internal IP, external IP, internal hostname and external hostname, they are all resolved. The problem I think is two-fold: when I specify hbase.master.dns.interface=eno2 and hbase.regionserver.dns.interface=eno2, I get the following error (which seems to be documented all over):
2017-02-07 11:23:00,418 INFO [main] util.ServerCommandLine: vmName=OpenJDK 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.111-b15
2017-02-07 11:23:00,418 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, -Dhdp.version=2.5.3.0-37, -XX:+UseConcMarkSweepGC, -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, -Djava.io.tmpdir=/tmp, -verbose:gc, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201702071122, -Xmx1024m, -Dhbase.log.dir=/var/log/hbase, -Dhbase.log.file=hbase-hbase-master-server01.xxxx.com.log, -Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.., -Dhbase.id.str=hbase, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=:/usr/hdp/2.5.3.0-37/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.3.0-37/hadoop/lib/native, -Dhbase.security.logger=INFO,RFAS]
2017-02-07 11:23:00,549 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2515)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:235)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2529)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.util.DNS.getDefaultHost(DNS.java:53)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getHostname(RSRpcServices.java:922)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:867)
at org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:230)
at org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:581)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:540)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:411)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2510)
... 5 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
at org.apache.hadoop.net.DNS.reverseDns(DNS.java:82)
at org.apache.hadoop.net.DNS.getHosts(DNS.java:253)
at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:366)
... 21 more
When I take these parameters out, the Active Master and Standby Master start up, but on the external hostname and IP address, and the alert says that it is trying to connect to the internal hostname and internal IP address:
[root@server01 hbase]# netstat -anp | grep 16000
tcp6 0 0 172.28.200.198:16000 :::* LISTEN 17293/java
tcp6 0 0 172.28.200.198:30230 172.28.200.214:16000 ESTABLISHED 17898/java
[root@server01 hbase]#
Connection failed: [Errno 111] Connection refused to server01int.xxxx.com:16000
The ifconfig seems to be correct, eno1 is external and eno2 is internal, and all the /etc/hosts files contain all the servers in the cluster:
[root@server01 hbase]# ifconfig -a
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.28.200.198 netmask 255.255.255.0 broadcast 172.28.200.255
inet6 fe80::ec4:7aff:fecd:f1f0 prefixlen 64 scopeid 0x20<link>
ether 0c:c4:7a:cd:f1:f0 txqueuelen 1000 (Ethernet)
RX packets 1559331 bytes 1448481094 (1.3 GiB)
RX errors 0 dropped 120 overruns 0 frame 0
TX packets 966299 bytes 324828255 (309.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xc7500000-c757ffff
eno2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.101 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::ec4:7aff:fecd:f1f1 prefixlen 64 scopeid 0x20<link>
ether 0c:c4:7a:cd:f1:f1 txqueuelen 1000 (Ethernet)
RX packets 17758610 bytes 8386323271 (7.8 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 19826227 bytes 15357623455 (14.3 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xc7400000-c747ffff
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 25674869 bytes 14514139121 (13.5 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 25674869 bytes 14514139121 (13.5 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@server01 hbase]#
[root@server03 hbase]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.28.200.214 server03.xxxx.com server03
192.168.1.103 server03int.xxxx.com server03int
# Entries for Ambari on internal IPs
192.168.1.106 server06int.xxxx.com server06int
192.168.1.105 server05int.xxxx.com server05int
192.168.1.104 server04int.xxxx.com server04int
192.168.1.101 server01int.xxxx.com server01int
192.168.1.102 server02int.xxxx.com server02int
192.168.1.103 server03int.xxxx.com server03int
# End-Entries for Ambari on internal IPs
[root@server03 hbase]#
nslookup resolves with no problem:
[root@server03 hbase]# nslookup
> server01
Server: 192.168.1.101
Address: 192.168.1.101#53
Name: server01.xxxx.com
Address: 172.28.200.198
> server01int
Server: 192.168.1.101
Address: 192.168.1.101#53
Name: server01int.xxxx.com
Address: 192.168.1.101
> server01.xxxx.com
Server: 192.168.1.101
Address: 192.168.1.101#53
Name: server01.xxxx.com
Address: 172.28.200.198
> server01int.xxxx.com
Server: 192.168.1.101
Address: 192.168.1.101#53
Name: server01int.xxxx.com
Address: 192.168.1.101
> 192.168.1.101
Server: 192.168.1.101
Address: 192.168.1.101#53
101.1.168.192.in-addr.arpa name = server01int.xxxx.com.
> 172.28.200.198
Server: 192.168.1.101
Address: 192.168.1.101#53
198.200.28.172.in-addr.arpa name = server01.xxxx.com.
> exit
[root@server03 hbase]#
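Also, since what the OS/JVM resolver returns can differ from an interactive nslookup against my bind server, a couple of checks I run on each node (plain shell, nothing HBase-specific):
hostname -f                        # what the OS reports as the canonical name
getent hosts 192.168.1.101         # what the system resolver returns for the internal IP
getent hosts 172.28.200.198        # and for the external IP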
Any idea of what I'm doing wrong here, please? Regards
02-07-2017 06:13 AM
Thanks Josh, let me have a read of this. It is confirmed: my bind address is set to 0.0.0.0.
02-06-2017 07:56 AM
Hi, I have a cluster of 6 servers, each with 4 NIC interfaces: 2 of the NICs on each server connect to the outside world and the other 2 on each server connect internally to the other servers in the cluster using switches. All the internal traffic goes over this bond, so bond0 (eno1 and eno3) is for external traffic and bond1 (eno2 and eno4) is for internal traffic. In the /etc/hosts files there are entries for the hostname that point to the external bond, and hostnames with an "int" suffix (xxxxyyint) that point to the internal bond, bond1. Everything works, but: in the config for HBase I can specify the HBase active master in the field hbase.master.hostname=xxxx01int. My question is, how do I specify the HBase Master hostname when there is more than one master? I tried something like hbase.master.hostname=xxxx01int,xxxx03int, but that does not seem to work. The alert that I'm getting says: HBase Master Process - connection failed [Errno 111] Connection refused to xxxx03int:16000. When I telnet to port 16000 from xxxx01int to xxxx03int, it only seems to work on the external IP address, not the internal IP address. It seems that the hostname command is used, and of course hostname reports the external hostname, not the internal hostname. See the sketch of the settings I've been looking at below.
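For context, the knobs I've been looking at, shown as a name=value sketch of hbase-site settings (hbase.master.hostname appears to be a per-master value, so I assume a comma-separated list was never going to work; the ipc.address properties are ones I've seen referenced for multihomed setups but have not verified on this HBase version):
hbase.master.hostname=xxxx01int            # set per master host, not a list
hbase.master.dns.interface=bond1
hbase.regionserver.dns.interface=bond1
hbase.master.ipc.address=0.0.0.0           # bind address, if supported
hbase.regionserver.ipc.address=0.0.0.0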
Labels: Apache Ambari, Apache HBase