Member since: 02-06-2017
Posts: 17
Kudos Received: 1
Solutions: 0
03-29-2018 04:47 AM
Hi @Thomas Williams, the error seems to be caused by incorrect paths to the keystore.jks, and/or by incorrect generation of the keystore.jks. I have my NiFi Registry switched off now; I'm trying to get Ranger and NiFi to talk to each other so that Ranger can authenticate the NiFi users. It has been an uphill battle. Cheers
03-22-2018 10:56 AM
1 Kudo
Hi all,
I'm trying to start NiFi Registry, but it keeps failing with the error below:
2018-03-22 12:50:49,102 INFO [main] o.apache.nifi.registry.bootstrap.Command Starting Apache NiFi Registry...
2018-03-22 12:50:49,104 INFO [main] o.apache.nifi.registry.bootstrap.Command Working Directory: /usr/hdf/current/nifi-registry
2018-03-22 12:50:49,105 INFO [main] o.apache.nifi.registry.bootstrap.Command Command: /usr/lib/jvm/jre-1.8.0-openjdk-1.8.0.161-0.b14.el7_4.x86_64/bin/java -classpath /usr/hdf/current/nifi-registry/conf:/usr/hdf/current/nifi-registry/lib/shared/commons-lang3-3.5.jar:/usr/hdf/current/nifi-registry/lib/shared/nifi-registry-utils-0.1.0.3.1.1.0-35.jar:/usr/hdf/current/nifi-registry/lib/apache-el-8.5.9.1.jar:/usr/hdf/current/nifi-registry/lib/apache-jsp-8.5.9.1.jar:/usr/hdf/current/nifi-registry/lib/taglibs-standard-spec-1.2.5.jar:/usr/hdf/current/nifi-registry/lib/apache-jsp-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/apache-jstl-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/asm-5.1.jar:/usr/hdf/current/nifi-registry/lib/asm-commons-5.1.jar:/usr/hdf/current/nifi-registry/lib/asm-tree-5.1.jar:/usr/hdf/current/nifi-registry/lib/bcprov-jdk15on-1.55.jar:/usr/hdf/current/nifi-registry/lib/commons-lang3-3.5.jar:/usr/hdf/current/nifi-registry/lib/ecj-4.4.2.jar:/usr/hdf/current/nifi-registry/lib/javax.annotation-api-1.2.jar:/usr/hdf/current/nifi-registry/lib/javax.servlet-api-3.1.0.jar:/usr/hdf/current/nifi-registry/lib/jcl-over-slf4j-1.7.12.jar:/usr/hdf/current/nifi-registry/lib/jetty-annotations-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-continuation-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-http-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-io-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-jndi-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-plus-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-schemas-3.1.jar:/usr/hdf/current/nifi-registry/lib/jetty-security-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-server-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-servlet-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-servlets-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-util-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-webapp-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jetty-xml-9.4.3.v20170317.jar:/usr/hdf/current/nifi-registry/lib/jul-to-slf4j-1.7.12.jar:/usr/hdf/current/nifi-registry/lib/log4j-over-slf4j-1.7.12.jar:/usr/hdf/current/nifi-registry/lib/logback-classic-1.1.3.jar:/usr/hdf/current/nifi-registry/lib/logback-core-1.1.3.jar:/usr/hdf/current/nifi-registry/lib/slf4j-api-1.7.12.jar:/usr/hdf/current/nifi-registry/lib/nifi-registry-jetty-0.1.0.3.1.1.0-35.jar:/usr/hdf/current/nifi-registry/lib/nifi-registry-properties-0.1.0.3.1.1.0-35.jar:/usr/hdf/current/nifi-registry/lib/nifi-registry-provider-api-0.1.0.3.1.1.0-35.jar:/usr/hdf/current/nifi-registry/lib/taglibs-standard-impl-1.2.5.jar:/usr/hdf/current/nifi-registry/lib/nifi-registry-runtime-0.1.0.3.1.1.0-35.jar:/usr/hdf/current/nifi-registry/lib/nifi-registry-security-api-0.1.0.3.1.1.0-35.jar -Dorg.apache.jasper.compiler.disablejsr199=true -Xmx512m -Xms512m -Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true -Djava.protocol.handler.pkgs=sun.net.www.protocol -Dnifi.registry.properties.file.path=/usr/hdf/current/nifi-registry/conf/nifi-registry.properties -Dnifi.registry.bootstrap.listen.port=29952 -Dapp=NiFiRegistry -Dorg.apache.nifi.registry.bootstrap.config.log.dir= org.apache.nifi.registry.NiFiRegistry
2018-03-22 12:50:49,116 INFO [main] o.apache.nifi.registry.bootstrap.Command Launched Apache NiFi Registry with Process ID 13082
2018-03-22 12:50:49,468 INFO [NiFi Registry Bootstrap Command Listener] o.a.n.registry.bootstrap.RunNiFiRegistry Apache NiFi Registry now running and listening for Bootstrap requests on port 19605
2018-03-22 12:50:51,604 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut Apache NiFi _ _
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut _ __ ___ __ _(_)___| |_ _ __ _ _
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut | '__/ _ \/ _` | / __| __| '__| | | |
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut | | | __/ (_| | \__ \ |_| | | |_| |
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut |_| \___|\__, |_|___/\__|_| \__, |
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut ==========|___/================|___/=
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut v0.1.0.3.1.1.0-35
2018-03-22 12:50:51,605 INFO [NiFi logging handler] org.apache.nifi.registry.StdOut
2018-03-22 12:50:56,380 ERROR [NiFi logging handler] org.apache.nifi.registry.StdErr Failed to start web server: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]: Invocation of init method failed; nested exception is org.flywaydb.core.api.FlywayException: Validate failed: Detected failed migration to version 1.3 (DropBucketItemNameUniqueness)
2018-03-22 12:50:56,380 ERROR [NiFi logging handler] org.apache.nifi.registry.StdErr Shutting down...
2018-03-22 12:50:57,118 INFO [main] o.a.n.registry.bootstrap.RunNiFiRegistry NiFi Registry never started. Will not restart NiFi Registry
Any ideas on this, please? The key error is "Error creating bean with name 'flywayInitializer'".
Labels:
- Apache NiFi
- Cloudera DataFlow (CDF)
01-24-2018 09:45 AM
Thanks so much, I did that and then hit the next problem; this is like hitting your head against a solid brick wall...

ambari=# CREATE TABLE cluster_version (
ambari(# id BIGINT NOT NULL,
ambari(# repo_version_id BIGINT NOT NULL,
ambari(# cluster_id BIGINT NOT NULL,
ambari(# state VARCHAR(32) NOT NULL,
ambari(# start_time BIGINT NOT NULL,
ambari(# end_time BIGINT,
ambari(# user_name VARCHAR(32));
CREATE TABLE
ambari=#

ambari-server upgrade
Using python /usr/bin/python
Upgrading ambari-server
INFO: Upgrade Ambari Server
INFO: Updating Ambari Server properties in ambari.properties ...
WARNING: Can not find ambari.properties.rpmsave file from previous version, skipping import of settings
INFO: Updating Ambari Server properties in ambari-env.sh ...
INFO: Can not find ambari-env.sh.rpmsave file from previous version, skipping restore of environment settings. ambari-env.sh may not include any user customization.
INFO: Fixing database objects owner
Ambari Server configured for Postgres. Confirm you have made a backup of the Ambari Server database [y/n] (y)? y
INFO: Upgrading database schema
INFO: Return code from schema upgrade command, retcode = 1
ERROR: Error executing schema upgrade, please check the server logs.
ERROR: Error output from schema upgrade command:
ERROR: Exception in thread "main" java.lang.Exception: Unexpected error, upgrade failed
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:441)
Caused by: java.lang.RuntimeException: Unable to read database version
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.readSourceVersion(SchemaUpgradeHelper.java:97)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:402)
Caused by: org.postgresql.util.PSQLException: ERROR: relation "metainfo" does not exist
Position: 30
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:283)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.readSourceVersion(SchemaUpgradeHelper.java:90)
... 1 more
ERROR: Ambari server upgrade failed. Please look at /var/log/ambari-server/ambari-server.log, for more details.
ERROR: Exiting with exit code 11. REASON: Schema upgrade failed.
01-24-2018 09:04 AM
I've now installed Postgres 9.4 with Ambari 2.5.0.3 and upgraded Ambari to 2.6.0.0, but when I do the upgrade I get another Postgres issue: java.lang.IllegalArgumentException: cluster_version table does not contain repo_version_id column

ambari-server upgrade
Using python /usr/bin/python
Upgrading ambari-server
INFO: Upgrade Ambari Server
INFO: Updating Ambari Server properties in ambari.properties ...
INFO: Updating Ambari Server properties in ambari-env.sh ...
WARNING: Original file ambari-env.sh kept
INFO: Fixing database objects owner
Ambari Server configured for Postgres. Confirm you have made a backup of the Ambari Server database [y/n] (y)? y
INFO: Upgrading database schema
INFO: Return code from schema upgrade command, retcode = 1
ERROR: Error executing schema upgrade, please check the server logs.
ERROR: Error output from schema upgrade command:
ERROR: Exception in thread "main" org.apache.ambari.server.AmbariException: cluster_version table does not contain repo_version_id column
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
Caused by: java.lang.IllegalArgumentException: cluster_version table does not contain repo_version_id column
at org.apache.ambari.server.orm.DBAccessorImpl.getIntColumnValues(DBAccessorImpl.java:1536)
at org.apache.ambari.server.upgrade.UpgradeCatalog260.getCurrentVersionID(UpgradeCatalog260.java:507)
at org.apache.ambari.server.upgrade.UpgradeCatalog260.executeDDLUpdates(UpgradeCatalog260.java:194)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
... 1 more
ERROR: Ambari server upgrade failed. Please look at /var/log/ambari-server/ambari-server.log, for more details.
ERROR: Exiting with exit code 11. REASON: Schema upgrade failed.
01-23-2018 11:22 AM
Hi all, I've upgraded Postgres 9.5 to Postgres 10.1 on one of my lab servers. Ambari works fine with Postgres 10, but the upgrade from version 2.5.0.3 to version 2.6.0.0 fails.

yum info ambari-server
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Installed Packages
Name : ambari-server
Arch : x86_64
Version : 2.5.0.3
Release : 7
Size : 722 M
Repo : installed
Summary : Ambari Server
URL : http://www.apache.org
License : (c) Apache Software Foundation
Description : Maven Recipe: RPM Package.
Available Packages
Name : ambari-server
Arch : x86_64
Version : 2.6.0.0
Release : 267
Size : 712 M
Repo : ambari-ambari-2.6.0.0
Summary : Ambari Server
URL : http://www.apache.org
License : (c) Apache Software Foundation
Description : Maven Recipe: RPM Package.

When I try to do the upgrade I get a "Column t1.tgconstrname does not exist" error. It seems like it might be a Postgres 10 problem: https://liquibase.jira.com/browse/CORE-3135

See the logfile below:

23 Jan 2018 12:35:00,848 INFO [main] DBAccessorImpl:874 - Executing query: DELETE FROM upgrade_group
23 Jan 2018 12:35:00,851 INFO [main] DBAccessorImpl:874 - Executing query: DELETE FROM upgrade
23 Jan 2018 12:35:00,855 INFO [main] DBAccessorImpl:874 - Executing query: ALTER TABLE upgrade DROP COLUMN to_version
23 Jan 2018 12:35:00,863 INFO [main] DBAccessorImpl:874 - Executing query: ALTER TABLE upgrade DROP COLUMN from_version
23 Jan 2018 12:35:00,872 INFO [main] DBAccessorImpl:874 - Executing query: ALTER TABLE upgrade ADD from_repo_version_id BIGINT NOT NULL
23 Jan 2018 12:35:00,888 ERROR [main] SchemaUpgradeHelper:202 - Upgrade failed.
org.postgresql.util.PSQLException: ERROR: column t1.tgconstrname does not exist
Position: 113
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:283)
at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getImportedExportedKeys(AbstractJdbc2DatabaseMetaData.java:3580)
at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getCrossReference(AbstractJdbc2DatabaseMetaData.java:3894)
at org.apache.ambari.server.orm.DBAccessorImpl.tableHasForeignKey(DBAccessorImpl.java:404)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:509)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:482)
at org.apache.ambari.server.upgrade.UpgradeCatalog252.addRepositoryColumnsToUpgradeTable(UpgradeCatalog252.java:181)
at org.apache.ambari.server.upgrade.UpgradeCatalog252.executeDDLUpdates(UpgradeCatalog252.java:122)
at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:923)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:200)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
23 Jan 2018 12:35:00,893 ERROR [main] SchemaUpgradeHelper:437 - Exception occurred during upgrade, failed
org.apache.ambari.server.AmbariException: ERROR: column t1.tgconstrname does not exist
Position: 113
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:203)
at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:418)
Caused by: org.postgresql.util.PSQLException: ERROR: column t1.tgconstrname does not exist
Position: 113
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2161)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1890)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:559)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:283)
at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getImportedExportedKeys(AbstractJdbc2DatabaseMetaData.java:3580)
at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getCrossReference(AbstractJdbc2DatabaseMetaData.java:3894)
at org.apache.ambari.server.orm.DBAccessorImpl.tableHasForeignKey(DBAccessorImpl.java:404)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:509)
at org.apache.ambari.server.orm.DBAccessorImpl.addFKConstraint(DBAccessorImpl.java:482)

Please, I need some help. Any suggestions?
Labels:
- Apache Ambari
02-08-2017 08:31 AM
Hi @Josh Elser, I made the changes on the OS disabling IPv6 and that seems to have done the trick. Thanks so much for the suggestion.

[root@server02 ~]# vi /etc/sysctl.conf
[root@server02 ~]# sysctl -p
net.ipv4.tcp_keepalive_time = 300
net.ipv4.ip_local_port_range = 1024 65000
fs.file-max = 64000
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
[root@server02 ~]# systemctl restart network
[root@server02 ~]#
02-08-2017 05:36 AM
Thanks @Josh Elser, I'm going to disable IPv6 at the OS level and I'll try -Djava.net.preferIPv4Stack=true as well. One thing though: you say e.g. for "10.0.0.1", generate "1.0.0.10.in-addr.arpa". The way I understand the rDNS lookup is that it will swap the first two and the last two octets, i.e. 192.168.1.101 will be 168.192.101.1. See my DNS entries below; I'm thinking that if the resolve is like "10.0.0.1" generating "1.0.0.10.in-addr.arpa", my entries will not be "hit", and that might be the problem?

zone "168.192.in-addr.arpa" {
type master;
file "/etc/named/zones/db.192.168"; # 192.168.1 subnet
};

[root@server01 zones]# cat db.192.168
$TTL 604800
@ IN SOA server01int.xxxx.com. admin.xxxx.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS server01int.xxxx.com.
IN NS server02int.xxxx.com.
; PTR Records
101.1 IN PTR server01int.xxxx.com. ; 192.168.1.101
102.1 IN PTR server02int.xxxx.com. ; 192.168.1.102
103.1 IN PTR server03int.xxxx.com. ; 192.168.1.103
104.1 IN PTR server04int.xxxx.com. ; 192.168.1.104
105.1 IN PTR server05int.xxxx.com. ; 192.168.1.105
106.1 IN PTR server06int.xxxx.com. ; 192.168.1.106
[root@server01 zones]#
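For the record, my understanding above may be off: as far as I can tell, the in-addr.arpa name reverses all four octets, so with the zone "168.192.in-addr.arpa" and the record label "101.1", the full name works out as 101.1.168.192.in-addr.arpa. A quick local sketch to derive the name, no DNS server involved:

```shell
# Build the in-addr.arpa name for an IPv4 address by reversing all four
# octets; with zone "168.192.in-addr.arpa" and record "101.1", the FQDN
# works out to the same name, so the zone entries above should be hit.
ip=192.168.1.101
echo "$ip" | awk -F. '{ printf "%s.%s.%s.%s.in-addr.arpa\n", $4, $3, $2, $1 }'
# -> 101.1.168.192.in-addr.arpa
```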
02-07-2017 10:27 AM
Hi @Josh Elser, I had a look at the document, but I just cannot seem to find the problem. I have gone so far as to set up my own BIND (DNS) server on one of the servers in the cluster. When I do nslookup with the internal IP, external IP, internal hostname and external hostname, they all resolve. The problem, I think, is two-fold: when I specify hbase.master.dns.interface=eno2 and hbase.regionserver.dns.interface=eno2, I get the following error (which seems to be documented all over):

2017-02-07 11:23:00,418 INFO [main] util.ServerCommandLine: vmName=OpenJDK 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.111-b15
2017-02-07 11:23:00,418 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, -Dhdp.version=2.5.3.0-37, -XX:+UseConcMarkSweepGC, -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, -Djava.io.tmpdir=/tmp, -verbose:gc, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201702071122, -Xmx1024m, -Dhbase.log.dir=/var/log/hbase, -Dhbase.log.file=hbase-hbase-master-server01.xxxx.com.log, -Dhbase.home.dir=/usr/hdp/current/hbase-master/bin/.., -Dhbase.id.str=hbase, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=:/usr/hdp/2.5.3.0-37/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.3.0-37/hadoop/lib/native, -Dhbase.security.logger=INFO,RFAS]
2017-02-07 11:23:00,549 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2515)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:235)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2529)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hbase.util.DNS.getDefaultHost(DNS.java:53)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getHostname(RSRpcServices.java:922)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:867)
at org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:230)
at org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:581)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:540)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:411)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2510)
... 5 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 3
at org.apache.hadoop.net.DNS.reverseDns(DNS.java:82)
at org.apache.hadoop.net.DNS.getHosts(DNS.java:253)
at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:366)
... 21 more

When I take these parameters out, the Active Master and Standby Master start up, but on the external hostname and IP address; the alert says that it is trying to connect to the internal hostname and internal IP address.

[root@server01 hbase]# netstat -anp | grep 16000
tcp6 0 0 172.28.200.198:16000 :::* LISTEN 17293/java
tcp6 0 0 172.28.200.198:30230 172.28.200.214:16000 ESTABLISHED 17898/java
[root@server01 hbase]#

Connection failed: [Errno 111] Connection refused to server01int.xxxx.com:16000

The ifconfig output seems to be correct: eno1 is external and eno2 is internal, and all the /etc/hosts files contain all the servers in the cluster.

[root@server01 hbase]# ifconfig -a
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.28.200.198 netmask 255.255.255.0 broadcast 172.28.200.255
inet6 fe80::ec4:7aff:fecd:f1f0 prefixlen 64 scopeid 0x20<link>
ether 0c:c4:7a:cd:f1:f0 txqueuelen 1000 (Ethernet)
RX packets 1559331 bytes 1448481094 (1.3 GiB)
RX errors 0 dropped 120 overruns 0 frame 0
TX packets 966299 bytes 324828255 (309.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xc7500000-c757ffff
eno2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.101 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::ec4:7aff:fecd:f1f1 prefixlen 64 scopeid 0x20<link>
ether 0c:c4:7a:cd:f1:f1 txqueuelen 1000 (Ethernet)
RX packets 17758610 bytes 8386323271 (7.8 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 19826227 bytes 15357623455 (14.3 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xc7400000-c747ffff
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 25674869 bytes 14514139121 (13.5 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 25674869 bytes 14514139121 (13.5 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@server01 hbase]#

[root@server03 hbase]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.28.200.214 server03.xxxx.com server03
192.168.1.103 server03int.xxxx.com server03int
# Entries for Ambari on internal IPs
192.168.1.106 server06int.xxxx.com server06int
192.168.1.105 server05int.xxxx.com server05int
192.168.1.104 server04int.xxxx.com server04int
192.168.1.101 server01int.xxxx.com server01int
192.168.1.102 server02int.xxxx.com server02int
192.168.1.103 server03int.xxxx.com server03int
# End-Entries for Ambari on internal IPs
[root@server03 hbase]#

nslookup resolves with no problem:

[root@server03 hbase]# nslookup
> server01
Server: 192.168.1.101
Address: 192.168.1.101#53
Name: server01.xxxx.com
Address: 172.28.200.198
> server01int
Server: 192.168.1.101
Address: 192.168.1.101#53
Name: server01int.xxxx.com
Address: 192.168.1.101
> server01.xxxx.com
Server: 192.168.1.101
Address: 192.168.1.101#53
Name: server01.xxxx.com
Address: 172.28.200.198
> server01int.xxxx.com
Server: 192.168.1.101
Address: 192.168.1.101#53
Name: server01int.xxxx.com
Address: 192.168.1.101
> 192.168.1.101
Server: 192.168.1.101
Address: 192.168.1.101#53
101.1.168.192.in-addr.arpa name = server01int.xxxx.com.
> 172.28.200.198
Server: 192.168.1.101
Address: 192.168.1.101#53
198.200.28.172.in-addr.arpa name = server01.xxxx.com.
> exit
[root@server03 hbase]#
Any idea of what I'm doing wrong here, please? Regards
02-07-2017 06:13 AM
Thanks Josh, let me have a read of this. It is confirmed: my bind address is set to 0.0.0.0.
02-06-2017 07:56 AM
Hi, I have a cluster of 6 servers, each with 4 NIC interfaces. On each server, 2 of the NICs connect to the outside world and the other 2 connect internally to the other servers in the cluster via switches. All internal traffic goes over a bond: bond0 (eno1 and eno3) is for external traffic and bond1 (eno2 and eno4) is for internal traffic. In the /etc/hosts files there are entries for the hostname that point to the external bond, and host names with an "int" suffix (xxxxyyint) that point to the internal bond, bond1. Everything works, but...

In the config for HBase I can specify the HBase active master in the field hbase.master.hostname=xxxx01int. My question is: how do I specify the HBase Master hostname when there is more than one master? I tried something like hbase.master.hostname=xxxx01int,xxxx03int, but that does not seem to work. The alert that I'm getting says:

HBase Master Process - connection failed [Errno 111] Connection refused to xxxx03int:16000

When I telnet to port 16000 from xxxx01int to xxxx03int, it only seems to work on the external IP address, not the internal IP address. It seems that the hostname command is used, and of course hostname reports the external host name, not the internal hostname.
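For anyone comparing notes: the address a daemon actually binds to is separate from what the hostname command reports, so it is worth checking the bound endpoint directly. A small self-contained sketch (port 16001 and the loopback address are placeholders; in practice you would grep for HBase's master port 16000 on each host):

```shell
# Start a short-lived listener bound to one specific address (not 0.0.0.0),
# then inspect the bound endpoint with 'ss'. Port 16001 is a placeholder.
python3 -c '
import socket, time
s = socket.socket()
s.bind(("127.0.0.1", 16001))
s.listen(1)
time.sleep(5)
' &
pid=$!
sleep 1
ss -tln | grep 16001   # the local column shows 127.0.0.1:16001, the bound address
kill "$pid"
```

If HBase shows 0.0.0.0:16000 or the external IP here instead of the internal one, that matches the connection-refused alert on the internal hostname.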
Labels:
- Apache Ambari
- Apache HBase