Support Questions

Find answers, ask questions, and share your expertise

Postgres ERROR: value too long for type character varying(255)

New Contributor

Hi,

I've been trying to install Ambari and ran into the following issue; so far I have not been able to get past it. When I try to register the host nodes in my cluster, they fail.

I have tried Ambari versions 1.7.0 through 2.4.1 with the same results. Google searches and searching this forum did not turn up anything that helped.

/var/log/ambari-server/ambari-server.log contains:

Caused by: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.6.2.v20151217-774c696): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: org.postgresql.util.PSQLException: ERROR: value too long for type character varying(255)
Error Code: 0
Call: INSERT INTO hosts (host_id, cpu_count, cpu_info, discovery_status, host_attributes, host_name, ipv4, ipv6, last_registration_time, os_arch, os_info, os_type, ph_cpu_count, public_host_name, rack_info, total_mem) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    bind => [16 parameters bound]
Query: InsertObjectQuery(org.apache.ambari.server.orm.entities.HostStateEntity@8fdfba02)
    at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:340)
    at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.processExceptionForCommError(DatabaseAccessor.java:1620)
    at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeDirectNoSelect(DatabaseAccessor.java:900)
    at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeNoSelect(DatabaseAccessor.java:964)
    at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:633)
    at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatch(ParameterizedSQLBatchWritingMechanism.java:149)
    at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.executeBatchedStatements(ParameterizedSQLBatchWritingMechanism.java:134)
    at org.eclipse.persistence.internal.databaseaccess.ParameterizedSQLBatchWritingMechanism.appendCall(ParameterizedSQLBatchWritingMechanism.java:82)
    at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:605)
    at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:560)
    at org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:2055)
    at org.eclipse.persistence.sessions.server.ClientSession.executeCall(ClientSession.java:306)
    at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:242)
    at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:228)
    at org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.insertObject(DatasourceCallQueryMechanism.java:377)
    at org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:165)
    at org.eclipse.persistence.internal.queries.StatementQueryMechanism.insertObject(StatementQueryMechanism.java:180)
    at org.eclipse.persistence.internal.queries.DatabaseQueryMechanism.insertObjectForWrite(DatabaseQueryMechanism.java:489)

Here is some information about my setup: a 3-node cluster running on OpenStack VMs.

cat /etc/redhat-release
CentOS release 6.8 (Final)

VCPUs: 4

free -m

             total       used       free     shared    buffers     cached
Mem:          7872       3290       4581          5         41       2435
-/+ buffers/cache:        812       7059
Swap:            0          0          0

Root passwordless ssh across all 3 nodes.

/etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.147.110.58 hbase17700.localhost.com hbase17700 hbase1
10.147.110.56 hbase27700.localhost.com hbase27700 hbase2
10.147.110.57 hbase37700.localhost.com hbase37700 hbase3

java -version
java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)

python.x86_64 2.6.6-64.el6

ambari-agent.x86_64 2.4.1.0-22
ambari-server.x86_64 2.4.1.0-22

1 ACCEPTED SOLUTION

New Contributor

Solved the issue. The MAC address being read from the OS was corrupted, and Ambari was trying to insert that garbage, pushing the value over 255 characters. Fixed it outside of Ambari.
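For anyone hitting the same symptom: a quick check for a mangled NIC address (assuming, as here, the garbage came from what the OS reports) is to validate each interface's MAC against the usual aa:bb:cc:dd:ee:ff form. This is just an illustrative sketch, not an Ambari tool:

```shell
# Flag any interface whose reported MAC is not in the standard
# six-octet colon form (paths under /sys/class/net are Linux-specific).
for f in /sys/class/net/*/address; do
  mac=$(cat "$f" 2>/dev/null)
  if ! printf '%s' "$mac" | grep -Eq '^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$'; then
    echo "suspect MAC in $f: $mac"
  fi
done
```

Any interface flagged here is a candidate for the corrupted value the agent picked up.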


3 REPLIES


The only relevant columns in the hosts table that are varchar(255) are host_name, cpu_info, os_arch, os_type, and rack_info.

You can try to figure out which one is exceeding the limit by printing these values on the agent, or by setting up Wireshark to inspect the traffic.
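As a rough sketch of the first option, the same candidate values can be measured directly on an agent host with standard commands (hostname, uname, and /proc/cpuinfo are stand-ins for how the agent gathers them, not the agent's actual code path):

```shell
# Print length + value for the facts that end up in the varchar(255)
# columns; anything at or over 255 characters is the likely culprit.
for val in "$(hostname -f)" "$(uname -m)" "$(uname -s)" \
           "$(awk -F: '/model name/{print $2; exit}' /proc/cpuinfo)"; do
  printf '%4d  %s\n' "${#val}" "$val"
done
```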

ambari=> select host_name, cpu_info, os_arch, os_type, rack_info from hosts;
        host_name        | cpu_info | os_arch | os_type |   rack_info
-------------------------+----------+---------+---------+---------------
 c6401.ambari.apache.org |          | x86_64  | centos6 | /default-rack
 c6402.ambari.apache.org |          | x86_64  | centos6 | /default-rack
 c6403.ambari.apache.org |          | x86_64  | centos6 | /default-rack
(3 rows)

Or you can take a DB backup, increase the size of all of the varchar columns to 512 to find the offending value, fix it, and then restore your DB to perform the agent registration again.
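If you go the column-widening route, a minimal sketch of the supported way to do it (ALTER TABLE, rather than editing pg_attribute directly) might look like this. It assumes the database is named ambari, that the column list below is the one you want to widen, and that you can run psql as the postgres superuser; take a backup first:

```shell
# Widen the varchar(255) columns on hosts to 512 so the oversized value
# can be inserted and then inspected. DB name and column list are assumptions.
for col in host_name cpu_info os_arch os_type rack_info ipv4 ipv6 public_host_name; do
  sudo -u postgres psql -d ambari \
    -c "ALTER TABLE hosts ALTER COLUMN ${col} TYPE character varying(512);"
done
```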

http://sniptools.com/databases/resize-a-column-in-a-postgresql-table-without-changing-data

New Contributor

This is a new install so the database is empty:

ambari=> select host_name, cpu_info, os_arch, os_type, rack_info from hosts;
 host_name | cpu_info | os_arch | os_type | rack_info
-----------+----------+---------+---------+-----------
(0 rows)

ambari-> \d+ hosts
                                  Table "ambari.hosts"
         Column         |           Type           | Modifiers | Storage  | Description
------------------------+--------------------------+-----------+----------+-------------
 host_id                | bigint                   | not null  | plain    |
 host_name              | character varying(255)   | not null  | extended |
 cpu_count              | integer                  | not null  | plain    |
 ph_cpu_count           | integer                  |           | plain    |
 cpu_info               | character varying(255)   | not null  | extended |
 discovery_status       | character varying(2000)  | not null  | extended |
 host_attributes        | character varying(20000) | not null  | extended |
 ipv4                   | character varying(255)   |           | extended |
 ipv6                   | character varying(255)   |           | extended |
 public_host_name       | character varying(255)   |           | extended |
 last_registration_time | bigint                   | not null  | plain    |
 os_arch                | character varying(255)   | not null  | extended |
 os_info                | character varying(1000)  | not null  | extended |
 os_type                | character varying(255)   | not null  | extended |
 rack_info              | character varying(255)   | not null  | extended |
 total_mem              | bigint                   | not null  | plain    |
Indexes:
    "pk_hosts" PRIMARY KEY, btree (host_id)
    "uq_hosts_host_name" UNIQUE, btree (host_name)
Referenced by:
    TABLE "configgrouphostmapping" CONSTRAINT "fk_cghm_host_id" FOREIGN KEY (host_id) REFERENCES hosts(host_id)
    TABLE "clusterhostmapping" CONSTRAINT "fk_clusterhostmapping_host_id" FOREIGN KEY (host_id) REFERENCES hosts(host_id)
    TABLE "hostcomponentdesiredstate" CONSTRAINT "fk_hcdesiredstate_host_id" FOREIGN KEY (host_id) REFERENCES hosts(host_id)
    TABLE "host_role_command" CONSTRAINT "fk_host_role_command_host_id" FOREIGN KEY (host_id) REFERENCES hosts(host_id)
    TABLE "host_version" CONSTRAINT "fk_host_version_host_id" FOREIGN KEY (host_id) REFERENCES hosts(host_id)
    TABLE "hostcomponentstate" CONSTRAINT "fk_hostcomponentstate_host_id" FOREIGN KEY (host_id) REFERENCES hosts(host_id)
    TABLE "hostconfigmapping" CONSTRAINT "fk_hostconfmapping_host_id" FOREIGN KEY (host_id) REFERENCES hosts(host_id)
    TABLE "topology_host_info" CONSTRAINT "fk_hostinfo_host_id" FOREIGN KEY (host_id) REFERENCES hosts(host_id)
    TABLE "hoststate" CONSTRAINT "fk_hoststate_host_id" FOREIGN KEY (host_id) REFERENCES hosts(host_id)
    TABLE "kerberos_principal_host" CONSTRAINT "fk_krb_pr_host_id" FOREIGN KEY (host_id) REFERENCES hosts(host_id)
    TABLE "serviceconfighosts" CONSTRAINT "fk_scvhosts_host_id" FOREIGN KEY (host_id) REFERENCES hosts(host_id)
Has OIDs: no

Tried increasing the size of host_name, but permission is denied:

ambari=> UPDATE pg_attribute SET atttypmod = 259+259 WHERE attrelid = 'hosts'::regclass AND attname = 'host_name';
ERROR:  permission denied for relation pg_attribute
