Failure to create shared memory segment while installing HDB (HAWQ)

Guru

The following error was received when attempting to install HDB (HAWQ) on a physical cluster. Initial attempts to change shared_buffers did not work.

Has anyone encountered this issue? I would appreciate any information on resolving it.

selecting default shared_buffers/max_fsm_pages ... 125MB/200000

creating configuration files ... ok

creating template1 database in /data/hawq/segment/base/1 ... 2016-09-02 01:26:37.330995 GMT,,,p431658,th1833756800,,,,0,,,seg-10000,,,,,"WARNING","01000","""fsync"": can not be set by the user and will be ignored.",,,,,,,,"set_config_option","guc.c",10023,ok

loading file-system persistent tables for template1 ...2016-09-02 01:26:39.914890 GMT,,,p431732,th2068359296,,,,0,,,seg-10000,,,,,"WARNING","01000","""fsync"": can not be set by the user and will be ignored.",,,,,,,,"set_config_option","guc.c",10023,2016-09-01 21:26:40.103618 EDT,,,p431732,th2068359296,,,,0,,,seg-10000,,,,,"FATAL","XX000","could not create shared memory segment: Invalid argument (pg_shmem.c:183)","Failed system call was shmget(key=1, size=506213024, 03600).","

This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter.  You can either reduce the request size or reconfigure the kernel with larger SHMMAX.  To reduce the request size (currently 506213024 bytes), reduce PostgreSQL's shared_buffers parameter (currently 4000) and/or its max_connections parameter (currently 3000).

If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.

The PostgreSQL documentation contains more information about shared memory configuration.",,,,,,"InternalIpcMemoryCreate","pg_shmem.c",183,
1  0x87463a postgres errstart + 0x22a
2  0x74c5e6 postgres <symbol not found> + 0x74c5e6
3  0x74c7cd postgres PGSharedMemoryCreate + 0x3d
4  0x7976b6 postgres CreateSharedMemoryAndSemaphores + 0x336
5  0x880489 postgres BaseInit + 0x19
6  0x7b03bc postgres PostgresMain + 0xdbc
7  0x6c07d5 postgres main + 0x535
8  0x351a21ed1d libc.so.6 __libc_start_main + 0xfd
9  0x4a14e9 postgres <symbol not found> + 0x4a14e9
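Note the arithmetic in the failed call: shmget() asked for 506,213,024 bytes, and shmget() returns "Invalid argument" (EINVAL) when the requested segment size exceeds the kernel's SHMMAX. The limits currently in force can be checked on any host with standard Linux tools:

ipcs -lm                              # print the kernel's shared memory limits
sysctl kernel.shmmax kernel.shmall    # print the current SHMMAX / SHMALL settings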
4 Replies

Re: Failure to create shared memory segment while installing HDB (HAWQ)

New Contributor
@gkeys

Please validate that these system parameters are set on every machine: http://hdb.docs.pivotal.io/hdb/install/install-cli.html#topic_eqn_fc4_15
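To check the values on every node at once, here is a minimal sketch, assuming passwordless ssh and a hypothetical hostfile /tmp/hawq_hosts listing one hostname per line:

for h in $(cat /tmp/hawq_hosts); do    # /tmp/hawq_hosts is a hypothetical hostfile
    ssh "$h" "hostname; sysctl kernel.shmmax kernel.shmall"
done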

Re: Failure to create shared memory segment while installing HDB (HAWQ)

Contributor

If you set the parameters through the Ambari HAWQ config page and restart the service, Ambari will take care of applying them on all nodes.

Re: Failure to create shared memory segment while installing HDB (HAWQ)

New Contributor

shared_buffers sets the amount of memory a HAWQ segment instance uses for shared memory buffers. This setting must be at least 128KB, and at least 16KB times max_connections.

When setting shared_buffers, the values of the operating system parameters SHMMAX or SHMALL might also need to be adjusted.

The value of SHMMAX must be greater than shared_buffers + other_seg_shmem.
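To make that concrete with the numbers from the log above: with max_connections = 3000, shared_buffers must be at least 16KB × 3000 = 48,000KB (about 47MB), and the failed shmget() requested 506,213,024 bytes, so SHMMAX has to sit above roughly 507MB unless shared_buffers or max_connections is reduced.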

You can view and set the parameter values using the hawq config utility:

hawq config -s shared_buffers (shows the current value)

hawq config -c shared_buffers -v <value> (sets a new value)
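A minimal end-to-end sketch, assuming an illustrative target of 128MB (pick your own value from the sizing rules above) and that the cluster can be restarted so the change takes effect:

hawq config -s shared_buffers             # show the current value
hawq config -c shared_buffers -v 128MB    # set a new value (128MB is illustrative)
hawq restart cluster                      # restart so the new setting is picked up

Please let me know how that goes!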

Re: Failure to create shared memory segment while installing HDB (HAWQ)

New Contributor

Looks like your memory settings are too low to initialize the database. Check the kernel.shmmax and kernel.shmall values; if they are low (e.g., kernel.shmmax = 500000000, which is below the 506,213,024 bytes requested in your log), bring them up to:

kernel.shmmax = 1000000000

kernel.shmall = 4000000000
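A quick sketch of applying those values persistently on a node (standard Linux procedure, nothing HDB-specific; run as root and repeat on every host):

echo 'kernel.shmmax = 1000000000' >> /etc/sysctl.conf
echo 'kernel.shmall = 4000000000' >> /etc/sysctl.conf
sysctl -p    # load the new values without a reboot
ipcs -lm     # verify the kernel's shared memory limits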
