Has anyone tried installing HAWQ on an HDP Ambari cluster hosted on the Google Cloud free trial machines?

Rising Star

I'm getting the error below while trying to bring up the HAWQ master.

2016-10-16 06:41:20.368546 UTC,,,p31467,th2085800224,,,,0,,,seg-10000,,,,,"FATAL","XX000","could not create shared memory segment: Invalid argument (pg_shmem.c:183)","Failed system call was shmget(key=1, size=506213024, 03600).","This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter. You can either reduce the request size or reconfigure the kernel with larger SHMMAX. To reduce the request size (currently 506213024 bytes), reduce PostgreSQL's shared_buffers parameter (currently 4000) and/or its max_connections parameter (currently 3000). If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.
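
For reference, the kernel's current limits can be compared against the requested size like this (a minimal sketch using the standard sysctl/procfs interfaces; the exact values will differ per host):

# Show the kernel's current shared memory limits (host-specific values)
sysctl kernel.shmmax kernel.shmall kernel.shmmni

# Or read kernel.shmmax directly from procfs
cat /proc/sys/kernel/shmmax

# The master fails to start whenever this limit is smaller than the
# 506213024-byte segment requested in the error above.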

Please note that I'm trying to install Pivotal HDB version 2.0.1, following the steps in the documentation linked below:

http://hdb.docs.pivotal.io/201/hdb/install/install-ambari.html

4 REPLIES

Contributor

@Shikhar Agarwal - make sure the following kernel settings are in place and applied on every machine you plan to run HAWQ on (a sketch for making them persistent follows at the end of this reply). You may need to follow the CLI installation guide:

kernel.shmmax = 1000000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 0
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 200000
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1281 65535
net.core.netdev_max_backlog = 200000
vm.overcommit_memory = 2
fs.nr_open = 3000000
kernel.threads-max = 798720
kernel.pid_max = 798720
# increase network
net.core.rmem_max=2097152
net.core.wmem_max=2097152

http://hdb.docs.pivotal.io/201/hdb/install/install-cli.html#topic_eqn_fc4_15
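
A minimal sketch for applying the shared memory settings persistently, assuming a procps build that supports sysctl --system; the drop-in file name /etc/sysctl.d/90-hawq.conf is just an illustration, and an external script that rewrites these values later would still override it:

# Hypothetical drop-in file; the name 90-hawq.conf is an assumption
cat <<'EOF' | sudo tee /etc/sysctl.d/90-hawq.conf
kernel.shmmax = 1000000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
EOF

# Apply now on this host (the file is also read automatically at boot)
sudo sysctl --system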

Rising Star

Thanks Kyle, the GCP machines were somehow overriding these settings, so the shared memory request exceeded the kernel limit each time the HAWQ master tried to come up. I enforced the shmmax parameter and it came up successfully after that.

New Contributor (Accepted Solution)

Hello,

It's very likely that either the Google Cloud scripts or other automated scripts are overriding the following default shared-memory kernel parameters, which the Pivotal HDB/HAWQ Ambari plugin sets during installation.

kernel.shmmax = 1000000000
kernel.shmmni = 4096
kernel.shmall = 4000000000

You could set them with 'sysctl -w' on all of the HDB cluster hosts and continue the installation, but I would strongly advise tracking down the external scripts that are changing these kernel settings (a sketch for finding them follows after the example below).

For example:

sysctl -w kernel.shmmax=1000000000
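
To track down which script or config file keeps rewriting these values, one approach is to search the usual sysctl and startup locations for the parameter names (a sketch; the relevant locations vary by image and provisioning tool):

# Look for shared memory settings in the standard sysctl config locations
grep -rn 'shmmax\|shmall\|shmmni' /etc/sysctl.conf /etc/sysctl.d/ 2>/dev/null

# Also check boot-time and provisioning scripts that may call sysctl -w
grep -rn 'sysctl -w' /etc/rc.local /etc/init.d/ 2>/dev/null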

Rising Star

Thanks Niranjan, the above solution worked for me. :)