Member since: 08-02-2016
Posts: 19
Kudos Received: 3
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6640 | 02-14-2017 04:43 AM
06-02-2017
07:10 PM
Dr. Breitweg, you'll need to make the change with Ambari rather than manually editing the config file. Please refer to the following page: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-operations/content/set_rack_id_individual_hosts.html
03-08-2017
09:23 PM
This looks to be a precision thing. Postgres 8.2 (which HAWQ is loosely based on) stores timestamps with microsecond precision, whereas the from_unixtime() function expects the number in seconds, which explains why you're a few centuries in the future 🙂
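A quick way to see the fix for this mismatch (the timestamp value here is hypothetical, and Python's datetime stands in for from_unixtime()):

```python
from datetime import datetime, timezone

# A microsecond-precision timestamp, as Postgres 8.2 / HAWQ stores it
# internally (example value chosen for illustration).
micros = 1488916980000000

# A seconds-based converter like from_unixtime() needs the value divided
# by 1,000,000 first, otherwise the result lands far in the future.
dt = datetime.fromtimestamp(micros / 1_000_000, tz=timezone.utc)
print(dt.year)  # → 2017
```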
03-07-2017
06:16 PM
I had this issue; I modified the imports section of the topology_script to be Python 3 compatible:

from __future__ import print_function
import sys, os
try:
    from string import join  # Python 2 only
except ImportError:
    # Python 3: string.join is gone; str.join with a space matches it
    join = lambda s: " ".join(s)
try:
    import ConfigParser  # Python 2 name
except ImportError:  # also covers ModuleNotFoundError, which only exists on 3.6+
    import configparser as ConfigParser
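As a standalone sanity check that the join fallback matches Python 2's string.join (whose default separator is a single space), a minimal runnable snippet:

```python
# On Python 3, `from string import join` raises ImportError, so the
# fallback binds join to " ".join, which behaves like Python 2's
# string.join with its default single-space separator.
try:
    from string import join  # succeeds only on Python 2
except ImportError:
    join = lambda s: " ".join(s)

print(join(["10.0.0.5", "/default-rack"]))  # → 10.0.0.5 /default-rack
```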
02-14-2017
04:46 PM
1 Kudo
Shikhar, I experienced the exact same issue; for me it was due to an incomplete PXF upgrade. Initially I noticed (on my system) that the PXF RPMs were at a different build version than HAWQ, which was the first thing I fixed. I then explored a number of dead ends, ultimately noticing that the timestamps of the files in /var/pxf could not possibly be correct, given when I had upgraded HAWQ to the newer version. After confirming those files were created at runtime, I removed the entire directory and re-ran "service pxf-service init", which re-created the folder and its nested files from the correct RPM version. After doing that, all was well in PXF land. 🙂
02-14-2017
04:43 AM
Try removing /var/pxf, then run "service pxf-service init" on every HAWQ/PXF host
02-13-2017
09:23 PM
> 5. Also see whether you need to set pxf_service_address to point to the hive metastore

Shouldn't this be pointed at the namenode host:port, i.e. namenode host:51200?
10-17-2016
03:11 PM
1 Kudo
@Shikhar Agarwal - make sure you have the following kernel settings in place and applied on each machine you wish to run HAWQ on. You may need to follow the CLI installation guide: http://hdb.docs.pivotal.io/201/hdb/install/install-cli.html#topic_eqn_fc4_15

kernel.shmmax = 1000000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 0
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 200000
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1281 65535
net.core.netdev_max_backlog = 200000
vm.overcommit_memory = 2
fs.nr_open = 3000000
kernel.threads-max = 798720
kernel.pid_max = 798720
# increase network buffers
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
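For reference, one way to stage and apply such settings is as a sysctl drop-in file. This is a hedged sketch, not from the guide: the staging path /tmp/90-hawq-sysctl.conf and the drop-in name 90-hawq.conf are my assumptions, and only a subset of the settings above is shown.

```shell
# Stage a subset of the recommended settings (full list above) in a
# temporary file; /tmp is a hypothetical staging location.
cat > /tmp/90-hawq-sysctl.conf <<'EOF'
kernel.shmmax = 1000000000
kernel.shmall = 4000000000
vm.overcommit_memory = 2
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
EOF

# To apply for real on each HAWQ host (requires root):
#   sudo cp /tmp/90-hawq-sysctl.conf /etc/sysctl.d/90-hawq.conf
#   sudo sysctl --system
echo "staged $(grep -c '=' /tmp/90-hawq-sysctl.conf) settings"  # prints "staged 5 settings"
```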
09-01-2016
11:16 PM
Alternatively, what are the limitations of out-of-stack version support for Falcon? The snapshot-based replication in Falcon 0.10 provides exactly the functionality I'm looking for, but I'm currently running HDP 2.3 / 2.4.
09-01-2016
06:55 PM
Would you be able to provide an example of what this code change might look like in the existing FeedReplicator code?