Member since
01-03-2017
181
Posts
44
Kudos Received
24
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1852 | 12-02-2018 11:49 PM |
| | 2474 | 04-13-2018 06:41 AM |
| | 2043 | 04-06-2018 01:52 AM |
| | 2349 | 01-07-2018 09:04 PM |
| | 5696 | 12-20-2017 10:58 PM |
10-21-2017
12:40 PM
1 Kudo
Hi @Narasimma varman, apparently this error comes from the Postgres driver. When I dug into the references for the error, I found that a mismatch in the column list, or some other issue with the SQL, is the likely root cause. However, looking at the SQL, I can see that the value for Name is not quoted (being a char column, it must be quoted in SQL). Any other column of type char/date must be quoted in the SQL as well: INSERT INTO public.detail (id,name, salary) VALUES (${id},'${name}',${salary}) On another note, instead of ConvertJSONToSQL --> PutSQL directly, you could use JSON-to-attribute extraction --> ReplaceText --> PutSQL, if you wish to do the same thing a different way. Hope this helps !!
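To illustrate the quoting rule, a minimal sketch (the values are hypothetical): the first statement fails because the unquoted char value is parsed as a column reference, while the second succeeds because char values are single-quoted.

```sql
-- Fails: John is unquoted, so Postgres treats it as a column name
INSERT INTO public.detail (id, name, salary) VALUES (1, John, 50000);

-- Works: char (and date) values must be single-quoted
INSERT INTO public.detail (id, name, salary) VALUES (1, 'John', 50000);
```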
10-19-2017
12:04 PM
1 Kudo
Hi @sadanjan mallireddy, by default the ZooKeeper server binds (listens) to all the IPs/interfaces of the host; conversely, we can control this behavior by setting the parameter clientPortAddress="required IP/hostname only". Excerpt from the ZooKeeper documentation: clientPortAddress
New in 3.3.0: the address (ipv4, ipv6 or hostname) to listen for client connections; that is, the address that clients attempt to connect to. This is optional,
by default we bind in such a way that any connection to the clientPort for any address/interface/nic on the server will be accepted.
With this, you can restrict the client traffic to the required NIC (as long as the firewall allows). Hope this helps !!
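As a sketch, the setting goes in zoo.cfg next to clientPort (the address below is a placeholder for your chosen interface):

```properties
# zoo.cfg
clientPort=2181
# Bind client connections to this address only, instead of all interfaces
clientPortAddress=192.168.1.10
```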
10-19-2017
11:42 AM
1 Kudo
@Gerd Koenig, apparently yes, this is still the case with HDF 3.0. We tried this a month back and eventually gave up after figuring out that it is not supported at the moment.
10-17-2017
11:03 PM
Hi @Ashnee Sharma, can you please try setting "hive.groupby.skewindata=true" to randomize the shuffle before the reduce. In any case, the following syntax should produce the accurate result: select count(1) from ( select a21.company_code from dim_investment a21 group by a21.company_code) aa
The inner group by ensures that a mapper is executed before counting the distinct rows.
10-17-2017
10:42 PM
Hi @Jonathan Bell, can you please have a look at the /var/log/nifi/nifi-app.log file (tail from the bottom). I presume this is because of insufficient heap; the default heap size is about 512 MB. It can be adjusted to something bigger (4 GB or 8 GB), depending on the memory available on the host. More on configuring the heap for HDF can be found here, under the section Bootstrap Configuration. Hope this helps !!
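As a sketch, the heap settings live in NiFi's conf/bootstrap.conf (or under the NiFi bootstrap configuration section in Ambari for HDF); the stock file sets both to 512m, and raising them to 4g would look like this:

```properties
# conf/bootstrap.conf -- JVM memory settings
java.arg.2=-Xms4g
java.arg.3=-Xmx4g
```

NiFi must be restarted for the new heap settings to take effect.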
10-14-2017
01:13 AM
Hi @Mamta Chawla, prior to retrieving your keytabs from the host, you need to ensure that the host is prepared to connect to the KDC. By default the configuration details can be found in the /etc/krb5.conf file, after installing krb5-workstation (krb5-client on SLES): [libdefaults]
ticket_lifetime = 24000
default_realm = <YOUR_REALM>
dns_lookup_realm = false
dns_lookup_kdc = false
[realms]
<YOUR_REALM> = {
kdc = <YOUR_AD_SERVER1>:88
kdc = <YOUR_AD_SERVER2>:88
}
####### Replace example.com with your REALM name
[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM
[appdefaults]
pam = {
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false
}
Alternatively, you can copy the same file from a host that is already configured as a Kerberos client. Once that is done, you may use the above command to retrieve the keytabs. However, please note that you must have access to retrieve the keytabs from that host and user. For additional details please follow the instructions given at: https://hortonworks.com/blog/enabling-kerberos-hdp-active-directory-integration/ For more step-by-step instructions you may refer here
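Once krb5.conf is in place, obtaining and verifying a ticket from a keytab looks like the sketch below. The keytab path and principal are hypothetical; the real commands are shown as comments and the block only composes and prints the kinit invocation, so it can run on any host.

```shell
# Hypothetical keytab and principal -- replace with your own
KEYTAB=/etc/security/keytabs/hdfs.headless.keytab
PRINC="hdfs@EXAMPLE.COM"

# On a Kerberos-enabled host you would run:
#   kinit -kt "$KEYTAB" "$PRINC"   # obtain a ticket from the keytab
#   klist                          # verify the ticket was granted
echo "kinit -kt $KEYTAB $PRINC"
```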
10-14-2017
12:05 AM
Hi @Pawel Lagodzinski, can you please check the value of the parameter spark.dynamicAllocation.enabled=true, to see whether dynamic allocation is overriding the explicit resource arguments. At the same time, can you please use --conf instead of --driver-cores (this is not well documented in the Spark docs, though it shows up in the command-line help): spark-submit --master yarn --deploy-mode cluster \
--conf "spark.driver.cores=4" --driver-memory 4G \
--num-executors 3 --executor-memory 14G --executor-cores 4 \
--conf spark.yarn.maxAppAttempts=1 --files /usr/hdp/current/spark-client/conf/hive-site.xml {my python code}
10-10-2017
05:10 AM
Hi @Ashnee Sharma, ohh, that means HBase --> Phoenix --> Hive --> Spark. To make sure Spark picks up the Hive-Phoenix configuration, can you please ensure the directory is added to the Hive aux path list. In the hive-site.xml file, add the following configuration (create the custom path): <property>
<name>hive.aux.jars.path</name>
<value>/path/to/additionallibs</value>
</property>
Ensure that the jar file phoenix-<version>-hive.jar is present across all the nodes. Spark will then automatically pick up the configuration details from hive-site.xml and use them.
10-09-2017
06:58 AM
Hi @Mustafa Kemal MAYUK, from the error, it appears the user has not authenticated with the proper keytab. These are the possible root causes / solutions for the problem: 1. Check that all the service keytabs are placed in "/etc/security/keytabs" on each host. 2. Verify that the service user for the service has at least read access to the keytab. 3. The most common issue is that the service keytab name and the service principal name mentioned in the service configuration do not match the keytab file. Apart from this, please check that you are able to get a ticket using the keytabs. Hope this helps!!
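To check item 3, compare the principals stored in the keytab against the principal configured for the service; a sketch with a hypothetical keytab path (the real commands are shown as comments, and the block only prints the klist invocation so it runs anywhere):

```shell
# Hypothetical keytab path -- replace with the service keytab in question
KEYTAB=/etc/security/keytabs/nn.service.keytab

# On the host you would run:
#   klist -kt "$KEYTAB"   # list the principals stored in the keytab
#   ls -l "$KEYTAB"       # confirm the service user can read the file
echo "klist -kt $KEYTAB"
```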
10-09-2017
02:00 AM
Hi @Yevgen Shramko, "date" is a reserved word in Hive, and I am not sure what difference the parameter "set hive.support.sql11.reserved.keywords=false/true" makes between LLAP and non-LLAP execution. Hence, to avoid the ambiguity, can you please change the column name from "date" to something else (for example, prepend a prefix or append a suffix). Hope this helps !!
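If renaming the column is not an option, Hive also accepts reserved words as identifiers when they are quoted with backticks; a minimal sketch (the table name is hypothetical):

```sql
-- Backtick-quote the reserved identifier instead of renaming it
SELECT `date`, count(1)
FROM my_events
GROUP BY `date`;
```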