Member since: 11-14-2017
Posts: 13
Kudos Received: 0
Solutions: 0
12-08-2017 12:28 AM
@Venkata Sudheer Kumar M and @Jay Kumar SenSharma, is this a bug then?
12-08-2017 12:27 AM
@Jay Kumar SenSharma - Yes, it is version 2.5.0.3.
12-07-2017 05:58 AM
Hi @Venkata Sudheer Kumar M, firstly, thanks for being across all my issues. I may not have made myself clear: the size of the output differs every time. Every time we extract data as CSV, the file size is different. It should be consistent, and all rows must be extracted, just like the INSERT OVERWRITE functionality in Hive, as sketched below.
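For reference, a minimal sketch of the consistent extraction baseline mentioned above; the table name and output directory are hypothetical placeholders, not from this thread:
-- Export all rows to an HDFS directory in one pass; repeated runs of the
-- same query over unchanged data should produce the same row count.
-- 'my_table' and the target path are hypothetical placeholders.
INSERT OVERWRITE DIRECTORY '/tmp/my_table_export'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
SELECT * FROM my_table;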
12-07-2017 01:00 AM
Ambari Hive View 1.5 is running from a standalone Ambari server. The Hive View is set up to access the cluster, and the configuration uses ZooKeeper ports to access data. Users download the output of a SQL query from the Ambari Hive View using the "Save as" button. There are two options here:
1. Save to HDFS
2. Download as CSV
Every time the business users download a result set, the row count of the output extract is different, whether with option 1 or option 2. The ambari.properties file content is embedded below. As far as I am aware, only timeouts are set here; there aren't any configurations that limit the result set. (A sketch for checking the row-count drift follows the properties file.)
#
# Copyright 2011 The Apache Software Foundation
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#Mon Dec 04 11:28:46 AEDT 2017
agent.package.install.task.timeout=1800
agent.stack.retry.on_repo_unavailability=false
agent.stack.retry.tries=5
agent.task.timeout=900
agent.threadpool.size.max=25
ambari-server.user=root
ambari.python.wrap=ambari-python-wrap
api.ssl=true
bootstrap.dir=/var/run/ambari-server/bootstrap
bootstrap.script=/usr/lib/python2.6/site-packages/ambari_server/bootstrap.py
bootstrap.setup_agent.script=/usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
check_database_skipped=false
client.api.ssl.cert_name=https.crt
client.api.ssl.key_name=https.key
client.api.ssl.port=8080
client.threadpool.size.max=25
common.services.path=/var/lib/ambari-server/resources/common-services
custom.action.definitions=/var/lib/ambari-server/resources/custom_action_definitions
custom.postgres.jdbc.name=postgresql-42.1.1.jar
extensions.path=/var/lib/ambari-server/resources/extensions
http.cache-control=no-store
http.pragma=no-cache
http.strict-transport-security=max-age=31536000
http.x-content-type-options=nosniff
http.x-frame-options=DENY
http.x-xss-protection=1; mode=block
java.home=/usr/lib/java/jdk1.8.0_121
java.releases=jdk1.8,jdk1.7
java.releases.ppc64le=
jce.download.supported=true
jdk.download.supported=true
jdk1.7.desc=Oracle JDK 1.7 + Java Cryptography Extension (JCE) Policy Files 7
jdk1.7.dest-file=jdk-7u67-linux-x64.tar.gz
jdk1.7.home=/usr/jdk64/
jdk1.7.jcpol-file=UnlimitedJCEPolicyJDK7.zip
jdk1.7.jcpol-url=http://public-repo-1.hortonworks.com/ARTIFACTS/UnlimitedJCEPolicyJDK7.zip
jdk1.7.re=(jdk.*)/jre
jdk1.7.url=http://public-repo-1.hortonworks.com/ARTIFACTS/jdk-7u67-linux-x64.tar.gz
jdk1.8.desc=Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
jdk1.8.dest-file=jdk-8u112-linux-x64.tar.gz
jdk1.8.home=/usr/jdk64/
jdk1.8.jcpol-file=jce_policy-8.zip
jdk1.8.jcpol-url=http://public-repo-1.hortonworks.com/ARTIFACTS/jce_policy-8.zip
jdk1.8.re=(jdk.*)/jre
jdk1.8.url=http://public-repo-1.hortonworks.com/ARTIFACTS/jdk-8u112-linux-x64.tar.gz
kerberos.keytab.cache.dir=/var/lib/ambari-server/data/cache
metadata.path=/var/lib/ambari-server/resources/stacks
mpacks.staging.path=/var/lib/ambari-server/resources/mpacks
pid.dir=/var/run/ambari-server
recommendations.artifacts.lifetime=1w
recommendations.dir=/var/run/ambari-server/stack-recommendations
resources.dir=/var/lib/ambari-server/resources
rolling.upgrade.skip.packages.prefixes=
security.server.disabled.ciphers=TLS_RSA_WITH_AES_256_GCM_SHA384|TLS_RSA_WITH_CAMELLIA_256_CBC_SHA|TLS_RSA_WITH_CAMELLIA_128_CBC_SHA|TLS_RSA_WITH_3DES_EDE_CBC_SHA|TLS_DHE_RSA_WITH_AES_128_GCM_SHA256|TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384|TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384|TLS_RSA_WITH_AES_256_CBC_SHA256|TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384|TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384|TLS_DHE_RSA_WITH_AES_256_CBC_SHA256|TLS_DHE_DSS_WITH_AES_256_CBC_SHA256|TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA|TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA|TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA|TLS_ECDH_RSA_WITH_AES_256_CBC_SHA|TLS_DHE_RSA_WITH_AES_256_CBC_SHA|TLS_DHE_DSS_WITH_AES_256_CBC_SHA|TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256|TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256|TLS_RSA_WITH_AES_128_CBC_SHA256|TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256|TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256|TLS_DHE_RSA_WITH_AES_128_CBC_SHA256|TLS_DHE_DSS_WITH_AES_128_CBC_SHA256|TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA|TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA|TLS_RSA_WITH_AES_128_CBC_SHA|TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA|TLS_ECDH_RSA_WITH_AES_128_CBC_SHA|TLS_DHE_RSA_WITH_AES_128_CBC_SHA|TLS_DHE_DSS_WITH_AES_128_CBC_SHA|TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA|TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA|TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA|TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA|SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA|SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA|TLS_EMPTY_RENEGOTIATION_INFO_SCSV|TLS_DH_anon_WITH_AES_256_CBC_SHA256|TLS_ECDH_anon_WITH_AES_256_CBC_SHA|TLS_DH_anon_WITH_AES_256_CBC_SHA|TLS_DH_anon_WITH_AES_128_CBC_SHA256|TLS_ECDH_anon_WITH_AES_128_CBC_SHA|TLS_DH_anon_WITH_AES_128_CBC_SHA|TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA|SSL_DH_anon_WITH_3DES_EDE_CBC_SHA|SSL_RSA_WITH_DES_CBC_SHA|SSL_DHE_RSA_WITH_DES_CBC_SHA|SSL_DHE_DSS_WITH_DES_CBC_SHA|SSL_DH_anon_WITH_DES_CBC_SHA|SSL_RSA_EXPORT_WITH_DES40_CBC_SHA|SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA|SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA|SSL_DH_anon_EXPORT_WITH_DES40_CBC_SHA|TLS_RSA_WITH_NULL_SHA256|TLS_ECDHE_ECDSA_WITH_NULL_SHA|TLS_ECDHE_RSA_WITH_NULL_SHA|SSL_RSA_WITH_NULL_SHA|TLS_ECDH_ECDSA_WITH_NULL_SHA|TLS_ECDH_RSA_WITH_NULL_SHA|TLS_ECDH_anon_WITH_NULL_SHA|SSL_RSA_WITH_NULL_MD5|TLS_KRB5_WITH_3DES_EDE_CBC_SHA|TLS_KRB5_WITH_3DES_EDE_CBC_MD5|TLS_KRB5_WITH_DES_CBC_SHA|TLS_KRB5_WITH_DES_CBC_MD5|TLS_KRB5_EXPORT_WITH_DES_CBC_40_SHA|TLS_KRB5_EXPORT_WITH_DES_CBC_40_MD5|TLS_RSA_WITH_AES_256_CBC_SHA
security.server.keys_dir=/var/lib/ambari-server/keys
server.connection.max.idle.millis=900000
server.execution.scheduler.isClustered=false
server.execution.scheduler.maxDbConnections=5
server.execution.scheduler.maxThreads=5
server.execution.scheduler.misfire.toleration.minutes=480
server.fqdn.service.url=http://169.254.169.254/latest/meta-data/public-hostname
server.http.session.inactive_timeout=1800
server.jdbc.connection-pool=internal
server.jdbc.database=postgres
server.jdbc.database_name=amabariview
server.jdbc.driver=org.postgresql.Driver
server.jdbc.hostname=lxdb4282-pgvip.dc.corp.telstra.com
server.jdbc.port=5432
server.jdbc.postgres.schema=amabariview
server.jdbc.rca.driver=org.postgresql.Driver
server.jdbc.rca.url=jdbc:postgresql://lxdb4282-pgvip.dc.corp.telstra.com:5432/amabariview
server.jdbc.rca.user.name=amabariview
server.jdbc.rca.user.passwd=/etc/ambari-server/conf/password.dat
server.jdbc.url=jdbc:postgresql://lxdb4282-pgvip.dc.corp.telstra.com:5432/amabariview?socketTimeout=6000000&tcpKeepAlive=true
server.jdbc.user.name=amabariview
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
server.os_family=redhat6
server.os_type=redhat6
server.persistence.type=remote
server.python.log.level=INFO
server.python.log.name=ambari-server-command.log
server.stages.parallel=true
server.task.timeout=1200
server.tmp.dir=/var/lib/ambari-server/data/tmp
server.version.file=/var/lib/ambari-server/resources/version
shared.resources.dir=/usr/lib/ambari-server/lib/ambari_commons/resources
skip.service.checks=false
ssl.trustStore.password=Offshore01
ssl.trustStore.path=/webserver/certs/ambari-server-truststore
ssl.trustStore.type=jks
stackadvisor.script=/var/lib/ambari-server/resources/scripts/stack_advisor.py
ulimit.open.files=65536
user.inactivity.timeout.default=0
user.inactivity.timeout.role.readonly.default=0
views.ambari.hive.AUTO_HIVE_INSTANCE.result.fetch.timeout=500000
views.ambari.request.connect.timeout.millis=50000
views.ambari.request.read.timeout.millis=5000
views.http.cache-control=no-store
views.http.pragma=no-cache
views.http.strict-transport-security=max-age=31536000
views.http.x-content-type-options=nosniff
views.http.x-frame-options=SAMEORIGIN
views.http.x-xss-protection=1; mode=block
views.request.connect.timeout.millis=600000
views.request.read.timeout.millis=600000
views.skip.home-directory-check.file-system.list=wasb,adls,adl
webapp.dir=/usr/lib/ambari-server/web
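For anyone trying to reproduce this, a minimal shell sketch for quantifying the drift, assuming repeated extracts of the same query via beeline's CSV output; the host, credentials, and table name are hypothetical placeholders. Since this bypasses the Hive View entirely, matching counts here would point the drift at the View's download path rather than at HiveServer2:
# Extract the same result set several times and compare row counts;
# a consistent extract should yield identical counts on every run.
# Host, credentials, table, and file names are hypothetical placeholders.
for i in 1 2 3; do
  beeline -u "jdbc:hive2://hs2-host:10000" -n user -p pass \
    --outputformat=csv2 --silent=true \
    -e "SELECT * FROM my_table" > extract_$i.csv
  wc -l extract_$i.csv
done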
Labels:
- Apache Ambari
- Apache Hive
12-06-2017 06:28 AM
@Kit Menke - Hey Kit, I have a requirement where the users need to execute a query using beeline from HDFS. I tried your approach, in several variants, but unfortunately the outcome contradicts your posts. Can beeline access an HDFS URI? It would be a great help if you could share your thoughts on this. (A workaround sketch follows the three outputs below.) Option 1:
beeline -u "jdbc:hive2://namenode2.dc.corp.astro.com:2181,namenode1.dc.corp.astro.com:2181,namenode3.dc.corp.astro.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNameSpace=hiveserver2;sockettimeout=600000;tcpKeepAlive=true" -n xxx -p ******* -f "City.sql" --verbose true --hivevar HDFSDIR="hdfs://namenode1.dc.corp.astro.com:8020/user/xxx"
############ OUTPUT ##########################
Connected to: Apache Hive (version 1.2.1000.2.6.0.3-8)
Driver: Hive JDBC (version 1.2.1000.2.6.0.3-8)
Transaction isolation: TRANSACTION_REPEATABLE_READ
City.sql (No such file or directory)
##############################################
Option 2:
beeline -u "jdbc:hive2://namenode2.dc.corp.astro.com:2181,namenode1.dc.corp.astro.com:2181,namenode3.dc.corp.astro.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNameSpace=hiveserver2sockettimeout=600000;tcpKeepAlive=true" -n xxx -p ******* -f "hdfs://namenode1.dc.corp.astro.com:8020/user/xxx/City.sql" --verbose true
############ OUTPUT ##########################
Connected to: Apache Hive (version 1.2.1000.2.6.0.3-8)
Driver: Hive JDBC (version 1.2.1000.2.6.0.3-8)
Transaction isolation: TRANSACTION_REPEATABLE_READ
City.sql (No such file or directory)
##############################################
Option 3:
beeline -u "jdbc:hive2://namenode2.dc.corp.astro.com:2181,namenode1.dc.corp.astro.com:2181,namenode3.dc.corp.astro.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNameSpace=hiveserver2sockettimeout=600000;tcpKeepAlive=true" -n xxx -p ******* -f "hdfs://user/xxx/City.sql" --verbose true
############ OUTPUT ##########################
Connected to: Apache Hive (version 1.2.1000.2.6.0.3-8)
Driver: Hive JDBC (version 1.2.1000.2.6.0.3-8)
Transaction isolation: TRANSACTION_REPEATABLE_READ
City.sql (No such file or directory)
##############################################
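In case it helps: beeline's -f option reads a local script file (HDFS URIs are not resolved there, as far as I can tell), which would explain the "No such file or directory" errors above. A workaround sketch, reusing the HDFS path from the post; the short hive2 URL is a placeholder for the ZooKeeper connection string above:
# beeline -f expects a local path, so fetch the script out of HDFS first.
# The hive2 URL below is a placeholder; substitute the ZooKeeper URL above.
hdfs dfs -get hdfs://namenode1.dc.corp.astro.com:8020/user/xxx/City.sql /tmp/City.sql
beeline -u "jdbc:hive2://hs2-host:10000" -n xxx -p '*****' -f /tmp/City.sql

# Alternatively, stream the script to beeline on stdin:
hdfs dfs -cat /user/xxx/City.sql | beeline -u "jdbc:hive2://hs2-host:10000" -n xxx -p '*****'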
11-24-2017 12:28 AM
Thanks a lot
11-14-2017 07:26 PM
Hi guys, I have a unique issue. The GROUPING__ID function doesn't seem to work as expected, or as shown in the Hive manual. I am executing the same example as shown in the Hive guide: https://cwiki.apache.org/confluence/display/Hive/Enhanced+Aggregation%2C+Cube%2C+Grouping+and+Rollup
I couldn't find any open bugs, but I have tested it in three different versions of HDP.
HDP 2.4: The statements executed are below, and the results are as expected; everything works fine.
-- create table
create table grp_tst( col1 int,col2 int);
-- insert query
insert into table grp_tst values (1, NULL);
insert into table grp_tst values (1, 1);
insert into table grp_tst values (2, 2);
insert into table grp_tst values (3, 3);
insert into table grp_tst values (3, NULL);
insert into table grp_tst values (4, 5);
-- select query
SELECT col1, col2, GROUPING__ID, count(*)
FROM grp_tst
GROUP BY col1, col2 WITH ROLLUP;
Results:
col1  col2  grouping_id  count
NULL  NULL  0            6
1     NULL  1            2
1     NULL  3            1
1     1     3            1
2     NULL  1            1
2     2     3            1
3     NULL  1            2
3     NULL  3            1
3     3     3            1
4     NULL  1            1
4     5     3            1
HDP 2.5 and HDP 2.6.0: Both results seem to be wrong, but consistently wrong, so I am wondering whether a bug was introduced in Hive in the 2.4 to 2.5 upgrade.
col1  col2  grouping_id  count
NULL  NULL  3            6
1     NULL  0            1
1     NULL  1            2
1     1     0            1
2     NULL  1            1
2     2     0            1
3     NULL  0            1
3     NULL  1            2
3     3     0            1
4     NULL  1            1
4     5     0            1
Hive settings: I am happy to share any property files if required. It would be great if you could help me out here.
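For what it's worth, the two result sets above are consistent with two different bit conventions for GROUPING__ID (an observation from the data, not a confirmed changelog entry): in the HDP 2.4 output a set bit appears to mean "column is present in this grouping" with col1 as the least significant bit, while in the HDP 2.5/2.6 output a set bit appears to mean "column is aggregated away" with the leftmost GROUP BY column as the most significant bit. A minimal decoding sketch under the newer convention, using Hive's bitwise AND operator:
-- Decode GROUPING__ID assuming the newer convention observed above:
-- a set bit means the column is aggregated away; col1 is the high bit.
SELECT col1,
       col2,
       GROUPING__ID,
       CASE WHEN (GROUPING__ID & 2) = 2 THEN 'aggregated' ELSE 'present' END AS col1_state,
       CASE WHEN (GROUPING__ID & 1) = 1 THEN 'aggregated' ELSE 'present' END AS col2_state,
       count(*) AS cnt
FROM grp_tst
GROUP BY col1, col2 WITH ROLLUP;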
Labels:
- Apache Hive