Member since
09-15-2015
25
Posts
46
Kudos Received
5
Solutions
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 1646 | 02-07-2017 05:48 PM |
| | 653 | 01-17-2017 09:00 PM |
| | 2436 | 11-12-2015 03:49 PM |
| | 523 | 10-15-2015 02:24 PM |
| | 1544 | 10-09-2015 03:46 PM |
02-21-2017
01:49 PM
1 Kudo
Hi @Owen,
This has been patched in the HDP 2.5.3 release: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_release-notes/content/fixed_issues.html
BUG-65983 | N/A | Increase the Kite SDK version to 1.1 to fix Sqoop Parquet support issues
Regards, @Darwin
02-13-2017
07:56 PM
Hi @Greg Frair, Those paths appear because the 'find' command was run from the HDP Sandbox Docker host (rather than the container itself), hence the paths referencing Docker overlayFS. To check the SSH port redirections for the Docker container:
docker port sandbox | grep 22
e.g.
docker port sandbox | grep 22
1220/tcp -> 0.0.0.0:1220
22/tcp -> 0.0.0.0:2222
To access the Docker container itself (the port forward for SSH should be 2222):
ssh root@localhost -p 2222
Once inside the 'sandbox' container, the binaries should be located at:
/usr/hdp/current/hive-server2-hive2/bin
which is a symlink to:
/usr/hdp/2.5.0.0-1245/hive2/bin
Regards, @Darwin
02-13-2017
01:49 PM
5 Kudos
Hi @Greg Frair, HPL/SQL support has been added for the Hive 2.x branch shipped on the HDP 2.5.0 Sandbox in Technical Preview mode. Both the Hive 1.x and 2.x branches ship together in the 2.5.x release. The binary can be found at the following location:
/usr/hdp/current/hive-server2-hive2/bin
./hplsql --version
Hive 2.1.0.2.5.0.0-1245
Subversion git://c66-slave-20176e25-2/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hive2 -r 027527b9c5ce1a3d7d0b6d2e6de2378fb0c39232
Compiled by jenkins on Fri Aug 26 01:40:02 UTC 2016
From source with checksum 293c871560185db1cdd81d7d9f11c09d
Regards, @Darwin
02-08-2017
06:40 PM
4 Kudos
Hive ODBC Driver 2.1.7 (located in the Hortonworks public repo: http://public-repo-1.hortonworks.com/HDP/hive-odbc/2.1.7.1010) should have fixed this issue, as per the release notes:
"New installation location for macOS: On macOS, the driver now installs to /Library/hortonworks/hive. Previously, the driver installed to /opt/hortonworks/hive."
@Darwin
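If it helps, a quick way to confirm where the driver landed is to check both the new and old install locations (paths taken from the release notes above; this is just a sketch):

```shell
# Check both the new (2.1.7+) and old macOS install locations for the driver.
for p in /Library/hortonworks/hive /opt/hortonworks/hive; do
  if [ -d "$p" ]; then
    echo "driver present at: $p"
  else
    echo "not present: $p"
  fi
done
```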
02-07-2017
05:48 PM
3 Kudos
Hi @Leon, This is a known issue, and Hive engineering is aware and working on it. An Apache Hive JIRA has not yet been filed for this issue, but I will update this thread once one has been. Thanks, @Darwin
01-24-2017
05:56 PM
Hi Raj Jagannathan, The minimum memory required for the Sandbox is 8 GB (which means the workstation's total physical memory needs to be ~10 GB). Can you point out where it states that 4 GB is enough? It might be leftover documentation from an earlier version of the Sandbox. Thanks, @Darwin
01-24-2017
05:35 PM
Hi Raj Jagannathan, The Docker sandbox requires at least 8 GB of memory. Can you try allocating at least that much and re-try launching the VM? Reference link: http://hortonworks.com/hadoop-tutorial/hortonworks-sandbox-guide/#section_2
"At least 8 GB of RAM (the more, the better). If you wish to enable services such as Ambari, HBase, Storm, Kafka, or Spark, please ensure you have at least 10 GB of physical RAM in order to run the VM using 8 GB."
Thanks, @Darwin
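As a quick sanity check before allocating memory to the VM, something like this shows the host's total RAM (a Linux sketch; on macOS, `sysctl -n hw.memsize` would be the analogue):

```shell
# Read total physical memory from /proc/meminfo (value is in kB) and warn
# if it is below the 8 GB the sandbox requires.
total_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "Total RAM: ${total_gb} GB"
if [ "$total_gb" -lt 8 ]; then
  echo "WARNING: less than the 8 GB the sandbox requires"
fi
```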
01-17-2017
09:00 PM
3 Kudos
Hi Richard, I can confirm that the 'show columns' Hive command does display columns in the same order as the table DDL / 'describe table' output. Regards, @Darwin
07-01-2016
08:22 PM
1 Kudo
Can you confirm the md5sum of the original OVA downloaded from the following URL: http://hortonassets.s3.amazonaws.com/2.5/Hortonworks%20Sandbox%20with%20HDP%202.5%20Technical%20Preview.ova
MD5 (Hortonworks Sandbox with HDP 2.5 Technical Preview.ova) = 5830c0c5737083c37597d2e30540aba1
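For reference, a minimal checksum-comparison sketch (the file and checksum here are stand-ins for the real OVA and its published MD5; on macOS, use `md5` instead of `md5sum`):

```shell
# Create a stand-in file; substitute the OVA you actually downloaaded and
# the MD5 published on the download page.
printf 'hello\n' > /tmp/sample.ova
expected="b1946ac92492d2347c6235b4d2611184"   # known MD5 of "hello\n"
actual=$(md5sum /tmp/sample.ova | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH: $actual"
fi
```

A mismatch usually indicates a truncated or corrupted download; re-downloading the OVA is the fix.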
03-10-2016
02:27 PM
2 Kudos
Known MR issue, and it is currently pending patch testing: https://issues.apache.org/jira/browse/MAPREDUCE-6338
03-09-2016
03:02 PM
Due to HIVE-8615, it is advised to start using csv2/tsv2 for Beeline output formatting, as csv/tsv are deprecated as of Hive 0.14. More details can be found on the Apache HiveServer2 Clients wiki: "Separated-Value Output Formats
Starting with Hive 0.14, there are improved SV output formats available, namely DSV, CSV2 and TSV2. These conform better to standard CSV convention, which adds quotes around a cell value only if it contains special characters (such as the delimiter character or a quote character) or spans multiple lines. These three formats differ only in the delimiter between cells, which is comma for CSV2, tab for TSV2, and configurable for DSV (the delimiterForDSV property). The CSV and TSV output formats are maintained for backward compatibility, but beware: they add additional single-quote characters around all cell values, contrary to this convention."
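The flags mentioned are standard Beeline options; a sketch of the corresponding invocations (the JDBC URL, table, and query are placeholders, so run these against your own HiveServer2):

```shell
# Placeholder connection details; substitute your HiveServer2 host and query.
URL="jdbc:hive2://localhost:10000"
QUERY="SELECT * FROM mytable"
# CSV2: comma-delimited, quoting only cells that need it
echo "beeline -u $URL --outputformat=csv2 -e \"$QUERY\""
# TSV2: same convention, tab-delimited
echo "beeline -u $URL --outputformat=tsv2 -e \"$QUERY\""
# DSV: delimiter of your choice via delimiterForDSV
echo "beeline -u $URL --outputformat=dsv --delimiterForDSV='|' -e \"$QUERY\""
```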
11-20-2015
09:10 PM
8 Kudos
To upgrade the JDK across all HDP services on all nodes using Ambari, the following instructions have proven helpful. Note that this requires downtime on ALL HDP components.
a. Use 'Stop All' from the Actions button on the main Ambari dashboard to shut down all HDP components.
b. Using Clush/Pdsh/Cluster scripting tool of choice, confirm that ALL Java processes are fully stopped across ALL cluster nodes (excluding Ambari), killing any orphaned processes that Ambari was unable to stop.
c. Run the following from Ambari master node (as root) to stop the Ambari-agent and Ambari-server processes.
service ambari-agent stop
service ambari-server stop
d. Run the following from Ambari-server master node (as root):
ambari-server setup (and follow the prompts, example provided below)
e.g.:
[root@sandbox conf]# ambari-server setup
Using python /usr/bin/python2.6
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Customize user account for ambari-server daemon [y/n] (n)? n
Adjusting ambari-server permissions and ownership...
Checking firewall status...
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? y
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Oracle JDK 1.7 + Java Cryptography Extension (JCE) Policy Files 7
[3] Custom JDK
==============================================================================
Enter choice (1): 3
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If you plan to use Kerberos,please make sure JCE Unlimited Strength Jurisdiction Policy Files are valid on all hosts.
Path to JAVA_HOME: {Enter your exact full path to JDK base location}
Validating JDK on Ambari Server...done.
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? n
Configuring database...
Default properties detected. Using built-in database.
Configuring ambari database...
Checking PostgreSQL...
Configuring local database...
Connecting to local database...done.
Configuring PostgreSQL...
Backup for pg_hba found, reconfiguration not required
Extracting system views...
........
Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.
e. Start the Ambari-server and ambari-agent processes:
service ambari-server start
service ambari-agent start
f. Using 'Start All' from Actions button on main Ambari Dashboard, start up all the HDP components.
g. Confirm with 'ps -ef' that all the processes are using the new java location for all HDP components.
e.g.
ps -ef | grep HiveServer2 | grep -v grep
hive 95143 1 0 15:26 ? 00:00:37 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91.x86_64/bin/java ...
ps -ef | grep ResourceManager | grep -v grep
yarn 116722 1 1 15:48 ? 00:03:22 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91.x86_64/bin/java...
Because Option #1 and Option #2 are hardcoded to older/problematic JDK versions on various Ambari versions, it is advised to use Option 3 only.
https://hortonworks.jira.com/browse/BUG-44244
https://hortonworks.jira.com/browse/BUG-45794
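As a convenience for step g, the per-daemon checks can be sketched as a small loop that resolves the actual JVM binary behind each process via /proc (Linux only; the process names listed are examples, so extend the list for your services):

```shell
# Resolve the JVM binary a daemon is actually running under, or report
# that the daemon is not running.
check_jvm() {
  pid=$(pgrep -f "$1" 2>/dev/null | head -1)
  if [ -n "$pid" ] && [ -e "/proc/$pid/exe" ]; then
    echo "$1 -> $(readlink -f "/proc/$pid/exe")"
  else
    echo "$1: not running"
  fi
}
for proc in HiveServer2 ResourceManager NameNode; do
  check_jvm "$proc"
done
```

Run via Pdsh/Clush to cover every node; each JVM path printed should point at the new JAVA_HOME.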
Labels: Ambari, ambari-server, Cloud & Operations, FAQ, java, jdk
11-19-2015
02:49 PM
3 Kudos
The current state of Hive ACID is as follows. Technical Alert sent out on 2015-09-10:
Subject: Hortonworks Technical Alert: Hive ACID feature
Dear HDP Customer,
Hortonworks Technical Support would like to make you aware about a potential issue you may encounter.
Below are the details:
COMPONENT: Hive
VERSION:
HDP 2.2.*, HDP 2.3.0
PROBLEM:
Hive ACID feature not ready for production deployment
IMPACT:
Hive ACID supports two primary use cases:
Hive Streaming Ingest
Inserts/Updates/Deletes at low concurrency
As the first set of users have been developing their ACID solutions and deploying them in their staging environments, they have run into a series of technical issues which indicate insufficient test case coverage for this particular feature. Hortonworks is currently working closely with these customers to provide fixes for these issues and make them successful in production. In addition, we are going back to our engineering processes, identifying the missing test cases, and increasing our test case coverage. While these issues are being addressed, we would like to alert other customers. Currently, Hortonworks recommends that users wait on the use of the Hive ACID features until these issues have been addressed. A subsequent maintenance release will include these fixes, and once we have successfully assisted the early adopters with their production deployments, we will retire this technical alert.
SYMPTOMS:
N.A.
WORKAROUND:
N.A.
SOLUTION:
There are two options:
Wait for maintenance releases to appear which contain the necessary fixes to address known issues — or —
If you’d like to use Hive ACID, please contact Hortonworks support and we can look at the details of your use case and provide additional advice on a case-by-case basis
If you have any questions or concerns, please feel free to open a support case with us at support.hortonworks.com
Thank you,
The Hortonworks Team
In addition, for use in further discussions, the following PowerPoint presentation is available to share externally: https://hortonworks.app.box.com/files/0/f/2070270300/1/f_37967540402
Last but not least, there is a pending RMP JIRA tracking the backport of the bulk of the ACID changes to the HDP 2.2.x branch: https://hortonworks.jira.com/browse/RMP-4830
Labels: Data Processing, FAQ, Hive
11-13-2015
05:52 PM
Temp workaround: https://apple.stackexchange.com/questions/208478/how-do-i-disable-system-integrity-protection-sip-aka-rootless-on-os-x-10-11 Thanks to @Yi Zhang for the suggestion.
11-12-2015
09:16 PM
5 Kudos
The Hortonworks Hive ODBC Driver for OSX 10.11 (El Capitan) fails to install: "This package is incompatible with this version of OS X and may fail to install" This issue is being tracked here (with the full installer log): https://hortonworks.jira.com/browse/BUG-47990 Support/Engineering will be reaching out to Simba Technologies for a new Hive ODBC Driver.
Labels: Data Processing, Hive, odbc, osx
11-12-2015
04:00 PM
1 Kudo
Does anyone know how to track which Hive config variables can be set at the session level versus which must be set in the config XML (requiring a restart of the component)? HiveConf.java provides a list of most Hive variables, but there seems to be no easy way to tell. e.g. There are multiple hive.metastore.* variables that can only be set in the config XML and require a Hive Metastore restart.
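For what it is worth, one rough probe is to SET a variable and read it back in the same session (a sketch only; the JDBC URL is a placeholder). If the SET succeeds and reading it back echoes the new value, it is session-scoped; variables that are rejected or silently ignored (as many hive.metastore.* ones appear to be) need hive-site.xml plus a component restart:

```shell
# Probe statement: set a variable, then read it back in the same session.
probe="SET hive.exec.parallel=true; SET hive.exec.parallel;"
# Placeholder HiveServer2 URL; run against a live server.
echo "beeline -u jdbc:hive2://localhost:10000 -e \"$probe\""
```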
Labels: Apache Hive
11-12-2015
03:49 PM
1 Kudo
The symlink workaround for the library dependency does not work. This is currently unsupported, as per the Hive ODBC User Guide; the only currently supported Linux releases are:
Red Hat® Enterprise Linux® (RHEL) 5.0 or 6.0
CentOS 5.0 or 6.0
SUSE Linux Enterprise Server (SLES) 11
Simba is building a new Hive ODBC driver for CentOS/RHEL 7 support. This request is being tracked as RMP-4954.
10-15-2015
02:24 PM
Solution:
export DESKTOP_LOG_DIR=/path/for/hue/logs
Then run the Hue CLI commands with the separate logging location.
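A minimal sketch of the workaround (the log directory and the hue binary path are examples; adjust for your install):

```shell
# Point Hue CLI logging at a scratch directory and make sure it exists.
export DESKTOP_LOG_DIR=/tmp/hue-cli-logs
mkdir -p "$DESKTOP_LOG_DIR"
echo "Hue CLI logs will go to: $DESKTOP_LOG_DIR"
# e.g. /usr/lib/hue/build/env/bin/hue sync_ldap_users_and_groups
```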
10-14-2015
04:43 PM
Does anyone know an easy way to configure where Hue writes the logs for CLI operations (e.g. sync_ldap_users_and_groups, import_ldap_group, and import_ldap_user)? -Darwin
Labels: Cloudera Hue
10-09-2015
03:46 PM
It was suggested to skip such files in Avro's native reader itself, but the Avro project declined that option in https://issues.apache.org/jira/browse/AVRO-1530 and suggested that clients ignore zero-length files. The issue has since been patched on the Hive side: https://issues.apache.org/jira/browse/HIVE-11977 -Darwin
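Until the patched Hive is in place, the offending files can be located ahead of time. A local sketch using `find` is below; on HDFS, the analogue would be something like `hdfs dfs -ls -R <table-path> | awk '$5 == 0 {print $NF}'` (size is the fifth column of the listing), with your table path substituted:

```shell
# Demonstrate locally: create one empty and one non-empty Avro file,
# then list only the zero-length one.
dir=$(mktemp -d)
touch "$dir/empty.avro"          # zero-length file that trips the Avro reader
printf 'data' > "$dir/ok.avro"   # normal file
find "$dir" -type f -empty       # lists only the empty file
```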
10-09-2015
03:40 PM
1 Kudo
Can you try setting the Hive CLI to start up with DEBUG logging? e.g.
hive -hiveconf hive.log.file=hivecli_debug.log -hiveconf hive.log.dir=/tmp/hivecli -hiveconf hive.root.logger=DEBUG,DRFA
-Darwin