Member since: 11-20-2015
Posts: 226
Kudos Received: 9
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
  | 86874 | 05-11-2018 12:26 PM |
  | 43199 | 08-26-2016 08:52 AM |
07-18-2022
02:31 AM
@newbieone, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. A new thread is also an opportunity to provide details specific to your environment, which will help others give you a more accurate answer. You can link this thread as a reference in your new post.
06-09-2022
01:19 AM
Hi Nicolas, did you get any resolution for this? I have a similar requirement: I need to do case-insensitive joins across the system, and I don't want to apply upper/lower functions everywhere. I tried setting TBLPROPERTIES('serialization.encoding'='utf8mb4_unicode_ci') at the table level, but the comparison still happens case-sensitively. Please see below:

drop table test.caseI;
create table test.caseI (name string, id int) TBLPROPERTIES('serialization.encoding'='utf8mb4_unicode_ci');
insert into test.caseI values ('hj',1);

drop table test.caseI_2;
create table test.caseI_2 (name string, id int) TBLPROPERTIES('serialization.encoding'='utf8mb4_unicode_ci');
insert into test.caseI_2 values ('HJ',1);

select * from test.caseI i inner join test.caseI_2 i2 on i.name=i2.name; -- no result

I also tried the encoding 'SQL_Latin1_General_CP1_CI_AI' but got the same result as above. Any help would be appreciated, thanks!
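For context, a minimal sketch of the lower()-based fallback this post is trying to avoid, reusing the tables from the reproduction above (the sketch is mine, not part of the original question):

```sql
-- Fallback: normalize case on both sides of the join key.
-- Works regardless of table encoding, at the cost of wrapping every join column.
SELECT i.name, i2.name
FROM test.caseI i
INNER JOIN test.caseI_2 i2
  ON lower(i.name) = lower(i2.name);   -- 'hj' now matches 'HJ'
```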
11-09-2020
10:20 PM
Thanks, I'm able to access the Hadoop CLI after commenting out the line.
11-09-2020
08:56 AM
To create a table in this way, there are two steps:

1. CREATE TABLE ...
2. LOAD DATA INPATH ...

The first statement creates the table schema within Hive, and the second tells Hive to move the data from the source HDFS directory into the Hive HDFS table directory:

/user/joe/sales.csv => /user/hive/warehouse/sales/sales.csv

The move operation occurs as the 'hive' user, so for this to complete, the 'hive' user must have the HDFS permissions needed to move the file into its final location (a minimal sketch of the two steps follows at the end of this post).

(Impala documentation, but with a lot of overlap with Hive:) https://docs.cloudera.com/documentation/enterprise/6/latest/topics/impala_load_data.html

Also note that the latest release is 6.3.4, which has many benefits over 6.0: https://docs.cloudera.com/documentation/enterprise/6/release-notes/topics/rg_cdh_63_packaging.html
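A minimal sketch of the two steps described above, assuming the staged file /user/joe/sales.csv from the post and a hypothetical column list (adjust names and types to your data):

```sql
-- Step 1: create the table schema in Hive (columns here are made up for illustration).
CREATE TABLE sales (txn_id INT, amount DOUBLE, region STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

-- Step 2: move the staged file into the table's warehouse directory.
-- The move runs as the 'hive' user, which needs HDFS access to both paths.
LOAD DATA INPATH '/user/joe/sales.csv' INTO TABLE sales;
```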
04-28-2020
07:28 AM
Please check the command below. Here 2> /dev/null discards all of the log and error output (stderr), so only standard output is shown:

beeline -u jdbc:hive2://somehost_ip/ -f hive.hql 2> /dev/null > op.txt

If you like this, please give me kudos. Thanks!!!
04-11-2020
09:16 PM
I was working on something unrelated, but I hit this same error, detailed the issue in Jira, and have proposed a workaround.

The root cause is a feature in Hive called the REGEX column specification. IMHO this feature was ill conceived and is not standard SQL; it should be removed from Hive, and this issue is yet another reason why. That removal is what I was working on when I hit this issue.

When Hive sees a table name surrounded by backticks, it determines that the string is a regex; when it sees the name surrounded by quotes, it determines that it is a table name. The basic rule it uses is "most anything ASCII surrounded by backticks is a regex." (A small sketch follows after the Jira links below.)

Using quotes around table names (and technically backticks too, but that path is clearly broken) can be allowed or disallowed with a Hive setting called hive.support.quoted.identifiers, which is enabled in the user's HS2 session by default. However, when masking is performed, it is a multi-step process:

1. The query is parsed by HS2.
2. The masking is applied.
3. The query is parsed again by HS2.

The first parsing attempt respects the hive.support.quoted.identifiers configuration and allows a query with quotes to be parsed. However, the masking code does not pass this configuration to the parser on the second attempt, and, oddly enough, when the configuration is not passed along, the parser considers the feature disabled. So it is actually the second pass that fails, because the parser rejects the quotes.

For the record, I hit this issue when I removed the regex feature: with it gone, all quoted strings are considered table names (and subject to this setting being enabled or disabled) instead of sneaking by as regexes, and all the masking unit tests failed.

https://issues.apache.org/jira/browse/HIVE-23182
https://issues.apache.org/jira/browse/HIVE-23176
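As a hedged illustration of the quoting behavior described above (this sketch is mine, not from the post; the table and column names are hypothetical, and it shows the column-level form of the feature), the REGEX column specification and the hive.support.quoted.identifiers setting look roughly like this:

```sql
-- With quoted identifiers disabled, a backticked string can be a regex (REGEX column spec):
SET hive.support.quoted.identifiers=none;
SELECT `(ds|hr)?+.+` FROM web_logs;      -- selects every column except ds and hr

-- With quoted identifiers enabled (the default), a backticked string is a literal identifier:
SET hive.support.quoted.identifiers=column;
SELECT `user id` FROM web_logs;          -- a column whose name contains a space
```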
06-28-2019
02:47 AM
I did some quick research on why rhel7/centos7 still contains such an old MySQL Java connector/driver version (see the Redhat response below), and checked for working alternative RPMs (installing via an RPM has many advantages over a manual tarball install). There are many working RPMs; I tested all of the below with CDH 6.2.0 and openjdk-8.

- With Java >= 8, I recommend the following:
  Ref: https://centos.pkgs.org/7/mysql-connectors-i386/mysql-connector-java-8.0.16-1.el7.noarch.rpm.html
  RPM: yum install http://repo.mysql.com/yum/mysql-connectors-community/el/7/i386//mysql-connector-java-8.0.16-1.el7.noarch.rpm

- With Java 7 you can try the latest 5.1.x from Fedora:
  Ref: https://fedora.pkgs.org/29/fedora-x86_64/mysql-connector-java-5.1.38-7.fc29.noarch.rpm.html
  RPM: http://download-ib01.fedoraproject.org/pub/fedora/linux/releases/29/Everything/x86_64/os/Packages/m/mysql-connector-java-5.1.38-7.fc29.noarch.rpm

- Find RPMs for other distros: https://pkgs.org/download/mysql-connector-java

- Official Redhat response: alternatively, for Java 8, Redhat proposes using the MariaDB client: https://bugzilla.redhat.com/show_bug.cgi?id=1684349#c7
  > this is the best we could do for the customers who need a recent version of the JDBC driver for MySQL/MariaDB.
  More info: https://developers.redhat.com/blog/2019/06/25/mariadb-10-3-now-available-on-red-hat-enterprise-linux-7/
  For centos7: https://centos.pkgs.org/7/centos-sclo-rh-testing-x86_64/rh-mariadb103-mariadb-java-client-2.4.1-1.el7.noarch.rpm.html
  I tested the one-line yum install, but CDH would require more changes because the driver is installed under /opt:
  yum install https://buildlogs.centos.org/centos/7/sclo/x86_64/rh/rh-mariadb103/rh-mariadb103-mariadb-java-client-2.4.1-1.el7.noarch.rpm https://buildlogs.centos.org/centos/7/sclo/x86_64/rh/rh-mariadb103/rh-mariadb103-runtime-3.3-3.el7.x86_64.rpm
05-10-2019
02:02 PM
By default, in Hive, Parquet files are not written with compression enabled. https://issues.apache.org/jira/browse/HIVE-11912 However, writing files with Impala into a Parquet table will create files with internal Snappy compression (by default).
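If you also want compressed Parquet from Hive, a minimal sketch of the two usual approaches (table names are hypothetical; verify the parquet.compression setting against your Hive version):

```sql
-- Option 1: set the codec for the session before writing (hypothetical tables).
SET parquet.compression=SNAPPY;
INSERT OVERWRITE TABLE sales_parquet SELECT * FROM sales_staging;

-- Option 2: declare the codec on the table itself so every writer picks it up.
CREATE TABLE sales_parquet_snappy (txn_id INT, amount DOUBLE)
STORED AS PARQUET
TBLPROPERTIES ('parquet.compression'='SNAPPY');
```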
09-11-2018
06:51 PM
Please check the link https://hortonworks.com/blog/update-hive-tables-easy-way/ (it covers updating Hive tables with ACID MERGE); a short sketch of that pattern is below. Hope this helps.
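A minimal, hedged sketch of the MERGE pattern the linked article discusses (table and column names here are hypothetical, and the target table must be a transactional/ACID table):

```sql
-- Hypothetical ACID target table and a staging table holding the changed rows.
MERGE INTO customers AS t
USING customer_updates AS s
ON t.customer_id = s.customer_id
WHEN MATCHED THEN UPDATE SET email = s.email, updated_at = s.updated_at
WHEN NOT MATCHED THEN INSERT VALUES (s.customer_id, s.email, s.updated_at);
```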
08-02-2018
11:42 AM
1 Kudo
@vratmuri Oh, then you can use the Cloudera Manager API. Link for the Cloudera API reference: https://www.cloudera.com/documentation/enterprise/5-9-x/topics/cm_intro_api.html Link specific to service properties (you may need to explore a little for the Impala query part); it may help you: https://www.cloudera.com/documentation/enterprise/5-9-x/topics/cm_intro_api.html#xd_583c10bfdbd326ba--7f25092b-13fba2465e5--7f20__example_txn_qcw_yr