Member since: 10-18-2017
Posts: 52
Kudos Received: 2
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1202 | 01-27-2022 01:11 AM |
| | 8637 | 05-03-2021 08:03 AM |
| | 4779 | 02-06-2018 02:32 AM |
| | 6264 | 01-26-2018 07:36 AM |
| | 4093 | 01-25-2018 01:29 AM |
10-02-2018 02:18 AM
Hi all,
I have a cluster that was working fine for weeks. I am mainly using Impala on Kudu tables, and Sentry is running on the cluster. Recently I started getting an error on the 'DROP TABLE' command:
`ImpalaRuntimeException: Error making 'dropTable' RPC to Hive Metastore: CAUSED BY: MetaException: Failed to connect to Sentry service null`.
I believe the data was indeed deleted, since a SELECT query on the table complains that it cannot find the file in HDFS.
When I run the 'invalidate metadata' command before dropping the table, the error goes away, but not always. If I try to drop the same table again (I believe at this point the data is already removed), on the second attempt I get the following error:
`ImpalaRuntimeException: Table xxx no longer exists in the Hive MetaStore. Run 'invalidate metadata xxx' to update the Impala catalog.`
Note: this does not happen with tables I created in Hive and now query through Impala; the affected tables were all created in Impala. I did not have this error before, and it seems to have started after I recently ran an 'invalidate metadata' statement for the first time, for an unrelated reason.
Thanks for any input!
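For what it's worth, a minimal sketch of the invalidate-then-drop sequence issued over JDBC. The host, port, auth setting, and table name are illustrative assumptions; a Sentry-secured cluster would typically need Kerberos settings rather than `auth=noSasl`:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DropWithInvalidate {
    public static void main(String[] args) throws Exception {
        // Hive JDBC driver speaking to impalad on its HiveServer2-compatible port (21050).
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://impalad-host:21050/default;auth=noSasl");
             Statement stmt = conn.createStatement()) {
            // Refresh the catalog entry first, then drop; mirrors the manual workaround.
            stmt.execute("INVALIDATE METADATA my_kudu_table");
            stmt.execute("DROP TABLE my_kudu_table");
        }
    }
}
```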
Labels:
- Apache Impala
- Apache Kudu
- Apache Sentry
03-05-2018 12:06 AM
I was referring to the following workflow, which is not yet available in Spark 1.6:
1) Create a DataFrame.
2) Register it as a table so you can write SQL queries directly against it: `df.createGlobalTempView("people")`
3) Query that table: `spark.sql("SELECT * FROM global_temp.people")`
But I think what is required for the section "data analysis: use Spark SQL to interact with the metastore programmatically in your application" is to create a SQLContext/HiveContext and then query tables that are already stored in the Hive metastore, as in the sketch below. Any idea if this is correct?
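A minimal sketch of that 1.6-era approach using the Java API. The table name `default.customers` is an illustrative assumption; HiveContext finds the metastore via the hive-site.xml on the classpath:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

public class MetastoreQuery {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("MetastoreQuery");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        // HiveContext connects to the Hive metastore configured in hive-site.xml.
        HiveContext hiveContext = new HiveContext(jsc.sc());
        // Query a table that already exists in the metastore.
        DataFrame result = hiveContext.sql("SELECT * FROM default.customers LIMIT 10");
        result.show();
    }
}
```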
03-02-2018 01:03 AM
Dear community, I notice that the CCA175 exam will use Spark version 1.6. One of the main topics of the exam is data analysis using Spark SQL. Some of the functionality for exposing a DataFrame to SQL queries only exists in versions newer than 1.6 (e.g. createOrReplaceTempView, which replaced the older registerTempTable). Any thoughts on this? I am surprised that such an outdated version of Spark is used for the exam. Best to all!
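For context, the DataFrame-to-SQL flow that does work on Spark 1.6 looks like this (a sketch using the Java API; the input path, schema, and query are illustrative assumptions):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class TempTableExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("TempTableExample");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(jsc.sc());

        // Load a DataFrame (1.6 reader API) and expose it to SQL.
        DataFrame people = sqlContext.read().json("/data/people.json");
        people.registerTempTable("people");

        // Query it exactly as you would a table.
        DataFrame adults = sqlContext.sql("SELECT name FROM people WHERE age >= 18");
        adults.show();
    }
}
```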
Labels:
- Apache Spark
- Certification
02-06-2018 02:32 AM
1 Kudo
I just solved this issue by running the command from /tmp. The problem was that the user running the hadoop jar job (hdfs) did not have sufficient rights to write to the local file system in my working directory. When executing from /tmp, the output file with the results was produced correctly. Hope this can help others in the future!
01-31-2018 07:44 AM
Dear community manager, Which one exactly is this? Is it all the documentation available on this page: https://www.cloudera.com/documentation/enterprise/latest.html (these are the pages I am mainly using to study all the topics that are part of the exam)? Or only the administration section (a PDF of nearly 600 pages)? Thanks for your response!
01-26-2018 08:05 AM
I have the same issue. I am following both the documentation in https://www.bmc.com/blogs/how-to-write-a-hive-user-defined-function-udf-in-java/ and the link mentioned in the previous post: https://www.cloudera.com/documentation/enterprise/5-8-x/topics/cm_mc_hive_udf.html. These are the steps I have taken:
1) The goal is to create a temporary user-defined function FNV.java. I have put the following code in /src/main/java/com/company/hive/udf/FNV.java:
package com.company.hive.udf;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.io.Text;
import java.math.BigInteger;
public final class FNV extends UDF { <...all the java code...> }
2) I added the 2 JARs required for the imports to the CLASSPATH, compiled, and built a jar out of this: /src/main/java/com/company/hive/udf/FNV.jar. It is present on the host where the Hive metastore and HiveServer2 are running. I checked with `jar tvf FNV.jar` and see that my class src/main/java/com/company/hive/udf/FNV.class is present.
3) I put the FNV.jar file on HDFS, did a chown hive:hive, and a chmod with full 777 rights.
4) I changed the 'Hive Auxiliary JARs Directory' configuration in Hive to the path of the jar: /src/main/java/com/company/hive/udf/
5) I redeployed the client configuration and restarted Hive. Here I notice that the second HiveServer2 (on a different node, not the one where the JAR is located) has trouble restarting. The host with the Hive metastore, HiveServer2, and the jar is up and running.
6) I granted access to the HDFS location and to the file on the local host to a role called 'hive_jar'. This was done by logging into beeline:
!connect jdbc:hive2://node009.cluster.local:10000/default
GRANT ALL ON URI 'file:///src/main/java/com/company/hive/udf/FNV.jar' TO ROLE HIVE_JAR;
GRANT ALL ON URI 'hdfs:///user/name/FNV.jar' TO ROLE HIVE_JAR;
I do notice that SHOW CURRENT ROLES in beeline for the hive user shows the HIVE_JAR role as wanted.
7) I started hive and added the jar using the local host's path: add jar /src/main/java/com/company/hive/udf/FNV.jar; and checked with `list jars` that the jar is present.
8) In the same session I tried to create the temporary function: create temporary function FNV as 'com.company.hive.udf.FNV'; but I keep getting the error:
FAILED: Class com.company.hive.udf.FNV not found
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.FunctionTask
Any clue what I am missing? Thanks for feedback!
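For reference, a minimal skeleton of such a UDF that compiles and loads (a sketch, not the original FNV implementation). Note that inside the jar, the class entry must mirror the package, i.e. com/company/hive/udf/FNV.class rather than src/main/java/com/company/hive/udf/FNV.class; a jar built from the source tree root rather than the compiled-classes root would produce exactly the "Class ... not found" error above:

```java
package com.company.hive.udf;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

public final class FNV extends UDF {
    // Hive resolves old-style UDFs through public evaluate() methods.
    public Text evaluate(Text input) {
        if (input == null) {
            return null;
        }
        // Placeholder logic; the real FNV hash computation would go here.
        return new Text(input.toString());
    }
}
```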
01-26-2018 07:36 AM
Note that this is solved. When I closed my shell and executed the export CLASSPATH statement again, the error did not occur anymore.
01-25-2018 08:11 AM
Dear community, I am writing a UDF in Java that I want to use with Hive. I like the description in this post http://www.bmc.com/blogs/how-to-write-a-hive-user-defined-function-udf-in-java/# so that is what I am following. The first lines of my function are:
package com.companyname.hive.udf;
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.io.Text;
import java.math.BigInteger;
public final class FNV extends UDF { ...
When compiling my function, I always get the following errors:
package org.apache.hadoop.hive.ql.exec does not exist
import org.apache.hadoop.hive.ql.exec.UDF;
FNV.java:5: package org.apache.hadoop.hive.ql.exec does not exist
import org.apache.hadoop.hive.ql.exec.Description;
FNV.java:6: package org.apache.hadoop.io does not exist
import org.apache.hadoop.io.Text;
FNV.java:10: cannot find symbol
symbol: class UDF
public final class FNV extends UDF {
From what I found online, it seems that I should add the location of the required jar files (hadoop-core-1.2.1.jar and hive-exec-0.13.0.jar) to the CLASSPATH (the jars are in /home/user on my Linux system):
export CLASSPATH=/home/user/hive-exec-0.13.0.jar:/home/user/hadoop-core-1.2.1.jar
However, after doing this, I still get the same errors. Any input would be greatly appreciated! Thanks
Labels:
- Apache Hive
01-25-2018 01:29 AM
Note: I was able to solve this issue. The reason is that I was using Hue and not beeline. Through beeline I was able to add the roles described here: https://www.cloudera.com/documentation/enterprise/5-13-x/topics/sg_hive_sql.html and after that I was able to access my table through beeline. This link also states that beeline should be used. It is not yet clear to me why I could not grant the roles through Hue. Hopefully this is useful for someone else in the future!
01-22-2018 07:43 AM
Dear Community, I am running a well-known benchmark to measure the I/O of my cluster: the TestDFSIO test from hadoop-test-2.6.0-mr1-cdh5.13.0.jar. Normally the output should be written to TestDFSIO_results.log in the current working directory of the shell (not HDFS) where I run the command, which in my case is just my home folder. I am running the command as follows: sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-test-2.6.0-mr1-cdh5.13.0.jar TestDFSIO -write -nrFiles 2 -size 100MB. As it seems I need to run the job as the hdfs user, I have changed the permissions of the default output file to 777 and set the owner and group to hdfs, but this does not help (it also did not work previously, when my own user was the owner).
Thanks for any help!