Member since
11-11-2019
634
Posts
33
Kudos Received
27
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 260 | 10-09-2025 12:29 AM |
| | 4769 | 02-19-2025 09:43 PM |
| | 2124 | 02-28-2023 09:32 PM |
| | 4002 | 02-27-2023 03:33 AM |
| | 26008 | 12-24-2022 05:56 AM |
08-25-2021
01:07 AM
You need to grant the user ALL access in Ranger. Please check the permission settings for that user in the Ranger UI.
08-24-2021
03:51 AM
I already followed these instructions but I still have the same issue. Here is my tez-site.xml file, and screenshots of the error, for more details. Thank you.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera Manager-->
<configuration>
  <property><name>tez.am.am-rm.heartbeat.interval-ms.max</name><value>250</value></property>
  <property><name>tez.am.container.idle.release-timeout-max.millis</name><value>20000</value></property>
  <property><name>tez.am.container.idle.release-timeout-min.millis</name><value>10000</value></property>
  <property><name>tez.am.container.reuse.enabled</name><value>true</value></property>
  <property><name>tez.am.container.reuse.locality.delay-allocation-millis</name><value>250</value></property>
  <property><name>tez.am.container.reuse.non-local-fallback.enabled</name><value>false</value></property>
  <property><name>tez.am.container.reuse.rack-fallback.enabled</name><value>true</value></property>
  <property><name>tez.am.launch.cluster-default.cmd-opts</name><value>-server -Djava.net.preferIPv4Stack=true</value></property>
  <property><name>tez.am.launch.cmd-opts</name><value>-XX:+PrintGCDetails -verbose:gc -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp</value></property>
  <property><name>tez.am.launch.env</name><value>LD_LIBRARY_PATH=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native</value></property>
  <property><name>tez.am.log.level</name><value>INFO</value></property>
  <property><name>tez.am.max.app.attempts</name><value>2</value></property>
  <property><name>tez.am.maxtaskfailures.per.node</name><value>10</value></property>
  <property><name>tez.am.resource.memory.mb</name><value>2048</value></property>
  <property><name>tez.am.tez-ui.history-url.template</name><value>__HISTORY_URL_BASE__?viewPath=%2F%23%2Ftez-app%2F__APPLICATION_ID__</value></property>
  <property><name>tez.am.view-acls</name><value>*</value></property>
  <property><name>tez.cluster.additional.classpath.prefix</name><value></value></property>
  <property><name>tez.counters.max</name><value>10000</value></property>
  <property><name>tez.counters.max.groups</name><value>3000</value></property>
  <property><name>tez.generate.debug.artifacts</name><value>false</value></property>
  <property><name>tez.grouping.max-size</name><value>1073741824</value></property>
  <property><name>tez.grouping.min-size</name><value>16777216</value></property>
  <property><name>tez.grouping.split-waves</name><value>1.7</value></property>
  <property><name>tez.history.logging.proto-base-dir</name><value>/warehouse/tablespace/managed/hive/sys.db</value></property>
  <property><name>tez.history.logging.timeline-cache-plugin.old-num-dags-per-group</name><value>5</value></property>
  <property><name>tez.lib.uris</name><value>/user/tez/0.9.1.7.1.4.0-203/tez.tar.gz</value></property>
  <property><name>tez.runtime.compress</name><value>true</value></property>
  <property><name>tez.runtime.compress.codec</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
  <property><name>tez.runtime.convert.user-payload.to.history-text</name><value>false</value></property>
  <property><name>tez.runtime.io.sort.mb</name><value>272</value></property>
  <property><name>tez.runtime.optimize.local.fetch</name><value>true</value></property>
  <property><name>tez.runtime.pipelined.sorter.sort.threads</name><value>2</value></property>
  <property><name>tez.runtime.shuffle.fetch.buffer.percent</name><value>0.6</value></property>
  <property><name>tez.runtime.shuffle.keep-alive.enabled</name><value>true</value></property>
  <property><name>tez.runtime.shuffle.memory.limit.percent</name><value>0.25</value></property>
  <property><name>tez.runtime.unordered.output.buffer.size-mb</name><value>100</value></property>
  <property><name>tez.session.am.dag.submit.timeout.secs</name><value>300</value></property>
  <property><name>tez.session.client.timeout.secs</name><value>-1</value></property>
  <property><name>tez.shuffle-vertex-manager.max-src-fraction</name><value>0.4</value></property>
  <property><name>tez.shuffle-vertex-manager.min-src-fraction</name><value>0.2</value></property>
  <property><name>tez.staging-dir</name><value>/tmp/${user.name}/staging</value></property>
  <property><name>tez.task.am.heartbeat.counter.interval-ms.max</name><value>4000</value></property>
  <property><name>tez.task.generate.counters.per.io</name><value>true</value></property>
  <property><name>tez.task.get-task.sleep.interval-ms.max</name><value>200</value></property>
  <property><name>tez.task.launch.cluster-default.cmd-opts</name><value>-server -Djava.net.preferIPv4Stack=true</value></property>
  <property><name>tez.task.launch.cmd-opts</name><value>-XX:+PrintGCDetails -verbose:gc -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp</value></property>
  <property><name>tez.task.launch.env</name><value>LD_LIBRARY_PATH=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native</value></property>
  <property><name>tez.task.max-events-per-heartbeat</name><value>500</value></property>
  <property><name>tez.task.resource.memory.mb</name><value>1536</value></property>
  <property><name>tez.use.cluster.hadoop-libs</name><value>true</value></property>
  <property><name>yarn.timeline-service.enabled</name><value>false</value></property>
  <property><name>tez.runtime.sorter.class</name><value>PIPELINED</value></property>
  <property><name>tez.history.logging.service.class</name><value>org.apache.tez.dag.history.logging.proto.ProtoHistoryLoggingService</value></property>
</configuration>
```
08-12-2021
05:10 AM
@vidanimegh, sure. Thanks for your response; I will wait for your update.
08-02-2021
12:09 AM
@JananiViswa1, is your issue resolved? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. If you are still experiencing the issue, can you provide the information @asish has requested?
07-25-2021
07:56 AM
I would need the complete application logs, HS2 logs, and a beeline trace to analyse this; the snippet provided does not give much information.
07-19-2021
10:26 AM
Hi Sai, please follow https://community.cloudera.com/t5/Support-Questions/Does-hive-support-Photo-or-images-datatypes/td-p/221473 Copy the data to HDFS instead of using the local load path, and check again. Thanks, Asish
07-18-2021
08:37 AM
All the Hive-related metastore tables are stored in the "hive" database in MySQL. You can take a mysqldump of the hive database to protect against this in the future, with a command like: mysqldump -u root -p hive Reference: https://www.sqlshack.com/how-to-backup-and-restore-mysql-databases-using-the-mysqldump-command/
07-18-2021
08:32 AM
The criteria for a table to remain managed are as follows:
- ORC and managed table
- ACID enabled
- HDFS directory owned by Hive
According to the shouldTableBeExternal method in HiveStrictManagedMigration.java, a table is made external if it is a StorageHandler table, an Avro/Text/Parquet table, or a list-bucketed table, or if its directory is not owned by Hive.
The relevant code (lightly reformatted, with the truncated closing brace restored) is:

```java
String reason = shouldTableBeExternal(tableObj, ownerName, conf, hms, isPathOwnedByHive);
if (reason != null) {
  LOG.debug("Converting {} to external table. {}", getQualifiedName(tableObj), reason);
  result = TableMigrationOption.EXTERNAL;
} else {
  result = TableMigrationOption.MANAGED;
}
```
Reference: HiveStrictManagedMigration (GitHub)
If a table does not fit the criteria for the upgrade, it is converted to an EXTERNAL table.
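For illustration, here is a minimal, hypothetical sketch of that decision logic. The method and parameter names are assumptions chosen to mirror the snippet above; this is not Hive's actual implementation.

```java
// Hypothetical sketch of the managed-vs-external decision made during the
// strict managed-table migration. Names are illustrative, not Hive's API.
public class MigrationSketch {
    enum TableMigrationOption { MANAGED, EXTERNAL }

    // Returns a reason string when the table must become EXTERNAL, else null.
    static String shouldTableBeExternal(String format, boolean ownedByHive,
                                        boolean isStorageHandler, boolean isListBucketed) {
        if (isStorageHandler) return "StorageHandler table";
        if (format.equals("AVRO") || format.equals("TEXT") || format.equals("PARQUET"))
            return "Non-ORC format: " + format;
        if (isListBucketed) return "List-bucketed table";
        if (!ownedByHive) return "Directory not owned by hive";
        return null; // ORC, non-list-bucketed, hive-owned: may stay managed
    }

    static TableMigrationOption decide(String format, boolean ownedByHive,
                                       boolean isStorageHandler, boolean isListBucketed) {
        String reason = shouldTableBeExternal(format, ownedByHive,
                                              isStorageHandler, isListBucketed);
        return reason != null ? TableMigrationOption.EXTERNAL
                              : TableMigrationOption.MANAGED;
    }

    public static void main(String[] args) {
        // ORC table in a hive-owned directory -> stays managed
        System.out.println(decide("ORC", true, false, false));
        // Parquet table -> converted to external
        System.out.println(decide("PARQUET", true, false, false));
    }
}
```

Running the sketch prints MANAGED for the ORC case and EXTERNAL for the Parquet case, matching the criteria above.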
07-11-2021
06:52 PM
Hi @srinivasp, I believe the user is missing the SELECT and LIST privileges for the table in Ranger. Can you try to list the tables from beeline as the hive user and check whether you see the same problem?
07-05-2021
10:47 PM
By default, Hive displays timestamps in UTC. If you want a specific time zone, you can run a command like: SELECT from_utc_timestamp(cast(from_unixtime(cast(1623943533 AS bigint)) as TIMESTAMP), "Asia/Kolkata");
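The same conversion can be sanity-checked outside Hive with java.time, using the epoch value from the query above (the class name here is arbitrary):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class EpochToKolkata {
    public static void main(String[] args) {
        // 1623943533 seconds since the Unix epoch, shown in UTC and in
        // Asia/Kolkata (UTC+05:30), mirroring from_utc_timestamp in Hive.
        Instant instant = Instant.ofEpochSecond(1623943533L);
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        System.out.println(fmt.format(instant.atZone(ZoneId.of("UTC"))));          // UTC
        System.out.println(fmt.format(instant.atZone(ZoneId.of("Asia/Kolkata")))); // +05:30
    }
}
```

This prints 2021-06-17 15:25:33 (UTC) followed by 2021-06-17 20:55:33 (Asia/Kolkata).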