<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question [CDH 5.7] Can't create table in Hive compatible way from Spark DataFrame with sqlContext in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-7-Can-t-create-table-in-Hive-compatible-way-from-Spark/m-p/48438#M48592</link>
    <description>Archived question from the Cloudera Community (read only): on CDH 5.7 with Kerberos, Sentry, Hive and Spark, saving a DataFrame with saveAsTable cannot persist the table in a Hive-compatible way (MetaException: User does not have privileges for CREATETABLE), so the table is stored in a Spark SQL specific format that Beeline and Hue cannot read. The full question and replies follow as items.</description>
    <pubDate>Fri, 16 Sep 2022 10:50:44 GMT</pubDate>
    <dc:creator>Isegrim</dc:creator>
    <dc:date>2022-09-16T10:50:44Z</dc:date>
    <item>
      <title>[CDH 5.7] Can't create table in Hive compatible way from Spark DataFrame with sqlContext</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-7-Can-t-create-table-in-Hive-compatible-way-from-Spark/m-p/48438#M48592</link>
      <description>&lt;P&gt;Hi&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have CDH 5.7 with Kerberos, Sentry, Hive and Spark.&lt;/P&gt;&lt;P&gt;I tried to create a table in Hive from a DataFrame in Spark. The table was created, but nothing except sqlContext can read it back.&lt;/P&gt;&lt;P&gt;During creation I get this warning:&lt;/P&gt;&lt;PRE&gt;scala&amp;gt; val df = sqlContext.sql("SELECT * FROM myschema.mytab")
df: org.apache.spark.sql.DataFrame = [browserid: int, browser: string]

scala&amp;gt; df.write.format("parquet").saveAsTable("myschema.mytab_v2")
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/jars/hive-exec-1.1.0-cdh5.7.0.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/jars/hive-jdbc-1.1.0-cdh5.7.0-standalone.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/jars/parquet-format-2.1.0-cdh5.7.0.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/jars/parquet-hadoop-bundle-1.5.0-cdh5.7.0.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p0.45/jars/parquet-pig-bundle-1.5.0-cdh5.7.0.jar!/shaded/parquet/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [shaded.parquet.org.slf4j.helpers.NOPLoggerFactory]
&lt;STRONG&gt;16/12/12 16:40:28 WARN hive.HiveContext$$anon$2: Could not persist `myschema`.`mytab_v2` in a Hive compatible way. Persisting it into Hive metastore in Spark SQL specific format.
org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:User isegrim does not have privileges for CREATETABLE)&lt;/STRONG&gt;
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:759)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:716)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$createTable$1.apply$mcV$sp(ClientWrapper.scala:415)

...

at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
&lt;STRONG&gt;Caused by: MetaException(message:User isegrim does not have privileges for CREATETABLE)&lt;/STRONG&gt;
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_with_environment_context_result$create_table_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:29992)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_with_environment_context_result$create_table_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:29960)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_table_with_environment_context_result.read(ThriftHiveMetastore.java:29886)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)

...&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The table is written to the Hive metastore and can be read back by Spark through sqlContext:&lt;/P&gt;&lt;PRE&gt;scala&amp;gt; sqlContext.sql("select count(1) from myschema.mytable_v2").show(200, false)
+---+
|_c0|
+---+
|107|
+---+&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But it cannot be read from Beeline or Hue:&lt;/P&gt;&lt;PRE&gt;0: jdbc:hive2://myhs2:10&amp;gt; select count(1) from myschema.mytable_v2;
...
Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask (state=08S01,code=2)&lt;/PRE&gt;&lt;P&gt;The table schema contains some Spark-specific properties:&lt;/P&gt;&lt;PRE&gt;SHOW CREATE TABLE myschema.mytable_v2;
| CREATE TABLE `mytable_v2`(
| `col` array&amp;lt;string&amp;gt; COMMENT 'from deserializer')
| ROW FORMAT SERDE
| 'org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe'
| WITH SERDEPROPERTIES (
| 'path'='hdfs://myns/user/hive/warehouse/myschema.db/mytable_v2')
| STORED AS INPUTFORMAT
| 'org.apache.hadoop.mapred.SequenceFileInputFormat'
| OUTPUTFORMAT
| 'org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat'
| LOCATION
| 'hdfs://myns/user/hive/warehouse/myschema.db/mytable_v2'
| TBLPROPERTIES (
| 'COLUMN_STATS_ACCURATE'='false',
| 'EXTERNAL'='FALSE',
| 'numFiles'='2',
| 'numRows'='-1',
| 'rawDataSize'='-1',
| 'spark.sql.sources.provider'='parquet',
| 'spark.sql.sources.schema.numParts'='1',
| 'spark.sql.sources.schema.part.0'='{\"type\":\"struct\",\"fields\":[{\"name\":\"myfield\"}]}',
| 'totalSize'='2300',
| 'transient_lastDdlTime'='1481557228')&lt;/PRE&gt;&lt;P&gt;Even when I set Spark as the execution engine for Hive, it still fails:&lt;/P&gt;&lt;PRE&gt;0: jdbc:hive2://myhs2:10&amp;gt; set hive.execution.engine=spark;

0: jdbc:hive2://myhs2:10&amp;gt; select count(1) from myschema.mytable_v2;

...

Status: Running (Hive on Spark job[0])
INFO : Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
INFO : 2016-12-12 18:00:43,189 Stage-0_0: 0(+2)/2 Stage-1_0: 0/1
INFO : 2016-12-12 18:00:45,205 Stage-0_0: 0(+2,-4)/2 Stage-1_0: 0/1
ERROR : Status: Failed
ERROR : FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
INFO : Completed executing command(queryId=hive_20161212180000_4d80266f-15f5-4fe2-b044-c9324fd75ba6); Time taken: 25.74 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask (state=08S01,code=3)&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;User isegrim is in the LDAP group mygroup:&lt;/P&gt;&lt;PRE&gt;# id isegrim
uid=1001(isegrim) gid=501(mygroup) groups=501(mygroup)&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Sentry has a role mygroup, to which the group mygroup is attached:&lt;/P&gt;&lt;PRE&gt;0: jdbc:hive2://myhs2:10&amp;gt; show role grant group mygroup;

INFO : OK
+-------+---------------+-------------+----------+--+
| role | grant_option | grant_time | grantor |
+-------+---------------+-------------+----------+--+
| mygroup | false | NULL | -- |
+-------+---------------+-------------+----------+--+&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;and the role mygroup has been granted the ALL privilege on myschema:&lt;/P&gt;&lt;PRE&gt;0: jdbc:hive2://myhs2:10&amp;gt; show grant role mygroup;

INFO : OK
+-------------------------------------------+--------+------------+---------+-----------------+-----------------+------------+---------------+-------------------+----------+--+
| database | table | partition | column | principal_name | principal_type | privilege | grant_option | grant_time | grantor |
+-------------------------------------------+--------+------------+---------+-----------------+-----------------+------------+---------------+-------------------+----------+--+
| default | | | | mygroup | ROLE | select | false | 1473364008245000 | -- |
| myschema | | | | mygroup | ROLE | * | false | 1473364207568000 | -- |
+-------------------------------------------+--------+------------+---------+-----------------+-----------------+------------+---------------+-------------------+----------+--+&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I suppose everything would be fine if, as the warning says, I could persist `myschema`.`mytable_v2` in a Hive-compatible way instead of persisting it into the Hive metastore in a Spark SQL specific format.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What should I do to store tables in a Hive-compatible way from a Spark DataFrame?&lt;BR /&gt;And why does Sentry deny CREATETABLE for the Hive-compatible path, yet allow it for the Spark SQL specific format?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 10:50:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-7-Can-t-create-table-in-Hive-compatible-way-from-Spark/m-p/48438#M48592</guid>
      <dc:creator>Isegrim</dc:creator>
      <dc:date>2022-09-16T10:50:44Z</dc:date>
    </item>
    <item>
      <title>Re: [CDH 5.7] Can't create table in Hive compatible way from Spark DataFrame with sqlContext</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-7-Can-t-create-table-in-Hive-compatible-way-from-Spark/m-p/48446#M48593</link>
      <description>&lt;P&gt;As usual, the solution was somewhere out there in the depths of the Internet &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="http://stackoverflow.com/questions/37393017/convert-dataframe-to-hive-table-in-spark-scala" target="_blank"&gt;http://stackoverflow.com/questions/37393017/convert-dataframe-to-hive-table-in-spark-scala&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="http://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_spark_ki.html" target="_blank"&gt;http://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_spark_ki.html&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The proposed workaround worked like a charm for me.&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P class="p"&gt;&lt;STRONG&gt;Workaround:&lt;/STRONG&gt; Explicitly create a Hive table to store the data. For example:&lt;/P&gt;&lt;PRE&gt;df.registerTempTable(tempName)
hsc.sql(s"""
CREATE TABLE $tableName (
// field definitions   )
STORED AS $format """)
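// A filled-in sketch of the placeholders above, using the column names from
// the DataFrame earlier in this thread (browserid: int, browser: string).
// The temp-table name and format are hypothetical examples; this builds the
// SQL strings only, so the Hive-compatible DDL shape is visible:

```scala
// Hypothetical concrete values for the placeholders used above.
val tableName = "myschema.mytab_v2"
val tempName  = "mytab_tmp"
val format    = "PARQUET"

// Explicit Hive DDL: a plain CREATE TABLE ... STORED AS, with the column
// list written out instead of letting saveAsTable pick a Spark-only format.
val createDdl =
  s"""CREATE TABLE $tableName (
     |  browserid INT,
     |  browser STRING
     |) STORED AS $format""".stripMargin

val insertDml = s"INSERT INTO TABLE $tableName SELECT * FROM $tempName"

// With a HiveContext `hsc` the calls would then be:
//   df.registerTempTable(tempName)
//   hsc.sql(createDdl)
//   hsc.sql(insertDml)
println(createDdl)
println(insertDml)
```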
hsc.sql(s"INSERT INTO TABLE $tableName SELECT * FROM $tempName")&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 12 Dec 2016 22:31:06 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-7-Can-t-create-table-in-Hive-compatible-way-from-Spark/m-p/48446#M48593</guid>
      <dc:creator>Isegrim</dc:creator>
      <dc:date>2016-12-12T22:31:06Z</dc:date>
    </item>
    <item>
      <title>Re: [CDH 5.7] Can't create table in Hive compatible way from Spark DataFrame with sqlContext</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-7-Can-t-create-table-in-Hive-compatible-way-from-Spark/m-p/49234#M48594</link>
      <description>&lt;P&gt;I have exactly the same problem, and the environment is the same as well.&lt;/P&gt;&lt;P&gt;However, I found that if you perform the procedure as a user who has the ALL privilege on the Hive server object (e.g. "server1"), it completes successfully, and you can read the table via both Hive and Spark.&lt;/P&gt;&lt;P&gt;The log says "message:User xx does not have privileges for CREATETABLE", so I think it has something to do with Sentry. But I'm stuck; does anyone have a clue?&lt;/P&gt;</description>
      <pubDate>Tue, 10 Jan 2017 06:41:16 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-7-Can-t-create-table-in-Hive-compatible-way-from-Spark/m-p/49234#M48594</guid>
      <dc:creator>rickmorty</dc:creator>
      <dc:date>2017-01-10T06:41:16Z</dc:date>
    </item>
    <item>
      <title>Re: [CDH 5.7] Can't create table in Hive compatible way from Spark DataFrame with sqlContext</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-7-Can-t-create-table-in-Hive-compatible-way-from-Spark/m-p/49572#M48595</link>
      <description>&lt;P&gt;&lt;SPAN&gt;With a `SparkSession`, applications can create DataFrames from an existing `RDD`, from a Hive table, or from Spark data sources. As an example, the following creates a DataFrame based on the content of a JSON file (see the create_df example in scala/org/apache/spark/examples/sql/SparkSQLExample.scala in the Spark source).&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 18 Jan 2017 01:36:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/CDH-5-7-Can-t-create-table-in-Hive-compatible-way-from-Spark/m-p/49572#M48595</guid>
      <dc:creator>ZachRoes</dc:creator>
      <dc:date>2017-01-18T01:36:35Z</dc:date>
    </item>
  </channel>
</rss>

