<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: LLAP, Livy &amp; Zeppelin not using LLAP in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/LLAP-Livy-Zeppelin-not-using-LLAP/m-p/226578#M75145</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/14767/kbadani.html" nodeid="14767"&gt;@Kshitij Badani&lt;/A&gt; Thanks so much for replying and for writing the original article. I confess I can't read the support matrix.&lt;/P&gt;&lt;P&gt;I would have thought that, since I'm using Spark 2.2 and HDP 2.6.3 (which is admittedly not on the chart), I would get the equivalent of v1.1.3-2.1. I am sure you can read and understand this table better than I can. Can you explain? I'm not questioning that you are right; I'm looking for understanding.&lt;/P&gt;</description>
    <pubDate>Thu, 01 Mar 2018 10:22:38 GMT</pubDate>
    <dc:creator>matt_andruff</dc:creator>
    <dc:date>2018-03-01T10:22:38Z</dc:date>
    <item>
      <title>LLAP, Livy &amp; Zeppelin not using LLAP</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/LLAP-Livy-Zeppelin-not-using-LLAP/m-p/226576#M75143</link>
      <description>&lt;P&gt;I am trying to get row-level security for Zeppelin.&lt;/P&gt;&lt;P&gt;I followed:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;A href="https://community.hortonworks.com/articles/110093/using-rowcolumn-level-security-of-spark-with-zeppe.html" target="_blank"&gt;https://community.hortonworks.com/articles/110093/using-rowcolumn-level-security-of-spark-with-zeppe.html&lt;/A&gt; &lt;/LI&gt;&lt;LI&gt;&lt;A href="https://community.hortonworks.com/content/kbentry/101181/rowcolumn-level-security-in-sql-for-apache-spark-2.html" target="_blank"&gt;https://community.hortonworks.com/content/kbentry/101181/rowcolumn-level-security-in-sql-for-apache-spark-2.html&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://community.hortonworks.com/questions/132769/problem-with-zeppelin-sparklivy-and-llap-in-kerber.html" target="_blank"&gt;https://community.hortonworks.com/questions/132769/problem-with-zeppelin-sparklivy-and-llap-in-kerber.html&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;Huge thanks to &lt;A rel="user" href="https://community.cloudera.com/users/15131/dhyun.html" nodeid="15131" target="_blank"&gt;@Dongjoon Hyun&lt;/A&gt;, &lt;A rel="user" href="https://community.cloudera.com/users/14767/kbadani.html" nodeid="14767" target="_blank"&gt;@Kshitij Badani&lt;/A&gt;, and &lt;A rel="user" href="https://community.cloudera.com/users/13196/berryosterlund.html" nodeid="13196" target="_blank"&gt;@Berry Österlund&lt;/A&gt; for their work in this area.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;In Hive, I have set "Run as end user instead of Hive user" to 'false'.&lt;/P&gt;&lt;P&gt;I am running a simple test in Zeppelin:&lt;/P&gt;&lt;PRE&gt;%livy2.spark

val wordsCounts = spark.sparkContext.parallelize(Seq(("a",1),("b",2))).toDF
wordsCounts.write.saveAsTable("ZeppelinTest")
&lt;/PRE&gt;&lt;P&gt;I am now getting an error:&lt;/P&gt;&lt;PRE&gt;org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.security.AccessControlException: Permission denied: user=ingest, access=READ, inode="/apps/hive/warehouse":hive:hadoop:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:353)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:252)
	at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:428)
	at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:304)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1956)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1940)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1914)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAccess(FSNamesystem.java:8792)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkAccess(NameNodeRpcServer.java:2089)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.checkAccess(ClientNamenodeProtocolServerSideTranslatorPB.java:1466)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
);
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
  at org.apache.spark.sql.hive.HiveExternalCatalog.getDatabase(HiveExternalCatalog.scala:189)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getDatabaseMetadata(SessionCatalog.scala:241)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.defaultTablePath(SessionCatalog.scala:443)
  at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:154)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
  at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
  at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
  at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
  at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:609)
  at org.apache.spark.sql.DataFrameWriter.createTable(DataFrameWriter.scala:419)
  at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:398)
  at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:354)
  ... 50 elided
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.security.AccessControlException: Permission denied: user=ingest, access=READ, inode="/apps/hive/warehouse":hive:hadoop:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:353)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:252)
	at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:428)
	at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:304)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1956)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1940)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1914)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAccess(FSNamesystem.java:8792)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkAccess(NameNodeRpcServer.java:2089)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.checkAccess(ClientNamenodeProtocolServerSideTranslatorPB.java:1466)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
)
  at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1305)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getDatabase$1.apply(HiveClientImpl.scala:349)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getDatabase$1.apply(HiveClientImpl.scala:355)
  at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:291)
  at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:232)
  at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:231)
  at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:274)
  at org.apache.spark.sql.hive.client.HiveClientImpl.getDatabase(HiveClientImpl.scala:348)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getDatabase$1.apply(HiveExternalCatalog.scala:190)
  at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getDatabase$1.apply(HiveExternalCatalog.scala:190)
  at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
  ... 69 more
Caused by: org.apache.hadoop.hive.metastore.api.MetaException: java.security.AccessControlException: Permission denied: user=edh_Ingest, access=READ, inode="/apps/hive/warehouse":hive:hadoop:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:353)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:252)
	at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:428)
	at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:304)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1956)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1940)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1914)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAccess(FSNamesystem.java:8792)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkAccess(NameNodeRpcServer.java:2089)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.checkAccess(ClientNamenodeProtocolServerSideTranslatorPB.java:1466)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2347)
  at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_database_result$get_database_resultStandardScheme.read(ThriftHiveMetastore.java:15345)
  at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_database_result$get_database_resultStandardScheme.read(ThriftHiveMetastore.java:15313)
  at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_database_result.read(ThriftHiveMetastore.java:15244)
  at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
  at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_database(ThriftHiveMetastore.java:654)
  at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_database(ThriftHiveMetastore.java:641)
  at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabase(HiveMetaStoreClient.java:1158)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
  at com.sun.proxy.$Proxy35.getDatabase(Unknown Source)
  at org.apache.hadoop.hive.ql.metadata.Hive.getDatabase(Hive.java:1301)
  ... 79 more
&lt;/PRE&gt;&lt;P&gt;My Livy interpreter settings:&lt;/P&gt;&lt;PRE&gt;livy.spark.hadoop.hive.llap.daemon.serivice.hosts 	@llap0
livy.spark.jars 	/user/zeppelin/lib/spark-llap-assembly-1.0.0.2.6.3.0-235.jar
livy.spark.jars.packages 	
livy.spark.sql.hive.hiveserver2.jdbc.url 	jdbc:hive2://hive.local:10500/
livy.spark.sql.hive.hiveserver2.jdbc.url.principal 	hive/_HOST@SOMETHING.LOCAL
livy.spark.sql.hive.llap 	true
livy.spark.yarn.security.credentials.hiveserver2.enabled 	true
zeppelin.interpreter.localRepo 	/usr/hdp/current/zeppelin-server/local-repo/2C8A4SZ9T_livy2
zeppelin.interpreter.output.limit 	102400
zeppelin.livy.concurrentSQL 	false
zeppelin.livy.displayAppInfo 	true
zeppelin.livy.keytab 	/etc/security/keytabs/zeppelin.server.kerberos.keytab
zeppelin.livy.principal 	zeppelin@SOMETHING.LOCAL
zeppelin.livy.pull_status.interval.millis 	1000
zeppelin.livy.session.create_timeout 	120
zeppelin.livy.spark.sql.maxResult 	1000
zeppelin.livy.url 	&lt;A href="http://livy.local:8999" target="_blank"&gt;http://livy.local:8999&lt;/A&gt; 
&lt;/PRE&gt;&lt;P&gt;Versions:&lt;/P&gt;&lt;P&gt;Spark2 2.2.0&lt;/P&gt;&lt;P&gt;Zeppelin Notebook 0.7.3&lt;/P&gt;&lt;P&gt;Hive 1.2.1000&lt;/P&gt;&lt;P&gt;HDP 2.6.3&lt;/P&gt;&lt;P&gt;FYI, again: I have set "Run as end user instead of Hive user" to 'false'.&lt;/P&gt;&lt;P&gt;Any ideas or thoughts would be appreciated.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 12:55:04 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/LLAP-Livy-Zeppelin-not-using-LLAP/m-p/226576#M75143</guid>
      <dc:creator>matt_andruff</dc:creator>
      <dc:date>2022-09-16T12:55:04Z</dc:date>
    </item>
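The stack trace in the question above reduces to a plain permission check: user 'ingest' asking for READ on /apps/hive/warehouse, which is owned hive:hadoop with mode drwxrwx---. A minimal sketch of that check, assuming ordinary POSIX semantics (the fallback the NameNode's FSPermissionChecker applies when no Ranger policy grants access); names are copied from the error message, and this is an illustration, not the actual NameNode code:

```python
# Sketch of a POSIX owner/group/other read check, as applied to the
# path and identities from the AccessControlException above.
def can_read(user, user_groups, owner, group, mode):
    """mode is an ls-style string such as 'drwxrwx---'."""
    bits = mode[1:]           # strip the file-type character ('d')
    if user == owner:
        return bits[0] == "r"
    if group in user_groups:
        return bits[3] == "r"
    return bits[6] == "r"     # the 'other' bits

# 'ingest' is neither the owner 'hive' nor in group 'hadoop', so the
# 'other' bits ('---') apply and READ on /apps/hive/warehouse is denied.
print(can_read("ingest", {"ingest"}, "hive", "hadoop", "drwxrwx---"))  # False
print(can_read("hive", {"hadoop"}, "hive", "hadoop", "drwxrwx---"))    # True
```

In other words, making this call succeed as the end user would require either a Ranger HDFS policy or an HDFS ACL granting the Zeppelin login user access to the warehouse path, or routing the write through a component that executes as 'hive'.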
    <item>
      <title>Re: LLAP, Livy &amp; Zeppelin not using LLAP</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/LLAP-Livy-Zeppelin-not-using-LLAP/m-p/226577#M75144</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/12415/mattandruff.html" nodeid="12415"&gt;@Matt Andruff&lt;/A&gt; The operation you are trying to do is basically to save a temporary Spark table into Hive via Livy (i.e., a Spark app). If you check the second table in this support matrix, this is not a supported operation via the spark-llap connector:&lt;/P&gt;&lt;P&gt;&lt;A href="https://github.com/hortonworks-spark/spark-llap/wiki/7.-Support-Matrix#spark-shells-and-spark-apps" target="_blank"&gt;https://github.com/hortonworks-spark/spark-llap/wiki/7.-Support-Matrix#spark-shells-and-spark-apps&lt;/A&gt;&lt;/P&gt;&lt;P&gt;But such operations (i.e., creating a table) should be supported by the jdbc(spark1) interpreter, as mentioned in table 1 at the same link. jdbc(spark1) directs the query through the Spark Thrift Server, which runs as the 'hive' principal, as mentioned in the same wiki.&lt;/P&gt;&lt;P&gt;If you nevertheless want the above operation to succeed, then the logged-in user in Zeppelin should have proper authorizations on the Hive warehouse directory. Only then will Spark be able to save the table in the Hive warehouse for you.&lt;/P&gt;&lt;P&gt;Hope that helps&lt;/P&gt;</description>
      <pubDate>Thu, 01 Mar 2018 05:27:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/LLAP-Livy-Zeppelin-not-using-LLAP/m-p/226577#M75144</guid>
      <dc:creator>kbadani</dc:creator>
      <dc:date>2018-03-01T05:27:12Z</dc:date>
    </item>
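On the "via Livy (i.e., a Spark app)" point in the reply above: a %livy2 paragraph is submitted through Livy's REST API, and Zeppelin's Livy interpreter forwards its livy.spark.* settings as spark.* conf on the session. A hedged sketch of roughly what that session-creation payload looks like; the host, proxy user, and exact prefix mapping are assumptions, while the property values are copied verbatim from the settings in the question, including the 'serivice' spelling as pasted:

```python
import json

# Note the copied key 'serivice': if that spelling is really stored in the
# interpreter config (rather than a transcription slip), the LLAP daemon
# hosts would never be picked up, because the expected Hive property name
# ends in '.service.hosts'.
conf = {
    "spark.hadoop.hive.llap.daemon.serivice.hosts": "@llap0",
    "spark.jars": "/user/zeppelin/lib/spark-llap-assembly-1.0.0.2.6.3.0-235.jar",
    "spark.sql.hive.hiveserver2.jdbc.url": "jdbc:hive2://hive.local:10500/",
    "spark.sql.hive.hiveserver2.jdbc.url.principal": "hive/_HOST@SOMETHING.LOCAL",
    "spark.sql.hive.llap": "true",
    "spark.yarn.security.credentials.hiveserver2.enabled": "true",
}

# Hypothetical body for POST http://livy.local:8999/sessions; 'kind',
# 'proxyUser', and 'conf' are standard Livy session fields.
payload = {"kind": "spark", "proxyUser": "ingest", "conf": conf}
print(json.dumps(payload, indent=2))
```

If 'serivice' is indeed the stored value, correcting the setting name to livy.spark.hadoop.hive.llap.daemon.service.hosts would be the first thing to try.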
    <item>
      <title>Re: LLAP, Livy &amp; Zeppelin not using LLAP</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/LLAP-Livy-Zeppelin-not-using-LLAP/m-p/226578#M75145</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/14767/kbadani.html" nodeid="14767"&gt;@Kshitij Badani&lt;/A&gt; Thanks so much for replying and for writing the original article. I confess I can't read the support matrix.&lt;/P&gt;&lt;P&gt;I would have thought that, since I'm using Spark 2.2 and HDP 2.6.3 (which is admittedly not on the chart), I would get the equivalent of v1.1.3-2.1. I am sure you can read and understand this table better than I can. Can you explain? I'm not questioning that you are right; I'm looking for understanding.&lt;/P&gt;</description>
      <pubDate>Thu, 01 Mar 2018 10:22:38 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/LLAP-Livy-Zeppelin-not-using-LLAP/m-p/226578#M75145</guid>
      <dc:creator>matt_andruff</dc:creator>
      <dc:date>2018-03-01T10:22:38Z</dc:date>
    </item>
    <item>
      <title>Re: LLAP, Livy &amp; Zeppelin not using LLAP</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/LLAP-Livy-Zeppelin-not-using-LLAP/m-p/226579#M75146</link>
      <description>&lt;P&gt;Could I use the 1.1.3-2.1 jar in Livy to get the feature I require?&lt;/P&gt;</description>
      <pubDate>Thu, 01 Mar 2018 10:53:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/LLAP-Livy-Zeppelin-not-using-LLAP/m-p/226579#M75146</guid>
      <dc:creator>matt_andruff</dc:creator>
      <dc:date>2018-03-01T10:53:14Z</dc:date>
    </item>
    <item>
      <title>Re: LLAP, Livy &amp; Zeppelin not using LLAP</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/LLAP-Livy-Zeppelin-not-using-LLAP/m-p/226580#M75147</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/14767/kbadani.html" nodeid="14767"&gt;@Kshitij Badani&lt;/A&gt;&lt;P&gt;How do we get full write access to LLAP in HDP 2.6.3? I'm happy to do the work to make this work; otherwise I'll have to tell my client to downgrade back to 2.6.2. I'd prefer not to do that.&lt;/P&gt;</description>
      <pubDate>Thu, 15 Mar 2018 00:15:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/LLAP-Livy-Zeppelin-not-using-LLAP/m-p/226580#M75147</guid>
      <dc:creator>matt_andruff</dc:creator>
      <dc:date>2018-03-15T00:15:29Z</dc:date>
    </item>
  </channel>
</rss>

