Member since: 07-06-2020
Posts: 18
Kudos Received: 0
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 25912 | 10-20-2020 07:39 AM |
| | 2772 | 07-06-2020 01:51 AM |
10-29-2020
03:34 AM
@aakulov Thank you for the response. This works.
10-20-2020
07:39 AM
@Rup you may need to consider adding the properties below to your workflow.xml and passing cred="hcat_auth" in the action tag.
<credentials>
<credential name="hcat_auth" type="hcat">
<property>
<name>hcat.metastore.uri</name>
<value>${hcat_metastore_uri}</value>
</property>
<property>
<name>hcat.metastore.principal</name>
<value>${hcat_metastore_principal}</value>
</property>
</credential>
</credentials>
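For reference, a minimal sketch of an action referencing this credential (the action name, transitions, schema version, and script here are hypothetical examples, not taken from an actual workflow):
<action name="hive-example" cred="hcat_auth"> <!-- hypothetical action name; cred points at the credential defined above -->
    <hive xmlns="uri:oozie:hive-action:0.5">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <script>example.hql</script> <!-- hypothetical script -->
    </hive>
    <ok to="end"/>
    <error to="fail"/>
</action>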
10-20-2020
07:34 AM
Hi Team,
We are using HDP 2.6.3 and Ambari 2.6.0 and are facing an issue when using Sqoop import to MS Parallel Data Warehouse. As per https://community.cloudera.com/t5/Community-Articles/Sqoop-to-MS-PDW-Parallel-Data-Warehouse-issue/tac-p/304591 the issue is fixed in HDP 2.5.5 and above, but we are still seeing the same issue in HDP 2.6.3. Do we have any workaround for this?
Thanks in Advance!
10-19-2020
01:34 PM
@akapratwar @VidyaSargur we are using HDP 2.6.3 but are still seeing the same issue. Do we have a workaround for this? Sqoop command arguments:
import
-Dmapreduce.job.user.classpath.first=true
--connect jdbc:sqlserver://<host>:<port>;databaseName=<db_name>
--username <username>
--password <password>
--driver com.microsoft.sqlserver.jdbc.SQLServerDriver
--table <tablename>
--num-mappers 10
--compress
--compression-codec org.apache.hadoop.io.compress.SnappyCodec
--delete-target-dir
--target-dir <hdfs_path>
--as-avrodatafile
08-20-2020
03:17 AM
@Prakashcit Thank you for the update. We are managing permissions through Ranger.
08-15-2020
02:42 AM
Hi Team, Is there a way to restrict YARN queue access when hive.server2.enable.doAs is set to false? The Ranger YARN plugin has been enabled. When a query is submitted by an individual user, it is submitted as the hive user, which is expected. I have added the hive user to a deny condition for a specific queue, but the hive user is still able to submit jobs to that queue. I want only a few users to be able to submit jobs to that queue.
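For context, native queue submit access is controlled through ACLs in capacity-scheduler.xml; below is a minimal sketch (the queue name and user list are hypothetical, and yarn.acl.enable must be true). With the Ranger YARN plugin enabled, the Ranger policies and these native ACLs may both need to restrict access for a deny to take effect, so this is only one piece of the picture.
<!-- hypothetical queue "analytics"; ACL value is a space-separated "users groups" pair -->
<property>
    <name>yarn.scheduler.capacity.root.analytics.acl_submit_applications</name>
    <value>alice,bob </value> <!-- only these users may submit; trailing space means no groups -->
</property>
<property>
    <name>yarn.scheduler.capacity.root.analytics.acl_administer_queue</name>
    <value>yarn </value>
</property>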
07-29-2020
10:38 AM
Hello @Bender Thanks a lot for your response. Can you please also provide some insight on the timeout parameters? Can they be increased, and will there be any extra load on Knox if we increase the timeouts and maxConnections?
07-29-2020
03:28 AM
Hi Team, We are hitting bug BUG-77340, where HS2 failover requires a Knox restart if cookie use is enabled for HS2. We checked https://my.cloudera.com/knowledge/quotError-opening?id=273455, which suggests setting hive.server2.thrift.http.cookie.auth.enabled to false. We would like to know if there is any known performance impact of setting this parameter to false. Also, we have the parameters below set to 600s, but we are still seeing HTTP 500 errors from the server.
gateway.httpclient.connectionTimeout 600s
gateway.httpclient.socketTimeout 600s
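For reference, a minimal sketch of how these entries look when set in Knox's gateway-site.xml (the maxConnections entry is included only as a hypothetical example, not a value we currently set):
<property>
    <name>gateway.httpclient.connectionTimeout</name>
    <value>600s</value>
</property>
<property>
    <name>gateway.httpclient.socketTimeout</name>
    <value>600s</value>
</property>
<!-- hypothetical value, shown only to illustrate where maxConnections would be set -->
<property>
    <name>gateway.httpclient.maxConnections</name>
    <value>32</value>
</property>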
07-06-2020
01:51 AM
This seems to be related to https://issues.apache.org/jira/browse/SPARK-21447
07-06-2020
01:28 AM
History Server log:
ERROR FsHistoryProvider: Exception encountered when attempting to load application log hdfs:///spark2-history/application_1574201808558_1228909.lz4
java.io.EOFException: Stream ended prematurely
at org.apache.spark.io.LZ4BlockInputStream.readFully(LZ4BlockInputStream.java:230)
at org.apache.spark.io.LZ4BlockInputStream.refill(LZ4BlockInputStream.java:203)
at org.apache.spark.io.LZ4BlockInputStream.read(LZ4BlockInputStream.java:125)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.BufferedReader.fill(BufferedReader.java:161)
at java.io.BufferedReader.readLine(BufferedReader.java:324)
at java.io.BufferedReader.readLine(BufferedReader.java:389)
at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:72)
at scala.collection.Iterator$$anon$21.hasNext(Iterator.scala:836)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:461)
at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:78)
at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:58)