Member since: 12-30-2015
Posts: 164
Kudos Received: 29
Solutions: 10
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 12190 | 01-07-2019 06:17 AM
 | 595 | 12-27-2018 07:28 AM
 | 2339 | 11-26-2018 10:12 AM
 | 553 | 11-16-2018 12:15 PM
 | 2233 | 10-22-2018 09:31 AM
02-02-2022
08:15 AM
Spark and Hive use separate catalogs to access SparkSQL or Hive tables in HDP 3.0 and later. The Spark catalog contains tables created by Spark; the Hive catalog contains tables created by Hive. By default, the standard Spark APIs access tables in the Spark catalog. To access tables in the Hive catalog, edit the metastore.catalog.default property in hive-site.xml, setting its value to 'hive' instead of 'spark'.

Config file path: $SPARK_HOME/conf/hive-site.xml

Before the change:
<property>
  <name>metastore.catalog.default</name>
  <value>spark</value>
</property>

After the change:
<property>
  <name>metastore.catalog.default</name>
  <value>hive</value>
</property>
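Once the property is changed, a quick way to confirm Spark now sees the Hive catalog is to list databases from the command line. A minimal sketch, assuming spark-sql is on the PATH; the database and table names (mydb, mydb.mytable) are placeholders:

# List databases as Spark sees them; Hive-created databases should now appear
spark-sql -e "SHOW DATABASES;"
# Query a Hive-created table (mydb.mytable is hypothetical)
spark-sql -e "SELECT COUNT(*) FROM mydb.mytable;"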
12-24-2021
06:40 AM
Try cleaning your metadata with the ./hbase-cleanup.sh --cleanAll command and restarting your services. If you get "Regionservers are not expired. Exiting without cleaning hbase data", stop the HBase service before running the command.
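A sketch of the full sequence, assuming an HDP-style layout where the script lives under /usr/hdp/current/hbase-client/bin (adjust the path to your install). Note that --cleanAll removes HBase data from both HDFS and ZooKeeper, so use it with care:

# Stop HBase first (e.g. via Ambari), then:
cd /usr/hdp/current/hbase-client/bin
./hbase-cleanup.sh --cleanAll
# Restart HBase and any dependent services afterwards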
04-07-2021
03:02 AM
Huge thanks. It works for me.
02-17-2021
08:52 PM
Hi @Narendra_, as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
10-07-2020
04:28 AM
It's a service that's running, not a job, so avoid killing it.
10-01-2020
08:29 AM
Just to clarify: we had this same issue and resolved it by recursively setting the correct permissions on the folders below /tmp/hive in HDFS (not the filesystem of your Hive Server) and then restarting the Hive services in Ambari. Generally, the folders below /tmp/hive/ required 700, apart from "_resultscache_", which needed 733.
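A rough sketch of those permission changes, run as a user with HDFS superuser rights; the 700/733 split follows what worked in our case, so verify it against your own directory layout first:

# Recursively restrict the Hive scratch dirs in HDFS
hdfs dfs -chmod -R 700 /tmp/hive
# The results cache needs to be more permissive
hdfs dfs -chmod -R 733 /tmp/hive/_resultscache_
# Then restart the Hive services from Ambari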
05-14-2020
10:45 PM
Did you resolve the issue? What steps did you follow? Please help me with the steps.
10-09-2019
09:28 AM
Please find the error logs below:

19/10/09 16:09:32 DEBUG ServletHandler: chain=org.apache.hadoop.security.authentication.server.AuthenticationFilter-418c020b->org.apache.spark.ui.JettyUtils$$anon$3-75e710b@986efce7==org.apache.spark.ui.JettyUtils$$anon$3,jsp=null,order=-1,inst=true
19/10/09 16:09:32 DEBUG ServletHandler: call filter org.apache.hadoop.security.authentication.server.AuthenticationFilter-418c020b
19/10/09 16:09:32 DEBUG AuthenticationFilter: Got token null from httpRequest http://ip-10-0-10.184.************:18081/
19/10/09 16:09:32 DEBUG AuthenticationFilter: Request [http://ip-10-0-10-184.*****:18081/] triggering authentication. handler: class org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler
19/10/09 16:09:32 DEBUG AuthenticationFilter: Authentication exception: java.lang.IllegalArgumentException
org.apache.hadoop.security.authentication.client.AuthenticationException: java.lang.IllegalArgumentException
    at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:306)
    at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:536)
    at org.spark_project.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
    at org.spark_project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.spark_project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.spark_project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.spark_project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.spark_project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.spark_project.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:493)
    at org.spark_project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
    at org.spark_project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.spark_project.jetty.server.Server.handle(Server.java:539)
    at org.spark_project.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.spark_project.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.spark_project.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.spark_project.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.spark_project.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.spark_project.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.spark_project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.spark_project.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException
    at java.nio.Buffer.limit(Buffer.java:275)
    at org.apache.hadoop.security.authentication.util.KerberosUtil$DER.<init>(KerberosUtil.java:365)
    at org.apache.hadoop.security.authentication.util.KerberosUtil$DER.<init>(KerberosUtil.java:358)
    at org.apache.hadoop.security.authentication.util.KerberosUtil.getTokenServerName(KerberosUtil.java:291)
    at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:285)
    ... 22 more
19/10/09 16:09:32 DEBUG GzipHttpOutputInterceptor: org.spark_project.jetty.server.handler.gzip.GzipHttpOutputInterceptor@17d4d832 exclude by status 403
19/10/09 16:09:32 DEBUG HttpChannel: sendResponse info=null content=HeapByteBuffer@26ea8849[p=0,l=365,c=32768,r=365]={<<<<html>\n<head>\n<me.../body>\n</html>\n>>>\x00...\x00} complete=true committing=true callback=Blocker@137652aa{null}
19/10/09 16:09:32 DEBUG HttpChannel: COMMIT for / on HttpChannelOverHttp@4d71d816{r=2,c=true,a=DISPATCHED,uri=//ip-10-0-10-184.******:18081/}
403 java.lang.IllegalArgumentException HTTP/1.1
Date: Wed, 09 Oct 2019 16:09:32 GMT
Set-Cookie: hadoop.auth=; HttpOnly
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html;charset=iso-8859-1
06-19-2019
09:40 PM
@subhash parise Here is the reason why you needed to add the "sslfactory=org.postgresql.ssl.NonValidatingFactory" property:

Using SSL without Certificate Validation
====================================
In some situations it may not be possible to configure your Java environment to make the server certificate available, for example in an applet. For a large-scale deployment it would be best to get a certificate signed by a recognized certificate authority, but that is not always an option. The JDBC driver provides an option to establish an SSL connection without doing any validation, but please understand the risk involved before enabling this option. A non-validating connection is established via a custom SSLSocketFactory class that is provided with the driver. Setting the connection URL parameter sslfactory=org.postgresql.ssl.NonValidatingFactory will turn off all SSL validation.

If you do not want to use "sslfactory=org.postgresql.ssl.NonValidatingFactory" to turn off all SSL validation, then you might have to do the following:
1. Create a truststore in Ambari.
2. Import your Postgres certificate into the Ambari truststore.
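For reference, a non-validating connection URL would look something like the following; the host, port, and database name are placeholders:

jdbc:postgresql://dbhost:5432/ambari?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory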
05-27-2019
08:08 AM
Hi @Predrag Minovic, thank you for sharing your input. There are no issues with the ZooKeeper namespace; I have double-checked it, i.e. jdbc:hive2://hadoop-zknode01:2181,hadoop-zknode02:2181,hadoop-zknode03:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
05-20-2019
04:30 PM
The above was originally posted in the Community Help Track. On Mon May 20 16:29 UTC 2019, a member of the HCC moderation staff moved it to the Data Processing track. The Community Help Track is intended for questions about using the HCC site itself.
01-10-2019
09:57 AM
Hi Vikash, this thread may help you: https://community.hortonworks.com/questions/91265/oozie-hive-action-class-not-found-exception.html
12-27-2018
10:49 AM
I know, I just wanted to avoid managing multiple configurations.
12-12-2018
09:05 PM
You might look at the table definition (SHOW CREATE TABLE x) and verify the prefix on the table location. Hive stores the whole filespec, including the protocol.
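A quick sketch of that check from the shell; the JDBC URL and table name are placeholders for your environment:

# Print the table definition and inspect the LOCATION line
beeline -u "jdbc:hive2://hiveserver:10000" -e "SHOW CREATE TABLE mydb.mytable;"
# The LOCATION should carry the full filespec, e.g. hdfs://<nameservice>/path/to/table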
11-26-2018
10:12 AM
This problem occurs when the Kerberos ticket has expired. It was resolved after regenerating the ticket using kinit -kt user.keytab user@REALM.
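The same steps spelled out, with the keytab path and principal as placeholders; klist simply confirms the new ticket is in the cache:

# Regenerate the ticket from the keytab
kinit -kt /path/to/user.keytab user@EXAMPLE.COM
# Verify the ticket and its expiry time
klist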
11-16-2018
12:15 PM
It seems the issue was with yum.conf; after removing the proxy URL from yum.conf, the installation went fine.
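To check for the same condition, look for a proxy line in /etc/yum.conf; the URL shown is only an example of what to remove or comment out:

# Show any proxy setting currently in effect for yum
grep -n '^proxy' /etc/yum.conf
# e.g. proxy=http://proxy.example.com:3128  <- remove or comment out this line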
02-13-2019
02:07 PM
Hi, could you explain further, please? I have the same problem. When I restart HiveServer2 Interactive via Ambari, it starts a YARN application named llap0, which runs for a moment before it fails. When I check via the CLI:

hive --service llapstatus --name llap0

I get:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does not exist
WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
LLAPSTATUS
--------------------------------------------------------------------------------
LLAP status unknown. Awaiting app launch
--------------------------------------------------------------------------------
LLAP status unknown. Awaiting app launch
--------------------------------------------------------------------------------
{
"state" : "APP_NOT_FOUND",
"runningThresholdAchieved" : false
}
But on the YARN Applications dashboard, llap0 is in the Running state (see the attached screenshot, capture-decran-2019-02-12-a-185811.png). Thanks.
05-11-2019
08:02 AM
Hi @Rajesh Sampath, @subhash parise, I am also having the same error while trying to read a Hive external table. Can you please tell me how you fixed it?
10-09-2018
01:48 PM
Hi @subhash parise, Thank you! No alerts were observed after the maintenance mode was enabled.
09-27-2018
10:42 AM
1 Kudo
Hi @Thuy Le, you may need to change the Java heap size parameters in the bootstrap.conf file. Please refer to the document below: https://community.hortonworks.com/articles/7882/hdfnifi-best-practices-for-setting-up-a-high-perfo.html
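For reference, the relevant entries in NiFi's conf/bootstrap.conf look like the following; the 2g/4g values are only examples and should be sized to your host:

# Initial and maximum JVM heap for NiFi
java.arg.2=-Xms2g
java.arg.3=-Xmx4g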
09-26-2018
07:04 AM
The above error went away once we added HADOOP_CONF_DIR and YARN_CONF_DIR to the .bashrc file in the user's home directory.
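Something like the following, appended to ~/.bashrc; /etc/hadoop/conf is the typical HDP location, so adjust it to wherever your client configs live:

# Point Spark/YARN clients at the Hadoop client configuration
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf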
03-18-2019
09:08 PM
Hi. I have installed and configured Ambari using the guide here: https://cwiki.apache.org/confluence/display/AMBARI/Installation+Guide+for+Ambari+2.7.3. Unfortunately, I got the same error as on this page. I have already tried reinstalling twice, but with no success. I'm running Ubuntu in VirtualBox on a Windows host. Any ideas? Thanks.
09-24-2018
01:01 PM
Hi @Felix Albani, I had configured the required properties in the custom spark-defaults config, but Spark was still not picking up those properties. I used the syntax above and it worked for me.
09-11-2018
12:57 PM
Run ALTER TABLE schema_7539.activityparameters_4ITEM_1 COMPACT 'MAJOR'; in Hive, then re-run the query. If that doesn't work, run the query in the Spark HiveContext.
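After triggering the compaction, you can watch its progress before re-running the query; the beeline URL is a placeholder for your HiveServer2:

# Check whether the major compaction has completed
beeline -u "jdbc:hive2://hiveserver:10000" -e "SHOW COMPACTIONS;"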
09-28-2018
05:33 AM
Hi @vamsi krishna, first check whether the ResourceManager port is listening by using netstat -an | grep 8088. If the port is listening, check whether the ResourceManager web UI is working. If both are working, try restarting the ambari-agent; if the ResourceManager web page is not coming up, restart the ResourceManager.
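A sketch of those checks from the shell; 8088 is the default ResourceManager web UI port, and rm-host is a placeholder:

# Is the RM port listening?
netstat -an | grep 8088
# Does the RM web UI respond? (200 means it is up)
curl -s -o /dev/null -w "%{http_code}\n" http://rm-host:8088/ws/v1/cluster/info
# If the UI is fine but Ambari still shows the alert, restart the agent
ambari-agent restart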
08-28-2018
10:58 AM
2 Kudos
Hi @subhash parise What did you set the ownership to for the version-2 folder and its contents? It should be zookeeper:hadoop, and zookeeper (the owner) should have write permissions. Did you also check that you can traverse the folder structure as the zookeeper user? For example, does this work:

[root@host]# su - zookeeper
[zookeeper@host ~]$ cd /data/hadoop/zookeeper/version-2/
[zookeeper@host version-2]$ ls -al
drwxr-xr-x. 2 zookeeper hadoop 4096 Aug 27 08:03 .
drwxr-xr-x. 3 zookeeper hadoop 4096 Aug 27 08:03 ..
-rw-r--r--. 1 zookeeper hadoop 1 Aug 27 08:03 acceptedEpoch
-rw-r--r--. 1 zookeeper hadoop 1 Aug 27 08:03 currentEpoch
-rw-r--r--. 1 zookeeper hadoop 67108880 Aug 28 10:52 log.100000001
-rw-r--r--. 1 zookeeper hadoop 296 Aug 27 08:03 snapshot.0
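If the ownership or permissions turn out to be wrong, here is a sketch of the fix, using the path from this thread (adjust it to your ZooKeeper dataDir):

# Restore the expected ownership and make sure the owner can write
chown -R zookeeper:hadoop /data/hadoop/zookeeper/version-2
chmod -R u+rwX /data/hadoop/zookeeper/version-2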
08-30-2018
11:19 AM
@yong lau I saw your other post, but the error was hard to find in the bulk paste. Try posting your error in a code box so that we can see it better, and be sure to isolate the actual error rather than including needless text. That said, Hive LLAP requires the configurations we outline above in this post. Check those out, as well as the links above. Make sure you have the settings for low specs and for a single LLAP container, and try to start LLAP. Sometimes it takes me 2-3 attempts to start without errors. Once you have it working with low specs, slowly increase them. It is also important to know that the actual errors you need to find are likely inside the YARN containers, so you will have to dig them out to truly know what stops LLAP from starting.
08-10-2018
07:25 PM
Hmm, I am not familiar with this set of services. Does it have any API endpoints? Are you trying to collect metric data on the services' performance into Hadoop? Or do you mean to send the output of a virtual device (e.g., temperature from a virtual thermometer) to Hadoop?
10-16-2017
11:48 AM
@subhash parise It is bizarre that there are no files in /var/lib/ambari-agent/data. On the offending node, do the following:

Stop and remove the ambari-agent:
ambari-agent stop
yum erase ambari-agent
rm -rf /var/lib/ambari-agent
rm -rf /var/run/ambari-agent
rm -rf /usr/lib/ambari-agent
rm -rf /etc/ambari-agent
rm -rf /var/log/ambari-agent
rm -rf /usr/lib/python2.6/site-packages/ambari*

Re-install the Ambari agent:
yum install ambari-agent
vi /etc/ambari-agent/conf/ambari-agent.ini

Change the hostname to the Ambari server:
[server]
hostname={Ambari-server_host_FQDN}
url_port=8440
secured_url_port=8441
connect_retry_delay=10
max_reconnect_retry_delay=30

Restart the agent:
ambari-agent start

That should resolve the issue.
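To confirm the agent re-registered after the restart, something like this is usually enough; the log path is the Ambari default:

# The agent should report as running
ambari-agent status
# Look for a successful registration message near the end of the log
tail -n 20 /var/log/ambari-agent/ambari-agent.log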