Member since: 03-17-2016
Posts: 132
Kudos Received: 106
Solutions: 13
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1366 | 03-28-2019 11:16 AM
 | 1619 | 03-28-2019 09:19 AM
 | 1248 | 02-02-2017 07:52 AM
 | 1690 | 10-03-2016 08:08 PM
 | 596 | 09-13-2016 08:00 PM
10-01-2016
12:23 PM
1 Kudo
Please accept the answer so that it will help others.
09-13-2016
08:45 PM
Please run the command below as the hive user: /usr/bin/hive --service hiveserver2 --hiveconf hive.root.logger=DEBUG. Yes, kill the hiveserver2 process on hadoop2 and run this on the same node.
09-13-2016
08:00 PM
7 Kudos
Configuring clients to use the High Availability feature

The Atlas Web Service can be accessed in two ways:
- Using the Atlas Web UI: a browser-based client that can be used to query the metadata stored in Atlas.
- Using the Atlas REST API: since Atlas exposes a RESTful API, one can use any standard REST client, including libraries in other applications. In fact, Atlas ships with a client called AtlasClient that can be used as an example for building REST client access.

To take advantage of the High Availability feature in the clients, two options are possible.
HAProxy: here is an example HAProxy configuration that can be used. Note that this is provided for illustration only, not as a recommended production configuration; for that, please refer to the HAProxy documentation.

frontend atlas_fe
    bind *:41000
    default_backend atlas_be

backend atlas_be
    mode http
    option httpchk GET /api/atlas/admin/status
    http-check expect string ACTIVE
    balance roundrobin
    server host1_21000 host1:21000 check
    server host2_21000 host2:21000 check backup

listen atlas
    bind localhost:42000
The above configuration binds HAProxy to listen on port 41000 for incoming client connections. It then routes the connections to either of the hosts host1 or host2 depending on an HTTP status check. The status check is done using an HTTP GET on the REST URL /api/atlas/admin/status, and is deemed successful only if the HTTP response contains the string ACTIVE.

Using automatic detection of the active instance

If one does not want to set up and manage a separate proxy, the other option for using the High Availability feature is to build a client application that is capable of detecting status and retrying operations. In such a setting, the client application can be launched with the URLs of all Atlas Web Service instances that form the ensemble. The client should then call the REST URL /api/atlas/admin/status on each of these to determine which is the active instance. The response from the active instance is of the form {Status:ACTIVE}. Also, when the client faces any exception in the course of an operation, it should again determine which of the remaining URLs is active and retry the operation.

The AtlasClient class that ships with Atlas can be used as an example client library that implements the logic for working with an ensemble and selecting the right active server instance. Utilities in Atlas, like quick_start.py and import-hive.sh, can be configured to run with multiple server URLs. When launched in this mode, the AtlasClient automatically selects and works with the current active instance. If a proxy is set up in between, its address can be used when running quick_start.py or import-hive.sh.
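As a rough illustration of that detection logic, here is a minimal shell sketch that probes each instance's status endpoint and reports the one returning ACTIVE; the host:port values are assumptions matching the HAProxy example above:

# Probe each Atlas server and report the one whose status is ACTIVE.
for host in host1:21000 host2:21000; do
  if curl -s "http://${host}/api/atlas/admin/status" | grep -q ACTIVE; then
    echo "Active Atlas instance: ${host}"
    break
  fi
done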
09-13-2016
07:45 PM
Can you run this, so that we can know exactly what's going on: /usr/bin/hive --service hiveserver2 --hiveconf hive.root.logger=DEBUG
09-13-2016
07:33 PM
4 Kudos
Follow this: http://blog.h2o.ai/2014/09/sparkling-water-tutorials/
09-13-2016
07:30 PM
5 Kudos
In order to provide HA for the index store, we recommend that Atlas be configured to use Solr as the backing index store for Titan. To configure Atlas to use Solr in HA mode, do the following (a config sketch follows the list):
- Choose an existing SolrCloud cluster set up in HA mode to configure in Atlas, or set up a new SolrCloud cluster.
- Ensure Solr is brought up on at least 2 physical hosts for redundancy, with each host running a Solr node. We recommend setting the number of replicas to at least 2 for redundancy.
- Create the SolrCloud collections required by Atlas, as described in the Installation Steps.
- Refer to the Configuration page for the options to configure in atlas.properties to set up Atlas with Solr.

Source: http://atlas.incubator.apache.org/HighAvailability.html
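A minimal sketch of the atlas.properties entries involved, assuming SolrCloud; the property names follow the Atlas documentation of that release, and the ZooKeeper quorum value is a placeholder:

# atlas.properties - illustrative settings for Solr in cloud mode
atlas.graph.index.search.backend=solr5
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=zk1:2181,zk2:2181,zk3:2181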
09-13-2016
06:59 PM
The hive user didn't have write access to /tmp/hive. Set the required permissions and try again. I can see this in the hiveserver2 log.
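A sketch of the permissions fix, assuming /tmp/hive here is the HDFS scratch directory; the mode value is illustrative:

# Run as the hdfs superuser: give the hive user write access to the scratch dir
hdfs dfs -chmod -R 733 /tmp/hive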
09-13-2016
06:30 PM
Did you also look inside the /var/log/hive/hive-server2.out file after starting HS2 from Ambari?
09-13-2016
06:28 PM
3 Kudos
@Sami Ahmad Can you paste the output of this: ps -ef | grep hive
09-12-2016
11:30 AM
4 Kudos
Please see the link below: http://www.kognitio.com/forums/Kognitio%20Technote%20-%20v8.x%20Hadoop%20Connector%20Setup.pdf
09-02-2016
11:47 AM
<hive xmlns="uri:oozie:hive-action:0.2">
    <!-- job-tracker takes the ResourceManager address as host:port, without a scheme -->
    <job-tracker>example.com:8050</job-tracker>
    <name-node>hdfs://example</name-node>
    <job-xml>/user/test/hive-site.xml</job-xml>
    <script>/user/test/test_dev.hql</script>
    <file>/user/test/test_dev.hql#test_dev.hql</file>
</hive>
09-02-2016
09:47 AM
1 Kudo
java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:494)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:680)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
at org.apache.oozie.action.hadoop.HiveMain.runHive(HiveMain.java:306)
at org.apache.oozie.action.hadoop.HiveMain.run(HiveMain.java:290)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47)
at org.apache.oozie.action.hadoop.HiveMain.main(HiveMain.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:241)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1533)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:475)
... 19 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)
... 25 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:426)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1531)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3000)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3019)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:475)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:680)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:624)
at org.apache.oozie.action.hadoop.HiveMain.runHive(HiveMain.java:306)
at org.apache.oozie.action.hadoop.HiveMain.run(HiveMain.java:290)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47)
at org.apache.oozie.action.hadoop.HiveMain.main(HiveMain.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:241)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:472)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 30 more
Labels:
- Apache Hive
- Apache Oozie
- Cloudera Hue
08-04-2016
07:26 PM
2 Kudos
I resolved the issue by granting the appropriate permissions to the user in Ranger, and it worked fine for me.
08-04-2016
11:56 AM
1 Kudo
This is what we found in the hiveserver2 log:

ERROR [hiveServer2.async.multi_dest.batch_hiveServer2.async.multi_dest.batch.hdfs_destWriter]: provider.BaseAuditHandler (BaseAuditHandler.java:logError(329)) - Error writing to log file. java.io.IOException: Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: "*.*.*.*/10.190.80.11"; destination host is: "*.*.*.*":8020;

Thanks in advance.
Labels:
- Apache Hive
07-14-2016
11:06 AM
Thanks Sindhu, it worked for me.
07-14-2016
11:04 AM
1 Kudo
Thanks everyone. I resolved the issue by increasing the limits in /etc/security/limits.d/hive.conf, and everything went fine.
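For anyone hitting the same error, a sketch of what such a limits file can look like; the values are illustrative assumptions, not recommendations:

# /etc/security/limits.d/hive.conf - raise process and open-file limits for the hive user
hive   soft   nproc    16000
hive   hard   nproc    32000
hive   soft   nofile   32000
hive   hard   nofile   64000

The new limits only apply to new sessions, so the Hive services need to be restarted to pick them up.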
07-14-2016
07:37 AM
2 Kudos
resource_management.core.exceptions.Fail: Execution of 'export HIVE_CONF_DIR=/usr/hdp/current/hive-metastore/conf/conf.server ; /usr/hdp/current/hive-metastore/bin/schematool -initSchema -dbType postgres -userName hive -passWord [PROTECTED]' returned 254.
/etc/profile: fork: retry: Resource temporarily unavailable
/etc/profile: fork: retry: Resource temporarily unavailable
/etc/profile: fork: retry: Resource temporarily unavailable
/etc/profile: fork: retry: Resource temporarily unavailable
/etc/profile: fork: Resource temporarily unavailable
/etc/profile: fork: retry: Resource temporarily unavailable
/etc/profile: fork: retry: Resource temporarily unavailable
Tags:
- Data Processing
- Hive
Labels:
- Apache Hive
07-13-2016
12:09 PM
This is what I can see in my logs.
07-13-2016
11:58 AM
1 Kudo
When I run a Beeline query, it throws the following exception, and sometimes the query produces correct output. I found these in the hiveserver2 logs:

ERROR [HiveServer2-Background-Pool: Thread-160573]: operation.Operation (SQLOperation.java:run(209)) - Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.tez.TezTask. unable to create new native thread
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315)
at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:156)
at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:71)
at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:206)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:218)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
Labels:
- Apache Hive
06-09-2016
03:05 PM
Not able to start both NameNodes at a time. If one node is active, the other doesn't start. This is what I found in the Ambari logs: java.io.IOException: Cannot lock storage /nn/dfs/. The directory is already locked
06-09-2016
03:04 PM
Not able to start the NameNode on an HA-enabled cluster.
Tags:
- Ambari
- HA
- Hadoop Core
Labels:
- Apache Ambari
06-09-2016
07:50 AM
1 Kudo
@plevinson and @Sagar Shimpi I want to use Ranger, but I would like to manage it through Ambari. If I follow the steps you mentioned, will Ranger show up in the Ambari console? @emaxwell I want to explore how easy it would be to use HDI, and as part of that I want to see how easy it is to add unsupported services. Thanks for suggesting running HDP on Azure; I can keep that as an option if I can't install it on HDI. I made some progress installing Ranger on HDI: I copied the Ranger-related directories from "/var/lib/ambari-server/resources/common-services/" and placed them in "/var/lib/ambari-server/resources/stacks/HDP/2.4/services/". After a restart, Ambari was able to list Ranger as a service in the Add Service wizard; however, I cannot get past the "Customize Services" step. Is this a valid method of adding Ranger to Ambari? Any help would be much appreciated.
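For context, a sketch of the copy step described above; the RANGER directory name is an assumption about how the service definition is laid out under common-services:

# Copy the Ranger service definition into the HDP 2.4 stack, then restart Ambari
cp -r /var/lib/ambari-server/resources/common-services/RANGER \
      /var/lib/ambari-server/resources/stacks/HDP/2.4/services/
ambari-server restart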
06-08-2016
01:43 PM
2 Kudos
I am using Azure HDInsight (Hadoop cluster) with HDP 2.4.2, and the services available in Ambari are:
1. HDFS
2. MR2
3. YARN
4. Hive
5. Tez
6. Pig
7. Sqoop
8. Oozie
9. ZooKeeper
10. Ambari Metrics
11. Kerberos
12. Slider
When I try to add a service in Ambari, I don't see Ranger available. How do I install Ranger and still be able to start and stop it from Ambari? I tried to follow this link, but it is not clear: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133#Overview%28Ambari1.5.0orlater%29-Example:ImplementingaCustomClientService
Labels:
- Apache Ambari
- Apache Ranger
06-07-2016
01:45 PM
2 Kudos
@Kaliyug Antagonist No, this does not qualify for the 'Temporary Access to Internet' case in the Hortonworks doc. We have to download the required packages and then install them.
06-07-2016
06:24 AM
2 Kudos
@Srikaran Jangidi
Start Tez session at Initialization - enables a user to use HiveServer2 without enabling Tez for HiveServer2. Users might want to run queries with Tez without a pool of sessions. The default value is False.

hive.execution.engine - this setting determines whether Hive queries are executed using Tez or MapReduce. If it is set to "mr", Hive queries are executed using MapReduce; if it is set to "tez", Hive queries are executed using Tez. All queries executed through HiveServer2 use the specified hive.execution.engine setting.
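For example, the engine can also be switched for a single session; a minimal sketch of the statements to run inside a Beeline or Hive CLI session:

-- switch the execution engine for the current session only
set hive.execution.engine=tez;
-- or fall back to MapReduce
set hive.execution.engine=mr;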
06-03-2016
03:47 AM
2 Kudos
Thanks @Jitendra Yadav. I resolved this issue by changing this parameter under Advanced ams-hbase-site: hbase.rootdir = hdfs://nameservice/user/ams/hbase. Earlier it was pointing to my NameNode directly; after enabling HA I had to change it, and it worked fine for me.
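For reference, the equivalent hbase-site style entry; a sketch where "nameservice" stands for the HDFS nameservice ID chosen during HA setup:

<!-- ams-hbase-site: point the AMS HBase root at the HA nameservice, not a single NameNode -->
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://nameservice/user/ams/hbase</value>
</property>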
05-23-2016
01:15 PM
After enabling HA, I found the following error in /var/log/ambari-metrics-collector: Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null. Please help me out.
Labels:
- Apache Ambari
03-17-2016
09:51 AM
5 Kudos
You can check the data nodes' HDFS configured capacity by:
1. Going into the NameNode UI: http://namenodeIp:50070/dfshealth.html#tab-datanode
2. From the command line, running: hadoop dfsadmin -report
(screenshot: namenode.png)