Member since
04-13-2016
422
Posts
150
Kudos Received
55
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1933 | 05-23-2018 05:29 AM |
|  | 4965 | 05-08-2018 03:06 AM |
|  | 1685 | 02-09-2018 02:22 AM |
|  | 2714 | 01-24-2018 08:37 PM |
|  | 6169 | 01-24-2018 05:43 PM |
06-23-2016
03:03 AM
2 Kudos
Hi, I'm able to run the job in client mode but unable to run the same job in cluster mode. Can someone please help me? Below is the error message:

16/06/22 21:57:10 ERROR yarn.ApplicationMaster: User class threw exception: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:117)
at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:165)
at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:163)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:170)
at DisplayAnalysisForecast$.main(DisplayAnalysisForecast.scala:35)
at DisplayAnalysisForecast.main(DisplayAnalysisForecast.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:486)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340)
... 11 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
... 16 more
Caused by: javax.jdo.JDOFatalUserException: Class org.datanucleus.api.jdo.JDOPersistenceManagerFactory was not found.
NestedThrowables:
java.lang.ClassNotFoundException: org.datanucleus.api.jdo.JDOPersistenceManagerFactory
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1175)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:310)
at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:339)
at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:248)
at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:223)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:497)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:475)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:356)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:171)
... 21 more
Caused by: java.lang.ClassNotFoundException: org.datanucleus.api.jdo.JDOPersistenceManagerFactory
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:270)
at javax.jdo.JDOHelper$18.run(JDOHelper.java:2018)
at javax.jdo.JDOHelper$18.run(JDOHelper.java:2016)
at java.security.AccessController.doPrivileged(Native Method)
at javax.jdo.JDOHelper.forName(JDOHelper.java:2015)
at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1162)
... 40 more
16/06/22 21:57:10 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient)
16/06/22 21:57:18 ERROR yarn.ApplicationMaster: SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application.
16/06/22 21:57:18 INFO spark.SparkContext: Invoking stop() from shutdown hook
I'm using Spark 1.4 and running on a Hadoop cluster only. Any help is highly appreciated; thanks in advance.
Labels:
- Apache Hive
- Apache Spark
06-21-2016
08:27 PM
2 Kudos
@Radhakrishnan Rk

1. Stop the Hue instances, if any:

/etc/init.d/hue stop

2. On the node where Hue is installed, take a backup of hue.ini:

cp /etc/hue/conf/hue.ini /etc/hue/conf/hue.ini.bkup

3. On all the Hue instances, edit /etc/hue/conf/hue.ini:

# Configuration options for connecting to LDAP and Active Directory
# -------------------------------------------------------------------
[[ldap]]
# The search base for finding users and groups
base_dn="DC=mycompany,DC=com"
# URL of the LDAP server
ldap_url=ldap://auth.mycompany.com
# A PEM-format file containing certificates for the CA's that
# Hue will trust for authentication over TLS.
# The certificate for the CA that signed the
# LDAP server certificate must be included among these certificates.
# See more here http://www.openldap.org/doc/admin24/tls.html.
## ldap_cert=
## use_start_tls=true
# Distinguished name of the user to bind as -- not necessary if the LDAP server
# supports anonymous searches
bind_dn="uid=hadoopService,CN=ServiceAccount,DC=mycompany,DC=com"
# Password of the bind user -- not necessary if the LDAP server supports
# anonymous searches
bind_password=
# Pattern for searching for usernames -- Use <username> for the parameter
# For use when using LdapBackend for Hue authentication
ldap_username_pattern="uid=<username>,ou=People,dc=mycompany,dc=com"
# Create users in Hue when they try to login with their LDAP credentials
# For use when using LdapBackend for Hue authentication
create_users_on_login = true
# Synchronize a users groups when they login
sync_groups_on_login=true
# Ignore the case of usernames when searching for existing users in Hue.
ignore_username_case=true
# Force usernames to lowercase when creating new users from LDAP.
force_username_lowercase=true
# Use search bind authentication.
search_bind_authentication=true
# Choose which kind of subgrouping to use: nested or suboordinate (deprecated).
subgroups=suboordinate
# Define the number of levels to search for nested members.
nested_members_search_depth=10
[[[users]]]
# Base filter for searching for users
user_filter="objectclass=*"
# The username attribute in the LDAP schema
user_name_attr=sAMAccountName
[[[groups]]]
# Base filter for searching for groups
group_filter="objectclass=*"
# The username attribute in the LDAP schema
group_name_attr=cn

4. Start Hue and test it:

/etc/init.d/hue start
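Before restarting, you can sanity-check the bind DN, password, and search base directly against the LDAP server. A minimal sketch using the standard ldapsearch client, reusing the placeholder values from the config above (substitute your own; you will be prompted for the bind password):

# Verify the bind credentials and search base before restarting Hue
# (host, bind DN, base DN, and test user below are the placeholders from hue.ini)
ldapsearch -x \
  -H ldap://auth.mycompany.com \
  -D "uid=hadoopService,CN=ServiceAccount,DC=mycompany,DC=com" \
  -W \
  -b "DC=mycompany,DC=com" \
  "(sAMAccountName=testuser)"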
06-17-2016
03:03 AM
@Rajkumar Singh Thanks for the response. I need the queue usage data for the last month to derive the metrics.
06-15-2016
07:26 PM
Hi, is there any way to get statistics on the usage of the cluster's queues over a period of a month? I have configured 3 queues for groups A, B & C, allocating A=30%, B=50% and C=20%. I would now like statistics such as: when a queue is heavily used and when it is not, what percentage of its capacity a queue is using at a particular time, which hours are the peak hours, and which queue is using beyond 100% of its allocation. Is there a command that provides this information for the Capacity Scheduler? Thanks in advance.
06-14-2016
02:32 PM
1 Kudo
@Rahul Pathak I found the solution. While debugging the Pig View issue, I found this in the WebHCat log: "/usr/hdp/2.3.2.0-2950/hive/lib/hive-common.jar/zookeeper.jar: cannot open `/usr/hdp/2.3.2.0-2950/hive/lib/hive-common.jar/zookeeper.jar' (No such file or directory)". The broken path came from the templeton.libjars property, where zookeeper.jar had been appended inside the hive-common.jar path. I changed the property to point at the appropriate jar files: "/usr/hdp/${hdp.version}/zookeeper/zookeeper.jar,/usr/hdp/${hdp.version}/hive/lib/hive-common.jar".
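For reference, a sketch of how the corrected value looks as a property in webhcat-site.xml, assuming you edit the configuration file directly rather than through Ambari:

<!-- webhcat-site.xml: point templeton.libjars at real jar locations,
     not at a path nested inside another jar -->
<property>
  <name>templeton.libjars</name>
  <value>/usr/hdp/${hdp.version}/zookeeper/zookeeper.jar,/usr/hdp/${hdp.version}/hive/lib/hive-common.jar</value>
</property>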
06-14-2016
01:59 AM
1 Kudo
@Michael Dennis "MD" Uanang The steps below might help you. If you are running external applications, it is recommended that you test the connection to HBase using the following connection strings for the Phoenix JDBC driver.

1. Add hbase-site.xml and core-site.xml to your application or client's classpath:

set CLASSPATH=<path_to_hbase-site.xml>;<path_to_core-site.xml>

For example:

export HBASE_CONF_PATH=/etc/hbase/conf:/etc/hadoop/conf

2. Depending on whether you have an unsecured cluster or a cluster secured with Kerberos, use one of the following connection strings to connect to HBase.

For unsecured clusters:

jdbc:phoenix:<Zookeeper_host_name>:<port_number>:<root_node>

where <Zookeeper_host_name> can specify one host or several hosts. If you specify several ZooKeeper hosts, insert commas between the host names, for example <ZK_host1,ZK_host2,ZK_host3>. Example:

jdbc:phoenix:zk_quorum:2181:zk_parent

For clusters secured with Kerberos:

jdbc:phoenix:<Zookeeper_host_name>:<port_number>:<secured_Zookeeper_node>:<principal_name>:<HBase_headless_keytab_file>

where <secured_Zookeeper_node> is the path to the secured ZooKeeper node, and <HBase_headless_keytab_file> is the location of this keytab file. Example:

jdbc:phoenix:zk_quorum:2181:/hbase-secure:hbase@EXAMPLE.COM:/hbase-secure/keytab/keytab_file

You can also test a secured cluster with sqlline. If you are logged in as your own user:

kinit
/usr/hdp/current/phoenix-client/bin/sqlline.py hostname.domain.com:2181:/hbase-secure

Or, if logged in as the hbase user:

kinit -k -t /etc/security/keytabs/hbase.headless.keytab hbase
/usr/hdp/current/phoenix-client/bin/sqlline.py hostname.domain.com:2181:/hbase-secure:hbase@domain.com:/etc/security/keytabs/hbase.headless.keytab
06-13-2016
10:25 PM
@Kit Menke Can you please provide the WebHCat log file?
06-13-2016
03:19 AM
@Manikandan Durairaj Here is the sample code:

CREATE EXTERNAL TABLE IF NOT EXISTS Cars(
Name STRING,
Miles_per_Gallon INT,
Cylinders INT,
Displacement INT,
Horsepower INT,
Weight_in_lbs INT,
Acceleration DECIMAL,
Year DATE,
Origin CHAR(1))
COMMENT 'Data about cars from a public database'
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/user/<username>/visdata';

This link might help you more: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_dataintegration/content/moving_data_from_hdfs_to_hive_external_table_method.html
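As a quick check, you can place a CSV file under the table's location and query it. A minimal sketch; the file name below is only illustrative:

# Copy a sample CSV into the external table's HDFS location (file name is hypothetical)
hdfs dfs -put cars.csv /user/<username>/visdata
# The rows should then be queryable from the Hive shell immediately:
#   SELECT * FROM Cars LIMIT 5;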
06-07-2016
04:25 PM
@Benjamin Leonhardi @Josh Elser Thanks for the quick response. The Phoenix client will already be installed on all the RegionServers, right? And may I know how a firewall problem would impact PQS?
06-07-2016
04:11 PM
@Smart Solutions Can you please explain in detail how you use it in production? Based on that, I can figure out how it would work for me.