Member since: 04-17-2017
Posts: 42
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1931 | 04-17-2018 02:52 AM
08-16-2020 12:15 PM
./tpch-setup.sh 5 /hive-data-dir-benchmark
TPC-H text data generation complete.
Loading text data into external tables.
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
Optimizing table part (1/8).
^CCommand failed, try 'export DEBUG_SCRIPT=ON' and re-running
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench : export DEBUG_SCRIPT=ON
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench : ./tpch-setup.sh 5 /hive-data-dir-benchmark
+ '[' X5 = X ']'
+ '[' X/hive-data-dir-benchmark = X ']'
+ '[' 5 -eq 1 ']'
+ hdfs dfs -mkdir -p /hive-data-dir-benchmark
+ hdfs dfs -ls /hive-data-dir-benchmark/5/lineitem
+ '[' 0 -ne 0 ']'
+ hdfs dfs -ls /hive-data-dir-benchmark/5/lineitem
+ '[' 0 -ne 0 ']'
+ echo 'TPC-H text data generation complete.'
TPC-H text data generation complete.
+ echo 'Loading text data into external tables.'
Loading text data into external tables.
+ runcommand 'hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/alltables.sql -d DB=tpch_text_5 -d LOCATION=/hive-data-dir-benchmark/5'
+ '[' XON '!=' X ']'
+ hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/alltables.sql -d DB=tpch_text_5 -d LOCATION=/hive-data-dir-benchmark/5
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/jars/hive-common-1.1.0-cdh5.15.2.jar!/hive-log4j.properties
OK Time taken: 2.198 seconds OK Time taken: 0.013 seconds OK Time taken: 0.578 seconds OK Time taken: 1.183 seconds OK Time taken: 0.814 seconds OK Time taken: 0.494 seconds OK Time taken: 0.504 seconds OK Time taken: 0.493 seconds OK Time taken: 0.506 seconds OK Time taken: 0.495 seconds OK Time taken: 0.502 seconds OK Time taken: 0.494 seconds OK Time taken: 0.503 seconds OK Time taken: 0.496 seconds OK Time taken: 0.505 seconds OK Time taken: 0.495 seconds OK Time taken: 0.503 seconds OK Time taken: 0.495 seconds
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
+ i=1
+ total=8
+ test 5 -le 1000
+ SCHEMA_TYPE=flat
+ DATABASE=tpch_flat_orc_5
+ MAX_REDUCERS=2600
++ test 5 -gt 2600
++ echo 5
+ REDUCERS=5
+ for t in '${TABLES}'
+ echo 'Optimizing table part (1/8).'
Optimizing table part (1/8).
+ COMMAND='hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/part.sql -d DB=tpch_flat_orc_5 -d SOURCE=tpch_text_5 -d BUCKETS=13 -d SCALE=5 -d REDUCERS=5 -d FILE=orc'
+ runcommand 'hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/part.sql -d DB=tpch_flat_orc_5 -d SOURCE=tpch_text_5 -d BUCKETS=13 -d SCALE=5 -d REDUCERS=5 -d FILE=orc'
+ '[' XON '!=' X ']'
+ hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/part.sql -d DB=tpch_flat_orc_5 -d SOURCE=tpch_text_5 -d BUCKETS=13 -d SCALE=5 -d REDUCERS=5 -d FILE=orc
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/jars/hive-common-1.1.0-cdh5.15.2.jar!/hive-log4j.properties
OK Time taken: 2.152 seconds OK Time taken: 0.017 seconds OK Time taken: 0.051 seconds
Query ID = c095784_20200816094848_3f33b234-3f7f-4d1b-b862-2624c0bb43cd
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number>
In order to set a constant number of reducers: set mapreduce.job.reduces=<number>
^C+ '[' 130 -ne 0 ']'
+ echo 'Command failed, try '\''export DEBUG_SCRIPT=ON'\'' and re-running'
Command failed, try 'export DEBUG_SCRIPT=ON' and re-running
+ exit 1
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench : klist
Ticket cache: FILE:/tmp/krb5cc_895784
Default principal: neha@EXELONDS.COM
Valid starting Expires Service principal
08/16/20 08:41:04 08/16/20 18:41:04 krbtgt/EXELONDS.COM@EXELONDS.COM
08/16/20 08:41:04 08/16/20 18:41:04 BDAL1CCC1N06$@EXELONDS.COM
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench : export DEBUG_SCRIPT=ON
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench : ./tpch-setup.sh 5 /hive-data-dir-benchmark
+ '[' X5 = X ']'
+ '[' X/hive-data-dir-benchmark = X ']'
+ '[' 5 -eq 1 ']'
+ hdfs dfs -mkdir -p /hive-data-dir-benchmark
+ hdfs dfs -ls /hive-data-dir-benchmark/5/lineitem
+ '[' 0 -ne 0 ']'
+ hdfs dfs -ls /hive-data-dir-benchmark/5/lineitem
+ '[' 0 -ne 0 ']'
+ echo 'TPC-H text data generation complete.'
TPC-H text data generation complete.
+ echo 'Loading text data into external tables.'
Loading text data into external tables.
+ runcommand 'hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/alltables.sql -d DB=tpch_text_5 -d LOCATION=/hive-data-dir-benchmark/5'
+ '[' XON '!=' X ']'
+ hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/alltables.sql -d DB=tpch_text_5 -d LOCATION=/hive-data-dir-benchmark/5
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/jars/hive-common-1.1.0-cdh5.15.2.jar!/hive-log4j.properties
OK Time taken: 2.225 seconds OK Time taken: 0.018 seconds OK Time taken: 0.802 seconds OK Time taken: 0.991 seconds OK Time taken: 0.506 seconds OK Time taken: 0.494 seconds OK Time taken: 0.504 seconds OK Time taken: 0.493 seconds OK Time taken: 0.505 seconds OK Time taken: 0.495 seconds OK Time taken: 0.502 seconds OK Time taken: 0.496 seconds OK Time taken: 0.503 seconds OK Time taken: 0.495 seconds OK Time taken: 0.502 seconds OK Time taken: 0.496 seconds OK Time taken: 0.503 seconds OK Time taken: 0.497 seconds
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
+ i=1
+ total=8
+ test 5 -le 1000
+ SCHEMA_TYPE=flat
+ DATABASE=tpch_flat_orc_5
+ MAX_REDUCERS=2600
++ test 5 -gt 2600
++ echo 5
+ REDUCERS=5
+ for t in '${TABLES}'
+ echo 'Optimizing table part (1/8).'
Optimizing table part (1/8).
+ COMMAND='hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/part.sql -d DB=tpch_flat_orc_5 -d SOURCE=tpch_text_5 -d BUCKETS=13 -d SCALE=5 -d REDUCERS=5 -d FILE=orc'
+ runcommand 'hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/part.sql -d DB=tpch_flat_orc_5 -d SOURCE=tpch_text_5 -d BUCKETS=13 -d SCALE=5 -d REDUCERS=5 -d FILE=orc'
+ '[' XON '!=' X ']'
+ hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/part.sql -d DB=tpch_flat_orc_5 -d SOURCE=tpch_text_5 -d BUCKETS=13 -d SCALE=5 -d REDUCERS=5 -d FILE=orc
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/jars/hive-common-1.1.0-cdh5.15.2.jar!/hive-log4j.properties
OK Time taken: 2.126 seconds OK Time taken: 0.015 seconds OK Time taken: 0.049 seconds
Query ID = c095784_20200816115353_616b8c96-f2da-4ea7-94a3-2d1501c02691
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number>
In order to set a constant number of reducers: set mapreduce.job.reduces=<number>
Starting Job = job_1597596829164_0002, Tracking URL = http://sandbox.cluster.com:8088/proxy/application_1597596829164_0002/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job -kill job_1597596829164_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2020-08-16 11:54:44,433 Stage-1 map = 0%, reduce = 0%
2020-08-16 11:54:54,917 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.9 sec
2020-08-16 11:55:13,757 Stage-1 map = 0%, reduce = 0%
2020-08-16 11:55:18,933 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.41 sec
2020-08-16 11:55:19,971 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.41 sec
MapReduce Total cumulative CPU time: 3 seconds 410 msec
Ended Job = job_1597596829164_0002 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1597596829164_0002_m_000000 (and more) from job job_1597596829164_0002
Task with the most failures(4):
-----
Task ID: task_1597596829164_0002_r_000000
URL: http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1597596829164_0002&tipid=task_1597596829164_0002_r_000000
-----
Diagnostic Messages for this Task:
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#4
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 3.41 sec HDFS Read: 5155 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 3 seconds 410 msec
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
+ '[' 2 -ne 0 ']'
+ echo 'Command failed, try '\''export DEBUG_SCRIPT=ON'\'' and re-running'
Command failed, try 'export DEBUG_SCRIPT=ON' and re-running
+ exit 1
08-16-2020 12:14 PM
Getting the below error:
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#4
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out
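In my experience Exceeded MAX_FAILED_UNIQUE_FETCHES is usually an infrastructure problem rather than a Hive one: the reducer repeatedly failed to pull map output from the NodeManagers, which commonly points to a firewall blocking the shuffle port, broken hostname resolution, or the mapreduce_shuffle auxiliary service not running. A minimal sketch to probe the shuffle port from the node running the reducer; the hostnames are placeholders, and 13562 is only the default of mapreduce.shuffle.port, so check yours:

```shell
# Probe the MapReduce shuffle port on each NodeManager host.
# Hostnames below are placeholders; 13562 is the mapreduce.shuffle.port default.
for nm in nm1.example.com nm2.example.com; do
  if timeout 2 bash -c "echo > /dev/tcp/$nm/13562" 2>/dev/null; then
    echo "$nm: shuffle port reachable"
  else
    echo "$nm: shuffle port unreachable"
  fi
done
```

If any host reports unreachable, compare against iptables rules and /etc/hosts on that node before touching Hive settings.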
Labels:
- Apache Hive
- Apache YARN
07-27-2018 05:16 AM
Hi Team, I am unable to start the services in the Hadoop cluster. It displays the error message: "Command timed-out after 150 seconds". cloudera-scm-agent, cloudera-scm-server, and cloudera-scm-db are all running on the server, but the services still are not starting in CM. Since the Cloudera Manager console is accessible, I am guessing the issue might lie in the Cloudera agent log. Please look into the logs below and assist:
[27/Jul/2018 07:36:25 +0000] 2705 MainThread agent ERROR Failed to connect to previous supervisor.
Traceback (most recent call last):
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.15.0-py2.7.egg/cmf/agent.py", line 2137, in find_or_start_supervisor
self.get_supervisor_process_info()
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.15.0-py2.7.egg/cmf/agent.py", line 2281, in get_supervisor_process_info
self.identifier = self.supervisor_client.supervisor.getIdentification()
File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
verbose=self.__verbose
File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/xmlrpc.py", line 460, in request
self.connection.request('POST', handler, request_body, self.headers)
File "/usr/lib64/python2.7/httplib.py", line 1017, in request
self._send_request(method, url, body, headers)
File "/usr/lib64/python2.7/httplib.py", line 1051, in _send_request
self.endheaders(body)
File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders
self._send_output(message_body)
File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output
self.send(msg)
File "/usr/lib64/python2.7/httplib.py", line 826, in send
self.connect()
File "/usr/lib64/python2.7/httplib.py", line 807, in connect
self.timeout, self.source_address)
File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 111] Connection refused
[27/Jul/2018 07:36:25 +0000] 2705 MainThread tmpfs INFO Reusing mounted tmpfs at /run/cloudera-scm-agent/process
[27/Jul/2018 07:36:25 +0000] 2705 Dummy-1 daemonize WARNING Stopping daemon.
[27/Jul/2018 07:36:25 +0000] 2705 Dummy-1 agent INFO Stopping agent...
[27/Jul/2018 07:36:25 +0000] 2705 Dummy-1 agent INFO No extant cgroups; unmounting any cgroup roots
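For what it's worth, "Failed to connect to previous supervisor ... Connection refused" means the agent remembered a supervisord from an earlier run and that process is no longer answering its XML-RPC endpoint. A quick sketch to check for it; port 19001 is what I have seen the CM agent's supervisord bind on localhost, but treat that as an assumption and confirm it against /etc/cloudera-scm-agent/config.ini on your host:

```shell
# Check whether the agent's old supervisord is still listening on localhost.
# Port 19001 is an assumption; verify against your agent configuration.
if timeout 2 bash -c 'echo > /dev/tcp/127.0.0.1/19001' 2>/dev/null; then
  echo "supervisord is listening on 19001"
else
  echo "nothing on 19001 -- the previous supervisord is gone"
  # service cloudera-scm-agent hard_restart   # restarts the agent AND its supervisord
fi
```

A plain restart keeps the old supervisord; the hard_restart variant replaces it, which is usually what resolves this particular error.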
Labels:
- Cloudera Manager
04-26-2018 01:36 PM
Hi All, please let me know whether we can install HDP 2.6.4 on RHEL 7.5.
Tags:
- rhel
Labels:
- Hortonworks Data Platform (HDP)
04-17-2018 02:52 AM
Rebooted the server and was able to access Postgres after that. I then deleted the web_tls setting.
04-16-2018 10:15 AM
I enabled TLS for Cloudera Manager and was unable to access CM after that.
I followed this link: https://community.cloudera.com/t5/Cloudera-Manager-Installation/how-to-rollback-cloudera-manager-tls-configuration-without-UI/td-p/46484
to revert the changes, but I am not able to access the embedded Postgres console to do so.
Getting the error:
[root@hostname pgsql]# psql -U cloudera-scm -p 7432 -h localhost -d postgres
psql: FATAL: the database system is shutting down
# lsof -i | grep LISTEN | grep cloudera-scm
postgres 11232 cloudera-scm 3u IPv4 105319 0t0 TCP *:7432 (LISTEN)
postgres 11232 cloudera-scm 4u IPv6 105320 0t0 TCP *:7432 (LISTEN)
java 24548 cloudera-scm 243u IPv4 3988695 0t0 TCP hostname.com:us-srv (LISTEN)
java 24548 cloudera-scm 244u IPv4 3988697 0t0 TCP *:rrac (LISTEN)
java 24550 cloudera-scm 347u IPv4 3988161 0t0 TCP hostname.com:palace-6 (LISTEN)
java 24550 cloudera-scm 348u IPv4 3989833 0t0 TCP hostname.com:d-s-n (LISTEN)
java 24550 cloudera-scm 349u IPv4 3989832 0t0 TCP *:palace-5 (LISTEN)
java 24587 cloudera-scm 237u IPv4 3987633 0t0 TCP *:7184 (LISTEN)
java 24587 cloudera-scm 241u IPv4 3987635 0t0 TCP *:8084 (LISTEN)
java 24587 cloudera-scm 245u IPv4 3987637 0t0 TCP *:7185 (LISTEN)
java 24627 cloudera-scm 268u IPv4 3989842 0t0 TCP *:distinct32 (LISTEN)
java 24627 cloudera-scm 272u IPv4 3989844 0t0 TCP hostname.com:distinct (LISTEN)
java 24627 cloudera-scm 273u IPv4 3988199 0t0 TCP hostname.com:simplifymedia (LISTEN)
java 24656 cloudera-scm 302u IPv4 3984366 0t0 TCP hostname.com:palace-4 (LISTEN)
java 24656 cloudera-scm 308u IPv4 3986063 0t0 TCP *:palace-3 (LISTEN)
java 24656 cloudera-scm 309u IPv4 3986068 0t0 TCP hostname.com:jamlink (LISTEN)
java 24704 cloudera-scm 232u IPv4 3987626 0t0 TCP *:ezmeeting-2 (LISTEN)
Please suggest.
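Worth noting: "FATAL: the database system is shutting down" is Postgres itself talking. The postmaster accepts the TCP connection (which matches the lsof output showing port 7432 in LISTEN) but refuses new sessions while it is stopping or in recovery, so this tends to be a wait-or-restart situation rather than a TLS one. A rough sketch of a poll loop to use while the embedded database settles; port 7432 is taken from the psql command above, and the real probe is left as a commented psql line:

```shell
# Poll the embedded Cloudera Manager Postgres (port 7432) until a TCP
# connection is possible; uncomment the psql line for a real session test.
for attempt in 1 2 3; do
  if timeout 2 bash -c 'echo > /dev/tcp/localhost/7432' 2>/dev/null; then
    echo "attempt $attempt: port 7432 is accepting connections"
    # psql -U cloudera-scm -p 7432 -h localhost -d postgres -c 'select 1' && break
  else
    echo "attempt $attempt: port 7432 not accepting connections"
  fi
  sleep 1
done
```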
Labels:
- Cloudera Manager
- Security
03-07-2018 05:36 AM
My cloudera-scm-server has suddenly gone dead; when I restart it, it goes dead again within 3 seconds. Getting the below cloudera-scm-server.log output:
JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera_backup
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.cloudera.server.cmf.TrialState': Cannot resolve reference to bean 'entityManagerFactoryBean' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactoryBean': FactoryBean threw exception on object creation; nested exception is javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: Could not open connection
at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:328) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:106) at org.springframework.beans.factory.support.ConstructorResolver.resolveConstructorArguments(ConstructorResolver.java:616) at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:148) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1003) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:907) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:485) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:293) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:192) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425) at com.cloudera.server.cmf.Main.bootstrapSpringContext(Main.java:393) at com.cloudera.server.cmf.Main.<init>(Main.java:243) at com.cloudera.server.cmf.Main.main(Main.java:216)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactoryBean': FactoryBean threw exception on object creation; nested exception is javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: Could not open connection
at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:149) at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:102) at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1440) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:247) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:192) at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:322) ... 17 more
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: Could not open connection
at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1387) at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1310) at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:1397) at org.hibernate.ejb.TransactionImpl.begin(TransactionImpl.java:62) at com.cloudera.enterprise.AbstractWrappedEntityManager.beginForRollbackAndReadonly(AbstractWrappedEntityManager.java:89) at com.cloudera.enterprise.dbutil.DbUtil.isInnoDbEnabled(DbUtil.java:549) at com.cloudera.server.cmf.bootstrap.EntityManagerFactoryBean.checkMysqlTableEngineType(EntityManagerFactoryBean.java:139) at com.cloudera.server.cmf.bootstrap.EntityManagerFactoryBean.getObject(EntityManagerFactoryBean.java:122) at com.cloudera.server.cmf.bootstrap.EntityManagerFactoryBean.getObject(EntityManagerFactoryBean.java:65) at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:142) ... 22 more
Caused by: org.hibernate.exception.GenericJDBCException: Could not open connection
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110) at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:221) at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.getConnection(LogicalConnectionImpl.java:157) at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doBegin(JdbcTransaction.java:67) at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.begin(AbstractTransactionImpl.java:160) at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1426) at org.hibernate.ejb.TransactionImpl.begin(TransactionImpl.java:59) ... 28 more
Caused by: java.sql.SQLException: Connections could not be acquired from the underlying database!
at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106) at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:529) at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128) at org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider.getConnection(C3P0ConnectionProvider.java:84) at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:292) at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:214) ... 33 more
Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1319) at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557) at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:477) at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:525) ... 37 more
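The bottom "Caused by" entries are the real story here: c3p0 cannot acquire any connection to the backing database, so the server dies during startup before CM ever comes up. The first thing to check is the database endpoint in /etc/cloudera-scm-server/db.properties and whether that host:port is actually up. A small sketch of pulling the endpoint out of that file; it writes a sample copy so the parsing is visible, but on a real host you would point the awk line at /etc/cloudera-scm-server/db.properties directly:

```shell
# Extract the database endpoint the CM server is configured to use.
# A sample db.properties is created here for illustration; on a real host,
# read /etc/cloudera-scm-server/db.properties instead.
cat > /tmp/db.properties <<'EOF'
com.cloudera.cmf.db.type=postgresql
com.cloudera.cmf.db.host=localhost:7432
com.cloudera.cmf.db.name=cm
EOF
endpoint=$(awk -F= '$1 == "com.cloudera.cmf.db.host" {print $2}' /tmp/db.properties)
echo "CM server expects its database at: $endpoint"
```

Once you know the endpoint, verify the database process there is running and accepting connections before restarting cloudera-scm-server.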
02-20-2018 12:50 AM
The Hive Gateway is present on the host where I am trying to run the Spark job.
02-20-2018 12:47 AM
hiveContext.sql("select * from test.automation_inp3").show
18/02/19 08:53:55 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
18/02/19 08:53:55 INFO ObjectStore: ObjectStore, initialize called
18/02/19 08:53:55 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/jars/datanucleus-api-jdo-3.2.1.jar."
18/02/19 08:53:55 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/jars/datanucleus-core-3.2.2.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/jars/datanucleus-core-3.2.10.jar."
18/02/19 08:53:55 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/jars/datanucleus-rdbms-3.2.1.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/jars/datanucleus-rdbms-3.2.9.jar."
18/02/19 08:53:55 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
18/02/19 08:53:55 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
18/02/19 08:53:56 WARN HiveMetaStore: Retrying creating default database after error: Error creating transactional connection factory
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587) at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788) at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333) at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965) at java.security.AccessController.doPrivileged(Native Method) at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960) at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166) at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808) at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701) at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:390) at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:419) at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:314) at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:281) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133) at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:56) at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:65) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:584) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:562) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:611) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:453) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84) at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5716) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:198) at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1486) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:64) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:74) at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2895) at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2914) at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3139) at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:205) at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:192) at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:302) at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:263) at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:238) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:491) at org.apache.spark.sql.hive.HiveContext.sessionState$lzycompute(HiveContext.scala:229) at org.apache.spark.sql.hive.HiveContext.sessionState(HiveContext.scala:225) at org.apache.spark.sql.hive.HiveContext.hiveconf$lzycompute(HiveContext.scala:241) at org.apache.spark.sql.hive.HiveContext.hiveconf(HiveContext.scala:240) at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:86) at $line25.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:29) at $line25.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34) at $line25.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36) at $line25.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:38) at $line25.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:40) at $line25.$read$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:42) at $line25.$read$$iwC$$iwC$$iwC$$iwC.<init>(<console>:44) at $line25.$read$$iwC$$iwC$$iwC.<init>(<console>:46) at $line25.$read$$iwC$$iwC.<init>(<console>:48) at $line25.$read$$iwC.<init>(<console>:50) at $line25.$read.<init>(<console>:52) at $line25.$read$.<init>(<console>:56) at $line25.$read$.<clinit>(<console>) at $line25.$eval$.<init>(<console>:7) at $line25.$eval$.<clinit>(<console>) at $line25.$eval.$print(<console>) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065) at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338) at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840) at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871) at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819) at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:856) at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901) at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813) at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:656) at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:664) at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:669) at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:996) at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944) at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:944) at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135) at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944) at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058) at org.apache.spark.repl.Main$.main(Main.scala:31) at org.apache.spark.repl.Main.main(Main.scala) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569) at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166) at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189) at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110) at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
NestedThrowablesStackTrace:
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631) at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325) at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:282) at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:240) at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:286) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631) at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301) at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187) at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356) at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775) at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333) at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965) at java.security.AccessController.doPrivileged(Native Method) at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960) at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166) at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808) at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701) at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:390) at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:419) at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:314) at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:281) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133) at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:56) at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:65) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:584) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:562) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:611) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:453) at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:78) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:84) at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5716) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:198) at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
Tags:
- Spark
02-19-2018
10:28 PM
Getting error:

HiveMetaStore: Retrying creating default database after error: Error creating transactional connection factory
javax.jdo.JDOFatalInternalException: Error creating transactional connection factory

Query run:

import org.apache.spark.sql.hive.HiveContext
val hiveContext = new HiveContext(sc)
import hiveContext.implicits._
import hiveContext.sql
hiveContext.sql("select * from test.test_3").show
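For what it's worth, this DataNucleus "Error creating transactional connection factory" commonly means the JDBC driver for the metastore database is not on the Spark driver's classpath. A minimal sketch, assuming a MySQL-backed metastore; the search directory and helper name are hypothetical:

```shell
# Locate the metastore JDBC driver jar; the directory passed in is an
# assumption -- adjust for wherever the connector is installed.
find_connector() {
  find "$1" -name 'mysql-connector-java*.jar' 2>/dev/null | head -n 1
}

JAR=$(find_connector /usr/share/java)
# Then hand the jar to the Spark driver when starting the shell:
#   spark-shell --driver-class-path "$JAR" --jars "$JAR"
```

If the function prints nothing, the connector jar is simply not present on the host, which would match the stack trace above.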
Labels:
- Apache Hive
- Apache Spark
02-08-2018
03:44 AM
Thanks. I chose the last option and went ahead; the installation completed successfully. I am not sure what drawbacks this will lead to.
02-07-2018
08:37 AM
Manually installed MySQL and tried to do a Path B install; getting the below error:

BEGIN zypper --gpg-auto-import-keys -n se -i --match-exact -t package cloudera-manager-agent | grep -E '^i[[:space:]]*\|[[:space:]]*cloudera-manager-agent'
END (1)
BEGIN zypper --gpg-auto-import-keys -n se --match-exact -t package cloudera-manager-agent
Loading repository data...
Reading installed packages...
S | Name                   | Summary                    | Type
--+------------------------+----------------------------+--------
  | cloudera-manager-agent | The Cloudera Manager Agent | package
END (0)
BEGIN zypper info cloudera-manager-agent | grep -E 'Version[[:space:]]*:[[:space:]]5.14.0-.*\.25'
Version: 5.14.0-1.cm5140.p0.25.sles11
END (0)
BEGIN zypper --gpg-auto-import-keys -n in cloudera-manager-agent
Loading repository data...
Reading installed packages...
Resolving package dependencies...
Problem: nothing provides apache2 needed by cloudera-manager-agent-5.14.0-1.cm5140.p0.25.sles11.x86_64
 Solution 1: do not install cloudera-manager-agent-5.14.0-1.cm5140.p0.25.sles11.x86_64
 Solution 2: break cloudera-manager-agent-5.14.0-1.cm5140.p0.25.sles11.x86_64 by ignoring some of its dependencies
Choose from above solutions by number or cancel [1/2/c] (c): c
END (4)
remote package cloudera-manager-agent could not be installed, giving up
waiting for rollback request

Please suggest a fix.
02-05-2018
09:01 AM
Did the same as you mentioned, but the same error is still popping up:

hostname:~ # zypper addrepo http://download.opensuse.org/repositories/server:database:postgresql/SLE_11_SP4/server:database:postgresql.repo
Adding repository 'PostgreSQL and related packages (SLE_11_SP4)' [done]
Repository 'PostgreSQL and related packages (SLE_11_SP4)' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
URI: http://download.opensuse.org/repositories/server:/database:/postgresql/SLE_11_SP4/
hostname:~ # zypper refresh
Retrieving repository 'PostgreSQL and related packages (SLE_11_SP4)' metadata
New repository or package signing key received:
Key ID: 562111AC05905EA8
Key Name: server:database OBS Project <server:database@build.opensuse.org>
Key Fingerprint: 116EB86331583E47E63CDF4D562111AC05905EA8
Key Created: Thu 02 Feb 2017 08:09:57 AM EST
Key Expires: Sat 13 Apr 2019 09:09:57 AM EDT
Repository: PostgreSQL and related packages (SLE_11_SP4)
Do you want to reject the key, trust temporarily, or trust always? [r/t/a/? shows all options] (r): a
Retrieving repository 'PostgreSQL and related packages (SLE_11_SP4)' metadata [done]
Building repository 'PostgreSQL and related packages (SLE_11_SP4)' cache [done]
All repositories have been refreshed.
hostname:~ # ls -l /etc/zypp/repos.d/ | grep postgres
-rw-r--r-- 1 root root 330 Feb 5 11:51 server_database_postgresql.repo
hostname:~ # zypper clean --all
All repositories have been cleaned up.

Still getting the same error:

hostname:/var/log/cloudera-manager-installer # ll
total 28
-rw-r--r-- 1 root root 56 Feb 5 11:53 0.check-selinux.log
-rw-r--r-- 1 root root 0 Feb 5 11:54 1.install-repo-pkg.log
-rw-r--r-- 1 root root 784 Feb 5 11:54 2.install-oracle-j2sdk1.7.log
-rw-r--r-- 1 root root 1368 Feb 5 11:56 3.install-cloudera-manager-server.log
-rw-r--r-- 1 root root 33 Feb 5 11:56 4.check-for-systemd.log
-rw-r--r-- 1 root root 496 Feb 5 11:56 5.install-cloudera-manager-server-db-2.log
-rw-r--r-- 1 root root 51 Feb 5 11:56 6.remove-cloudera-manager-server.log
-rw-r--r-- 1 root root 0 Feb 5 11:56 7.remove-cloudera-manager-daemons.log
-rw-r--r-- 1 root root 106 Feb 5 11:56 8.remove-cloudera-manager-repository.log
hostname:/var/log/cloudera-manager-installer # pwd
/var/log/cloudera-manager-installer
hostname:/var/log/cloudera-manager-installer # cat 5.install-cloudera-manager-server-db-2.log
Loading repository data...
Reading installed packages...
Resolving package dependencies...
Problem: nothing provides postgresql-server >= 8.4 needed by cloudera-manager-server-db-2-5.14.0-1.cm5140.p0.25.sles11.x86_64
 Solution 1: do not install cloudera-manager-server-db-2-5.14.0-1.cm5140.p0.25.sles11.x86_64
 Solution 2: break cloudera-manager-server-db-2-5.14.0-1.cm5140.p0.25.sles11.x86_64 by ignoring some of its dependencies
Choose from above solutions by number or cancel [1/2/c] (c): c
Tags:
- cloudera manager
02-05-2018
07:19 AM
Unable to install Cloudera Manager on SUSE Linux Enterprise Server 11 (x86_64); getting the below error in /var/log/cloudera-manager-installer/5.install-cloudera-manager-server-db-2.log:

Problem: nothing provides postgresql-server >= 8.4 needed by cloudera-manager-server-db-2-5.14.0-1.cm5140.p0.25.sles11.x86_64
 Solution 1: do not install cloudera-manager-server-db-2-5.14.0-1.cm5140.p0.25.sles11.x86_64
 Solution 2: break cloudera-manager-server-db-2-5.14.0-1.cm5140.p0.25.sles11.x86_64 by ignoring some of its dependencies
Labels:
- Cloudera Manager
01-29-2018
07:16 AM
So:
1. Should I uncheck the below? Limit Nonsecure Container Executor Users --> YARN (MR2 Included) (Service-Wide) (currently the check box is selected)
2. What should I put instead of "nobody" for the below? UNIX User for Nonsecure Mode with Linux Container Executor (yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user) --> YARN (MR2 Included) (Service-Wide) --> nobody
3. Is there something more I should change?
4. Kerberos is not enabled on the system.
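After changing these in Cloudera Manager, it can help to confirm what value the NodeManagers actually picked up in their generated yarn-site.xml. A minimal sketch; the config path and helper name are assumptions:

```shell
# Read a single property value out of a Hadoop *-site.xml file,
# relying on the conventional <name>/<value> line adjacency.
get_site_prop() {
  grep -A1 "<name>$2</name>" "$1" \
    | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# Hypothetical usage on a NodeManager host:
#   get_site_prop /etc/hadoop/conf/yarn-site.xml \
#     yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user
```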
Tags:
- Oozie
01-29-2018
06:43 AM
Running the below script:

#!/bin/bash
ssh user@host.abcd.com "if [ -f /data/home/user/datafiles/Test.csv ] ; then echo 1 ; else echo 0 ; fi;"

Passwordless SSH has been set up for 'user' as well. Getting the below error:

Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]

Please assist!
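A likely cause of those "Permission denied" lines: an Oozie shell action runs as the container's Unix user (often 'yarn' on a non-Kerberos cluster) on an arbitrary NodeManager, so the passwordless key must exist for that user on every NodeManager host, not just where it was tested interactively. The file-check idiom itself can be factored out; a sketch with hypothetical host and path names:

```shell
# The remote check from the script, as a reusable function.
check_file() {
  if [ -f "$1" ]; then echo 1; else echo 0; fi
}

# Hypothetical one-time key setup for the user the action actually
# runs as (often 'yarn'), to be repeated on each NodeManager:
#   sudo -u yarn ssh-keygen -t rsa -N "" -f ~yarn/.ssh/id_rsa
#   sudo -u yarn ssh-copy-id user@host.abcd.com
#
# The remote invocation then mirrors the original script:
#   ssh user@host.abcd.com "if [ -f /data/home/user/datafiles/Test.csv ] ; then echo 1 ; else echo 0 ; fi;"
```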
Labels:
- Apache Oozie
- Cloudera Hue
01-11-2018
09:22 AM
Logged in to beeline:

beeline
!connect jdbc:hive2://<server name>:10000/default
username: hive
password: hive

Able to perform GRANT and CREATE ROLE but not able to do anything else, although hive should have superuser privileges. Error:

Error: Error while compiling statement: FAILED: SemanticException No valid privileges
User hive does not have privileges for CREATEDATABASE
The required privileges: Server=server1->action=*; (state=42000,code=40000)

Kindly assist.
Labels:
- Apache Hive
- Apache Sentry
12-01-2017
04:55 AM
@Jay Kumar SenSharma It throws an exception; I need to add it to the browser's exception list to access it. When the enable-HTTPS option comes up, we just need to upload the server cert out of the three certs in the keystore, right?
11-30-2017
11:20 AM
Hi @Jay Kumar SenSharma In regards to your link: https://community.hortonworks.com/articles/39865/enabling-https-for-ambariserver-and-troubleshootin.html I have configured HTTPS for my ambari-server, but when opening the Ambari UI in the web browser, it shows the connection as not secure. Can you help me fix this issue? I have three certificates: a server certificate, an intermediate certificate, and a root certificate.
Labels:
- Apache Ambari
11-20-2017
10:29 AM
@Vipin Rathor Yes, I added this property on one of the servers that had the agent file. It was still not able to communicate with the ambari-server.
11-20-2017
05:09 AM
@Vipin Rathor I am facing the same issue of agents losing heartbeat, even though the Ambari version being used here is 2.5.0. Please suggest.
11-10-2017
04:35 AM
@Aditya Sirna did you find anything on this?
11-08-2017
04:11 AM
Details of shiro_ini_content:

[urls]
# This section is used for url-based security.
# You can secure interpreter, configuration and credential information by urls.
# Comment or uncomment the below urls that you want to hide.
# anon means the access is anonymous.
# authc means Form based Auth Security
# To enforce security, comment the line below and uncomment the next one
#/api/version = anon
#/api/interpreter/** = authc, roles[admin]
#/api/configurations/** = authc, roles[admin]
#/api/credential/** = authc, roles[admin]
#/** = anon
#/** = authc
/api/configurations/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
/** = authc
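As background on the [urls] section: in Shiro the first matching path pattern wins, so an uncommented "/api/version = anon" exposes only the version endpoint anonymously while the final "/** = authc" still forces form-based login for everything else. A minimal sketch for toggling that line (the file path and helper names are hypothetical):

```shell
SHIRO_INI=conf/shiro.ini   # hypothetical path to Zeppelin's shiro.ini

# Uncomment "/api/version = anon" (version endpoint becomes anonymous).
enable_anon_version() {
  sed -i 's|^#\(/api/version = anon\)|\1|' "$1"
}

# Re-comment it (version endpoint requires auth like everything else).
disable_anon_version() {
  sed -i 's|^/api/version = anon|#/api/version = anon|' "$1"
}
```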
11-08-2017
04:09 AM
@Aditya Sirna, @Geoffrey Shelton Okot Should this line be commented or not?

#/api/version = anon

What exactly is it doing? The problem is that requests are getting submitted as anonymous irrespective of whether the user-impersonation option is checked or unchecked. It's an Access Control Exception. Please suggest.
11-07-2017
07:35 AM
@Aditya Sirna, this property is set to true both in the security section and in hive-interactive as well. In hive-interactive it is the default setting. We are, however, using hiveserver2.
11-07-2017
06:51 AM
For hive.server2.enable.doAs, the hive-interactive site has "Run as end user instead of Hive user = true". Running the same queries from beeline as the same user works fine, so permissions are correct. The JDBC interpreter does not have any of the above parameters, although I tried with them as well. It keeps taking the user as anonymous if no default user is specified, and anonymous has no access. In the Zeppelin config, "zeppelin.anonymous.allowed=false" is also set.
11-07-2017
04:04 AM
@Jay Kumar SenSharma could you help me with this?
11-07-2017
04:00 AM
Created a %hive interpreter with user impersonation enabled; when submitting queries I get this error:

user [anonymous] does not have [USE] privilege on [null]

Created %jdbc(hive) and, when submitting jobs, the user is able to access everything because the query runs as the hive user (Ranger permissions are set for hive to access everything). Please help me configure this so that the logged-in user is able to access only those tables it has access to under the Ranger policies.
Labels:
- Apache Hive
- Apache Zeppelin
11-06-2017
05:02 AM
1. Version 2.6
2. Hive 1.5
3. 5-6 concurrent users only for now
4. Using beeline, query execution works at normal speed
5. The error is intermittent, but performance is bad most of the time
6. Both load fine
7. Query execution is slow

This server is a standalone Ambari server. Please advise on the standard configuration changes that should be made.
10-30-2017
11:01 PM
SELECT COUNT(*) FROM drtest; NoViableAltException(26@[]) at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1028) at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:201) at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:466) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1278) at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1265) at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:186) at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:267) at org.apache.hive.service.cli.operation.Operation.run(Operation.java:337) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:439) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:416) at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:282) at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:501) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1309) at com.sun.proxy.$Proxy25.ExecuteStatement(Unknown Source) at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:246) at org.apache.hive.beeline.Commands.executeInternal(Commands.java:990) at org.apache.hive.beeline.Commands.execute(Commands.java:1192) at org.apache.hive.beeline.Commands.sql(Commands.java:1106) at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1169) at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1003) at 
org.apache.hive.beeline.BeeLine.begin(BeeLine.java:915) at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:511) at org.apache.hive.beeline.BeeLine.main(BeeLine.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.run(RunJar.java:221) at org.apache.hadoop.util.RunJar.main(RunJar.java:136) FAILED: ParseException line 1:0 cannot recognize input near 'OK' '17' '/' 17/10/31 02:01:10 [main]: ERROR ql.Driver: FAILED: ParseException line 1:0 cannot recognize input near 'OK' '17' '/' org.apache.hadoop.hive.ql.parse.ParseException: line 1:0 cannot recognize input near 'OK' '17' '/' at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:204) at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:466) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1278) at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1265) at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:186) at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:267) at org.apache.hive.service.cli.operation.Operation.run(Operation.java:337) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:439) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:416) at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:282) at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:501) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1309) at com.sun.proxy.$Proxy25.ExecuteStatement(Unknown Source) at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:246) at org.apache.hive.beeline.Commands.executeInternal(Commands.java:990) at org.apache.hive.beeline.Commands.execute(Commands.java:1192) at org.apache.hive.beeline.Commands.sql(Commands.java:1106) at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1169) at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1003) at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:915) at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:511) at org.apache.hive.beeline.BeeLine.main(BeeLine.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.run(RunJar.java:221) at org.apache.hadoop.util.RunJar.main(RunJar.java:136) 17/10/31 02:01:10 [main]: WARN thrift.ThriftCLIService: Error executing statement: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:0 cannot recognize input near 'OK' '17' '/' at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:400) at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:188) at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:267) at org.apache.hive.service.cli.operation.Operation.run(Operation.java:337) at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:439) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:416) at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:282) at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:501) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1309) at com.sun.proxy.$Proxy25.ExecuteStatement(Unknown Source) at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:246) at org.apache.hive.beeline.Commands.executeInternal(Commands.java:990) at org.apache.hive.beeline.Commands.execute(Commands.java:1192) at org.apache.hive.beeline.Commands.sql(Commands.java:1106) at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1169) at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1003) at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:915) at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:511) at org.apache.hive.beeline.BeeLine.main(BeeLine.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.run(RunJar.java:221) at org.apache.hadoop.util.RunJar.main(RunJar.java:136) Caused by: org.apache.hadoop.hive.ql.parse.ParseException: line 1:0 cannot recognize input near 'OK' '17' '/' at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:204) at 
org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:466) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1278) at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1265) at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:186) ... 27 more Error: Error while compiling statement: FAILED: ParseException line 1:0 cannot recognize input near 'OK' '17' '/' (state=42000,code=40000)