Member since
04-17-2017
42
Posts
0
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 2530 | 04-17-2018 02:52 AM
08-16-2020
12:15 PM
./tpch-setup.sh 5 /hive-data-dir-benchmark
TPC-H text data generation complete.
Loading text data into external tables.
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
Optimizing table part (1/8).
^CCommand failed, try 'export DEBUG_SCRIPT=ON' and re-running

sandbox.cluster.com /home/c095784/benchmarking/hive-testbench :
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench :
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench :
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench : export DEBUG_SCRIPT=ON
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench : ./tpch-setup.sh 5 /hive-data-dir-benchmark
+ '[' X5 = X ']'
+ '[' X/hive-data-dir-benchmark = X ']'
+ '[' 5 -eq 1 ']'
+ hdfs dfs -mkdir -p /hive-data-dir-benchmark
+ hdfs dfs -ls /hive-data-dir-benchmark/5/lineitem
+ '[' 0 -ne 0 ']'
+ hdfs dfs -ls /hive-data-dir-benchmark/5/lineitem
+ '[' 0 -ne 0 ']'
+ echo 'TPC-H text data generation complete.'
TPC-H text data generation complete.
+ echo 'Loading text data into external tables.'
Loading text data into external tables.
+ runcommand 'hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/alltables.sql -d DB=tpch_text_5 -d LOCATION=/hive-data-dir-benchmark/5'
+ '[' XON '!=' X ']'
+ hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/alltables.sql -d DB=tpch_text_5 -d LOCATION=/hive-data-dir-benchmark/5
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/jars/hive-common-1.1.0-cdh5.15.2.jar!/hive-log4j.properties
OK  Time taken: 2.198 seconds
OK  Time taken: 0.013 seconds
OK  Time taken: 0.578 seconds
OK  Time taken: 1.183 seconds
OK  Time taken: 0.814 seconds
OK  Time taken: 0.494 seconds
OK  Time taken: 0.504 seconds
OK  Time taken: 0.493 seconds
OK  Time taken: 0.506 seconds
OK  Time taken: 0.495 seconds
OK  Time taken: 0.502 seconds
OK  Time taken: 0.494 seconds
OK  Time taken: 0.503 seconds
OK  Time taken: 0.496 seconds
OK  Time taken: 0.505 seconds
OK  Time taken: 0.495 seconds
OK  Time taken: 0.503 seconds
OK  Time taken: 0.495 seconds
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
+ i=1
+ total=8
+ test 5 -le 1000
+ SCHEMA_TYPE=flat
+ DATABASE=tpch_flat_orc_5
+ MAX_REDUCERS=2600
++ test 5 -gt 2600
++ echo 5
+ REDUCERS=5
+ for t in '${TABLES}'
+ echo 'Optimizing table part (1/8).'
Optimizing table part (1/8).
+ COMMAND='hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/part.sql -d DB=tpch_flat_orc_5 -d SOURCE=tpch_text_5 -d BUCKETS=13 -d SCALE=5 -d REDUCERS=5 -d FILE=orc'
+ runcommand 'hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/part.sql -d DB=tpch_flat_orc_5 -d SOURCE=tpch_text_5 -d BUCKETS=13 -d SCALE=5 -d REDUCERS=5 -d FILE=orc'
+ '[' XON '!=' X ']'
+ hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/part.sql -d DB=tpch_flat_orc_5 -d SOURCE=tpch_text_5 -d BUCKETS=13 -d SCALE=5 -d REDUCERS=5 -d FILE=orc
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/jars/hive-common-1.1.0-cdh5.15.2.jar!/hive-log4j.properties
OK  Time taken: 2.152 seconds
OK  Time taken: 0.017 seconds
OK  Time taken: 0.051 seconds
Query ID = c095784_20200816094848_3f33b234-3f7f-4d1b-b862-2624c0bb43cd
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
^C+ '[' 130 -ne 0 ']'
+ echo 'Command failed, try '\''export DEBUG_SCRIPT=ON'\'' and re-running'
Command failed, try 'export DEBUG_SCRIPT=ON' and re-running
+ exit 1

sandbox.cluster.com /home/c095784/benchmarking/hive-testbench :
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench :
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench :
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench :
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench :
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench : klist
Ticket cache: FILE:/tmp/krb5cc_895784
Default principal: neha@EXELONDS.COM
Valid starting     Expires            Service principal
08/16/20 08:41:04  08/16/20 18:41:04  krbtgt/EXELONDS.COM@EXELONDS.COM
08/16/20 08:41:04  08/16/20 18:41:04  BDAL1CCC1N06$@EXELONDS.COM
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench : export DEBUG_SCRIPT=ON
sandbox.cluster.com /home/c095784/benchmarking/hive-testbench : ./tpch-setup.sh 5 /hive-data-dir-benchmark
+ '[' X5 = X ']'
+ '[' X/hive-data-dir-benchmark = X ']'
+ '[' 5 -eq 1 ']'
+ hdfs dfs -mkdir -p /hive-data-dir-benchmark
+ hdfs dfs -ls /hive-data-dir-benchmark/5/lineitem
+ '[' 0 -ne 0 ']'
+ hdfs dfs -ls /hive-data-dir-benchmark/5/lineitem
+ '[' 0 -ne 0 ']'
+ echo 'TPC-H text data generation complete.'
TPC-H text data generation complete.
+ echo 'Loading text data into external tables.'
Loading text data into external tables.
+ runcommand 'hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/alltables.sql -d DB=tpch_text_5 -d LOCATION=/hive-data-dir-benchmark/5'
+ '[' XON '!=' X ']'
+ hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/alltables.sql -d DB=tpch_text_5 -d LOCATION=/hive-data-dir-benchmark/5
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/jars/hive-common-1.1.0-cdh5.15.2.jar!/hive-log4j.properties
OK  Time taken: 2.225 seconds
OK  Time taken: 0.018 seconds
OK  Time taken: 0.802 seconds
OK  Time taken: 0.991 seconds
OK  Time taken: 0.506 seconds
OK  Time taken: 0.494 seconds
OK  Time taken: 0.504 seconds
OK  Time taken: 0.493 seconds
OK  Time taken: 0.505 seconds
OK  Time taken: 0.495 seconds
OK  Time taken: 0.502 seconds
OK  Time taken: 0.496 seconds
OK  Time taken: 0.503 seconds
OK  Time taken: 0.495 seconds
OK  Time taken: 0.502 seconds
OK  Time taken: 0.496 seconds
OK  Time taken: 0.503 seconds
OK  Time taken: 0.497 seconds
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
+ i=1
+ total=8
+ test 5 -le 1000
+ SCHEMA_TYPE=flat
+ DATABASE=tpch_flat_orc_5
+ MAX_REDUCERS=2600
++ test 5 -gt 2600
++ echo 5
+ REDUCERS=5
+ for t in '${TABLES}'
+ echo 'Optimizing table part (1/8).'
Optimizing table part (1/8).
+ COMMAND='hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/part.sql -d DB=tpch_flat_orc_5 -d SOURCE=tpch_text_5 -d BUCKETS=13 -d SCALE=5 -d REDUCERS=5 -d FILE=orc'
+ runcommand 'hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/part.sql -d DB=tpch_flat_orc_5 -d SOURCE=tpch_text_5 -d BUCKETS=13 -d SCALE=5 -d REDUCERS=5 -d FILE=orc'
+ '[' XON '!=' X ']'
+ hive -i settings/load-flat.sql -f ddl-tpch/bin_flat/part.sql -d DB=tpch_flat_orc_5 -d SOURCE=tpch_text_5 -d BUCKETS=13 -d SCALE=5 -d REDUCERS=5 -d FILE=orc
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/jars/hive-common-1.1.0-cdh5.15.2.jar!/hive-log4j.properties
OK  Time taken: 2.126 seconds
OK  Time taken: 0.015 seconds
OK  Time taken: 0.049 seconds
Query ID = c095784_20200816115353_616b8c96-f2da-4ea7-94a3-2d1501c02691
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1597596829164_0002, Tracking URL = http://sandbox.cluster.com:8088/proxy/application_1597596829164_0002/
Kill Command = /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/bin/hadoop job -kill job_1597596829164_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2020-08-16 11:54:44,433 Stage-1 map = 0%, reduce = 0%
2020-08-16 11:54:54,917 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.9 sec
2020-08-16 11:55:13,757 Stage-1 map = 0%, reduce = 0%
2020-08-16 11:55:18,933 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 3.41 sec
2020-08-16 11:55:19,971 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.41 sec
MapReduce Total cumulative CPU time: 3 seconds 410 msec
Ended Job = job_1597596829164_0002 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1597596829164_0002_m_000000 (and more) from job job_1597596829164_0002
Task with the most failures(4):
-----
Task ID: task_1597596829164_0002_r_000000
URL: http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1597596829164_0002&tipid=task_1597596829164_0002_r_000000
-----
Diagnostic Messages for this Task:
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#4
  at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
  at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
  at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
  at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
  at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
  at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1  Cumulative CPU: 3.41 sec  HDFS Read: 5155  HDFS Write: 0  FAIL
Total MapReduce CPU Time Spent: 3 seconds 410 msec
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
+ '[' 2 -ne 0 ']'
+ echo 'Command failed, try '\''export DEBUG_SCRIPT=ON'\'' and re-running'
Command failed, try 'export DEBUG_SCRIPT=ON' and re-running
+ exit 1
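For reference, these are the checks I plan to run before re-submitting. This is only a sketch: it assumes the default CDH client configuration under /etc/hadoop/conf on the sandbox (NodeManager) host, so adjust the paths if yours differ.

grep -A1 "yarn.nodemanager.aux-services" /etc/hadoop/conf/yarn-site.xml   # should point at mapreduce_shuffle
grep -A1 "mapreduce.reduce.shuffle" /etc/hadoop/conf/mapred-site.xml      # any overridden shuffle connect/read timeouts
hostname -f                                                               # the failed-task URL above reports 0.0.0.0
getent hosts sandbox.cluster.com                                          # confirm the host resolves consistently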
08-16-2020
12:14 PM
Getting the below error:

Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#4
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out
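For reference, once the job fails again I will pull the full task logs like this. It is just a sketch: it assumes YARN log aggregation is enabled and reuses the application ID from the full console output above.

yarn application -status application_1597596829164_0002
yarn logs -applicationId application_1597596829164_0002 | less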
Labels:
- Apache Hive
- Apache YARN
04-17-2018
02:52 AM
Rebooted the server and was able to access Postgres after that. Then deleted the web_tls entry.
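For anyone who lands here, this is roughly what the cleanup looked like. Treat it as a sketch only: the scm database name and the CONFIGS table with ATTR/VALUE columns are assumptions based on the rollback thread linked in my post below, so verify against your own schema before deleting anything.

psql -U cloudera-scm -p 7432 -h localhost -d scm
scm=> SELECT attr, value FROM configs WHERE attr LIKE '%tls%';   -- see which TLS-related settings exist
scm=> DELETE FROM configs WHERE attr = 'web_tls';                -- remove the web_tls setting
scm=> \q
service cloudera-scm-server restart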
04-16-2018
10:15 AM
Implemented TLS for Cloudera Manager and have been unable to access CM since then.
Referred to this link to revert the changes: https://community.cloudera.com/t5/Cloudera-Manager-Installation/how-to-rollback-cloudera-manager-tls-configuration-without-UI/td-p/46484
However, I am not able to access the embedded Postgres console to do so.
Getting the error:

[root@hostname pgsql]# psql -U cloudera-scm -p 7432 -h localhost -d postgres
psql: FATAL: the database system is shutting down

# lsof -i | grep LISTEN | grep cloudera-scm
postgres 11232 cloudera-scm   3u IPv4  105319 0t0 TCP *:7432 (LISTEN)
postgres 11232 cloudera-scm   4u IPv6  105320 0t0 TCP *:7432 (LISTEN)
java     24548 cloudera-scm 243u IPv4 3988695 0t0 TCP hostname.com:us-srv (LISTEN)
java     24548 cloudera-scm 244u IPv4 3988697 0t0 TCP *:rrac (LISTEN)
java     24550 cloudera-scm 347u IPv4 3988161 0t0 TCP hostname.com:palace-6 (LISTEN)
java     24550 cloudera-scm 348u IPv4 3989833 0t0 TCP hostname.com:d-s-n (LISTEN)
java     24550 cloudera-scm 349u IPv4 3989832 0t0 TCP *:palace-5 (LISTEN)
java     24587 cloudera-scm 237u IPv4 3987633 0t0 TCP *:7184 (LISTEN)
java     24587 cloudera-scm 241u IPv4 3987635 0t0 TCP *:8084 (LISTEN)
java     24587 cloudera-scm 245u IPv4 3987637 0t0 TCP *:7185 (LISTEN)
java     24627 cloudera-scm 268u IPv4 3989842 0t0 TCP *:distinct32 (LISTEN)
java     24627 cloudera-scm 272u IPv4 3989844 0t0 TCP hostname.com:distinct (LISTEN)
java     24627 cloudera-scm 273u IPv4 3988199 0t0 TCP hostname.com:simplifymedia (LISTEN)
java     24656 cloudera-scm 302u IPv4 3984366 0t0 TCP hostname.com:palace-4 (LISTEN)
java     24656 cloudera-scm 308u IPv4 3986063 0t0 TCP *:palace-3 (LISTEN)
java     24656 cloudera-scm 309u IPv4 3986068 0t0 TCP hostname.com:jamlink (LISTEN)
java     24704 cloudera-scm 232u IPv4 3987626 0t0 TCP *:ezmeeting-2 (LISTEN)
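In the meantime these are the checks I am running. A rough sketch only: the serverlog path below is an assumed default for the embedded Cloudera Manager database's data directory and may differ on your host.

service cloudera-scm-server-db status
tail -n 50 /var/lib/cloudera-scm-server-db/data/serverlog    # assumed default location of the embedded Postgres log
service cloudera-scm-server-db restart                       # if the log shows it stuck shutting down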
Please suggest.
Labels:
- Cloudera Manager
- Security
03-07-2018
05:36 AM
My cloudera-scm-server has suddenly gone dead; when I restart it, it goes dead again within about 3 seconds. Getting the below cloudera-scm-server.log output:

JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera_backup
Exception in thread "main" org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.cloudera.server.cmf.TrialState': Cannot resolve reference to bean 'entityManagerFactoryBean' while setting constructor argument; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactoryBean': FactoryBean threw exception on object creation; nested exception is javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: Could not open connection
  at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:328)
  at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:106)
  at org.springframework.beans.factory.support.ConstructorResolver.resolveConstructorArguments(ConstructorResolver.java:616)
  at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:148)
  at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1003)
  at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:907)
  at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:485)
  at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
  at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:293)
  at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
  at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:290)
  at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:192)
  at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:585)
  at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:895)
  at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:425)
  at com.cloudera.server.cmf.Main.bootstrapSpringContext(Main.java:393)
  at com.cloudera.server.cmf.Main.<init>(Main.java:243)
  at com.cloudera.server.cmf.Main.main(Main.java:216)
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactoryBean': FactoryBean threw exception on object creation; nested exception is javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: Could not open connection
  at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:149)
  at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:102)
  at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1440)
  at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:247)
  at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:192)
  at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:322)
  ... 17 more
Caused by: javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: Could not open connection
  at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1387)
  at org.hibernate.ejb.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1310)
  at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:1397)
  at org.hibernate.ejb.TransactionImpl.begin(TransactionImpl.java:62)
  at com.cloudera.enterprise.AbstractWrappedEntityManager.beginForRollbackAndReadonly(AbstractWrappedEntityManager.java:89)
  at com.cloudera.enterprise.dbutil.DbUtil.isInnoDbEnabled(DbUtil.java:549)
  at com.cloudera.server.cmf.bootstrap.EntityManagerFactoryBean.checkMysqlTableEngineType(EntityManagerFactoryBean.java:139)
  at com.cloudera.server.cmf.bootstrap.EntityManagerFactoryBean.getObject(EntityManagerFactoryBean.java:122)
  at com.cloudera.server.cmf.bootstrap.EntityManagerFactoryBean.getObject(EntityManagerFactoryBean.java:65)
  at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:142)
  ... 22 more
Caused by: org.hibernate.exception.GenericJDBCException: Could not open connection
  at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:54)
  at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:125)
  at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:110)
  at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:221)
  at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.getConnection(LogicalConnectionImpl.java:157)
  at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doBegin(JdbcTransaction.java:67)
  at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.begin(AbstractTransactionImpl.java:160)
  at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1426)
  at org.hibernate.ejb.TransactionImpl.begin(TransactionImpl.java:59)
  ... 28 more
Caused by: java.sql.SQLException: Connections could not be acquired from the underlying database!
  at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106)
  at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:529)
  at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
  at org.hibernate.service.jdbc.connections.internal.C3P0ConnectionProvider.getConnection(C3P0ConnectionProvider.java:84)
  at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:292)
  at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:214)
  ... 33 more
Caused by: com.mchange.v2.resourcepool.CannotAcquireResourceException: A ResourcePool could not acquire a resource from its primary factory or source.
  at com.mchange.v2.resourcepool.BasicResourcePool.awaitAvailable(BasicResourcePool.java:1319)
  at com.mchange.v2.resourcepool.BasicResourcePool.prelimCheckoutResource(BasicResourcePool.java:557)
  at com.mchange.v2.resourcepool.BasicResourcePool.checkoutResource(BasicResourcePool.java:477)
  at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:525)
  ... 37 more
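As a first pass at isolating this, I am checking where the server is actually trying to connect. A sketch only, assuming the standard configuration location and the embedded PostgreSQL defaults:

cat /etc/cloudera-scm-server/db.properties                        # com.cloudera.cmf.db.type / db.host / db.name / db.user
service cloudera-scm-server-db status                             # only relevant if the embedded database is in use
psql -U cloudera-scm -p 7432 -h localhost -d scm -c 'SELECT 1;'   # manual connectivity test (password from db.properties)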
02-20-2018
12:50 AM
The Hive Gateway is present on the host from which I am trying to run the Spark job.
02-08-2018
03:44 AM
Thanks. I chose the last option and went ahead, and the installation completed successfully. Not sure what drawbacks it will lead to.
02-07-2018
08:37 AM
Manually installed MySQL and tried to do a Path B install; getting the below error:

BEGIN zypper --gpg-auto-import-keys -n se -i --match-exact -t package cloudera-manager-agent | grep -E '^i[[:space:]]*\|[[:space:]]*cloudera-manager-agent'
END (1)
BEGIN zypper --gpg-auto-import-keys -n se --match-exact -t package cloudera-manager-agent
Loading repository data...
Reading installed packages...
S | Name                   | Summary                    | Type
--+------------------------+----------------------------+--------
  | cloudera-manager-agent | The Cloudera Manager Agent | package
END (0)
BEGIN zypper info cloudera-manager-agent | grep -E 'Version[[:space:]]*:[[:space:]]5.14.0-.*\.25'
Version: 5.14.0-1.cm5140.p0.25.sles11
END (0)
BEGIN zypper --gpg-auto-import-keys -n in cloudera-manager-agent
Loading repository data...
Reading installed packages...
Resolving package dependencies...

Problem: nothing provides apache2 needed by cloudera-manager-agent-5.14.0-1.cm5140.p0.25.sles11.x86_64
 Solution 1: do not install cloudera-manager-agent-5.14.0-1.cm5140.p0.25.sles11.x86_64
 Solution 2: break cloudera-manager-agent-5.14.0-1.cm5140.p0.25.sles11.x86_64 by ignoring some of its dependencies

Choose from above solutions by number or cancel [1/2/c] (c): c
END (4)
remote package cloudera-manager-agent could not be installed, giving up
waiting for rollback request

Please suggest any fix.
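What I plan to try next, as a sketch only: it assumes one of the enabled SLES repositories (or the SLES media/SDK) actually carries apache2, which is exactly the capability zypper reports as missing.

zypper lr -u                          # list enabled repositories and their URIs
zypper se -s apache2                  # is apache2 visible from any enabled repo?
zypper in apache2                     # satisfy the dependency first
zypper in cloudera-manager-agent      # then retry the agent install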
02-05-2018
09:01 AM
Did the same as you mentioned, but the same error still pops up:

hostname:~ # zypper addrepo http://download.opensuse.org/repositories/server:database:postgresql/SLE_11_SP4/server:database:postgresql.repo
Adding repository 'PostgreSQL and related packages (SLE_11_SP4)' [done]
Repository 'PostgreSQL and related packages (SLE_11_SP4)' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
URI: http://download.opensuse.org/repositories/server:/database:/postgresql/SLE_11_SP4/

hostname:~ # zypper refresh
Retrieving repository 'PostgreSQL and related packages (SLE_11_SP4)' metadata [\]
New repository or package signing key received:
Key ID: 562111AC05905EA8
Key Name: server:database OBS Project <server:database@build.opensuse.org>
Key Fingerprint: 116EB86331583E47E63CDF4D562111AC05905EA8
Key Created: Thu 02 Feb 2017 08:09:57 AM EST
Key Expires: Sat 13 Apr 2019 09:09:57 AM EDT
Repository: PostgreSQL and related packages (SLE_11_SP4)
Do you want to reject the key, trust temporarily, or trust always? [r/t/a/? shows all options] (r): a
Retrieving repository 'PostgreSQL and related packages (SLE_11_SP4)' metadata [done]
Building repository 'PostgreSQL and related packages (SLE_11_SP4)' cache [done]
All repositories have been refreshed.

hostname:~ # ls -l /etc/zypp/repos.d/ | grep postgres
-rw-r--r-- 1 root root 330 Feb 5 11:51 server_database_postgresql.repo

hostname:~ # zypper clean --all
All repositories have been cleaned up.

Still getting the same error:

hostname:/var/log/cloudera-manager-installer # ll
total 28
-rw-r--r-- 1 root root   56 Feb 5 11:53 0.check-selinux.log
-rw-r--r-- 1 root root    0 Feb 5 11:54 1.install-repo-pkg.log
-rw-r--r-- 1 root root  784 Feb 5 11:54 2.install-oracle-j2sdk1.7.log
-rw-r--r-- 1 root root 1368 Feb 5 11:56 3.install-cloudera-manager-server.log
-rw-r--r-- 1 root root   33 Feb 5 11:56 4.check-for-systemd.log
-rw-r--r-- 1 root root  496 Feb 5 11:56 5.install-cloudera-manager-server-db-2.log
-rw-r--r-- 1 root root   51 Feb 5 11:56 6.remove-cloudera-manager-server.log
-rw-r--r-- 1 root root    0 Feb 5 11:56 7.remove-cloudera-manager-daemons.log
-rw-r--r-- 1 root root  106 Feb 5 11:56 8.remove-cloudera-manager-repository.log
hostname:/var/log/cloudera-manager-installer # pwd
/var/log/cloudera-manager-installer
hostname:/var/log/cloudera-manager-installer # cat 5.install-cloudera-manager-server-db-2.log
Loading repository data...
Reading installed packages...
Resolving package dependencies...

Problem: nothing provides postgresql-server >= 8.4 needed by cloudera-manager-server-db-2-5.14.0-1.cm5140.p0.25.sles11.x86_64
 Solution 1: do not install cloudera-manager-server-db-2-5.14.0-1.cm5140.p0.25.sles11.x86_64
 Solution 2: break cloudera-manager-server-db-2-5.14.0-1.cm5140.p0.25.sles11.x86_64 by ignoring some of its dependencies

Choose from above solutions by number or cancel [1/2/c] (c): c
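Next checks on my side, as a sketch: this assumes the newly added server:database:postgresql repository is enabled and actually publishes a postgresql-server package (>= 8.4) for SLE 11 SP4.

zypper lr -u                         # confirm the repo is still enabled after the clean
zypper se -s postgresql-server       # does any enabled repo offer it, and at what version?
zypper in postgresql-server          # install it before re-running the Cloudera Manager installer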
02-05-2018
07:19 AM
Unable to install Cloudera Manager on SUSE Linux Enterprise Server 11 (x86_64); getting the below error in /var/log/cloudera-manager-installer/5.install-cloudera-manager-server-db-2.log:

Problem: nothing provides postgresql-server >= 8.4 needed by cloudera-manager-server-db-2-5.14.0-1.cm5140.p0.25.sles11.x86_64
 Solution 1: do not install cloudera-manager-server-db-2-5.14.0-1.cm5140.p0.25.sles11.x86_64
 Solution 2: break cloudera-manager-server-db-2-5.14.0-1.cm5140.p0.25.sles11.x86_64 by ignoring some of its dependencies
Labels:
- Cloudera Manager