Member since: 09-29-2014
Posts: 224
Kudos Received: 11
Solutions: 10
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 719 | 01-24-2024 10:45 PM
 | 3652 | 03-30-2022 08:56 PM
 | 2931 | 08-12-2021 10:40 AM
 | 7059 | 04-28-2021 01:30 AM
 | 3571 | 09-27-2016 08:16 PM
10-22-2014
10:12 PM
Thanks! Yes, the upgrade succeeded after I added the Cloudera Management Service. Everything is running fine now except Sentry: it seems to have reinitialized the table privileges and database privileges. I am not sure whether this is a bug, but I wanted to flag that I hit this case. Thanks again.
10-22-2014
11:25 AM
Hi, Cloudera support: I am upgrading CDH 5.1 to CDH 5.2 following the documentation. I have finished upgrading CM 5.1 to CM 5.2, but Cloudera Manager will not start. My upgrade steps were:

1) Stop the agents and the CM service
2) Back up the MySQL database
3) Download the CM packages from http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5/RPMS/x86_64/ and set up a local YUM repository
4) yum upgrade cloudera-*
5) Verify the CM packages

After completing these steps, CM fails to start with the error below:

Exception in thread "MainThread" java.util.NoSuchElementException: Cannot find management service.
at com.cloudera.api.dao.impl.ServiceManagerDaoImpl.getMgmtService(ServiceManagerDaoImpl.java:539)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.cloudera.api.dao.impl.ManagerDaoBase.invoke(ManagerDaoBase.java:208)
at com.sun.proxy.$Proxy80.getMgmtService(Unknown Source)
at com.cloudera.api.v1.impl.MgmtServiceResourceImpl.readService(MgmtServiceResourceImpl.java:41)
at com.cloudera.api.v3.impl.MgmtServiceResourceV3Impl$RoleConfigGroupsResourceWrapper.<init>(MgmtServiceResourceV3Impl.java:37)
at com.cloudera.api.v3.impl.MgmtServiceResourceV3Impl.getRoleConfigGroupsResource(MgmtServiceResourceV3Impl.java:25)
at com.cloudera.cmf.service.upgrade.RemoveBetaFromRCG.upgrade(RemoveBetaFromRCG.java:71)
at com.cloudera.cmf.service.upgrade.AbstractApiAutoUpgradeHandler.upgrade(AbstractApiAutoUpgradeHandler.java:36)
at com.cloudera.cmf.service.upgrade.AutoUpgradeHandlerRegistry.performAutoUpgradesForOneVersion(AutoUpgradeHandlerRegistry.java:233)
at com.cloudera.cmf.service.upgrade.AutoUpgradeHandlerRegistry.performAutoUpgrades(AutoUpgradeHandlerRegistry.java:167)
at com.cloudera.cmf.service.upgrade.AutoUpgradeHandlerRegistry.performAutoUpgrades(AutoUpgradeHandlerRegistry.java:138)
at com.cloudera.server.cmf.Main.run(Main.java:587)
at com.cloudera.server.cmf.Main.main(Main.java:198)

Could you give me some suggestions? Thanks very much.
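For reference, the steps above correspond roughly to the following commands, wrapped in a function so the sequence reads as one unit (a sketch, not invoked here; the service names and the `scm` database name are assumptions for a default Cloudera Manager install and may differ on your cluster):

```shell
# Sketch of the CM 5.1 -> 5.2 package-upgrade sequence (not invoked here)
upgrade_cloudera_manager() {
    # 1) stop the CM server and agent on each host
    sudo service cloudera-scm-server stop
    sudo service cloudera-scm-agent stop

    # 2) back up the Cloudera Manager database (MySQL here; 'scm' is assumed)
    mysqldump -u root -p scm > "scm-backup-$(date +%Y%m%d).sql"

    # 3) with the local YUM repo pointing at the downloaded RPMs, upgrade
    sudo yum clean all
    sudo yum upgrade 'cloudera-*'

    # 4) verify which CM packages are now installed
    rpm -qa 'cloudera-manager-*'
}
```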
10-05-2014
05:49 PM
I managed to resolve this issue. First I enabled debug mode to check the details, but found nothing. Then I opened namenode:8088 to check the history file and container details, but the container log would not open, which means the container no longer existed. Finally, I browsed the HDFS /user directory with the Hue file browser and opened some logs, and found they recorded that /tmp/history was permission denied. After deleting /tmp/history and trying again, everything works now.
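The cleanup above can be sketched as the following HDFS commands, again as a function rather than invoked directly (a sketch based on my cluster; /tmp/history is the directory my logs showed as permission denied):

```shell
# Sketch of the fix: inspect, then remove the stale history directory
clear_stale_history_dir() {
    # check ownership/permissions on the staging directory first
    sudo -u hdfs hdfs dfs -ls /tmp

    # remove the stale directory; it is recreated with correct
    # permissions the next time a job runs
    sudo -u hdfs hdfs dfs -rm -r -skipTrash /tmp/history
}
```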
10-05-2014
04:11 PM
Everyone, below are some tests I ran while setting HADOOP_YARN_HOME manually.

Test 1: when the home points at hadoop-0.20-mapreduce, the query succeeds.

[hdfs@namenode02 ~]$ export HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/
[hdfs@namenode02 ~]$ hive
14/10/06 06:59:04 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties
hive> select count(*) from test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number>
In order to set a constant number of reducers: set mapred.reduce.tasks=<number>
Starting Job = job_local1939864979_0001, Tracking URL = http://localhost:8080/
Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill job_local1939864979_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2014-10-06 06:59:14,364 Stage-1 map = 0%, reduce = 100%
Ended Job = job_local1939864979_0001
MapReduce Jobs Launched:
Job 0: HDFS Read: 0 HDFS Write: 12904 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
0
Time taken: 4.095 seconds, Fetched: 1 row(s)
hive> exit;

Test 2: the default setup (nothing changed), logged in to the OS as hdfs; it fails.

[hdfs@datanode03 ~]$ hive
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/10/06 07:03:27 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties
hive> select count(*) from test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number>
In order to set a constant number of reducers: set mapred.reduce.tasks=<number>
Starting Job = job_1412549128740_0004, Tracking URL = http://namenode01.hadoop:8088/proxy/application_1412549128740_0004/
Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill job_1412549128740_0004
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2014-10-06 07:03:53,523 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1412549128740_0004 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

Test 3: pointing HADOOP_YARN_HOME at the hadoop-yarn directory; it fails.

[hdfs@namenode02 hadoop-yarn]$ export HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop-yarn/
[hdfs@namenode02 hadoop-yarn]$ hive
14/10/06 06:44:38 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties
hive> show tables;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
hive> show tables;
OK
database_params
**bleep**you
sequence_table
tbls
test
test1
Time taken: 0.338 seconds, Fetched: 6 row(s)
hive> select count(*) from test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number>
In order to set a constant number of reducers: set mapred.reduce.tasks=<number>
Starting Job = job_1412549128740_0003, Tracking URL = http://namenode01.hadoop:8088/proxy/application_1412549128740_0003/
Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill job_1412549128740_0003
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2014-10-06 06:54:19,156 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1412549128740_0003 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

The conclusion: Hive only works when I set HADOOP_YARN_HOME to the *-0.20-* tree. So what should I do now?
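The temporary workaround from Test 1 boils down to two exports before starting Hive (a sketch; the parcel path is from my CDH 5.1 install, and setting HADOOP_MAPRED_HOME alongside HADOOP_YARN_HOME is my assumption based on the resolution of this thread):

```shell
# Point the YARN/MapRed home variables at the hadoop-0.20-mapreduce tree
# before launching the Hive CLI in the same shell
export HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/
export HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/

# then run the query that previously failed:
#   hive -e 'select count(*) from test;'
```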
10-05-2014
02:48 PM
Hi everyone, I have set up Kerberos plus the Sentry service (not policy files). Currently everything works fine except Hive: "select * from table" works, meaning a statement with no conditions finishes OK, but "select count(*) from table" or "select * from table where xxx=xxx" fails with errors like the one in the title. That is strange; has anybody seen this before? Thanks in advance. More details below:
14/10/06 05:28:31 INFO mapreduce.Job: The url to track the job: http://namenode01.hadoop:8088/proxy/application_1412544483910_0001/
14/10/06 05:28:31 INFO exec.Task: Starting Job = job_1412544483910_0001, Tracking URL = http://namenode01.hadoop:8088/proxy/application_1412544483910_0001/
14/10/06 05:28:31 INFO exec.Task: Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill job_1412544483910_0001
14/10/06 05:28:31 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:32 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:33 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:34 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:35 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:36 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:37 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:38 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:40 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:41 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:43 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:45 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:46 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:48 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:50 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:53 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:55 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
14/10/06 05:28:55 INFO exec.Task: Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
14/10/06 05:28:55 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/10/06 05:28:55 INFO exec.Task: 2014-10-06 05:28:55,502 Stage-1 map = 0%, reduce = 0%
14/10/06 05:28:55 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
14/10/06 05:28:55 ERROR exec.Task: Ended Job = job_1412544483910_0001 with errors
14/10/06 05:28:55 INFO impl.YarnClientImpl: Killed application application_1412544483910_0001
14/10/06 05:28:55 INFO log.PerfLogger: </PERFLOG method=task.MAPRED.Stage-1 start=1412544509576 end=1412544535559 duration=25983 from=org.apache.hadoop.hive.ql.Driver>
14/10/06 05:28:55 ERROR ql.Driver: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
14/10/06 05:28:55 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1412544509575 end=1412544535560 duration=25985 from=org.apache.hadoop.hive.ql.Driver>
14/10/06 05:28:55 INFO ql.Driver: MapReduce Jobs Launched:
14/10/06 05:28:55 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
14/10/06 05:28:55 INFO ql.Driver: Job 0: HDFS Read: 0 HDFS Write: 0 FAIL
14/10/06 05:28:55 INFO ql.Driver: Total MapReduce CPU Time Spent: 0 msec
14/10/06 05:28:55 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
14/10/06 05:28:55 INFO ZooKeeperHiveLockManager: about to release lock for default/**bleep**you
14/10/06 05:28:55 INFO ZooKeeperHiveLockManager: about to release lock for default
14/10/06 05:28:55 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1412544535562 end=1412544535579 duration=17 from=org.apache.hadoop.hive.ql.Driver>
14/10/06 05:28:55 ERROR operation.Operation: Error:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:146)
at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:64)
at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:177)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
14/10/06 05:28:57 INFO cli.CLIService: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=c31c63b7-e53e-4574-a128-09d1c6aa7728]: getOperationStatus()
10-03-2014
01:32 PM
Just like the other services. Please look carefully and you will find the button. I have done this many times.
10-03-2014
07:50 AM
I have managed to resolve this problem, thanks everybody. The cause is that CREATE ROLE can only be executed through Beeline, not the Hive CLI. After granting the privileges with Beeline, it works now.
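For anyone hitting the same error, the working sequence looks roughly like this, wrapped in a function rather than invoked directly (a sketch; the HiveServer2 host, Kerberos principal, role name, and group name are placeholders for my setup):

```shell
# Sentry role DDL must go through Beeline/HiveServer2, not the Hive CLI
grant_via_beeline() {
    beeline -u 'jdbc:hive2://hiveserver2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM' \
        -e 'CREATE ROLE analyst_role' \
        -e 'GRANT ALL ON DATABASE default TO ROLE analyst_role' \
        -e 'GRANT ROLE analyst_role TO GROUP analysts'
}
```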
10-03-2014
01:31 AM
I have installed Sentry and tried to make Hive and Sentry work together. I followed the documentation to set up the Sentry service configuration and the LDAP group mapping, and from the NameNode log and the LDAP log I can confirm the group mapping is working. But when I issue "create role role_name" in Hive, I get the error "FAILED: SemanticException The current builtin authorization in Hive is incomplete and disabled." Has anybody run into this kind of error? Any advice would be appreciated. Thanks.
Labels:
- Apache Hive
- Apache Sentry
09-30-2014
12:10 PM
I have resolved this problem. It was caused by the HADOOP_YARN_HOME and HADOOP_MAPRED_HOME environment variables: if I set these two variables to the *-0.20-* tree manually before invoking Hive, it works. At first I wondered why only two hosts had this problem while the others were fine. Eventually I found that the other hosts ran a NameNode or ResourceManager, while these two hosts had no YARN-related roles at all. Following that assumption, I installed a NodeManager on these two hosts, tried again, and it works. Cloudera support, could you explain why this situation occurs? Which hosts get YARN roles depends on requirements, and we cannot install YARN on every host; if hosts without YARN cannot run Hive, that seems unreasonable.