Member since
09-29-2014
224
Posts
11
Kudos Received
10
Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 719 | 01-24-2024 10:45 PM |
 | 3653 | 03-30-2022 08:56 PM |
 | 2932 | 08-12-2021 10:40 AM |
 | 7062 | 04-28-2021 01:30 AM |
 | 3571 | 09-27-2016 08:16 PM |
04-20-2017
10:53 AM
If you fixed the issue, please tell me how to create roles for the Hive database and grant permissions.
09-24-2016
12:11 PM
Great, and thanks for posting back your resolution, as it may help others down the road. Thanks, Jeff
06-24-2016
11:12 AM
1 Kudo
Greetings. If by chance you are still looking to resolve a return code 2 error while running Hive, I may have a solution for you if you don't get any information from the log files. Return code 2 is basically a camouflage for a Hadoop/YARN memory problem: not enough resources are configured in Hadoop/YARN to run your jobs.

If you are running a single-node cluster, see the link below:
http://stackoverflow.com/questions/26540507/what-is-the-maximum-containers-in-a-single-node-cluster-hadoop

You may be able to tweak the settings depending on your cluster setup. Even if this does not cure your problem 100%, at least the return code 2 or exit code 1 errors should disappear. Hope this helps.
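As a sketch of the kind of tweaking meant above, these are the standard YARN memory properties in yarn-site.xml; the values shown are illustrative for a small single-node box and should be tuned to your actual RAM, not copied verbatim:

```xml
<!-- Illustrative values only: sized for a single node with a few GB to spare. -->
<property>
  <!-- total memory the NodeManager may hand out to containers -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
<property>
  <!-- largest single container YARN will allocate -->
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>
<property>
  <!-- smallest container; allocations are rounded up to a multiple of this -->
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>
</property>
```

If the maximum allocation is smaller than what a Hive task asks for, the container is never granted and the job dies with an opaque return code rather than a clear out-of-memory message.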
02-17-2016
04:12 AM
1 Kudo
Try this one:
http://www.cloudera.com/documentation/manager/5-1-x/Configuring-Hadoop-Security-with-Cloudera-Manager/cm5chs_enable_security_s8.html
02-08-2016
05:07 PM
Hi, can we enable oozie.email.smtp.auth from the Cloudera Manager UI?
12-19-2015
02:21 AM
Got it resolved. Just delete the hook property in hive-site.xml.
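The poster doesn't name the exact property, so the following is only a hypothetical illustration: hive.exec.post.hooks is one common hook setting in hive-site.xml, and "deleting the hook property" would mean removing (or emptying) a block like this:

```xml
<!-- Hypothetical example: the actual property name was not given in the post.
     Removing this whole <property> block (or clearing its <value>) disables the hook. -->
<property>
  <name>hive.exec.post.hooks</name>
  <value>com.example.SomeHook</value>
</property>
```

Hive also has pre-execution and failure-hook counterparts (hive.exec.pre.hooks, hive.exec.failure.hooks); whichever one references a class that is missing from the classpath is the one to remove.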
10-14-2015
08:05 PM
I am able to connect to IBM MQ using the steps mentioned here, but when Flume tries to consume any messages from the queue, it throws the following exception:

com.ibm.msg.client.jms.DetailedMessageFormatException: JMSCC0053: An exception occurred deserializing a message, exception: 'java.lang.ClassNotFoundException: null class'. It was not possible to deserialize the message because of the exception shown.

1) I am using all the IBM MQ client jars. Flume starts without any exception, but the exception occurs when it tries to consume messages.
2) I am putting a custom message [a Serializable object] into the queue, which Flume needs to consume.
3) Flume 1.5.0-cdh5.4.1
4) MQ version 8.x

a1.sources=s1
a1.channels=c1
a1.sinks=k1
a1.sources.s1.type=jms
a1.sources.s1.channels=c1
a1.sources.s1.initialContextFactory=com.sun.jndi.fscontext.RefFSContextFactory
a1.sources.s1.connectionFactory=FLUME_CF
a1.sources.s1.destinationName=MY.Q
a1.sources.s1.providerURL=file:///home/JNDI-Directory
a1.sources.s1.destinationType=QUEUE
a1.sources.s1.transportType=1
a1.sources.s1.userName=mqm
a1.sources.s1.batchSize=1
a1.channels.c1.type=memory
a1.channels.c1.capacity=10000
a1.channels.c1.transactionCapacity=100
a1.sinks.k1.type=logger
a1.sinks.k1.channel=c1
09-04-2015
12:16 PM
Thanks for the tip on the CM API; I wasn't aware it had one. Knowing that little bit, I was able to find an example that uses Ansible: https://github.com/ymc-geko/ansible-cdh-cluster. There's an article discussing it here as well: http://blog.cloudera.com/blog/2013/08/how-to-install-cloudera-manager-and-search-with-ansible/.
12-03-2014
09:43 AM
I do apologize for the lack of responses to some of your queries. We do try to assist community members in finding solutions to their questions, but unfortunately there are no guarantees in the forums, as responses are purely voluntary. There are no service-level agreements in the community; those are only available to paid support customers through our actual support portal. We will continue to assist to the best of our ability, but please understand that you may not always get an immediate response, or a solution to a particular thread.
11-05-2014
06:29 AM
I have resolved this issue. Truth be told, regarding hdfs, yarn, and mapred, I know they are blocked from submitting jobs by default, but as you also know, min.user.id and the allowed-user list cover that case, so the issue was not about the user or the job.

I monitored it many times: when just one container started, it died automatically after a few seconds, whereas in the normal state it would invoke 3-4 containers in my environment. So I was sure the issue was that containers could not run normally. But why? As I said, only one container started normally, so I checked that container's log, but could find nothing beyond the errors I showed above. I also explained that when sqoop executes normally it creates a directory under the usercache directory, but when the sqoop job failed it did not, so I guessed this directory had some problem, though of course I did not know the exact reason.

Then I removed NameNode HA, leaving just one NameNode and one Secondary NameNode as default, and ran sqoop again. It failed too, but this time the log was more readable: a "NOT INITALIZE CONTAINER" error appeared. That made me more confident that the job really could not invoke a container.

In the end, I stopped the whole cluster, deleted /yarn/* on the datanodes and namenode, and started the cluster again. It works fine now. I still don't know why hdfs/yarn could not invoke containers, but the problem has been resolved.
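The cleanup step described above can be sketched as shell commands. This is only a sketch under assumptions: it assumes yarn.nodemanager.local-dirs points at /yarn/nm (check yarn-site.xml before deleting anything), and that services are already stopped:

```shell
# Assumed layout: yarn.nodemanager.local-dirs = /yarn/nm -- verify in yarn-site.xml first.
# Override YARN_LOCAL_DIR if your cluster uses a different path.
YARN_LOCAL_DIR="${YARN_LOCAL_DIR:-/yarn/nm}"

# 1. Stop YARN first (via Cloudera Manager, or stop-yarn.sh on a plain cluster).
# 2. On each NodeManager host, clear the cached container state:
if [ -d "$YARN_LOCAL_DIR" ]; then
    # :? guards against an empty variable expanding to "rm -rf /*"
    rm -rf "${YARN_LOCAL_DIR:?}"/usercache/* "${YARN_LOCAL_DIR:?}"/filecache/*
fi
# 3. Restart the cluster; NodeManagers recreate usercache/ and filecache/ on demand.
```

Clearing only usercache/ and filecache/ (rather than the whole local dir) is slightly more conservative than the /yarn/* deletion the poster did, but targets the same stale container state.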