Member since: 01-29-2018
Posts: 20
Kudos Received: 3
Solutions: 0
01-05-2019
09:44 AM
Hi @lwang, thank you very much for your kind feedback. I'm preparing the CDH and CM upgrade to 5.13.1 in order to avoid this issue. Thanks again! Regards, Alex
12-15-2018
01:30 AM
Hi @lwang, thanks a lot for sharing this information with us! This is odd behavior; could you tell me what the root cause of this is, please? Which CDH versions are affected? At the moment I have found this problem with CDH 5.8 only. Many thanks in advance for your kind cooperation/availability. Regards, Alex
12-07-2018
07:06 AM
Hi AKR, yes, I set the Log Retain Duration parameter to 7 days, but nothing changed. I also modified the parameter yarn.nodemanager.delete.debug-delay-sec to 7 days, but I can't find any logs on HDFS or in the local location either. Maybe it is an issue with CDH 5.8.0, because with CDH 5.13.1 I didn't see this problem. Regards, Alex
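P.S. For reference, this is roughly how I'm looking for the container logs on a worker node; the application ID and the local directory below are placeholders (check yarn.nodemanager.log-dirs on your cluster for the actual path):

# Local container logs, which yarn.nodemanager.delete.debug-delay-sec should keep
# around on the NodeManager after the job finishes
ls -l <yarn.nodemanager.log-dirs>/application_<id>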
07-12-2018
08:32 AM
Hi @Romainr, in which section/configuration file should I set this parameter ( self._client.set_kerberos_auth() ) for the Hue service? Many thanks! Alex
05-07-2018
06:28 AM
Hi @pdvorak, as of today I'm having the same error, but I can't execute the command solrctl init --force in the Prod environment. Is there any other solution to fix this issue?

2018-05-07 14:47:14,451 WARN org.apache.zookeeper.ClientCnxn: Session 0x1633aa1e11a0014 for server [server name], unexpected error, closing socket connection and attempting reconnect
java.io.IOException: Unreasonable length = 1051274
    at org.apache.jute.BinaryInputArchive.checkLength(BinaryInputArchive.java:127)
    at org.apache.jute.BinaryInputArchive.readBuffer(BinaryInputArchive.java:92)
    at org.apache.zookeeper.proto.GetDataResponse.deserialize(GetDataResponse.java:54)
    at org.apache.zookeeper.ClientCnxn$SendThread.readResponse(ClientCnxn.java:814)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:94)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:355)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2018-05-07 14:47:14,451 ERROR org.apache.solr.servlet.SolrDispatchFilter: Could not start Solr. Check solr/home property and the logs
2018-05-07 14:47:14,511 ERROR org.apache.solr.core.SolrCore: null:java.lang.NullPointerException

Many thanks in advance for your kind cooperation. Regards, Alex
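P.S. One thing I'm checking in the meantime: the length in the error (1051274 bytes) is just above ZooKeeper's default client read buffer (jute.maxbuffer, 1048575 bytes), so I'm trying to list the Solr znodes with a larger client buffer to see whether one of them has grown past that limit. The server and znode path below are only examples, and this assumes the zookeeper-client wrapper picks up CLIENT_JVMFLAGS like the stock zkCli.sh:

# Raise the ZooKeeper client's read buffer (value in bytes) for this session only
export CLIENT_JVMFLAGS="-Djute.maxbuffer=4194304"
# Inspect the Solr znodes to see which one has grown unusually large
zookeeper-client -server <zk-host>:2181 ls /solr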
03-31-2018
08:02 AM
OK @pdvorak, just as I thought; thanks a lot for the confirmation! 😉 Kind regards! Alex
03-30-2018
06:37 AM
Hi all, I'm using Apache Kafka integrated with our Cloudera cluster infrastructure (version 5.8.0), and I'm wondering whether the Kafka broker servers (hosted on dedicated machines) are compatible with the Jumbo Frames option that can be enabled on the network cards we use to communicate with the cluster. We would like to enable this feature, setting the MTU parameter to 9000, in order to increase network performance between the Kafka broker nodes and the other nodes inside the cluster. At the moment we use Kafka version 0.10.0. Many thanks in advance for your kind cooperation. Kind regards and happy Easter! Alexander Pena Quijada
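P.S. For reference, this is the kind of check I plan to run from a broker host towards another cluster node once jumbo frames are enabled on the switches; the interface name and hostname below are placeholders:

# Raise the MTU on the cluster-facing interface of the broker (interface name is an example)
ip link set dev eth0 mtu 9000
# Verify that a full 9000-byte frame passes un-fragmented end to end
# (8972 = 9000 MTU - 20 bytes IP header - 8 bytes ICMP header)
ping -M do -s 8972 <other-cluster-node>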
Labels:
- Apache Kafka
02-15-2018
03:06 AM
1 Kudo
Hi all,
We noticed that when a YARN job fails and the log aggregation option is enabled, we can't find the containers' logs for that failed job in the usual folder on HDFS (/tmp/logs/…). We can see all the logs for jobs that finished successfully, but nothing for those that failed (our Log Aggregation Retention is set to 7 days and we hit this problem 2 days ago).
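For reference, this is roughly how we look for the aggregated logs (the user and application ID below are placeholders):

# List the aggregated container logs for an application in the usual HDFS location
hdfs dfs -ls /tmp/logs/<user>/logs/application_<id>
# Fetch the same aggregated logs through the YARN CLI
yarn logs -applicationId application_<id>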
We're wondering if this could be a bug in the aggregation process, but we would like to have further information about this issue from Cloudera Support, in order to confirm that or find another explanation…
Any tips about this problem?
Many thanks in advance for your kind cooperation.
Regards,
Alex
Labels:
- Apache YARN
- Cloudera Manager
- HDFS
- MapReduce
02-02-2018
04:45 AM
Hi all, to give you a full overview of how to change all the log files related to the Cloudera Manager Agent (v5.12), I want to share my particular situation and how I solved it.

I wanted to modify the default log path for the following log files linked to the Cloudera Manager Agent:

- supervisord.out
- supervisord.log (these are the supervisor's logs; this service starts up with the Cloudera Manager Agent service the first time the server starts, or when we start the cloudera-scm-agent service manually. If you stop the Cloudera Manager Agent service but the server remains up, this service remains up as well.)
- cmf_listener.log (these are the cmf_listener service's logs; this service also starts up with the Cloudera Manager Agent service the first time the server starts, or when we start the cloudera-scm-agent service manually. If you stop the Cloudera Manager Agent service but the server remains up, this service remains up as well, managed by supervisord.)
- cloudera-scm-agent.log
- cloudera-scm-agent.out (these are the cloudera-scm-agent logs…)

Below are the files I modified in order to point these logs to a dedicated file system:

- /usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.2-py2.7.egg/lib/cmf/agent.py (set a new path for supervisord.out, cmf_listener.log and supervisord.log. Also check that the parameter for the Cloudera agent libraries is present, to avoid unexpected errors when supervisord starts: default_lib_dir = '/var/lib/cloudera-scm-agent'.)
- /usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.2-py2.7.egg/cmf/agent.py (same changes as above: new path for supervisord.out, cmf_listener.log and supervisord.log, and check default_lib_dir = '/var/lib/cloudera-scm-agent'.)
- /etc/default/cloudera-scm-agent (set the new agent log path as an argument: CMF_AGENT_ARGS="--logdir=/your/custom/cloudera-scm/user/writable/directory/", and also set CMF_AGENT_ARGS="--lib_dir=/var/lib/cloudera-scm-agent", to avoid unexpected errors when cloudera-scm-agent starts. As suggested by Harsh 😉)
- /etc/cloudera-scm-agent/config.ini (set a new path for the agent logs: log_file=/your/custom/cloudera-scm/user/writable/directory/, and also set lib_dir=/var/lib/cloudera-scm-agent, to avoid unexpected errors when cloudera-scm-agent starts.)
- /etc/init.d/cloudera-scm-agent (set a new path for cloudera-scm-agent.out by modifying the following parameter: AGENT_OUT=${CMF_VAR:-/var}/log/cloudera/$prog/$prog.out.)

In my case I left the logs under /var/log, but I mounted a dedicated file system on /var/log/cloudera/ for all these logs.

If you want to stop all the processes related to the Cloudera Manager Agent, perform the following steps:

service cloudera-scm-agent stop   (stop the Cloudera Manager Agent process)
ps -eaf | grep cmf                (show the supervisord parent process)
root 77977     1  0 13:09 ?  00:00:00 /usr/lib64/cmf/agent/build/env/bin/python /usr/lib64/cmf/agent/build/env/bin/supervisord
root 77983 77977  0 13:09 ?  00:00:00 python2.7 /usr/lib64/cmf/agent/build/env/bin/cmf-listener -l /var/log/cloudera/cloudera-scm-agent/cmf_listener.log /run/cloudera-scm-agent/events
kill -15 77977                    (stop the supervisord and cmf_listener processes)

I used kill -15 because I couldn't find another way to stop these processes… I would really appreciate it if anyone from Cloudera's engineers could confirm/amend the procedure I just described above... Cheers! Alex
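EDIT: for completeness, this is roughly the sequence I run after the edits above; it assumes the CM agent's supervisord is the only process matching that binary path, and that /var/log/cloudera/ is the dedicated mount point described above:

# Stop the agent, then the supervisord/cmf-listener pair it leaves running
service cloudera-scm-agent stop
kill -15 "$(pgrep -f /usr/lib64/cmf/agent/build/env/bin/supervisord)"
# Start the agent again so everything reopens its logs under the new path
service cloudera-scm-agent start
# Verify the logs now land on the dedicated file system
ls -l /var/log/cloudera/cloudera-scm-agent/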