Member since: 07-31-2013 · Posts: 1924 · Kudos Received: 462 · Solutions: 311

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2129 | 07-09-2019 12:53 AM |
| | 12448 | 06-23-2019 08:37 PM |
| | 9560 | 06-18-2019 11:28 PM |
| | 10525 | 05-23-2019 08:46 PM |
| | 4895 | 05-20-2019 01:14 AM |
02-12-2018
11:04 PM
It is CDH 5.11.2. I have nearly 2 GB of rolled-up logs and not a single FATAL message in them. Is there a way to force these messages? The way I know it has crashed is that the hadoop-hdfs-namenode service status is FAILED and I need to restart the NameNode manually, after which it works as if nothing was wrong.
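For anyone hitting the same thing, a hedged set of checks (the log paths below are the usual CDH defaults and the file names vary between package-based and Cloudera Manager-managed installs, so adjust as needed). A NameNode that dies without logging a FATAL line often died outside log4j's view, for example a JVM abort or a kernel OOM kill, so the .out file and dmesg are worth a look too:

```bash
service hadoop-hdfs-namenode status                               # reports FAILED after the crash
grep -iE 'fatal|shutdown|error' /var/log/hadoop-hdfs/*namenode*.log* | tail -20
tail -50 /var/log/hadoop-hdfs/*namenode*.out                      # JVM-level aborts land in the .out file, not the .log
dmesg | grep -i 'killed process'                                  # check whether the kernel OOM killer stopped the JVM
```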
02-11-2018
08:18 PM
Where does this property need to be set? There is no core-default.xml file in my deployment. I am using CDH 5.12. Should it be set service-wide?
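As a hedged note for later readers: core-default.xml ships inside the Hadoop jars and is not meant to be edited; overrides normally go into core-site.xml, which on a CM-managed CDH cluster is usually done through the HDFS service's cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml. One way to check whether the override took effect (the property name below is only a placeholder, not the one from this thread):

```bash
# Placeholder property name; substitute the actual property being discussed.
hdfs getconf -confKey some.property.name
```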
02-02-2018
04:45 AM
Hi all, to give you a full overview of how to change all the log files related to the Cloudera Manager Agent (v 5.12), I want to share my particular situation and how I solved it. I wanted to modify the default log path for the following files linked to the Cloudera Manager Agent:

- supervisord.out and supervisord.log: the supervisor's logs. supervisord starts together with the Cloudera Manager Agent service, either when the server boots or when the cloudera-scm-agent service is started manually. If you stop the Cloudera Manager Agent service while the server stays up, supervisord keeps running.
- cmf_listener.log: the cmf_listener service's log. This service also starts with the Cloudera Manager Agent service (at boot or on a manual start of cloudera-scm-agent) and, being managed by supervisord, it too keeps running if you only stop the agent.
- cloudera-scm-agent.log and cloudera-scm-agent.out: the cloudera-scm-agent logs.

Below are the files I modified to point these logs at a dedicated file system:

- /usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.2-py2.7.egg/lib/cmf/agent.py: set a new path for supervisord.out, cmf_listener.log and supervisord.log. Also check that the agent library parameter is present (default_lib_dir = '/var/lib/cloudera-scm-agent') to avoid unexpected errors when supervisord starts.
- /usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.2-py2.7.egg/cmf/agent.py: same changes as above (new path for supervisord.out, cmf_listener.log and supervisord.log, and the same check on default_lib_dir = '/var/lib/cloudera-scm-agent').
- /etc/default/cloudera-scm-agent: pass the new agent log path as an argument, CMF_AGENT_ARGS="--logdir=/your/custom/cloudera-scm/user/writable/directory/", and also set CMF_AGENT_ARGS="--lib_dir=/var/lib/cloudera-scm-agent" to avoid unexpected errors when cloudera-scm-agent starts (as suggested by Harsh 😉).
- /etc/cloudera-scm-agent/config.ini: set a new path for the agent logs, log_file=/your/custom/cloudera-scm/user/writable/directory/, and also lib_dir=/var/lib/cloudera-scm-agent, again to avoid unexpected errors when cloudera-scm-agent starts.
- /etc/init.d/cloudera-scm-agent: set a new path for cloudera-scm-agent.out by modifying the parameter AGENT_OUT=${CMF_VAR:-/var}/log/cloudera/$prog/$prog.out.

In my case I left the logs under /var/log but mounted a dedicated file system on /var/log/cloudera/ for all of them.

If you want to stop all the processes related to the Cloudera Manager Agent, perform the following steps:

- service cloudera-scm-agent stop (stops the Cloudera Manager Agent process)
- ps -eaf | grep cmf (shows the supervisord parent process):
  root 77977 1 0 13:09 ? 00:00:00 /usr/lib64/cmf/agent/build/env/bin/python /usr/lib64/cmf/agent/build/env/bin/supervisord
  root 77983 77977 0 13:09 ? 00:00:00 python2.7 /usr/lib64/cmf/agent/build/env/bin/cmf-listener -l /var/log/cloudera/cloudera-scm-agent/cmf_listener.log /run/cloudera-scm-agent/events
- kill -15 77977 (stops the supervisord and cmf_listener processes)

I used kill -15 because I didn't find another way to stop these processes… I would very much appreciate it if any Cloudera engineer could confirm or amend the procedure I just described. Cheers! Alex
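For what it's worth, here is the same stop sequence wrapped so the supervisord PID does not have to be read off the ps output by hand. It is only a restatement of the steps above under the same assumptions (the grep pattern is based on the process list shown in this post; it is not an official Cloudera procedure):

```bash
service cloudera-scm-agent stop                                   # stop the agent itself
SUP_PID=$(ps -eaf | grep '[b]in/supervisord' | awk '{print $2}')  # parent supervisord PID
[ -n "$SUP_PID" ] && kill -15 "$SUP_PID"                          # SIGTERM supervisord; cmf-listener goes down with it
```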
02-02-2018
02:25 AM
1 Kudo
Yes, that is precisely correct: it balances by average utilization percentage per node rather than by average byte count.
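A hedged illustration of what that means in practice (standard balancer usage, not something taken from this thread): the -threshold argument is expressed in percentage points of per-DataNode utilization, which is why the balancer evens out usage percentages rather than raw byte counts.

```bash
# Move blocks until every DataNode's utilization is within 10 percentage points
# of the cluster-average utilization (10 is also the default threshold).
hdfs balancer -threshold 10
```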
01-26-2018
12:00 PM
FWIW, I seem to have found a solution. I had added a call to ugi.checkTGTAndReloginFromKeytab(), but it hadn't worked. Later, while debugging, I found that the call was trying to renew the proxy user, not the underlying principal. I changed the call so that it gets the principal's UGI and calls the same method on that, and now it seems to work. There are still outstanding questions, though, if anyone cares to investigate further: Why was this only necessary for D.A.R.E.? All other operations (HDFS, Hive, YARN, etc.) kept working and renewing their krbtgt perpetually. Was the upgrade of CDH needed, or would it have kept working with the older version?
01-18-2018
12:16 AM
Hi, I'm trying to use Sqoop via Hue, but I keep getting this error:

2018-01-18 08:31:52,353 [main] WARN org.kitesdk.data.spi.hive.MetaStoreUtil - Aborting use of local MetaStore. Allow local MetaStore by setting kite.hive.allow-local-metastore=true in HiveConf
2018-01-18 08:31:52,353 [main] ERROR org.apache.sqoop.tool.ImportTool - Import failed: Missing Hive MetaStore connection URI

It's not the same, but it seems quite similar. The cluster uses HA for the Hive metastore. I tried to set the Hive metastore URI like this:

import -Dhive.metastore.uris=thrift://17.239.167.168:9083 -Dkite.hive.allow-local-metastore=true --connect jdbc:exa:17.239.167.201..205:8563;schema=sks_dp_steuerung --driver com.exasol.jdbc.EXADriver --username sys --password exasol --table dbo_institutspartitionen --hive-import --as-parquetfile --hive-table DBO_INSTITUTSPARTITIONEN --hive-database SKS_DP_STEUERUNG -m 1

but it makes no difference. The exception says to set kite.hive.allow-local-metastore=true, and I tried that via -Dkite.hive.allow-local-metastore=true, but I have no idea whether that is the right way to do it. Is there something I might have missed? Or is this really a completely different error?
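One hedged check that might help narrow this down (it assumes a gateway host with the Hive client configuration deployed under the usual /etc/hive/conf path, which may differ on your cluster): confirm which metastore URIs the client config actually advertises for the HA metastore, so the value passed with -Dhive.metastore.uris matches the full thrift list.

```bash
# The path below is the usual CDH client-config location; this is an assumption,
# not something confirmed in this thread.
grep -A1 'hive.metastore.uris' /etc/hive/conf/hive-site.xml
```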
01-16-2018
06:20 AM
Hi @cconner, I've connected to the Hue database on MySQL. I see all the tables prefixed with oozie_, but I do not see any meaningful data in these tables. Can you explain where in this DB schema the workflow definitions are stored? Thanks, Shak
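In case it helps to narrow down where the data lives, a hedged inspection query (it assumes the Hue database is literally named hue and that you can query MySQL's information_schema; row counts are approximate for InnoDB tables):

```bash
mysql -u hue -p -e "
  SELECT table_name, table_rows
  FROM information_schema.tables
  WHERE table_schema = 'hue'
    AND table_name LIKE 'oozie%'
  ORDER BY table_rows DESC;"
```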
01-08-2018
01:58 AM
Hello, can you help me? I have a problem importing data into an HBase table. I've tried to use importtsv, but the problem is that my file has a very large number of columns (1000). Do I have to write out all the columns, or is there another way that picks up the column list from the file automatically? Thank you.
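A hedged sketch, not an official shortcut: ImportTsv still needs the full -Dimporttsv.columns list, but you can generate that list with a small shell pipeline instead of typing 1000 names by hand.

```bash
# Assumptions (mine, not from this thread): the 1000 column names sit in a local
# one-line, comma-separated file called columns.txt, the first field of the data
# is the row key, and the column family is named "cf".
COLUMNS="HBASE_ROW_KEY,$(tr ',' '\n' < columns.txt | tail -n +2 | sed 's/^/cf:/' | paste -sd, -)"

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.separator=',' \
  -Dimporttsv.columns="$COLUMNS" \
  my_table /user/me/data.csv        # hypothetical table name and HDFS input path
```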
12-14-2017
11:15 AM
Hi @Harsh J, yesterday I cleaned 50 GB worth of files from HDFS using fs -rm. The daily incoming size on HDFS is almost 13-15 GB (including replication), yet today the DFS size has again increased by roughly 30-55 GB, and I don't understand why. On one DataNode alone, the dfs directories grew by almost 15 GB.

[root@DataNode1 finalized]# ls -lrt | grep "Dec 13"
drwxr-xr-x 208 hdfs hdfs  4096 Dec 13 05:59 subdir130
drwxr-xr-x 195 hdfs hdfs  4096 Dec 13 06:52 subdir132
drwxr-xr-x 210 hdfs hdfs  4096 Dec 13 07:24 subdir134
drwxr-xr-x 188 hdfs hdfs  4096 Dec 13 07:32 subdir135
drwxr-xr-x 187 hdfs hdfs  4096 Dec 13 08:30 subdir138
drwxr-xr-x 210 hdfs hdfs  4096 Dec 13 09:09 subdir139
drwxr-xr-x 234 hdfs hdfs 12288 Dec 13 09:46 subdir173
drwxr-xr-x 234 hdfs hdfs 12288 Dec 13 10:07 subdir174
drwxr-xr-x 258 hdfs hdfs 12288 Dec 13 15:30 subdir211
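A few hedged checks with standard HDFS commands (the paths below are illustrative): fs -rm without -skipTrash only moves files into the owning user's .Trash, so the blocks stay on the DataNodes until fs.trash.interval expires or you run hdfs dfs -expunge, which can make deleted data still count against DFS usage.

```bash
hdfs dfs -du -h /user/*/.Trash     # is the deleted 50 GB still sitting in trash?
hdfs dfs -du -h /                  # compare top-level directory sizes day over day
hdfs dfsadmin -report | head -30   # the NameNode's summary of configured vs. DFS-used capacity
```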
12-06-2017
01:17 AM
Hey Harsha, I am facing a similar problem with CDH 5.13. I have shared the details here: http://community.cloudera.com/t5/Data-Ingestion-Integration/Problem-in-connecting-to-Hbase-from-scala-code-in-Cloudera/m-p/62519#M2779 Please let me know if there is something wrong that I am doing. Thanks