Member since: 02-03-2017
Posts: 19
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1662 | 11-20-2017 11:28 PM
08-07-2020 02:48 AM
Hey, have you resolved this issue? I can't find a way to monitor Hive WLM pools either.
12-18-2019 02:13 AM
@Wynner I don't remember exactly, but I never found a solution to that, so I decided not to use the .msi installer version of MiNiFi.
05-20-2019 06:18 AM
Hi, have you been able to solve this issue? I am having the same problem on Hue 4.4 and Solr 7.4: when you try to open a saved dashboard, the collection name is not included in the Solr call. We checked the database; the collection name is saved in the dashboard record. Might it be a UI bug?
04-22-2019 02:11 PM
I am having the same problem. I want to create a local user for MiNiFi on a Windows server that is joined to a domain, so I check the Local User option, but the installer still tries to use the Get-ADUser cmdlet. HDF 3.4, Windows Server 2012 R2. Have you been able to solve it?
11-20-2017 11:28 PM
SOLVED: The problem was CM's MySQL database. Navigator index data had filled up the disk partition that holds the MySQL data. Once I cleaned it up, the CDH services worked again.
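For anyone hitting the same symptom, a quick way to confirm that a full partition is the cause (the paths below are defaults/assumptions, not values confirmed in this thread; adjust to your layout):

$ # Free space on the partition holding the MySQL data directory.
$ df -h /var/lib/mysql
$ # Size of the Navigator storage directory (assumed default location;
$ # check the storage dir setting in the Navigator Metadata Server config).
$ du -sh /var/lib/cloudera-scm-navigator

Once the offending data is cleaned up and space is reclaimed, restart Cloudera Manager and the dependent services.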
11-02-2017 09:35 AM
I couldn't figure out what went wrong with our Cloudera cluster. There are multiple errors, but the most important one is that I can't access the Cloudera Manager web UI at https://<cmserverip>:7183/cmf/home. It throws ERR_CONNECTION_CLOSED on Chrome.
If I run netstat -anp | grep 7183, I get 5000+ lines of CLOSED_WAITs along with a bunch of ESTABLISHEDs, all from the localhost IP.
* SELinux is disabled and iptables is empty.
* hostname -f returns the correct name.
* The /etc/hosts files are the same and correct on all nodes.
* I also can't access any of the web UIs of the Hadoop services on the cluster (such as Hue); they throw ERR_CONNECTION_TIMED_OUT on Chrome.
* All CDH services are up and accessible through the command line.
* I didn't find anything unusual in any of the logs.
* I CAN access non-CDH services on the nodes, like RStudio or Jupyter.
I suspect something failed with the certificates or Kerberos, but I don't know how to check those things. Would appreciate any help (a few generic checks are sketched below).
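If it helps with triage, these are generic checks for exactly the "is it TLS or Kerberos?" question (the hostname placeholder is from the post; the keytab path and principal are assumptions, not values confirmed here):

$ # Summarize socket states on the CM port; thousands of CLOSED_WAITs usually
$ # mean the server process accepted connections but stopped servicing them.
$ netstat -an | grep 7183 | awk '{print $NF}' | sort | uniq -c
$ # Verify the TLS handshake and the certificate's validity window on the CM server.
$ echo | openssl s_client -connect <cmserverip>:7183 2>/dev/null | openssl x509 -noout -subject -dates
$ # Confirm Kerberos tickets can still be obtained (keytab path is an assumption).
$ kinit -kt /etc/krb5.keytab host/$(hostname -f) && klist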
Labels:
- Cloudera Manager
- Cloudera Navigator
04-07-2017 01:21 AM
@bgooley Thanks for the tip. I checked the Reports Manager and Navigator Metadata Server logs and got some insight into the problem. Apparently there's an issue with Kerberos authentication. I'll post to the Cloudera Manager forum if I can't solve it.
03-29-2017 06:06 AM
After working around my previous problem here, I found out that the File Browser in Cloudera Manager is not up to date: all the files and folders are old, which makes the reports show nothing useful. In the top right corner of the HDFS -> File Browser -> Directory Usage page it says "The file system image was last indexed on January 26, 2017 1:55 PM". This is preventing me from generating disk usage reports and from analyzing statistics in Cloudera Navigator. I've found this very informative article, but it couldn't really help me. If I'm not wrong, something is wrong with the NameNodes: either they are not checkpointing, or the image they extract is not being used. Can anyone give more information about this? As the outputs below show, it does checkpoint every hour as configured, but how can I make Cloudera Manager, Navigator, the File Browser, etc. see it? I have one active and one standby NameNode.

Checkpoint directory setting: (screenshot not preserved). Though, I have no /ds1/dfs/snn folder on the standby NameNode machine.

When I search for fsimage* on the active NameNode:

2015-08-19+17:54 7878 ./ds1/dfs/snn/current/fsimage_0000000000000000279
2015-08-19+17:54 62 ./ds1/dfs/snn/current/fsimage_0000000000000000279.md5
2015-08-19+17:54 10272 ./ds1/dfs/snn/current/fsimage_0000000000000000510
2015-08-19+17:54 62 ./ds1/dfs/snn/current/fsimage_0000000000000000510.md5
2017-03-29+13:57 281926964 ./ds1/dfs/nn/current/fsimage_0000000000287939457 (today)
2017-03-29+13:57 281926964 ./ds2/dfs/nn/current/fsimage_0000000000287939457
2017-03-29+13:58 62 ./ds1/dfs/nn/current/fsimage_0000000000287939457.md5
2017-03-29+13:58 62 ./ds2/dfs/nn/current/fsimage_0000000000287939457.md5
2017-03-29+14:58 282426827 ./ds1/dfs/nn/current/fsimage_0000000000287980225
2017-03-29+14:58 282426827 ./ds2/dfs/nn/current/fsimage_0000000000287980225
2017-03-29+14:58 62 ./ds1/dfs/nn/current/fsimage_0000000000287980225.md5
2017-03-29+14:58 62 ./ds2/dfs/nn/current/fsimage_0000000000287980225.md5

On the standby NameNode:

2017-03-29+13:57 281926964 ./ds1/dfs/nn/current/fsimage_0000000000287939457
2017-03-29+13:57 62 ./ds1/dfs/nn/current/fsimage_0000000000287939457.md5
2017-03-29+13:57 281926964 ./ds2/dfs/nn/current/fsimage_0000000000287939457
2017-03-29+13:57 62 ./ds2/dfs/nn/current/fsimage_0000000000287939457.md5
2017-03-29+14:58 282426827 ./ds1/dfs/nn/current/fsimage_0000000000287980225
2017-03-29+14:58 282426827 ./ds2/dfs/nn/current/fsimage_0000000000287980225
2017-03-29+14:58 62 ./ds1/dfs/nn/current/fsimage_0000000000287980225.md5
2017-03-29+14:58 62 ./ds2/dfs/nn/current/fsimage_0000000000287980225.md5
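To rule out checkpointing itself, a manual checkpoint can be forced with standard HDFS admin commands (a generic sketch, not a confirmed fix for this thread; note it briefly puts HDFS in safe mode, so writes will block):

$ sudo -u hdfs hdfs dfsadmin -safemode enter
$ sudo -u hdfs hdfs dfsadmin -saveNamespace   # writes a fresh fsimage into dfs.namenode.name.dir
$ sudo -u hdfs hdfs dfsadmin -safemode leave

If fresh fsimage files keep appearing (as in the listings above) but Cloudera Manager still reports a stale index, the component to look at is the Reports Manager role, which fetches and indexes the fsimage on its own schedule, independently of the NameNode's checkpointing.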
Labels:
- Cloudera Navigator
- HDFS
03-24-2017 04:45 AM
An update: I was mistaken about some values. The sizes shown in the HDFS file browser and returned by hdfs dfsadmin -report are as expected, but the Cloudera metrics and charts continue to show increasing values, and du -sch on the dfs folders in the Linux terminal also gives large numbers. I also noticed the increase started a couple of days before the upgrade I mentioned, so it's unlikely that something went wrong with the upgrade. Recently we were informed by another HDFS user that they have been splitting their large files into smaller ones for a compute performance increase (?), which got me thinking: if they're splitting TBs of data into files mostly even smaller than the block size (128 MB), could that cause usage on the file system to grow more than 3x? Am I correct in this estimate?
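A rough way to sanity-check the small-files theory with standard HDFS commands (the path is a placeholder):

$ # Logical size of the data, before replication.
$ hdfs dfs -du -s -h /path/to/data
$ # Raw capacity consumed across DataNodes (roughly logical size x replication factor).
$ hdfs dfsadmin -report | grep 'DFS Used'
$ # Block count and average block size; a very small average points to many small files.
$ hdfs fsck /path/to/data | grep -E 'Total blocks|Average block size'

For reference, an HDFS block only occupies the bytes actually written, so files smaller than the 128 MB block size don't pad out to a full block on disk; with the default replication factor of 3, raw usage around 3x the logical size is normal.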
03-20-2017 02:16 AM
I upgraded Cloudera Manager from 5.5.3 to 5.10.0, then upgraded CDH from 5.5.1 to 5.8.4. After these operations, I saw the disk usage of all DataNodes increase on the Hosts -> All Hosts page. In the HDFS file browser and with CLI commands, I see almost every directory at double its previous size, but I notice no difference in file counts, types, names, etc. It's the same when I check disk usage in the Linux terminal. I am a little confused and need help figuring out what happened.
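One generic thing to check after a CDH upgrade (an assumption to rule out, not a confirmed diagnosis for this cluster): until an HDFS upgrade is finalized, each DataNode keeps a previous/ copy of its block data, which can make on-disk usage look roughly doubled.

$ # Look for leftover upgrade directories (data dir paths are placeholders).
$ ls -d /dfs/dn/current/BP-*/previous 2>/dev/null
$ # If the upgrade is complete and healthy, finalizing reclaims that space.
$ sudo -u hdfs hdfs dfsadmin -finalizeUpgrade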
Labels:
- HDFS
02-03-2017 02:12 AM
I'm guessing I somehow managed to make the same typo twice while changing the password. I tried the combinations, but no luck. The CDH version is 5.5.1. There is no /var/lib/cloudera-scm-server-db folder. Here's my /etc/cloudera-scm-server/db.properties file:

com.cloudera.cmf.db.type=mysql
com.cloudera.cmf.db.host=hds01:3306
com.cloudera.cmf.db.name=scmdb
com.cloudera.cmf.db.user=scm
com.cloudera.cmf.db.password=scm_passwd

The host I'm on is hdsm. I browsed for a solution and tried these:

$ psql -U scm -d scmdb
Password for user scm: scm_passwd
psql: FATAL: password authentication failed for user "scm"

$ psql -h hds01 -U scm -d scmdb
psql: could not connect to server: Connection refused
Is the server running on host "hds01" and accepting TCP/IP connections on port 5432?

There's mysqld up on hds01 (so psql was the wrong client anyway). On hds01:

$ mysql -u scm -pscm_passwd -h localhost
ERROR 1045 (28000): Access denied for user 'scm'@'localhost' (using password: YES)

Every other username/password combination fails, and I can't connect to the DB whatsoever. I'm not able to reach anyone who knows the credentials; in fact, I doubt there is anyone. What are the consequences if I stop MySQL, start it with skip-grant, edit the user passwords, flush privileges, and then restart? I'd appreciate any help.

EDIT (SOLVED): I'm now connected to the database (it turns out the scm user is only accepted from hdsm, so I installed the mysql client there) and, thanks to rufusayeni's comment at this post, I solved my issue. Though I'd still like to know what to do if I couldn't.
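For the record, the skip-grant reset I was asking about looks like this on MySQL 5.x (a generic sketch, not the accepted fix from this thread; the main consequence is that the server briefly runs with authentication disabled, so --skip-networking keeps remote clients out in the meantime):

$ sudo service mysqld stop
$ sudo mysqld_safe --skip-grant-tables --skip-networking &
$ mysql -u root
mysql> UPDATE mysql.user SET Password=PASSWORD('new_passwd') WHERE User='scm';
mysql> FLUSH PRIVILEGES;
mysql> exit
$ sudo service mysqld restart

Afterwards, com.cloudera.cmf.db.password in db.properties has to be updated to match before restarting cloudera-scm-server.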
Labels:
- Cloudera Manager