Member since: 09-25-2018
Posts: 99
Kudos Received: 6
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3350 | 11-03-2021 02:55 AM |
| | 2472 | 09-21-2020 10:04 PM |
| | 3976 | 08-14-2020 03:20 AM |
| | 5386 | 08-20-2019 11:07 PM |
03-30-2021 08:43 AM
@abagal @PabitraDas, I appreciate all your assistance and input on this. Thanks, Wert
03-28-2021 09:20 AM
Hello,

How do we balance data across the individual disks of a particular DataNode? We have 5 disks on a single node, and one of the disks is 90% full; running the HDFS Balancer is not fixing the issue. I was going through this article, and it says it is not possible to balance disks within a single node, so what other options can we use to fix this until we upgrade to CDH 6.3? (https://community.cloudera.com/t5/Community-Articles/HDFS-Balancer-Balancing-Data-Between-Disks-on-a-DataNode/ta-p/244650)

CM & CDH: 5.16.3

I would appreciate any suggestions or comments.

Thanks,
Wert
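For context, the classic HDFS Balancer only moves blocks *between* DataNodes; the intra-node `hdfs diskbalancer` tool arrived with Hadoop 3 (CDH 6). A minimal sketch for first confirming which disks on the node are hot (the `/dataN` mount points are hypothetical; substitute the directories from your `dfs.datanode.data.dir`):

```shell
# Report usage for each DataNode data disk. The mount points are
# examples; substitute the directories from dfs.datanode.data.dir.
report=""
for mount in /data1 /data2 /data3 /data4 /data5; do
  # df -P gives portable one-line-per-filesystem output; field 5 is Use%.
  usage=$(df -P "$mount" 2>/dev/null | awk 'NR==2 {print $5}')
  line="$mount ${usage:-unknown}"
  echo "$line"
  report="$report $line"
done
```

Once on CDH 6, `hdfs diskbalancer -plan <datanode>` followed by `-execute` can redistribute blocks across a node's disks; on CDH 5 the usual interim workaround is to stop the DataNode and manually move whole block subdirectories between data directories, which should be done with great care.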
02-10-2021 10:14 PM
Hello,

I am trying to execute the hdfs fsck command and am getting the error below:

```
[root@server1 root]# hdfs fsck / > /home/test/fsck_output_2-11-21
Connecting to namenode via http://server1.com:50070/fsck?ugi=hdfs&path=%2F
Exception in thread "main" java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
    at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
    at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:363)
    at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
    at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:161)
    at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:158)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
    at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:157)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:406)
```

Thanks,
Wert
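A read timeout on `hdfs fsck /` often just means the NameNode takes too long to stream the report for the entire namespace. One common workaround (a sketch, not an official fix; the directory names below are hypothetical) is to run fsck per top-level directory, ideally from the NameNode host itself, so each request finishes sooner:

```shell
# Build one fsck command per top-level directory (dry run: the commands
# are printed, not executed). Directory names are hypothetical; in
# practice the list would come from `hdfs dfs -ls /`.
cmds=$(for d in /user /tmp /data; do
  echo "hdfs fsck $d >> /home/test/fsck_output_2-11-21"
done)
echo "$cmds"
```

Piping the printed commands to `sh` on the NameNode host would then produce the combined report incrementally instead of in one long-running HTTP request.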
01-21-2021 10:38 PM
Hello,

We were getting 'DatabaseError: ORA-01017: invalid username/password; logon denied' after our database upgrade. To fix that, we reset the Hue password, which worked, and that issue was resolved. Unfortunately, one of our team members then deleted a couple of logs from /var/log/hue. We have recreated runcpserver.log with the necessary permissions and restarted the Hue server, yet the file is not being populated (it is still empty). We would appreciate assistance/guidance in fixing this issue.

Thanks,
Wert
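Hue will silently fail to write to a log file its service user cannot open, so it is worth double-checking the recreated file's owner and mode (the service user is typically `hue`, though that is an assumption about your setup). A small sketch:

```shell
# Inspect ownership and mode of the recreated Hue log file; Hue cannot
# populate a file its service user (often 'hue') is unable to write.
logfile=/var/log/hue/runcpserver.log
if [ -e "$logfile" ]; then
  result=$(stat -c '%U:%G %a %n' "$logfile")   # GNU stat format flags
else
  result="missing: $logfile"
fi
echo "$result"
```

If the owner is not the Hue service user, `chown hue:hue` on the file (and a writable parent directory) is the usual fix before restarting the server.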
09-14-2020 01:35 AM
Hello,

Our domain has been renamed from ABC to XYZ.com. Since then, neither the Cloudera Manager UI nor any other service UIs are accessible. As I understand it, we would need to update the /etc/hosts file on each node in the cluster to reflect the new domain name. However, it would be great if anyone could advise on the following two points:

1. Do we really need to update the /etc/hosts file, or is this taken care of by Cloudera?
2. Where else do we need to update the hostnames with the new domain name to bring the cluster back to a fully operational state?

Currently, if I manually enter the complete address (IP + new domain name) I can log in to the Cloudera Manager UI, but when I try to open any service UI from Cloudera Manager, such as the NameNode UI, it fails.

Any help is much appreciated.

Regards,
Wert
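For what it's worth, the /etc/hosts side of a domain rename can be scripted and reviewed before applying. A minimal sketch, assuming the old/new domain suffixes from the post (and note that the Cloudera Manager agent's `/etc/cloudera-scm-agent/config.ini` `server_host` entry, plus CM's own host records, generally also need to match the new names):

```shell
# Rewrite an old domain suffix to a new one in a hosts-style file and
# print the result, so the change can be reviewed before it is copied
# back over /etc/hosts. Domain names below are examples.
rename_domain() {
  old=$1; new=$2; file=$3
  sed "s/\.${old}/.${new}/g" "$file"
}

# Dry run against a sample entry rather than the live file:
sample=$(mktemp)
printf '10.0.0.1 master-1.abc master-1\n' > "$sample"
rename_domain abc xyz.com "$sample"
```

Running `rename_domain abc xyz.com /etc/hosts > /tmp/hosts.new` and diffing against the original gives a reviewable change per node.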
09-11-2020 09:34 AM
Hello,
We would like some assistance/guidance on Kerberos. Our domain name has changed, and since then our applications have been unable to connect to the Hadoop cluster. We are using MIT Kerberos.
Regards
Wert
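In case it helps anyone hitting the same thing: with MIT Kerberos, client hosts map DNS domains to realms via the `[domain_realm]` section of /etc/krb5.conf, so after a domain rename that mapping typically has to be updated (as do the service principals in the KDC, which embed the old hostnames). A hedged illustration with placeholder realm and domain names:

```ini
# /etc/krb5.conf (fragment) -- EXAMPLE.COM and xyz.com are placeholders
[libdefaults]
    default_realm = EXAMPLE.COM

[domain_realm]
    .xyz.com = EXAMPLE.COM
    xyz.com = EXAMPLE.COM
```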
Labels:
- Cloudera Manager
- Kerberos
- Security
09-06-2020 09:02 AM
Hello,
I was using Cloudera Enterprise 5.16 (Trial), which has now been downgraded to Cloudera Express. Since then I have been facing issues starting Reports Manager. I am using the default embedded database.
CM / CDH - 5.16.2
```
2020-09-06 14:14:29,706 INFO com.cloudera.enterprise.dbutil.DbUtil: Schema version table doesn't exist.
2020-09-06 14:14:29,711 INFO com.cloudera.enterprise.dbutil.DbUtil: Schema version table already exists.
2020-09-06 14:14:29,712 INFO com.cloudera.enterprise.dbutil.DbUtil: DB Schema version 4100.
2020-09-06 14:14:29,712 INFO com.cloudera.enterprise.dbutil.DbUtil: Current database schema version: 4100
2020-09-06 14:14:29,725 INFO com.cloudera.enterprise.ssl.SSLFactory: Using default java truststore for verification of server certificates in HTTPS communication.
2020-09-06 14:14:29,761 WARN com.cloudera.cmf.BasicScmProxy: Exception while getting fetch configDefaults hash: none
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
    at sun.net.www.http.HttpClient.New(HttpClient.java:308)
    at sun.net.www.http.HttpClient.New(HttpClient.java:326)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
    at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1091)
    at com.cloudera.cmf.BasicScmProxy.authenticate(BasicScmProxy.java:276)
    at com.cloudera.cmf.BasicScmProxy.fetch(BasicScmProxy.java:596)
    at com.cloudera.cmf.BasicScmProxy.getFragmentAndHash(BasicScmProxy.java:686)
    at com.cloudera.cmf.DescriptorAndFragments.newDescriptorAndFragments(DescriptorAndFragments.java:64)
    at com.cloudera.headlamp.HeadlampServer.<init>(HeadlampServer.java:143)
    at com.cloudera.headlamp.HeadlampServer.main(HeadlampServer.java:250)
2020-09-06 14:14:29,772 WARN com.cloudera.headlamp.HeadlampServer: No descriptor fetched from http://master-1.asia-southeast1-b.c.seismic-kingdom-265805.internal:7180 on after 1 tries, sleeping for 2 secs
2020-09-06 14:14:31,793 WARN com.cloudera.headlamp.HeadlampServer: No descriptor fetched from http://master-1.asia-southeast1-b.c.seismic-kingdom-265805.internal:7180 on after 2 tries, sleeping for 2 secs
2020-09-06 14:14:33,794 WARN com.cloudera.headlamp.HeadlampServer: No descriptor fetched from http://master-1.asia-southeast1-b.c.seismic-kingdom-265805.internal:7180 on after 3 tries, sleeping for 2 secs
2020-09-06 14:14:35,795 WARN com.cloudera.headlamp.HeadlampServer: No descriptor fetched from http://master-1.asia-southeast1-b.c.seismic-kingdom-265805.internal:7180 on after 4 tries, sleeping for 2 secs
2020-09-06 14:14:37,797 WARN com.cloudera.headlamp.HeadlampServer: No descriptor fetched from http://master-1.asia-southeast1-b.c.seismic-kingdom-265805.internal:7180 on after 5 tries, sleeping for 2 secs
2020-09-06 14:14:39,797 ERROR com.cloudera.headlamp.HeadlampServer: Could not fetch descriptor after 5 tries, exiting.
```
Appreciate any help.
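The stack trace shows Reports Manager getting "Connection refused" when fetching its descriptor from Cloudera Manager on port 7180, so a first step is to confirm the CM server is actually up and listening on that port from the Reports Manager host. A small sketch (the localhost target is an example; also note that, as far as I know, Reports Manager is an Enterprise-licensed management service and may simply not be available after a downgrade to Cloudera Express):

```shell
# Return success if a TCP connection to host:port can be opened.
# /dev/tcp is a bash feature, hence the explicit bash -c; timeout
# guards against a hang on filtered ports.
port_open() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example: is Cloudera Manager listening on its default port locally?
if port_open 127.0.0.1 7180; then
  echo "CM port 7180 reachable"
else
  echo "CM port 7180 not reachable"
fi
```

Running this against the CM host from the log (`master-1...internal`) distinguishes a stopped CM server from a network/firewall problem.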