Member since: 09-25-2018
Posts: 82
Kudos Received: 3
Solutions: 5
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 612 | 11-03-2021 02:55 AM |
| | 709 | 09-21-2020 10:04 PM |
| | 851 | 08-14-2020 03:20 AM |
| | 2349 | 08-20-2019 11:07 PM |
| | 5765 | 01-06-2019 07:32 PM |
12-27-2022
07:51 AM
2 Kudos
@asish Thanks! That worked.
12-23-2022
09:36 AM
Hi, We are unable to insert data into a Hive table, and we do not see any errors in the Hive / Hive-on-Tez logs. When we try to insert, the query just sits there and does nothing.

0: jdbc:hive2://MY_SERVER.com:218> insert into test_table values(1,'aaa');
INFO : Compiling command(queryId=hive_20221223201916_92c0b5ff-fa62-4f28-afb4-0dd88d76c2d8): insert into test_table values(1,'aaa')
INFO : Semantic Analysis Completed (retrial = false)
INFO : Created Hive schema: Schema(fieldSchemas:[FieldSchema(name:col1, type:int, comment:null), FieldSchema(name:col2, type:string, comment:null)], properties:null)
INFO : Completed compiling command(queryId=hive_20221223201916_92c0b5ff-fa62-4f28-afb4-0dd88d76c2d8); Time taken: 0.371 seconds
INFO : Executing command(queryId=hive_20221223201916_92c0b5ff-fa62-4f28-afb4-0dd88d76c2d8): insert into test_table values(1,'aaa')
INFO : Query ID = hive_20221223201916_92c0b5ff-fa62-4f28-afb4-0dd88d76c2d8
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-1:MAPRED] in serial mode

Cloudera Manager 7.6.1 / Cloudera Runtime 7.1.7

Appreciate any assistance in resolving this. Thanks Wert
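For reference, the log stops right after launching Stage-1, so one check I can run first (a sketch using only the standard YARN CLI, nothing cluster-specific assumed) is whether the query's YARN application is stuck waiting for resources:

```bash
# An application sitting in ACCEPTED state usually means the target queue
# has no free capacity for the AM or its containers.
yarn application -list -appStates ACCEPTED,RUNNING
```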
05-17-2022
10:06 PM
Hi,
I am facing an issue with one of our Impala Daemons, which continuously reports an error: "The health test result for IMPALAD_FRONTEND_CONNECTIONS has become bad: There are 0 (Beeswax pool) 123 (Hive Server 2 pool) active client connections, each pool has a configured maximum of 128"
I checked CM -> Impala -> Charts Library -> "Active Frontend API connections", and it shows too many Hive Server 2 connections.
I also checked the server's network side and ran netstat -an | grep ESTABLISHED | grep 25000, but I see only 2 active connections.
[root@nucleus ~]$ netstat -an | grep ESTABLISHED | grep 25000
tcp 0 0 10.20.20.41:25000 10.20.20.41:49728 ESTABLISHED
tcp 0 0 10.20.20.41:25000 10.20.20.41:49732 ESTABLISHED
Any assistance / guidance in resolving this issue is appreciated.
CM / CDH - 6.3.3
Thanks
Wert.
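One thing I should note (an assumption on my side, based on Impala's default ports): 25000 is the impalad debug Web UI port, while the "Hive Server 2 pool" in the health test counts clients on the HS2 client port, which is 21050 by default. So a sketch of the check I probably should have run instead:

```bash
# Count established client connections on the default Impala HS2 port (21050),
# not the Web UI port (25000).
netstat -an | grep ESTABLISHED | grep 21050 | wc -l
```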
03-02-2022
08:33 PM
@GangWar Can we run this command while the cluster is in use, or do we need downtime? Kindly advise. Thanks
02-24-2022
09:36 PM
@araujo Can't do that for security / policy reasons, hope you understand. Thanks Wert
02-24-2022
01:52 AM
Hi @araujo
> What are the symptoms of the ticket renewal failure? Are there any error messages anywhere?
We see GSSException in the application logs.
> How did you conclude it's a ticket renewal problem?
We haven't concluded it is a renewal problem; however, to rule out a Kerberos issue, we need the KDC logs, which at present are not being written to the location specified in the krb5.conf file. Thanks Wert
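To test renewal independently of the KDC logs, a check I can run on the application host (a sketch; the cache path and <uid> are placeholders for the application's actual ticket cache):

```bash
# 'R' among the flags marks a renewable ticket; "renew until" shows its limit.
klist -f -c /tmp/krb5cc_<uid>
# Attempt a manual renewal against the same cache to try to reproduce the failure.
kinit -R -c /tmp/krb5cc_<uid>
```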
02-23-2022
01:40 AM
CM / CDH 6.3.3. I currently do not have the error screenshot, etc. Any information on where the logs would be? Thanks Wert
02-23-2022
12:05 AM
Hi @araujo, Yes, we are using MIT Kerberos. Below is the config from /var/kerberos/krb5kdc/kdc.conf:

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 MY_COMPANY.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  max_renewable_life = 7d 0h 0m 0s
  forwardable = true
  udp_preference_limit = 1
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  default_principal_flags = +renewable, +forwardable
  database_name = /opt/localkrb5/krb5kdc/principal
 }

I do not see any information about logging in this. Regarding the second issue, I have found a solution: https://my.cloudera.com/knowledge/Logs-are-not-updating-in-varloghue-after-upgrading-to-CDH-6?id=87842 Thanks Wert
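Since there is no [logging] stanza here, my assumption is the KDC may be logging via syslog/journald instead of the files named in krb5.conf. A sketch of how I plan to check where the daemon actually writes (standard systemd tooling; unit names can differ per distro):

```bash
journalctl -u krb5kdc --since "1 hour ago"   # KDC messages captured by journald
ps -ef | grep krb5kdc                        # confirm which config the daemon was started with
```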
02-22-2022
06:19 PM
Hi @araujo Appreciate your reply. Regarding the first part of the question: I am trying to check the KDC server logs to troubleshoot an issue where an application is unable to renew its ticket, hence I wanted to check the logs. As for the second issue: the logs in /var/log/hue/ (on the host where the roles are configured) are not being written for kt_renewer.log & runcpserver.log; the timestamps on both of them show Nov 24. There is ample disk space available. Thanks Wert
02-22-2022
08:28 AM
Hello, How do I check the location where Kerberos is writing its logs? I checked the locations mentioned in krb5.conf (default = FILE:/var/log/krb5libs.log, kdc = FILE:/var/log/krb5kdc.log, admin_server = FILE:/var/log/kadmind.log); however, the log files at those locations are empty. Am I checking an incorrect location? Secondly, the logs for the Hue Server & KT Renewer are not being updated / current. Any help/guidance is appreciated.
02-10-2022
06:54 PM
Hi @Koffi Maybe this could be of some use - https://community.cloudera.com/t5/Support-Questions/CDH5-2-yarn-Error-starting-yarn-nodemanagers/td-p/21700
12-03-2021
09:05 AM
Hi, I wanted clarification/guidance on the finalize process for HDFS. Can we keep using the cluster while the finalize-upgrade process is ongoing (e.g., data being written to HDFS while finalization continues in the background), or do we need downtime? Thanks Wert
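For context, the step in question is the one below (a sketch; my understanding is that the finalize action in Cloudera Manager wraps this same HDFS command):

```bash
# Finalizes the HDFS upgrade; after this, rolling back to the previous
# version is no longer possible.
hdfs dfsadmin -finalizeUpgrade
```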
12-01-2021
10:26 PM
Hi, I need some assistance with Kudu. We have 2 tables that are consuming a lot of space on our Kudu cluster; however, we could not find these tables under any database. I need some guidance on how to check which DB these tables are located under. I can see the table in the Kudu Web UI; however, when I try to query it via Impala I get an error:

[My_server:21000] kwid> show create table ALARMS_COUNTERS_2021_11_01;
Query: show create table ALARMS_COUNTERS_2021_11_01
ERROR: AnalysisException: Table does not exist: kwid.ALARMS_COUNTERS_2021_11_01

If I run "show tables" against the DB, I get a list of all the tables, but this table is not in the list. Thanks Wert
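If the table exists in Kudu but has no entry in the metastore, one workaround I am considering (a sketch; it assumes the name shown in the Kudu Web UI is the internal Kudu table name, and the external table name below is hypothetical) is mapping it as an external table so Impala can query it again:

```bash
impala-shell -i My_server:21000 -q "
  CREATE EXTERNAL TABLE kwid.alarms_counters_2021_11_01_ext  -- hypothetical name
  STORED AS KUDU
  TBLPROPERTIES ('kudu.table_name' = 'ALARMS_COUNTERS_2021_11_01');"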
11-03-2021
02:55 AM
Hi @ChethanYM load_catalog_in_background is unchecked. We were not observing any JVM pauses in the Catalog logs; however, we were seeing RPC-related alerts. We have restarted Impala and the issue seems to have been fixed. Thanks Wert
10-31-2021
07:42 PM
Hello, We are observing errors in the Impala Catalog logs as below:

E1101 14:52:24.768759 8438 jni-util.cc:308] org.apache.impala.common.InternalException: Error updating the catalog due to lock contention.
at org.apache.impala.service.CatalogOpExecutor.updateCatalog(CatalogOpExecutor.java:3474)
at org.apache.impala.service.JniCatalog.updateCatalog(JniCatalog.java:308)
I1101 14:52:24.768983 8438 status.cc:125] InternalException: Error updating the catalog due to lock contention.
@ 0x9812da
@ 0xd37e2f
@ 0x970728
@ 0x960e7f
@ 0xa1f263
@ 0xa1752f
@ 0x948d4a
@ 0xb420c9
@ 0xb39fc9
@ 0xb3adc2
@ 0xdab24f
@ 0xdaba4a
@ 0x13346fa
@ 0x7fb75bf7cea5
@ 0x7fb75bca59fd

CM / CDH - 5.16.2

Appreciate any assistance / guidance in fixing this issue. Thanks Wert
09-20-2021
02:50 AM
Hello, I wanted to know which features/components in CM/CDH 6.3.3 would stop working once the enterprise license expires. Thanks Wert
07-18-2021
08:46 AM
Hello, We have an upcoming OS patching activity for our cluster, during which we will be bringing the cluster down. In the past when we have done this, the Kudu tablet servers took a long time to come back online: around 4-5 hours for all t-servers to be live. Reviewing the logs, we see they keep printing messages such as 'Opened block log', etc. Is there a way to speed this process up and reduce the time until all t-servers are online? CM / CDH - 5.16.2. T-Servers - 10 / Masters - 3. Appreciate any help/guidance. Thanks
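One setting I am looking at (an assumption on my part, not something Cloudera has confirmed to me) is the tablet server's parallelism for opening tablets at startup, set via the gflagfile safety valve in CM:

```
# Tablet Server Advanced Configuration Snippet (Safety Valve) for gflagfile;
# the value below is illustrative, not a tested recommendation.
--num_tablets_to_open_simultaneously=8
```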
06-17-2021
08:43 PM
Hi @kingpin I did execute the script, ran the rebalance report, and did a rebalance too; however, the result I was looking for was not achieved (space is still over-consumed on 1 TS). I think rebalance just distributes tablets evenly across all TSs; what I want is something like the HDFS rebalancer, and I don't think that exists in Kudu. Correct me if I am wrong. Thanks Wert
06-16-2021
08:49 PM
Hello, I would like some guidance/information on data distribution across Kudu tablet servers. We have a Kudu cluster of 3 masters and 9 t-servers (each t-server has 1 TB of storage). We are noticing that space on some t-servers is being consumed rapidly, whereas on others much less is consumed. I would like to know why this is happening and whether there is any way to overcome it, so that data is distributed evenly across all 9 t-servers. Kudu 1.7.0-cdh5.16.2 / CM 5.16.2. Appreciate any assistance in this regard. Thanks Wert
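The rebalancer I have found so far (a sketch; the master hostnames are placeholders, and as far as I know the `kudu cluster rebalance` tool ships with the Kudu 1.8+ CLI, so it may need a newer client pointed at this 1.7 cluster):

```bash
# Report-only first: shows per-tserver replica counts and the skew.
kudu cluster rebalance master1:7051,master2:7051,master3:7051 --report_only
# Then the actual move of tablet replicas toward an even distribution.
kudu cluster rebalance master1:7051,master2:7051,master3:7051
```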
03-30-2021
08:43 AM
@abagal / @PabitraDas Appreciate all your assistance / inputs on this. Thanks Wert
03-28-2021
09:20 AM
Hello, How do we balance data stored on the individual disks of a particular DataNode? We have 5 disks on a single node, one of which is 90% full, and running the balancer is not fixing the issue. I would like some suggestions/comments on how to fix this. I was going through the article below, and it says it is not possible to balance disks within a single node, so what other options can we use until we upgrade to CDH 6.3? (https://community.cloudera.com/t5/Community-Articles/HDFS-Balancer-Balancing-Data-Between-Disks-on-a-DataNode/ta-p/244650) CM & CDH - 5.16.3. Appreciate all inputs. Thanks Wert
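For anyone landing here later: after the upgrade, CDH 6 / Hadoop 3 adds the intra-DataNode disk balancer that the article says our version lacks (a sketch; the hostname and <ts> are placeholders, and dfs.disk.balancer.enabled must be true):

```bash
hdfs diskbalancer -plan datanode1.example.com        # writes a plan JSON
hdfs diskbalancer -execute /system/diskbalancer/<ts>/datanode1.example.com.plan.json
hdfs diskbalancer -query datanode1.example.com       # check progress
```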
02-10-2021
10:14 PM
Hello, I am trying to execute the hdfs fsck command and am getting the error below:

[root@server1 root]# hdfs fsck / > /home/test/fsck_output_2-11-21
Connecting to namenode via http://server1.com:50070/fsck?ugi=hdfs&path=%2F
Exception in thread "main" java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:363)
at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:72)
at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:161)
at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:158)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:157)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:406)

Thanks wert
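A workaround I am testing (a sketch; it assumes the timeout comes from fsck'ing the entire namespace in one long-running HTTP request) is to run fsck per top-level directory so each request stays short:

```bash
# Append per-directory fsck output to the same report file.
for d in $(hdfs dfs -ls / | awk '{print $NF}' | grep '^/'); do
  hdfs fsck "$d" >> /home/test/fsck_output_2-11-21
done
```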
01-21-2021
10:38 PM
Hello, We were getting 'DatabaseError: ORA-01017: invalid username/password; logon denied' after our DB upgrade. To fix that, we reset the Hue password, which worked, and that issue was resolved. Unfortunately, one of our team deleted a couple of logs from /var/log/hue. We have recreated runcpserver.log with the necessary permissions and restarted the Hue server, yet the file is not being populated (it is still empty). Requesting assistance / guidance in fixing this issue. Thanks Wert
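For completeness, this is how I recreated the file (a sketch; I am assuming Hue runs as hue:hue, which I believe is the usual default):

```bash
ls -ln /var/log/hue/                         # compare owner/mode with the surviving logs
touch /var/log/hue/runcpserver.log
chown hue:hue /var/log/hue/runcpserver.log
chmod 644 /var/log/hue/runcpserver.log
```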
09-14-2020
01:35 AM
Hello, Our domain has been renamed from ABC to XYZ.com. Since then, neither the Cloudera Manager UI nor any other service UIs are accessible. As per my understanding, we would need to update the /etc/hosts file on each node in the cluster to reflect the new domain name; however, it would be great if anyone could advise on the two points below:
1. Do we really need to update the /etc/hosts file, or is it taken care of by Cloudera?
2. Where else do we need to update the hostnames with the new domain name to get the cluster fully operational again?
Currently, if I manually enter the complete address (IP + new domain name) I can log in to the Cloudera Manager UI, but when I open any service UI (like the NN UI) from Cloudera Manager, it fails. Any help is much appreciated. Regards Wert
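For reference, this is the per-host check I am running after editing /etc/hosts (a sketch; the hosts entry shown is illustrative). My understanding is that CM expects the FQDN each host reports to match what CM has recorded:

```bash
cat /etc/hosts        # e.g. "10.20.20.41 node1.XYZ.com node1"
hostname -f           # should now print the new FQDN
python -c 'import socket; print(socket.getfqdn())'
```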
09-11-2020
09:34 AM
Hello,
I would like some assistance/guidance on Kerberos. Our domain name has changed, and since then our applications have been unable to connect to the Hadoop cluster. We are using MIT Kerberos.
Regards
Wert
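In case it helps: the first thing I am checking (a sketch; the realm and domain below are placeholders, and it assumes the Kerberos realm itself did not change, only the DNS domain) is the [domain_realm] mapping in /etc/krb5.conf on the clients, since host-based service principals are derived from the FQDN:

```
[domain_realm]
 .xyz.com = MY_COMPANY.COM
 xyz.com = MY_COMPANY.COM
```

If the service principals still embed the old hostnames, my understanding is they need to be regenerated (in CM: Administration > Security > Kerberos Credentials > Generate Missing Credentials).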
Labels:
- Cloudera Manager
- Kerberos
- Security
09-06-2020
09:02 AM
Hello,
I was using Cloudera Enterprise 5.16 (trial), which has now been downgraded to Cloudera Express; since then I have been facing issues starting Reports Manager. I am using the default embedded database.
CM / CDH - 5.16.2
2020-09-06 14:14:29,706 INFO com.cloudera.enterprise.dbutil.DbUtil: Schema version table doesn't exist.
2020-09-06 14:14:29,711 INFO com.cloudera.enterprise.dbutil.DbUtil: Schema version table already exists.
2020-09-06 14:14:29,712 INFO com.cloudera.enterprise.dbutil.DbUtil: DB Schema version 4100.
2020-09-06 14:14:29,712 INFO com.cloudera.enterprise.dbutil.DbUtil: Current database schema version: 4100
2020-09-06 14:14:29,725 INFO com.cloudera.enterprise.ssl.SSLFactory: Using default java truststore for verification of server certificates in HTTPS communication.
2020-09-06 14:14:29,761 WARN com.cloudera.cmf.BasicScmProxy: Exception while getting fetch configDefaults hash: none
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:996)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:850)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1091)
at com.cloudera.cmf.BasicScmProxy.authenticate(BasicScmProxy.java:276)
at com.cloudera.cmf.BasicScmProxy.fetch(BasicScmProxy.java:596)
at com.cloudera.cmf.BasicScmProxy.getFragmentAndHash(BasicScmProxy.java:686)
at com.cloudera.cmf.DescriptorAndFragments.newDescriptorAndFragments(DescriptorAndFragments.java:64)
at com.cloudera.headlamp.HeadlampServer.<init>(HeadlampServer.java:143)
at com.cloudera.headlamp.HeadlampServer.main(HeadlampServer.java:250)
2020-09-06 14:14:29,772 WARN com.cloudera.headlamp.HeadlampServer: No descriptor fetched from http://master-1.asia-southeast1-b.c.seismic-kingdom-265805.internal:7180 on after 1 tries, sleeping for 2 secs
2020-09-06 14:14:31,793 WARN com.cloudera.headlamp.HeadlampServer: No descriptor fetched from http://master-1.asia-southeast1-b.c.seismic-kingdom-265805.internal:7180 on after 2 tries, sleeping for 2 secs
2020-09-06 14:14:33,794 WARN com.cloudera.headlamp.HeadlampServer: No descriptor fetched from http://master-1.asia-southeast1-b.c.seismic-kingdom-265805.internal:7180 on after 3 tries, sleeping for 2 secs
2020-09-06 14:14:35,795 WARN com.cloudera.headlamp.HeadlampServer: No descriptor fetched from http://master-1.asia-southeast1-b.c.seismic-kingdom-265805.internal:7180 on after 4 tries, sleeping for 2 secs
2020-09-06 14:14:37,797 WARN com.cloudera.headlamp.HeadlampServer: No descriptor fetched from http://master-1.asia-southeast1-b.c.seismic-kingdom-265805.internal:7180 on after 5 tries, sleeping for 2 secs
2020-09-06 14:14:39,797 ERROR com.cloudera.headlamp.HeadlampServer: Could not fetch descriptor after 5 tries, exiting.
Appreciate any help.
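Since the log shows "Connection refused" when Reports Manager tries to authenticate against CM on port 7180, the check I am doing first (a sketch; standard tooling only):

```bash
# From the Reports Manager host: is the CM server reachable on 7180?
curl -sv -o /dev/null http://master-1.asia-southeast1-b.c.seismic-kingdom-265805.internal:7180/
# On the CM host: is the server process actually up?
service cloudera-scm-server status
```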