Member since: 07-15-2015
Posts: 43
Kudos Received: 1
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4859 | 01-31-2018 02:48 AM |
| | 1529 | 10-29-2017 10:08 PM |
| | 4872 | 05-09-2017 06:53 AM |
| | 2698 | 01-31-2017 10:17 PM |
12-10-2020
02:46 AM
Hi, I am unable to connect to the HBase Thrift server. I followed this link: https://community.cloudera.com/t5/Community-Articles/Start-and-test-HBASE-thrift-server-in-a-kerberised/tac-p/244673

Running the demo client org.apache.hadoop.hbase.thrift.HttpDoAsClient hdp252 9090 hbase true fails with:

Exception in thread "main" java.security.PrivilegedActionException: org.apache.thrift.transport.TTransportException: Invalid status 21

and also:

Exception in thread "main" java.security.PrivilegedActionException: org.apache.thrift.transport.TTransportException: java.net.SocketException: Unexpected end of file from server

Please help me out. #hbase
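For reference, this is roughly the invocation, wrapped in an explicit keytab login so the demo client runs inside a valid Kerberos context. This is only a minimal sketch: the principal and keytab path are hypothetical, and the host/port/user/useSecure arguments are the same ones from the command above.

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.hbase.thrift.HttpDoAsClient; // demo client from hbase-examples
import org.apache.hadoop.security.UserGroupInformation;

public class ThriftDemoRunner {
    public static void main(String[] args) throws Exception {
        // hypothetical principal and keytab -- replace with the ones for your cluster
        UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                "hbase/hdp252@EXAMPLE.COM", "/etc/security/keytabs/hbase.service.keytab");

        // run the demo client inside the Kerberos login context,
        // with the same arguments as above: host, port, user, useSecure
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            HttpDoAsClient.main(new String[]{"hdp252", "9090", "hbase", "true"});
            return null;
        });
    }
}
```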
09-20-2020
03:32 AM
@ValiD_M Did you find any solution for this?
03-10-2020
05:21 AM
After the ranger_audits collection got created in Solr, the Solr plugin test connection in the Ranger UI throws the error "Authentication Required".
03-02-2020
11:42 PM
Hi, I'm using Cloudera Data Platform 7.0.3. I am getting the below error while testing the connection from the Ranger Solr plugin:

Problem accessing /solr/admin/collections. Reason: Authentication required

Please help me out. Thanks in advance!
01-18-2020
08:08 PM
Hi, we have a scenario where the NameNode should give the client DataNode addresses as hostnames instead of IP addresses for write/read operations to/from the DataNodes. There are two Ethernet networks configured for the cluster nodes: one for internal use (cluster, 192.x.x.x series) and the other for external use (10.x.x.x series).

I tried setting dfs.client.use.datanode.hostname=true, but no luck. I also tried configuring dfs_all_hosts.txt with hostnames; the entries with DataNode hostnames got appended to dfs_all_hosts.txt, but the client still tries to use the DataNode IP addresses instead of hostnames.

Note: the client is outside the cluster.

Please help me out!! Thanks in advance!!
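For reference, a minimal client-side sketch: as far as I understand, the property has to be present in the client's own Configuration, not only in the cluster's hdfs-site.xml, for an external client to connect to DataNodes by hostname. The file path below is just a placeholder.

```java
import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HostnameClientCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // must be set on the client side; setting it only on the cluster
        // does not change how an external client resolves DataNode addresses
        conf.setBoolean("dfs.client.use.datanode.hostname", true);

        try (FileSystem fs = FileSystem.get(conf)) {
            // placeholder path -- reading any existing file forces the client
            // to open a connection to a DataNode
            try (InputStream in = fs.open(new Path("/tmp/sample.txt"))) {
                IOUtils.copyBytes(in, System.out, 4096, false);
            }
        }
    }
}
```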
Tags: cdh
10-18-2019
05:16 AM
Hi,
We have set mapreduce.job.redacted-properties=fs.s3a.access.key,fs.s3a.secret.key to redact the S3A access keys in job.xml. But we can still see the access keys exposed in the Resource Manager UI -> Application Master -> Job -> Configuration.
Please help me out.
Thanks in advance.
02-26-2018
11:16 PM
bin/zookeeper-shell.sh <zookeeper_host>:2181
If you want to delete only the topics: rmr /brokers/topics
For deleting the brokers: rmr /brokers
01-31-2018
02:48 AM
1 Kudo
Resolved by deleting the brokers' data from ZooKeeper. Thanks.
01-30-2018
10:10 PM
Hi,
I've been having problems with the consumer in CDH 5.13.0. Other versions, CDH 5.11.0 and CDH 5.12.0 with Kafka version 0.10.2-kafka-2.2.0, are working fine. I added advertised.listeners as well, and I'm running in a non-secure environment. I observed that client.id is blank for the console consumer, whereas it is client.id = console-producer for the producer. I need to work with CDH 5.13.0, please help me out.
When I run the old API consumer, it works by running
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
However, when I run the new API consumer, I don't get anything when I run
bin/kafka-console-consumer.sh --new-consumer --topic test --from-beginning --bootstrap-server localhost:9092
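For reference, a plain Java consumer against the new API can help rule out the console tool itself; a minimal sketch for the 0.10.x client, assuming the same topic and broker as above and a hypothetical group id:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewApiConsumerCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "console-check");          // hypothetical group id
        props.put("auto.offset.reset", "earliest");      // read from the beginning
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test"));
            // poll several times so the group has time to join and get assignments
            for (int i = 0; i < 10; i++) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```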
10-29-2017
10:08 PM
In my case, the storepass and keypass actually had to be the same for the Solr keystore.
10-29-2017
09:58 PM
We need to work with Hive 2.3.0. So, is there any workaround for Hive 2.3.0?
10-27-2017
05:04 AM
Hi, we are storing some column names in UPPER CASE while generating an ORC file. When we read the ORC file from Hive 2.3.0 in EMR, we get NULL values for all the UPPER-CASE columns. It works fine in Hive 2.1.0 and other versions.

Below is the metadata of the ORC file:

Type: struct<GRADE:double,LOSAL:double,HISAL:double,purge_seq_id:double,purge_date:timestamp,aj_retention_applied:string,aj_pk_column:double>

It was a known issue in Hive 0.13 and got fixed in later versions. Please help me out. Thanks in advance.
10-17-2017
01:36 PM
Thanks for your reply @saranvisa. But we are running some jobs through Java code, and we should not obtain the Kerberos login tickets at the OS level, since different Kerberos users will log into the servers at the OS level. So we are trying to log in a Kerberos user with a keytab through Java for a specific job.
10-16-2017
12:56 AM
Hi, we are unable to renew Kerberos user tickets from a keytab using Java code, while it works with "kinit -R".

Code:

UserGroupInformation loginUser = UserGroupInformation.getLoginUser();
loginUser.checkTGTAndReloginFromKeytab();

Please help me out.
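For context, a fuller sketch of the flow, with a hypothetical principal and keytab path; as far as I understand, checkTGTAndReloginFromKeytab() only re-logs in when the login user was originally created from a keytab:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabRelogin {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);

        // hypothetical principal and keytab path -- replace with real values
        UserGroupInformation.loginUserFromKeytab(
                "etluser@EXAMPLE.COM", "/etc/security/keytabs/etluser.keytab");

        UserGroupInformation loginUser = UserGroupInformation.getLoginUser();
        // a no-op unless the login came from a keytab and the TGT is near expiry
        loginUser.checkTGTAndReloginFromKeytab();
    }
}
```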
05-15-2017
02:39 AM
Hey @Harsh J, Please have a look at this topic
05-14-2017
10:06 PM
I've already tried that, but it didn't help.
05-11-2017
11:06 PM
Hi, when I delete files from HDFS, and even from .Trash, the blocks in the /dfs/dn directory on the DataNodes are not cleaned up, and the disks fill up at the OS level. When I restart HDFS, some recent data blocks are frequently reported missing.

dfs directory usage on a DataNode:
# du -sh *
705G dfs
16K lost+found

HDFS usage:
# sudo -u hdfs hdfs dfs -du -s -h /
270.3 G 812.7 G /

Please help me out. Thanks in advance.
05-09-2017
07:10 AM
Hi,
ISSUE: Requested data length 146629817 is longer than maximum configured RPC length 134217728
Earlier, ipc.maximum.data.length was 64 MB; we got the same error and changed it to 128 MB. Now it has been exceeded again, resulting in data corruption/missing-block issues. Is there a maximum configurable value for ipc.maximum.data.length? Can we raise it above 128 MB?
Thanks in advance
05-09-2017
06:53 AM
Yes Harsh, it's the number of blocks. The block count is 6 million. After deleting unwanted small files, the cluster health is good now. Is there any limit on the number of blocks a DataNode should have?
05-09-2017
03:01 AM
Thanks Harsh J for your reply. I am seeing this issue in the NameNode log. The CDH version is 5.7.1. The block count reached ~6 million; how many blocks can a DataNode handle, and how does the NameNode receive the block report? I saw the block count threshold set to 3 lakh (300,000) in Cloudera. Can you please explain the block report format and length?
05-09-2017
12:03 AM
Hi,

ISSUE: Requested data length 146629817 is longer than maximum configured RPC length 134217728

Earlier, ipc.maximum.data.length was 64 MB; we got the same error and changed it to 128 MB. Now it has been exceeded again, resulting in data corruption/missing-block issues. Is there a maximum configurable value for ipc.maximum.data.length? Can we raise it above 128 MB? Thanks in advance.
04-04-2017
12:07 AM
Hi, is there any way to copy Solr indexed data from one collection to a new collection and search the data from the new collection?
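For reference, one possible way is to re-add the documents of the old collection into the new one through SolrJ (this only copies stored fields; the new collection re-indexes them on add). A rough sketch, assuming a SolrJ 6+ client on the classpath, that id is the uniqueKey field, and hypothetical host and collection names:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.params.CursorMarkParams;

public class CopyCollection {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client = new HttpSolrClient.Builder("http://solrhost:8983/solr").build()) {
            SolrQuery q = new SolrQuery("*:*");
            q.setRows(500);
            q.setSort(SolrQuery.SortClause.asc("id"));   // cursor paging needs a sort on the uniqueKey
            String cursor = CursorMarkParams.CURSOR_MARK_START;
            boolean done = false;
            while (!done) {
                q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
                QueryResponse rsp = client.query("old_collection", q);

                List<SolrInputDocument> batch = new ArrayList<>();
                for (SolrDocument d : rsp.getResults()) {
                    SolrInputDocument in = new SolrInputDocument();
                    for (String field : d.getFieldNames()) {
                        if (!"_version_".equals(field)) {  // let Solr assign fresh versions
                            in.addField(field, d.getFieldValue(field));
                        }
                    }
                    batch.add(in);
                }
                if (!batch.isEmpty()) {
                    client.add("new_collection", batch);
                }

                String next = rsp.getNextCursorMark();
                done = cursor.equals(next);
                cursor = next;
            }
            client.commit("new_collection");
        }
    }
}
```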
02-23-2017
08:32 PM
@saranvisa The superuser (hdfs) can delete any file in HDFS. So all I need is to make an HDFS file that cannot be deleted by anyone, even the superuser, the way the chattr command does in Linux. With ACLs I cannot make a file undeletable for all users. Thanks.
02-23-2017
06:28 AM
Thanks csguna for your reply. I need to put files that are not deletable even by the superuser, like chattr in the Linux file system.
02-23-2017
06:00 AM
Hi guys, how can I make a file undeletable in HDFS? Any suggestions? Thanks in advance.
Tags: HDFS
01-31-2017
10:17 PM
The MR/Hive jobs issue was resolved by replacing the old jars with newer versions and switching to s3a. The jars that were replaced:
- jets3t jar
- aws-java-sdk jars
- jackson jars
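For reference, a minimal s3a smoke test that can be run after swapping the jars; the credentials below are placeholders (in practice they belong in core-site.xml or a credential provider), and the bucket is the one from the earlier errors:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3aSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // placeholder credentials -- do not hard-code real keys
        conf.set("fs.s3a.access.key", "YOUR_ACCESS_KEY");
        conf.set("fs.s3a.secret.key", "YOUR_SECRET_KEY");

        // listing the bucket root confirms the s3a connector and its jars are wired up
        try (FileSystem fs = FileSystem.get(new URI("s3a://rakeshs3/"), conf)) {
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }
}
```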
01-23-2017
04:46 AM
Thanks Miklos. What about Hive MR jobs?
01-23-2017
02:43 AM
Hi all, I added the fs.s3n.awsAccessKeyId and fs.s3n.awsSecretAccessKey properties in core-site.xml, hdfs-site.xml and hive-site.xml. I am able to run select * from tbl (which is on S3) and get the result with Beeline. But when I run select count(*) from tbl, it fails with the following errors:

Error: java.io.IOException: java.lang.reflect.InvocationTargetException
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
    ...
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:251)
    ... 11 more
Caused by: java.io.IOException: s3n://rakeshs3 : 400 : Bad Request
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:453)
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:427)
    ... 16 more
Caused by: org.jets3t.service.impl.rest.HttpException: 400 Bad Request
    at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:425)
    at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:279)
    ... 29 more

Impala errors:

Failed to open HDFS file s3n://rakeshs3/tel.txt
Error(255): Unknown error 255

Thanks
11-17-2016
12:37 AM
Thanks MicheleM for your response. I didn't enable HDFS Sentry synchronization. Before the HiveServer2 high availability configuration it used to work perfectly. Sentry is enabled only for Hive now. The problem here is that create table/query operations are running as the default user hive; they should run as the end user.