Member since: 05-09-2017
Posts: 107
Kudos Received: 7
Solutions: 6

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2986 | 03-19-2020 01:30 PM |
| | 15401 | 11-27-2019 08:22 AM |
| | 8474 | 07-05-2019 08:21 AM |
| | 14879 | 09-25-2018 12:09 PM |
| | 5593 | 08-10-2018 07:46 AM |
04-23-2020
06:49 PM
We had the same issue, and I tried this, but it does not seem to work.
03-25-2020
05:42 AM
@desind I can see the error "Authentication is not valid", but it seems you didn't use the digest format super:password->super:DyNYQEQvajljsxlhf5uS4PJ9R28=. Instead, according to the steps you shared, your input was as below:

addauth digest super:password

Then delete the znode; that should work:

[zk: xxx.unx.sas.com(CONNECTED) 2] deleteall /kafka-acl/Topic

Please do that and revert.
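For reference, a minimal sketch of how that super digest is typically generated. The parcel classpath below is an assumption; point it at your own ZooKeeper jars if your layout differs.

# Assumed CDH parcel layout; adjust the classpath to your ZooKeeper installation
export ZK_CP="/opt/cloudera/parcels/CDH/lib/zookeeper/*:/opt/cloudera/parcels/CDH/lib/zookeeper/lib/*"
java -cp "$ZK_CP" org.apache.zookeeper.server.auth.DigestAuthenticationProvider super:password
# prints the cleartext->digest pair, e.g. super:password->super:DyNYQEQvajljsxlhf5uS4PJ9R28=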
03-19-2020
01:30 PM
I was able to resolve this issue after a lot of work. Records in Kafka had null values, and the S3 sink connector cannot write null values to the S3 bucket, so it failed with this error. We were able to dig deeper once we changed flush.size=1; we then saw a different error, which made us check for null values. We developed a patch that fixed the issue, so the S3 connector now ignores null values. I don't know why the Confluent SMT did not work.
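For anyone debugging something similar, a rough sketch of the kind of temporary flush.size=1 override described above, so each record is written (and fails) individually. The connector name, topic, bucket, region and Connect host are placeholders, not our real config.

# Placeholder connector name/topic/bucket/host; flush.size=1 is the point here
curl -X PUT http://connect-host:8083/connectors/s3-sink/config \
  -H "Content-Type: application/json" \
  -d '{
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "my-topic",
    "s3.bucket.name": "my-bucket",
    "s3.region": "us-east-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "flush.size": "1"
  }'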
11-19-2019
06:23 PM
It happened to me when I was installing Cloudera 6.3.1. What solved it for me was:

1. Set SELinux to permissive:
   sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
2. Configure the hostname and /etc/hosts (just an example; set the hosts of all machines):
   hostnamectl set-hostname master1.hadoop-test.com
   echo "10.99.0.175 master1.hadoop-test.com master1" >> /etc/hosts
   sed -i 's/\r//' /etc/hosts
   echo "HOSTNAME=master1.hadoop-test.com" >> /etc/sysconfig/network
3. Reboot, then:
4. wget https://archive.cloudera.com/cm6/6.3.1/cloudera-manager-installer.bin
5. chmod u+x cloudera-manager-installer.bin
6. ./cloudera-manager-installer.bin
09-17-2019
09:55 PM
Hi, I am using the below sqoop import command, but it is failing with an exception.

sqoop import -Dhadoop.security.credential.provider.path=jceks://hdfs/DataDomains/HDPReports/credentials/credentials.jceks \
  --connect "jdbc:jtds:sqlserver://xx.xx.xx.xx:17001;useNTLMv2=true;domain=bfab01.local" \
  --connection-manager org.apache.sqoop.manager.SQLServerManager \
  --driver net.sourceforge.jtds.jdbc.Driver \
  --verbose \
  --query 'Select * from APS_CONN_TEST.dbo.ConnTest WHERE $CONDITIONS' \
  --target-dir /user/admvxb/sqoopimport1 \
  --split-by ConnTestId \
  --username ******* --password '******' \
  -- --schema dbo

Exception
=========
19/09/18 14:50:51 ERROR manager.SqlManager: Error executing statement: java.sql.SQLException: Client driver version is not supported.
java.sql.SQLException: Client driver version is not supported.
        at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372)
        at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
        at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
        at net.sourceforge.jtds.jdbc.TdsCore.login(TdsCore.java:649)
        at net.sourceforge.jtds.jdbc.JtdsConnection.<init>(JtdsConnection.java:371)
        at net.sourceforge.jtds.jdbc.Driver.connect(Driver.java:184)

Thanks,
Venkat
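As a side note on the credential provider referenced above, this is roughly how such a jceks store is usually created and wired in. The alias name here is hypothetical; in the command above I am still passing --password directly.

# Hypothetical alias; stores a password entry in the jceks provider referenced above
hadoop credential create sqlserver.password \
  -provider jceks://hdfs/DataDomains/HDPReports/credentials/credentials.jceks
# sqoop can then read it with --password-alias instead of a literal --password,
# e.g. ... --username ******* --password-alias sqlserver.password ...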
07-12-2019
11:43 AM
> Key: SPARK-23476
> URL: https://issues.apache.org/jira/browse/SPARK-23476
> Project: Spark
> Issue Type: Bug
> Components: Spark Shell
> Affects Versions: 2.3.0
> Reporter: Gabor Somogyi
> Priority: Minor
>
> If spark is run with "spark.authenticate=true", then it will fail to start in local mode.
> {noformat}
> 17/02/03 12:09:39 ERROR spark.SparkContext: Error initializing SparkContext.
> java.lang.IllegalArgumentException: Error: a secret key must be specified via the spark.authenticate.secret config
>         at org.apache.spark.SecurityManager.generateSecretKey(SecurityManager.scala:401)
>         at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:221)
>         at org.apache.spark.SparkEnv$.create(SparkEnv.scala:258)
>         at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:199)
>         at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:290)
>         ...
> {noformat}
> It can be confusing when authentication is turned on by default in a cluster, and one tries to start spark in local mode for a simple test.
> *Workaround*: If {{spark.authenticate=true}} is specified as a cluster-wide config, then the following has to be added to the spark-submit command:
> {{--conf "spark.authenticate=false" --conf "spark.shuffle.service.enabled=false" --conf "spark.dynamicAllocation.enabled=false" --conf "spark.network.crypto.enabled=false" --conf "spark.authenticate.enableSaslEncryption=false"}}
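Put together, a sketch of what that workaround looks like on a spark-submit for a quick local-mode test. The class and jar names are placeholders; the --conf overrides are the ones listed in the JIRA workaround.

# Placeholder app class/jar; only the --conf overrides come from the workaround above
spark-submit \
  --master local[2] \
  --conf "spark.authenticate=false" \
  --conf "spark.shuffle.service.enabled=false" \
  --conf "spark.dynamicAllocation.enabled=false" \
  --conf "spark.network.crypto.enabled=false" \
  --conf "spark.authenticate.enableSaslEncryption=false" \
  --class com.example.LocalSmokeTest \
  local-smoke-test.jar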
06-06-2019
11:28 PM
yarn logs -applicationId <application ID> should help. This typically occurs due to improper container memory allocation relative to the physical memory available on the cluster.
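For example (the application ID below is a placeholder), grep the aggregated logs for the usual memory-limit kill message:

# Placeholder application ID; look for the typical "running beyond ... memory limits" message
yarn logs -applicationId application_1559800000000_0001 | grep -i -A 2 "beyond"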
05-19-2019
08:48 PM
- Do you observe this intermittency from only specific client/gateway hosts?
- Does your cluster apply firewall rules between the cluster hosts?

One probable reason behind the intermittent 'Connection refused' from KMS could be that it is frequently (auto)restarting. Check its process stdout messages and service logs to confirm whether a kill is causing it to be restarted by the CM Agent supervisor.
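A rough way to check for that, assuming a CM-managed KMS; the process directory glob below is an assumption and varies by CM version and role name.

# Assumed CM agent process layout; adjust the glob to match your KMS role directory
ls -dt /var/run/cloudera-scm-agent/process/*KMS* | head -5    # many recent dirs hint at frequent restarts
grep -i -E "exit|kill|shut" /var/run/cloudera-scm-agent/process/*KMS*/logs/std*.log | tail -20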
05-15-2019
06:52 PM
1 Kudo
The Disk Balancer sub-system is local to each DataNode and can be triggered on distinct hosts in parallel. The only time you should receive that exception is if the targeted DataNode's hdfs-site.xml does not carry the property that enables the disk balancer, or when the DataNode is mid-shutdown/restart.

How have you configured the disk balancer for your cluster? Did you follow the configuration approach presented at https://blog.cloudera.com/blog/2016/10/how-to-use-the-new-hdfs-intra-datanode-disk-balancer-in-apache-hadoop/? What are your CDH and CM versions?
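For reference, a minimal sketch of enabling and running the intra-DataNode disk balancer, assuming dfs.disk.balancer.enabled=true is present in the DataNode's effective hdfs-site.xml; the hostname below is a placeholder.

# dfs.disk.balancer.enabled=true must be in the DataNode's effective hdfs-site.xml
# (in CM this is typically added through the DataNode hdfs-site.xml safety valve)
hdfs diskbalancer -plan dn1.example.com           # writes <hostname>.plan.json under /system/diskbalancer in HDFS
hdfs diskbalancer -execute /system/diskbalancer/<timestamp>/dn1.example.com.plan.json
hdfs diskbalancer -query dn1.example.com          # check progress / current status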
12-13-2018
07:13 AM
Hi, noticing this error:

Caused by: java.io.EOFException: read past EOF: MMapIndexInput(path="/var/lib/you_metadata_path")

When errors like this appear in the logs, it's usually a sign that the internal Solr core index has become corrupted somehow from missing files, and the index will need to be regenerated again over time as fresh metadata comes in. Stopping the Navigator Metadata Server, clearing the metadata directory, and starting a fresh index like this is the best option if this occurs. It seems to happen most commonly when disks fill up, or after a hard server crash of some sort. -T
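A hedged sketch of that reset, assuming a CM-managed Navigator Metadata Server with the default storage directory; check the actual Navigator Metadata Server Storage Dir setting in CM before removing anything.

# Stop the Navigator Metadata Server role in Cloudera Manager first, then:
# (default nav.data.dir shown; confirm the configured storage dir before touching it)
mv /var/lib/cloudera-scm-navigator /var/lib/cloudera-scm-navigator.bak.$(date +%F)
mkdir -p /var/lib/cloudera-scm-navigator
chown cloudera-scm:cloudera-scm /var/lib/cloudera-scm-navigator
# Start the role again; the Solr index is rebuilt over time as extractions re-run.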