Member since: 05-16-2016
Posts: 783
Kudos Received: 112
Solutions: 39
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1394 | 06-12-2019 09:27 AM
 | 2416 | 05-27-2019 08:29 AM
 | 4327 | 05-27-2018 08:49 AM
 | 3762 | 05-05-2018 10:47 PM
 | 2405 | 05-05-2018 07:32 AM
10-28-2019
10:40 AM
Since Hadoop 2.8, it is possible to mark a directory as protected, so that it cannot be deleted, using the fs.protected.directories property. From the documentation: "A comma-separated list of directories which cannot be deleted even by the superuser unless they are empty. This setting can be used to guard important system directories against accidental deletion due to administrator error." It does not answer the question exactly, but it is one possibility.
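As a rough sketch, the property goes in hdfs-site.xml; the paths below are hypothetical examples, not values from this thread:

```xml
<!-- hdfs-site.xml: directories listed here cannot be deleted,
     even by the superuser, unless they are empty.
     The example paths are illustrative only. -->
<property>
  <name>fs.protected.directories</name>
  <value>/user/hive/warehouse,/important/data</value>
</property>
```

The NameNode must be restarted (or the configuration refreshed) for the change to take effect.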
10-28-2019
04:45 AM
Hi @AmitD, I followed the same steps that worked for you, but I am getting the error below. Any idea what the reason might be?

19/10/28 13:58:16 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.11.1
19/10/28 13:58:16 INFO teradata.TeradataManagerFactory: Loaded connector factory for 'Cloudera Connector Powered by Teradata' on version 1.7c6
19/10/28 13:58:16 ERROR tool.BaseSqoopTool: Got error creating database manager: java.lang.ClassCastException: com.cloudera.connector.teradata.TeradataManagerFactory cannot be cast to com.cloudera.sqoop.manager.ManagerFactory
        at org.apache.sqoop.ConnFactory.instantiateFactories(ConnFactory.java:98)
        at org.apache.sqoop.ConnFactory.<init>(ConnFactory.java:63)
        at com.cloudera.sqoop.ConnFactory.<init>(ConnFactory.java:36)
        at org.apache.sqoop.tool.BaseSqoopTool.init(BaseSqoopTool.java:270)
        at org.apache.sqoop.tool.EvalSqlTool.run(EvalSqlTool.java:56)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
10-21-2019
08:37 AM
I had exactly the same issue, and it turned out that the count also includes snapshots. To check whether that is the case, add the -x option to the count, e.g.:

hdfs dfs -count -v -h -x /user/hive/warehouse/my_schema.db/*
10-13-2019
01:15 PM
In my terminal, instead of cloudera@quickstart the prompt shows bash-4.1$. I may have changed it unknowingly, but now I am unable to change it back. How can I restore the default cloudera@quickstart prompt?
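In case it helps, here is a minimal sketch of restoring a user@host-style prompt by resetting the PS1 variable; the exact default string on the QuickStart VM is an assumption here:

```shell
# Set a "user@host dir"-style bash prompt for the current session.
# '[\u@\h \W]\$ ' is a common default; the QuickStart VM's exact
# string may differ (assumption).
export PS1='[\u@\h \W]\$ '

# To make the change survive new logins, append it to ~/.bashrc:
# echo "export PS1='[\u@\h \W]\$ '" >> ~/.bashrc
```

Note that a bash-4.1$ prompt can also mean you are logged in as a different user (e.g. via su without a login shell), in which case `su - cloudera` restores the expected environment.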
10-10-2019
03:35 AM
This is really a nice article. Kudos to you.
10-09-2019
05:37 PM
09-17-2019
09:55 PM
Hi, I am using the Sqoop import command below, but it fails with an exception.

sqoop import -Dhadoop.security.credential.provider.path=jceks://hdfs/DataDomains/HDPReports/credentials/credentials.jceks \
  --connect "jdbc:jtds:sqlserver://xx.xx.xx.xx:17001;useNTLMv2=true;domain=bfab01.local" \
  --connection-manager org.apache.sqoop.manager.SQLServerManager \
  --driver net.sourceforge.jtds.jdbc.Driver \
  --verbose \
  --query 'Select * from APS_CONN_TEST.dbo.ConnTest WHERE $CONDITIONS' \
  --target-dir /user/admvxb/sqoopimport1 \
  --split-by ConnTestId \
  --username ******* --password '******' \
  -- --schema dbo

Exception
========
19/09/18 14:50:51 ERROR manager.SqlManager: Error executing statement: java.sql.SQLException: Client driver version is not supported.
java.sql.SQLException: Client driver version is not supported.
        at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372)
        at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
        at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
        at net.sourceforge.jtds.jdbc.TdsCore.login(TdsCore.java:649)
        at net.sourceforge.jtds.jdbc.JtdsConnection.<init>(JtdsConnection.java:371)
        at net.sourceforge.jtds.jdbc.Driver.connect(Driver.java:184)

Thanks, Venkat
09-03-2019
08:16 PM
Could you share the error? Do you have the Sqoop client installed on the node? What does your MySQL cnf file look like? If it contains skip-networking, comment that line out and restart MySQL; I assume this is your POC box.
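For reference, a hedged sketch of what that change in the MySQL option file might look like; the file location and existing contents vary by installation:

```ini
# /etc/my.cnf (location varies; often /etc/mysql/my.cnf)
[mysqld]
# skip-networking
# ^ comment this line out so MySQL accepts TCP connections
bind-address = 0.0.0.0
# ^ illustrative only; restrict to trusted hosts in production
```

After editing, restart the MySQL service so the change takes effect.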
09-03-2019
11:09 AM
Hi, to view the Spark logs of a completed application, run the command below:

yarn logs -applicationId application_xxxxxxxxxxxxx_yyyyyy -appOwner <userowner> > application_xxxxxxxxxxxxx_yyyyyy.log

Thanks, AKR
06-12-2019
11:21 PM
Hi, a couple of questions:

1. Have you checked the HS2 log to see whether it complained about anything, or whether beeline reached HS2 at all? I suspect not, but I just want to be sure.
2. Based on the code here: https://github.com/cloudera/hive/blob/cdh6.1.0/beeline/src/java/org/apache/hive/beeline/BeeLine.java#L1035-L1048 it looks like beeline failed to get the connection string. Have you tried quoting the connection string, just in case?

beeline -u 'jdbc:hive2://hostname.domain.dom:10000'

Cheers, Eric