Member since: 05-16-2016
Posts: 785
Kudos Received: 114
Solutions: 39
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1834 | 06-12-2019 09:27 AM |
| | 3046 | 05-27-2019 08:29 AM |
| | 5073 | 05-27-2018 08:49 AM |
| | 4444 | 05-05-2018 10:47 PM |
| | 2763 | 05-05-2018 07:32 AM |
02-02-2020
10:14 PM
[quickstart.cloudera:21000] > history;
[1]: help;
[2]: version;
[3]: history;
[4]: exit;
[5]: profile;
[6]: help;
[7]: profile;
[8]: history;
[9]: version;
[10]: profile;
[11]: CREATE DATABASE IF NOT EXISTS my_database;
[12]: history;
[quickstart.cloudera:21000] > sudo service impala-state-store start;
Query: sudo service impala-state-store start
Query submitted at: 2020-02-02 22:11:26 (Coordinator: http://quickstart.cloudera:25000)
ERROR: AnalysisException: This Impala daemon is not ready to accept user requests. Status: Waiting for catalog update from the StateStore.
[quickstart.cloudera:21000] > sudo service impala-catalog start;
Query: sudo service impala-catalog start
Query submitted at: 2020-02-02 22:11:48 (Coordinator: http://quickstart.cloudera:25000)
ERROR: AnalysisException: This Impala daemon is not ready to accept user requests. Status: Waiting for catalog update from the StateStore.

Hi, I'm getting an error when I run both of the commands below. Kindly help.

sudo service impala-state-store start;
sudo service impala-catalog start;

ERROR: AnalysisException: This Impala daemon is not ready to accept user requests. Status: Waiting for catalog update from the StateStore.
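Side note on the transcript above: sudo service ... are Linux service commands, not Impala SQL, so impala-shell parses them as queries and they fail no matter what state the daemons are in. A minimal sketch of starting the daemons from the OS shell instead, assuming the Quickstart VM init scripts (the impala-server service name is an assumption; it does not appear in the post):

# Run these from the Linux shell, not inside impala-shell:
sudo service impala-state-store start
sudo service impala-catalog start
sudo service impala-server restart    # assumed name of the Impala daemon service
# Check the daemons before reconnecting with impala-shell:
sudo service impala-state-store status
sudo service impala-catalog status

Once the StateStore and Catalog are up, the "Waiting for catalog update from the StateStore" error should clear.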
12-26-2019
10:55 PM
Hi mike, did you try that? I'm also going to upgrade Hive with CDH 6.2.
12-12-2019
10:22 PM
Could you try performing the "Validate Hive Metastore schema" action from Cloudera Manager -> Hive service, then let us know if you are able to create the same table.
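For reference, a command-line sketch of the same check, assuming Hive 2.x with schematool on the path and a MySQL-backed metastore (both are assumptions, not from the thread):

# Validate the metastore schema against the expected version;
# adjust -dbType to your actual backend (mysql, postgres, oracle, ...):
schematool -dbType mysql -validate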
10-28-2019
10:40 AM
Since Hadoop 2.8, it is possible to protect a directory so that it cannot be deleted while it still contains files, using the fs.protected.directories property. From the documentation: "A comma-separated list of directories which cannot be deleted even by the superuser unless they are empty. This setting can be used to guard important system directories against accidental deletion due to administrator error." It does not exactly answer the question, but it is a possibility.
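A hedged sketch of how this looks in practice; the paths below are illustrative examples, not values from the thread:

# Illustrative hdfs-site.xml entry:
#   <property>
#     <name>fs.protected.directories</name>
#     <value>/user/hive/warehouse,/apps/important</value>
#   </property>
# With that in place, deleting a protected, non-empty directory should fail
# even for the superuser, while contents inside it can still be removed:
hdfs dfs -rm -r /user/hive/warehouse              # expected to be refused while non-empty
hdfs dfs -rm /user/hive/warehouse/some_file       # individual files are still deletable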
10-28-2019
04:45 AM
Hi @AmitD, I followed the same steps that worked for you, but I am getting the error below. Any idea what the reason could be?

19/10/28 13:58:16 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.11.1
19/10/28 13:58:16 INFO teradata.TeradataManagerFactory: Loaded connector factory for 'Cloudera Connector Powered by Teradata' on version 1.7c6
19/10/28 13:58:16 ERROR tool.BaseSqoopTool: Got error creating database manager: java.lang.ClassCastException: com.cloudera.connector.teradata.TeradataManagerFactory cannot be cast to com.cloudera.sqoop.manager.ManagerFactory
    at org.apache.sqoop.ConnFactory.instantiateFactories(ConnFactory.java:98)
    at org.apache.sqoop.ConnFactory.<init>(ConnFactory.java:63)
    at com.cloudera.sqoop.ConnFactory.<init>(ConnFactory.java:36)
    at org.apache.sqoop.tool.BaseSqoopTool.init(BaseSqoopTool.java:270)
    at org.apache.sqoop.tool.EvalSqlTool.run(EvalSqlTool.java:56)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
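Not a confirmed diagnosis, but this ClassCastException typically points at a connector built against a different Sqoop version than the one running. A hypothetical way to inspect the setup (the paths are assumptions for a typical CDH 5 install, not from the post):

# Where Sqoop 1 registers extra ManagerFactory implementations:
cat /etc/sqoop/conf/managers.d/*     # connector factory registrations
ls /var/lib/sqoop/                   # extra jars on the Sqoop classpath
sqoop version                        # the Sqoop build the connector must match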
10-21-2019
08:37 AM
I had exactly the same issue, and it turned out that the count also includes snapshots. To check whether that is the case, add the -x option to the count, e.g.:

hdfs dfs -count -v -h -x /user/hive/warehouse/my_schema.db/*
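For comparison, running the count with and without -x makes the snapshot contribution visible (path reused from the command above):

hdfs dfs -count -v -h /user/hive/warehouse/my_schema.db/*      # includes snapshot contents
hdfs dfs -count -v -h -x /user/hive/warehouse/my_schema.db/*   # -x excludes snapshots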
10-13-2019
01:15 PM
My terminal shows bash-4.1$ instead of cloudera@quickstart. I may have changed it unknowingly, but now I am not able to change it back. How can I restore the default cloudera@quickstart prompt?
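A hedged sketch for restoring a user@host style prompt; these lines are illustrative, since the thread does not say how the prompt was changed:

# Set the prompt for the current session (\u = user, \h = host, \W = working dir):
export PS1='[\u@\h \W]\$ '
# Persist it by adding the same line to ~/.bashrc:
#   export PS1='[\u@\h \W]\$ '
# A bash-4.1$ prompt can also appear after a plain 'su'; a login shell
# re-reads the profile files and restores the defaults:
su - cloudera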
10-10-2019
03:35 AM
This is really a nice article. Kudos to you.
10-09-2019
05:37 PM
09-17-2019
09:55 PM
Hi, I am using the sqoop import command below, but it fails with an exception.

sqoop import \
  -Dhadoop.security.credential.provider.path=jceks://hdfs/DataDomains/HDPReports/credentials/credentials.jceks \
  --connect "jdbc:jtds:sqlserver://xx.xx.xx.xx:17001;useNTLMv2=true;domain=bfab01.local" \
  --connection-manager org.apache.sqoop.manager.SQLServerManager \
  --driver net.sourceforge.jtds.jdbc.Driver \
  --verbose \
  --query 'Select * from APS_CONN_TEST.dbo.ConnTest WHERE $CONDITIONS' \
  --target-dir /user/admvxb/sqoopimport1 \
  --split-by ConnTestId \
  --username ******* --password '******' \
  -- --schema dbo

Exception:

19/09/18 14:50:51 ERROR manager.SqlManager: Error executing statement: java.sql.SQLException: Client driver version is not supported.
java.sql.SQLException: Client driver version is not supported.
    at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372)
    at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
    at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
    at net.sourceforge.jtds.jdbc.TdsCore.login(TdsCore.java:649)
    at net.sourceforge.jtds.jdbc.JtdsConnection.<init>(JtdsConnection.java:371)
    at net.sourceforge.jtds.jdbc.Driver.connect(Driver.java:184)

Thanks,
Venkat