Member since: 05-16-2016
Posts: 783
Kudos Received: 111
Solutions: 39
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 878 | 06-12-2019 09:27 AM
 | 1713 | 05-27-2019 08:29 AM
 | 3227 | 05-27-2018 08:49 AM
 | 2896 | 05-05-2018 10:47 PM
 | 1939 | 05-05-2018 07:32 AM
02-21-2022
09:31 PM
Try running `invalidate metadata;`. In the end, clearing the browser cache is what worked for me.
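For reference, a minimal sketch of running it non-interactively (my_db.my_table is a placeholder name):

```bash
# Refresh Impala's cached metadata for one table; my_db.my_table is hypothetical.
impala-shell -q "INVALIDATE METADATA my_db.my_table;"
```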
01-21-2022
03:08 PM
Hi Nagaraj / anyone, can you please share the steps if you remember?

ERROR org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer: Exception in doCheckpoint
java.io.IOException: Exception during image upload: java.lang.NoClassDefFoundError: org/apache/http/client/utils/URIBuilder
Caused by: java.lang.NoClassDefFoundError: org/apache/http/client/utils/URIBuilder
12-07-2021
03:57 AM
Yes, you can update it.
10-30-2021
02:56 AM
sqoop-list-databases \
  --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
  --username retail_dba \
  --password cloudera
10-18-2021
10:59 AM
Usually, Exception: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out is caused by communication issues among Hadoop cluster nodes. To resolve the issue, check the following:
a) Whether there are any communication problems among the Hadoop cluster nodes.
b) Whether the SSL certificate of any DataNode has expired (if the cluster is SSL-enabled); see the sketch after this list.
c) Whether SSL changes were made without restarting the services that use SSL; if so, restart the affected services in the cluster.
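For point (b), a quick sketch for checking a certificate's validity dates (datanode1.example.com:9865 is a placeholder host:port; the secure DataNode web port differs across Hadoop versions):

```bash
# Print the notBefore/notAfter dates of the certificate a DataNode serves.
echo | openssl s_client -connect datanode1.example.com:9865 2>/dev/null \
  | openssl x509 -noout -dates
```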
10-15-2021
07:43 AM
How do I solve this issue: Input path does not exist?
10-13-2021
11:11 PM
How did you resolve the issue? I am facing a similar problem with the DataNode not starting.
05-24-2021
06:46 PM
Hi, below are some pointers just in case you haven't tried them (though I believe you are using them):
- Map the column to TIMESTAMP (--map-column-hive <col_name>=TIMESTAMP).
- Keep in mind that the column should then be BIGINT.
The main issue is the format: Parquet represents time in milliseconds, whereas Impala interprets a BIGINT as seconds, so to get the correct value you need to handle it at the query level (divide by 1000); see the sketch below. Regards, Jay
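A minimal sketch of the query-level fix, assuming a hypothetical table events with a BIGINT column event_ts holding epoch milliseconds:

```bash
# from_unixtime() expects seconds, so divide the millisecond value by 1000.
impala-shell -q "SELECT from_unixtime(CAST(event_ts / 1000 AS BIGINT)) AS event_time FROM events;"
```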
03-13-2021
09:35 AM
Hello everyone, as far as I know, Hive does not support asynchronous SQL calls to HiveServer. Every query session has to wait until the query finishes executing, and only then can the result be captured. If we use multithreaded query execution with Java threads, only the first few queries will be accepted by YARN, depending on resource availability. Correct me if someone has a better idea for dealing with such a scenario. Waiting for others' responses. Regards, Dhirendra Pandit
03-10-2021
12:00 AM
@Venkat_ as this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
03-07-2021
03:01 AM
@Girish1980: Please follow the steps below:
1. Create a file with a timestamp:
   touch timestamp_`date +"%s"`
2. Upload the file to an HDFS location:
   hdfs dfs -put timestamp_1615114389 /data
3. To save the latest fsimage, put the NameNode in safemode:
   hdfs dfsadmin -safemode enter
4. Save the latest fsimage:
   hdfs dfsadmin -saveNamespace
5. After saving the latest fsimage, leave safemode:
   hdfs dfsadmin -safemode leave
6. Download the latest fsimage to a local directory:
   hdfs dfsadmin -fetchImage /data
7. Read the fsimage to check for the timestamp file by converting it to XML:
   hdfs oiv -p XML -i fsimage_0000000000000055633 -o fsimage.xml
These steps are wrapped into a single script in the sketch below.
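A sketch only, combining the steps above (/data in HDFS and /backup/fsimage locally are example paths; run as the hdfs superuser):

```bash
#!/bin/bash
set -e

marker="timestamp_$(date +%s)"
touch "$marker"                              # 1. create a marker file with a timestamp
hdfs dfs -put "$marker" /data                # 2. upload it to an HDFS location
hdfs dfsadmin -safemode enter                # 3. put the NameNode in safemode
hdfs dfsadmin -saveNamespace                 # 4. save the latest fsimage
hdfs dfsadmin -safemode leave                # 5. leave safemode
hdfs dfsadmin -fetchImage /backup/fsimage    # 6. download the latest fsimage locally
```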
02-25-2021
02:56 AM
Please mark the solution and give kudos; it encourages fellow users to contribute more!
02-18-2021
11:32 PM
Hi, I got the same error in Hue 3.1.0. The SQL works fine with Beeline but hits the same error in Hue. We're also using Impala/Hive HA. I tried "Set use_get_log_api=true in the beeswax section", and it's not working. Do you have any other solution?
12-20-2020
10:47 PM
I manually added a kafka account and granted root privileges to it, but it still doesn't work.
12-11-2020
05:50 AM
The SocketTimeoutException in the title occurs when the thrift client in the HiveConnection object is actively reading SQL results from HiveServer2 (the Thrift server) and does not receive anything before the TSocket's timeout fires. You can check the source code in HiveConnection.setupLoginTimeout and HiveAuthFactory.getSocketTransport. So you need to either tune HiveServer2 or increase the TSocket's timeout setting. For now, the only way to increase the TSocket's timeout is via DriverManager.setLoginTimeout(). You can check the JIRAs below for more information:
https://issues.apache.org/jira/browse/HIVE-22196
https://issues.apache.org/jira/browse/HIVE-6679
https://issues.apache.org/jira/browse/HIVE-12371
07-16-2020
05:29 PM
When you add a node, a script called allkeys.sh generates a key bundle containing the GPG key info; the bundle is called allkeys.asc (DEFAULT_CLOUDERA_KEY_BUNDLE_NAME = "allkeys.asc"). The bundle collects the key for each OS flavor, so in your case it is archive.key, located here: https://archive.cloudera.com/cdh5/ubuntu/lucid/amd64/cdh/archive.key. For RHEL, it uses this one instead: https://archive.cloudera.com/cdh5/redhat/7/x86_64/cdh/RPM-GPG-KEY-cloudera. Once all these keys are downloaded, they are signed by the master key. Finally, the gpg command is used to export the keys into the bundle:
gpg --export -a > allkeys
So I would check which repo is being used.
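A sketch of that assembly, using the RHEL key URL from above (filenames follow the post's naming; this is not the verbatim allkeys.sh logic):

```bash
# Download a flavor key, import it, then export everything into the bundle.
wget https://archive.cloudera.com/cdh5/redhat/7/x86_64/cdh/RPM-GPG-KEY-cloudera
gpg --import RPM-GPG-KEY-cloudera
gpg --export -a > allkeys.asc
```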
06-27-2020
12:40 PM
Is there any solution for this issue? I am facing the same issue. The table holds very large data, and during an insert overwrite, files are getting placed in my user directory, /user/anjali/.Trash, causing the Hive action in Oozie to fail after a 1.5-hour run. Please help. The table is external, and even when I changed it to an internal table, auto.purge = true did not work.
03-03-2020
09:30 PM
1 Kudo
Could you run it from an administrator cmd, not a normal one? From your error, I think you don't have enough privileges to run these commands.
02-02-2020
10:14 PM
Hi, I'm getting an error when I fire both of the commands below. Kindly help.
sudo service impala-state-store start;
sudo service impala-catalog start;
ERROR: AnalysisException: This Impala daemon is not ready to accept user requests. Status: Waiting for catalog update from the StateStore.

[quickstart.cloudera:21000] > history;
[1]: help;
[2]: version;
[3]: history;
[4]: exit;
[5]: profile;
[6]: help;
[7]: profile;
[8]: history;
[9]: version;
[10]: profile;
[11]: CREATE DATABASE IF NOT EXISTS my_database;
[12]: history;
[quickstart.cloudera:21000] > sudo service impala-state-store start;
Query: sudo service impala-state-store start
Query submitted at: 2020-02-02 22:11:26 (Coordinator: http://quickstart.cloudera:25000)
ERROR: AnalysisException: This Impala daemon is not ready to accept user requests. Status: Waiting for catalog update from the StateStore.
[quickstart.cloudera:21000] > sudo service impala-catalog start
> [quickstart.cloudera:21000] > sudo service impala-catalog start;
Query: sudo service impala-catalog start
Query submitted at: 2020-02-02 22:11:48 (Coordinator: http://quickstart.cloudera:25000)
ERROR: AnalysisException: This Impala daemon is not ready to accept user requests. Status: Waiting for catalog update from the StateStore.
12-26-2019
10:55 PM
Hi Mike, did you try that? I'm also going to upgrade Hive with CDH 6.2.
12-12-2019
10:22 PM
Could you try performing the "Validate Hive Metastore schema" action from Cloudera Manager -> Hive service, then let us know if you are able to create the same table.
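If the command line is easier, a rough equivalent sketch (assumes a MySQL-backed metastore and a host with the Hive client configuration):

```bash
# Validate the Hive Metastore schema against the backing database.
schematool -dbType mysql -validate
```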
12-04-2019
04:38 AM
Thanks, it worked nicely.
12-02-2019
07:35 AM
Hi, can you help me with writing a shell script to back up the fsimage?
11-01-2019
06:31 AM
1 Kudo
Hey, CSD Version: 2.3 & higher, I think. Regards, Ankit.
10-28-2019
10:40 AM
Since Hadoop 2.8, it is possible to mark a directory as protected so that it cannot be deleted while it still contains files, using the fs.protected.directories property. From the documentation: "A comma-separated list of directories which cannot be deleted even by the superuser unless they are empty. This setting can be used to guard important system directories against accidental deletion due to administrator error." It does not exactly answer the question, but it is a possibility.
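A quick sketch of using it (the directory list is an example; the property lives in hdfs-site.xml):

```bash
# Verify the configured value from a node with an HDFS client configuration.
hdfs getconf -confKey fs.protected.directories

# The property itself is set in hdfs-site.xml, for example:
#   <property>
#     <name>fs.protected.directories</name>
#     <value>/user/hive/warehouse,/apps/critical</value>
#   </property>
```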
10-28-2019
04:45 AM
Hi @AmitD, I did the same steps that worked for you, but I am getting the error below. Any idea what the reason could be?

19/10/28 13:58:16 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.11.1
19/10/28 13:58:16 INFO teradata.TeradataManagerFactory: Loaded connector factory for 'Cloudera Connector Powered by Teradata' on version 1.7c6
19/10/28 13:58:16 ERROR tool.BaseSqoopTool: Got error creating database manager: java.lang.ClassCastException: com.cloudera.connector.teradata.TeradataManagerFactory cannot be cast to com.cloudera.sqoop.manager.ManagerFactory
at org.apache.sqoop.ConnFactory.instantiateFactories(ConnFactory.java:98)
at org.apache.sqoop.ConnFactory.<init>(ConnFactory.java:63)
at com.cloudera.sqoop.ConnFactory.<init>(ConnFactory.java:36)
at org.apache.sqoop.tool.BaseSqoopTool.init(BaseSqoopTool.java:270)
at org.apache.sqoop.tool.EvalSqlTool.run(EvalSqlTool.java:56)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
10-21-2019
08:37 AM
I had exactly the same issue, and it turned out that the count also includes snapshots. To check whether that's the case, add the -x option to the count, e.g.:
hdfs dfs -count -v -h -x /user/hive/warehouse/my_schema.db/*
10-13-2019
01:15 PM
In my terminal, instead of showing cloudera@quickstart, it shows bash-4.1$. Maybe I changed it unknowingly, but now I am not able to change it back to cloudera@quickstart. How can I change it back to the default cloudera@quickstart?
10-10-2019
03:35 AM
This is really a nice article. Kudos to you.