Member since
02-27-2020
173
Posts
42
Kudos Received
48
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2414 | 11-29-2023 01:16 PM |
| | 2956 | 10-27-2023 04:29 PM |
| | 2417 | 07-07-2023 10:20 AM |
| | 4934 | 03-21-2023 08:35 AM |
| | 1671 | 01-25-2023 08:50 PM |
07-20-2020
08:59 AM
Ok, I get your granularity point. Thanks for clarifying. Unfortunately we don't have a Cloudera supported tool that can do a simple backup of the Kafka cluster. I can only speculate on the reason, but this is likely a rare case where a backup (rather than replication) is required.
07-19-2020
09:47 PM
There's an open source tool kafka-backup that sounds like what you are looking for. I'm not sure I follow your granularity point though.
07-09-2020
11:30 AM
Starting from Ambari 2.7.5, the repositories require a username and password, which you get from Cloudera if you have the required support contract. Existing users can file a non-technical case in the support portal (https://my.cloudera.com) to obtain credentials. You can find more information on the Accessing Ambari Repositories page.
06-29-2020
09:29 AM
1 Kudo
This problem is typically solved by (a) clearing cookies and restarting your browser, and/or (b) logging out of and back into CDSW. Let me know if that works for you.
06-27-2020
10:48 PM
1 Kudo
Hi Guy, please try adjusting your command to the following:

ozone sh volume create --quota=1TB --user=hdfs o3://ozone1/tests

Note that the documentation states the last parameter is a URI in the format <prefix>://<Service ID>/<path>. The Service ID is what you found in ozone-site.xml.
06-12-2020
09:48 AM
Hi @Maria_pl , generally speaking the approach is as follows:
1. Generate a dummy flow file that will trigger the flow (GenerateFlowFile processor).
2. Next is an UpdateAttribute processor that sets the start date and end date as attributes on the flow file.
3. ExecuteScript is next. This can be a Python script, or whichever language you prefer, that uses the start and end attributes to list out all the dates in between.
4. If your script produces a single file of dates, you can then use a SplitText processor to cut each row into its own flow file, and from there each flow file will have its own unique date in your range.
Hope that makes sense.
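The date-listing logic in step 3 could be sketched in plain Python along these lines (the function name and the hard-coded dates are illustrative only; inside NiFi's ExecuteScript you would read the start/end values from the flow file attributes rather than hard-coding them):

```python
from datetime import date, timedelta

def dates_between(start, end):
    """Return every date from start to end inclusive, one per day."""
    days = (end - start).days
    return [start + timedelta(days=i) for i in range(days + 1)]

# One date per line, ready for SplitText downstream
for d in dates_between(date(2020, 6, 10), date(2020, 6, 12)):
    print(d.isoformat())
# prints 2020-06-10, 2020-06-11, 2020-06-12 on separate lines
```

Writing one ISO date per line is what lets SplitText in step 4 turn each row into its own flow file.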
05-26-2020
07:48 PM
Ok, so regarding single quotes vs. double quotes: in the HBase shell you have to use double quotes, because text in single quotes is treated as a literal (see p. 271 of the HBase Definitive Guide). After some more research I came across this post, which seems to describe your problem exactly, along with two solutions for modifying your Java code. To summarize, the Java client for HBase expects row keys in human-readable format, not their hexadecimal representation. The solution is to read your args as the Double type, not String. Hope that finally resolves it.
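To illustrate the readable-vs-escaped distinction, here is a small Python sketch (not the HBase API itself; the value 2.65 is just an example) showing how the same number looks as text versus as the 8-byte big-endian encoding a binary row key would carry:

```python
import struct

value = 2.65
as_text = value.__str__().encode("utf-8")  # what a String-typed arg carries: b'2.65'
as_binary = struct.pack(">d", value)       # 8-byte IEEE-754 big-endian double

print(as_text)
print(as_binary)  # shown with \x escapes, much like the HBase shell displays binary keys
```

Passing the shell's escaped text (e.g. "\x00\x0A...") as a plain Java String hands HBase the literal characters backslash-x-0-0, not the underlying bytes, which is why the lookups miss.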
05-26-2020
02:52 PM
Perhaps it's something about how Java interprets the args you pass when you run your code? It may differ from how the shell client interprets them (relevant discussion here). Can you show the command that executes your Java code, complete with the arguments passed to it? Also, print the arguments (e.g. System.out.println(rowId)) in your code, and execute it with the same key you used in the shell (i.e. \x00\x0A@E\xFFn[\x18\x9F\xD4-1447846881#5241968320).
05-26-2020
02:16 PM
The issue is that the DROP TABLE statement doesn't remove the data from HDFS. This is usually because the table is an external table, which doesn't allow Hive to perform all operations on it (dropping an external table removes only the metadata, not the files). Another thing you can try is what's suggested in this thread: before you drop the table, change its property with ALTER TABLE <table> SET TBLPROPERTIES('EXTERNAL'='FALSE'). Does that work for you?