Member since: 07-17-2019
Posts: 738
Kudos Received: 433
Solutions: 111
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2619 | 08-06-2019 07:09 PM
 | 2854 | 07-19-2019 01:57 PM
 | 4046 | 02-25-2019 04:47 PM
 | 4030 | 10-11-2018 02:47 PM
 | 1343 | 09-26-2018 02:49 PM
09-06-2017
04:19 PM
"as per above my understanding is any user needs to have full permissions on the system tables while connecting to sqlline for the first time and then just granting read access on the system tables should help him re-establish the session." -- correct. "Also can you please point me to document that can provide information around restricting access via Ranger for Phoenix." -- I'd suggest you ask a new question for help on using Ranger. I am not familiar with the project.
09-05-2017
02:45 PM
1 Kudo
Unless you explicitly wrote the data in the byte representation which Phoenix requires in the first place, you cannot treat the data as any type other than VARCHAR. This is not simply a "best practice" -- it fundamentally does not work when you write the data in a different representation than Phoenix expects.
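To illustrate in the HBase shell (table and column names hypothetical; '0' is Phoenix's default column family):

```
# Writes the ASCII bytes '1','2','3' -- Phoenix can only read this back as VARCHAR
put 'T1', 'row1', '0:COL1', '123'

# Phoenix's INTEGER is a 4-byte, sign-bit-flipped encoding: 123 -> \x80\x00\x00\x7B
# Only data written in that exact layout can be declared as INTEGER
put 'T1', 'row2', '0:COL1', "\x80\x00\x00\x7B"
```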
09-01-2017
03:33 PM
The Phoenix client is trying to move the SYSTEM tables into the SYSTEM namespace because you enabled namespace mapping, and your user has insufficient permissions to do so.
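A sketch of the kind of access the migrating user needs, in the HBase shell (user name hypothetical; run as an HBase superuser). Creating the SYSTEM namespace and moving the SYSTEM.* tables into it requires admin-level rights, e.g. a global grant:

```
# Global grant: lets the client create the SYSTEM namespace and migrate the tables into it
grant 'phoenixuser', 'RWXCA'
```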
08-30-2017
03:43 PM
Have you looked at Ambari Blueprints? https://cwiki.apache.org/confluence/display/AMBARI/Blueprints

```
curl -H "X-Requested-By: ambari" -X GET -u ambariuser:ambaripassword \
  "http://YOUR_AMBARI_SERVER:8080/api/v1/clusters/YOUR_CLUSTER_NAME?format=blueprint" > blueprint.json
```

All of the configuration information is contained in that JSON file, which you can then parse/manipulate to reconstruct the XML files.
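For example, you can pull the configuration back out with jq (a sketch assuming jq is installed and the usual blueprint layout, where per-service configs sit under a top-level configurations array):

```
# List which config files the blueprint captured (hbase-site, core-site, ...)
jq -r '.configurations[] | keys[]' blueprint.json

# Dump the properties of one of them, e.g. hbase-site
jq '.configurations[] | ."hbase-site".properties // empty' blueprint.json
```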
08-11-2017
03:39 PM
1 Kudo
Please read the error message: the HBase client has a number of dependencies that it requires to function, and you have not included them on the classpath of your application. Use the output of `hbase classpath` to see everything that is required.
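For example (a sketch; the JAR path and main class are hypothetical):

```
# Prepend the full HBase client classpath to your application's own JAR
java -cp "$(hbase classpath):/path/to/your-app.jar" com.example.YourHBaseApp
```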
08-10-2017
06:40 PM
1 Kudo
You likely need to increase the open-file limit past 4K. This is a client application -- the limits on the server processes are not relevant here. Make sure the user running your client application has a sufficiently high open-file limit.
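A sketch of checking and raising the limit (user name hypothetical; editing limits.conf requires root and takes effect on the next login):

```
# Check the current open-file limit as the user running the client
ulimit -n

# Raise it persistently by adding lines like these to /etc/security/limits.conf:
#   clientuser  soft  nofile  32768
#   clientuser  hard  nofile  32768
```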
08-10-2017
04:14 PM
1 Kudo
Is NiFi running as root? Perhaps it does not have permission to read the JAR file. You could try copying the JAR file to /tmp/ and referencing it there.
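A quick way to test that theory (paths hypothetical; assumes NiFi runs as a 'nifi' user):

```
# Can the nifi user actually read the JAR?
sudo -u nifi head -c 1 /path/to/driver.jar

# If not, copy it somewhere world-readable and point NiFi at the copy
cp /path/to/driver.jar /tmp/driver.jar
chmod 644 /tmp/driver.jar
```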
08-04-2017
07:26 PM
You're exactly right that you shouldn't delete things by hand 🙂 A few candidates for why those files are not being removed automatically:

- If you're on >=HDP-2.5.x, make sure the HBase backup feature is disabled -- it can hold on to archived WALs. Set hbase.backup.enable=false in hbase-site.xml.
- HBase replication, if you have it set up, is another potential culprit.
- Lots of HBase snapshots are another candidate (as Sergey already suggested) -- drop the old snapshots you don't need anymore (see the sketch below).

Turning on DEBUG logging in the HBase Master should give you some insight into the various "Chores" that run inside the Master to automatically remove (or retain) data.
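To chase the snapshot and WAL-archive angles, a sketch (snapshot name hypothetical; the oldWALs path shown is the HDP default):

```
# How much data is sitting in the WAL archive?
hdfs dfs -du -s -h /apps/hbase/data/oldWALs

# In the HBase shell: list the snapshots and drop the ones you no longer need
list_snapshots
delete_snapshot 'old_snapshot_name'
```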
08-04-2017
02:35 PM
You would still use the HBase Java API within a coprocessor. Ankit was saying that it did not work because you did not specify the binary data correctly in the HBase shell.
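For reference, the HBase shell stores the literal characters when you use single quotes, but interprets \xNN hex escapes inside double quotes (table and column names hypothetical):

```
# Stores the three ASCII characters '1','2','3'
put 't1', 'r1', 'cf:q', '123'

# Stores the 4-byte big-endian integer 123
put 't1', 'r1', 'cf:q', "\x00\x00\x00\x7B"
```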
08-03-2017
04:27 PM
You would need to implement a custom coprocessor (likely, a RegionObserver would be sufficient) to notify any external system of changes to HBase: https://hbase.apache.org/book.html#_types_of_coprocessors

This is tricky to do correctly, so I would recommend that you re-think your architecture and make sure this is really how you want to implement it. (For example, you may find it easier to push all of your data through Kafka and instead send events to HBase and HDFS per your business rules.)
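If you do go that route, attaching a built RegionObserver to a table looks like this in the HBase shell (JAR path and class name hypothetical):

```
alter 'mytable', METHOD => 'table_att', 'coprocessor' => 'hdfs:///user/hbase/observer.jar|com.example.ChangeNotifierObserver|1001|'
```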