Member since: 07-17-2019
Posts: 738
Kudos Received: 433
Solutions: 111

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2546 | 08-06-2019 07:09 PM |
|  | 2801 | 07-19-2019 01:57 PM |
|  | 3921 | 02-25-2019 04:47 PM |
|  | 3967 | 10-11-2018 02:47 PM |
|  | 1298 | 09-26-2018 02:49 PM |
12-24-2018 05:38 PM
I followed this with HDP 2.6.5 and the HBase UI became accessible at the given URL, but it had many errors and broken links inside. I posted a question on how to fix this, and the answer resolving most of these issues is here: https://community.hortonworks.com/questions/231948/how-to-fix-knox-hbase-ui.html You are welcome to test this and include these fixes in your article if you find it appropriate. Best regards
04-16-2017 01:28 AM
Use `jstack` to identify why the init process is hanging. Most likely you do not have a correct accumulo-site.xml, or ZooKeeper or HDFS is not running.
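A minimal sketch of that triage, assuming the hang is `accumulo init` and that the `pgrep` pattern below actually matches your command line:

```
# Take a thread dump of the hung init process to see where it is blocked
pid=$(pgrep -f 'accumulo.*init')
jstack "$pid" > /tmp/accumulo-init-jstack.txt

# Quick checks that the usual culprits are actually up
echo ruok | nc localhost 2181       # ZooKeeper should answer "imok"
hdfs dfsadmin -report | head -n 5   # HDFS should report live datanodes
```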
01-20-2017 03:33 AM
@Sergey Soldatov Adding it in Ambari at the end of "Advanced zeppelin-env" -> "zeppelin_env_content" worked perfectly.
01-10-2017 07:45 PM · 1 Kudo
When executing Step 3 of the Ambari installation wizard, "Confirm Hosts", Ambari will (by default) SSH to each node and start an instance of the Ambari Agent process. In some cases, the local RPM database is corrupted and this registration process will fail. The error message in Ambari would look something like:

```
INFO:root:Executing parallel bootstrap
ERROR:root:ERROR: Bootstrap of host myhost.mydomain fails because previous action finished with non-zero exit code (1)
ERROR MESSAGE: tcgetattr: Invalid argument
Connection to myhost.mydomain closed.
STDOUT: Error: database disk image is malformed
Error: database disk image is malformed
Desired version (2.5.0.0) of ambari-agent package is not available.
tcgetattr: Invalid argument
Connection to myhost.mydomain closed.
```

In this case, the local RPM database is malformed, and all actions that alter the installed packages on the system will fail until the database is rebuilt. Run the following commands as root on the host reporting the error:

```
[root@myhost ~]# mv /var/lib/rpm/__db* /tmp
[root@myhost ~]# rpm --rebuilddb
```

Then click the "Retry Failed Hosts" button in Ambari and the registration should succeed.
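A quick sanity check before retrying, as a sketch (any successful `rpm` query confirms the database is readable again):

```
# The query should now complete without "database disk image is malformed" errors
rpm -qa | grep -i ambari
yum clean all   # optional: clear cached metadata before retrying the agent install
```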
09-24-2018 12:21 PM
How to connect to a remote EC2 HDP Phoenix DB from a local Spring Boot application?
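A minimal connectivity sketch for this scenario, assuming HDP defaults (ZooKeeper on port 2181, the /hbase-unsecure znode); "ec2-host" below is a placeholder, not a value from this thread:

```
# Verify the remote Phoenix/HBase instance is reachable before wiring up the application
/usr/hdp/current/phoenix-client/bin/sqlline.py ec2-host:2181:/hbase-unsecure

# A local JDBC client (e.g. a Spring Boot DataSource) would use a URL of the same shape:
#   jdbc:phoenix:ec2-host:2181:/hbase-unsecure
```

Note that the instance's security group typically has to expose the ZooKeeper and HBase RegionServer ports to the local machine for either of these to work.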
11-10-2016 06:25 PM
Nice writeup @wsalazar. I think you can simplify your classpath setup by including only /usr/hdp/current/phoenix-client/phoenix-client.jar and the XML configuration files (core-site.xml, hdfs-site.xml, hbase-site.xml). The phoenix-client.jar contains all of the classes necessary to connect to HBase using the Phoenix (thick) JDBC driver.
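A sketch of that simplified launch, assuming stock HDP config locations (MyPhoenixApp is a hypothetical main class):

```
# Only the thick-client jar plus the directories holding the XML configs are needed;
# /etc/hadoop/conf and /etc/hbase/conf are the usual HDP locations for those files
java -cp "/usr/hdp/current/phoenix-client/phoenix-client.jar:/etc/hadoop/conf:/etc/hbase/conf" \
  MyPhoenixApp
```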
10-17-2016 12:39 PM
@Josh Elser @srinivas padala The "Read/Write" stats on the processor have nothing to do with writing to your SQL endpoint. This particular stat is all about reads from and writes to the NiFi content repository. It helps identify where in your flow you may have high disk I/O, in the form of either reads or the more expensive writes.

From the screenshot above, I see that this processor brought in 35,655 FlowFiles off inbound connections in the past 5 minutes, and read 20.87 MB of content from the content repository in that same timeframe. The processor then output 0 FlowFiles to any outbound connection, which indicates all FlowFiles were routed to an auto-terminated relationship. Assuming only the "success" relationship was auto-terminated, all data was sent successfully. If the "failure" relationship (which should not be auto-terminated here) is routed to another processor, the 0 "out" indicates that in the past 5 minutes 0 files failed.

The "Tasks" stat shows the cumulative total CPU usage reported over the past 5 minutes; a high "Time" value indicates a CPU-intensive processor.

Thanks, Matt
09-11-2016 07:08 PM
Odd that saving it as a text file doesn't cause an error, but glad you got to the bottom of it. If you want/need any more context, I tried to capture some information on maxClientCnxns recently in https://community.hortonworks.com/articles/51191/understanding-apache-zookeeper-connection-rate-lim.html
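For reference, a quick way to see how close each client host is to the limit (a sketch; "zkhost" is a placeholder, and the `cons` four-letter command must be enabled on the server):

```
# Count open ZooKeeper connections per client IP; compare against maxClientCnxns (default 60)
echo cons | nc zkhost 2181 | awk -F'/' '{print $2}' | cut -d: -f1 | sort | uniq -c | sort -rn
```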
09-06-2016 06:05 PM
For HDP 2.3 (Apache HBase 1.1.2), ./hbase-common/src/main/java/org/apache/hadoop/hbase/HBaseConfiguration.java calls HeapMemorySizeUtil.checkForClusterFreeMemoryLimit(conf); there is no HBaseConfiguration.checkForClusterFreeMemoryLimit. Can you double-check your classpath to see which HBase-related jars are present? Please pastebin those jars. Thanks
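One way to check, as a sketch using the stock `hbase` launcher:

```
# List the hbase jars actually on the runtime classpath so their versions can be compared
hbase classpath | tr ':' '\n' | grep -i 'hbase.*\.jar' | sort -u
```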
02-13-2017 12:20 AM
I'm getting the same error after installation. I grepped /var/log/accumulo/tserver_hostname.log and found a report of:

```
ERROR: Exception while checking mount points, halting process
java.io.FileNotFoundException: /proc/mounts (Too many open files)
```

After looking at the open files, I discovered 136K open files for java and 106K for jsvc; given that I set a descriptor limit of 20K, I think this might be my problem:

```
$> lsof | awk '{print $1}' | sort | uniq -c | sort -n -k1
...
106000 jsvc
136000 java
```

I'm digging into this now too. This cluster has no jobs running, so I'm surprised to see so many open files...
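A follow-up sketch for narrowing this down; `<pid>` is a placeholder for the jsvc or java process ID:

```
# Break down one process's open files by type (REG, FIFO, IPv4, ...) to see what dominates
lsof -p <pid> | awk '{print $5}' | sort | uniq -c | sort -rn | head

# Compare against the limit the running process actually has, not just the shell's ulimit
grep 'open files' /proc/<pid>/limits
```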