Member since: 10-01-2015
Posts: 3933
Kudos Received: 1150
Solutions: 374
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3574 | 05-03-2017 05:13 PM |
| | 2945 | 05-02-2017 08:38 AM |
| | 3196 | 05-02-2017 08:13 AM |
| | 3158 | 04-10-2017 10:51 PM |
| | 1632 | 03-28-2017 02:27 AM |
07-16-2016
02:20 AM
1 Kudo
The install_maven.sh script is there for your convenience. You may also install Maven manually using the directions at https://maven.apache.org/install.html.
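A minimal sketch of a manual install, assuming a recent binary tarball (the 3.3.9 version below is an example, not a requirement):

```bash
# Manual Maven install sketch; the version is illustrative -- check
# https://maven.apache.org/install.html for the current release.
curl -LO https://archive.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
sudo tar xzf apache-maven-3.3.9-bin.tar.gz -C /opt
export PATH=/opt/apache-maven-3.3.9/bin:$PATH
mvn -version   # verify the install
```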
07-16-2016
02:08 AM
5 Kudos
Update: apparently, when you initiate a support case resolution capture for, say, the HBase service, it will pull HDFS NameNode logs in addition to the HBase logs. You may be faced with the same issue and may have to apply the approach below to overcome timeouts. In SmartSense 1.3.0 this will no longer be an issue; until then, this is a way to avoid capture timeouts.

First, let's discuss the difference between a capture for analysis and one for support case resolution. Analysis bundles do not collect service logs. For support cases, SmartSense fetches both configuration and logs, and depending on how much anonymization you want applied, large log files take a long time to collect. This is especially prominent with HDFS NameNode logs: they tend to be big, and that is exactly the scenario we're trying to address.

Start by increasing the agent timeout threshold in Ambari. In my case it was 30 minutes; feel free to raise it up to 2 hours on the Ambari SmartSense Operations page.

Next, we're going to exclude everything but the hadoop-hdfs-namenode-*.log logs, leaving the .out, .out.*, and .log.* files out of the collection. On the HST server host (HST is the internal service name for SmartSense), go to the /var/lib/smartsense/hst-agent/resources/scripts directory. Notice we're in the hst-agent directory, not hst-server: the collection scripts live on agent hosts, not on the HST server. Edit the hdfs-scripts.xml file and go to line 100 (it may be off by 10 lines or so depending on which version of SmartSense you're running; on 1.2.2 it is line 100). Change the following lines:

```
if [ `hostname -f` == "${MASTER}" ] && [ `echo "${SLAVES}" | grep -o ',' | wc -l` -gt 1 ] ; then
    find $LOG 2>/dev/null -type f -mtime -2 -iname '*' -exec cp '{}' ${outputdir} \;
    find $LOG 2>/dev/null -type f -mtime -2 -iname '*' -exec cp '{}' ${outputdir} \;
else
    for file in `find $LOG 2>/dev/null -type f -mtime -2 -iname '*' ;
                 find $LOG 2>/dev/null -type f -mtime -2 -iname '*' ; `
```

to:

```
if [ `hostname -f` == "${MASTER}" ] && [ `echo "${SLAVES}" | grep -o ',' | wc -l` -gt 1 ] ; then
    # find $LOG 2>/dev/null -type f -mtime -2 -iname '*' -exec cp '{}' ${outputdir} \;
    find $LOG 2>/dev/null -type f -mtime -2 -iname '*.log' -exec cp '{}' ${outputdir} \;
else
    for file in `find $LOG 2>/dev/null -type f -mtime -2 -iname '*.log' ;
                 find $LOG 2>/dev/null -type f -mtime -2 -iname '*.log' ; `
```

The difference is hard to see: we commented out the first find command, replaced '*' with '*.log' in the second find command, and repeated the same substitution in the for loop and in the last find command. In short, replace every occurrence of '*' with '*.log'.

As the last step, restart the SmartSense service and agents to propagate the change to every agent. Strictly speaking we only care about the NameNode hosts, but depending on your services and host components, I don't see why you couldn't restart all of them.

One other thing I'd like to point out is that the same /var/lib/smartsense/hst-agent/resources/scripts directory contains scripts for other services, so you can apply the same steps to any other service. Granted, this is a pretty narrow use case, but when you're investigating a high-severity issue and have no means of uploading logs besides going at it the hard way, this can be a good approach.

Finally, let's verify the change. Go to the SmartSense view and initiate a capture. When the capture is complete, go to the SmartSense server node and navigate to the local storage directory. There you will find your latest bundle; uncompress it and cd into the new directory. Inside, there will be another compressed file; uncompress that as well, then cd into the resulting directory and into its services directory. You will see the various services; we care about HDFS. Go inside it, then into the logs directory, where you will find only your *.log files.

I want to highlight that this is a hack; use it at your own risk and, at the very least, notify your support engineer of the approach. I'd like to thank @Paul Codding and @sheetal for showing me the inner workings of SmartSense. Your feedback is welcome.
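For reference, the verification walk-through above boils down to roughly the following shell sketch; the storage path and bundle names are placeholders, not actual values, so check your SmartSense configuration for the real local storage directory:

```bash
# Hedged sketch of the verification steps; <storage-dir> and the bundle
# names are placeholders -- substitute your actual SmartSense storage path.
cd <storage-dir>
tar xzf <latest-bundle>.tar.gz && cd <latest-bundle>
tar xzf <inner-archive>.tar.gz && cd <inner-archive>
cd services/HDFS/logs
ls   # only hadoop-hdfs-namenode-*.log files should remain after the edit
```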
07-16-2016
12:06 AM
Rita, please see my example https://github.com/dbist/oozie/tree/master/apps/hcatalog
07-15-2016
11:31 PM
Please restore a backup of the database to a new, larger partition and restart the Ambari server. Cleaning up the database directly concerns me.
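A minimal sketch of that sequence, assuming Ambari's default PostgreSQL backend (the database name, user, and paths below are illustrative, not prescriptive):

```bash
# Hedged sketch: back up the Ambari database, relocate it to a larger
# partition, restore, and restart. Names and paths are illustrative.
ambari-server stop
pg_dump -U ambari ambari > /mnt/bigger/ambari-backup.sql   # back up first
# Move the PostgreSQL data directory to the larger partition (e.g. by
# re-initializing it there and updating the PostgreSQL configuration),
# then restore the dump and bring Ambari back up:
psql -U ambari -d ambari -f /mnt/bigger/ambari-backup.sql
ambari-server start
```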
07-15-2016
12:58 PM
Glad that worked. Please accept the answer.
07-12-2016
09:04 AM
Please see these examples: https://dzone.com/articles/using-libjars-option-hadoop and http://stackoverflow.com/questions/6890087/problem-with-libjars-in-hadoop
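A minimal sketch of the -libjars usage those links describe; the jar and class names are placeholders for your own job:

```bash
# The driver JVM needs the library on its own classpath; -libjars only
# ships the jar to the map/reduce tasks.
export HADOOP_CLASSPATH=/path/to/mylib.jar
hadoop jar myjob.jar com.example.MyJob -libjars /path/to/mylib.jar input output
```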
07-12-2016
08:54 AM
See https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.4/bk_dataintegration/content/ch_running-pig-tez.html. You forgot the semicolon at the end; it should be: SET exectype=tez;
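Alternatively, you can select the Tez engine when launching Pig from the shell (the script name below is a placeholder):

```bash
# Start Pig with the Tez execution engine instead of setting it in-script.
pig -x tez myscript.pig
```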
07-11-2016
02:09 AM
It is up on Maven Central; try to confirm each of the dependencies manually:

```xml
<!-- https://mvnrepository.com/artifact/org.apache.calcite/calcite-avatica -->
<dependency>
    <groupId>org.apache.calcite</groupId>
    <artifactId>calcite-avatica</artifactId>
    <version>0.9.2-incubating</version>
</dependency>
```
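One quick way to confirm the artifact actually resolves from Maven Central is to fetch it into your local repository from the command line:

```bash
# Resolve the exact coordinates directly; a failure here means the
# artifact or version cannot be found in your configured repositories.
mvn dependency:get -Dartifact=org.apache.calcite:calcite-avatica:0.9.2-incubating
```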
07-11-2016
01:59 AM
Was pre-prod using the same mirror? Take a look at the Ambari database to see if anything is still stuck on the old Ambari version.
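For example, a hedged sketch of one such check, assuming the default PostgreSQL backend (table layout can vary across Ambari releases, so treat this as illustrative):

```bash
# Query the version Ambari has recorded about itself; adjust the user
# and database names to match your setup.
psql -U ambari -d ambari -c "SELECT * FROM metainfo WHERE metainfo_key = 'version';"
```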
07-10-2016
12:38 PM
Orlando, you need to make sure the agent is running before you execute the command. You won't see this script on the filesystem, but once the agent is up, you can execute the command.
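A quick way to confirm an agent process is up before retrying (the process name pattern is an assumption and may differ by version):

```bash
# Look for a running agent process; the bracket trick keeps grep from
# matching its own command line.
ps -ef | grep -i '[a]gent'
```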