Member since: 09-25-2015
Posts: 17
Kudos Received: 20
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 6608 | 11-09-2015 04:29 PM
| 2513 | 09-30-2015 02:59 PM
02-03-2016
01:55 AM
2 Kudos
Ambari has a handy wizard for helping move a NameNode from one machine to another. In the event something were to go wrong with the move in, say, a NameNode HA environment, are there any recommendations on how to recover and restore the NameNode setup back to its original server? Would the Move NameNode wizard be the best approach to putting the server back, or would the Ambari API be a better approach? A rough sketch of the API route is below for reference.
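If the API route is viable, I'd expect it to look roughly like the component delete/add calls the wizard drives under the covers. A hedged outline, not a tested runbook; cluster, host, and credential values are placeholders, and the wizard also handles the config updates (e.g. dfs.namenode.* addresses), which this outline does not:

```bash
AMBARI=http://ambari.example.com:8080
AUTH="admin:admin"

# 1) Remove the NAMENODE component from the host it was mistakenly moved to:
curl -u "$AUTH" -H 'X-Requested-By: ambari' -X DELETE \
  "$AMBARI/api/v1/clusters/mycluster/hosts/newhost/host_components/NAMENODE"

# 2) Re-add the component to the original host:
curl -u "$AUTH" -H 'X-Requested-By: ambari' -X POST \
  "$AMBARI/api/v1/clusters/mycluster/hosts/orighost/host_components/NAMENODE"

# 3) Install it there (start it afterwards via the same endpoint with "STARTED"):
curl -u "$AUTH" -H 'X-Requested-By: ambari' -X PUT \
  -d '{"HostRoles": {"state": "INSTALLED"}}' \
  "$AMBARI/api/v1/clusters/mycluster/hosts/orighost/host_components/NAMENODE"
```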
Labels:
- Apache Ambari
- Apache Hadoop
11-24-2015
03:30 PM
3 Kudos
I'm looking into the possibility of performing an 'online' backup of HDFS metadata without having to take down HDFS or the NameNodes, and wanted to find out whether the following plan is doable. General assumptions:
- Regardless of the solution, we'll never have a full, up-to-date continuous backup of the namespace – we'll always lose some of the most recent data. It's not an OLTP system; most of the data can be easily recreated (re-run ETL or processing jobs).
- Normal NN failures are handled by the Standby NN. The goal here is to have a procedure in place for the very unlikely case where both master nodes fail.
- If both NNs fail, the NN service can be started up with the most recent image of the namespace we have.
My understanding of how the NameNodes maintain the namespace, in short, is:
- The Standby NN keeps a namespace image in memory based on the edits available in the storage ensemble on the JournalNodes.
- Based on preconditions (number of transactions or elapsed time), the Standby NN makes a namespace checkpoint and saves an "fsimage_*" file to disk.
- The Standby NN transfers the fsimage to the primary NN over HTTP.
Both NNs write fsimages to disk in the following sequence:
- The NN writes the namespace to a file "fsimage.ckpt_*" on disk.
- The NN creates an "fsimage_*.md5" file.
- The NN moves "fsimage.ckpt_*" to "fsimage_*".
The above means that:
- The most recent namespace image on disk in an "fsimage_*" file is on the Standby NN.
- Any "fsimage_*" file on disk is finalized and won't receive more updates.
Based on the above, a proposed, simple procedure that won't affect the availability of the NNs is as follows:
- Make sure the Standby NN checkpoints the namespace to "fsimage_*" once per hour.
- Back up the most recent "fsimage_*" and "fsimage_*.md5" from the Standby NN periodically. We can try to keep the latest version of the file on another machine in the cluster.
Are there any issues or potential pitfalls with this approach that anyone can see?
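For concreteness, here's a minimal sketch of the backup step I have in mind. The name directory path and the backup destination are placeholders; it assumes it runs on the Standby NN:

```bash
#!/bin/bash
# Minimal sketch: copy the newest finalized fsimage (and its .md5) from the
# Standby NN to another machine. Directory and destination are placeholders.
NAME_DIR=/hadoop/hdfs/namenode/current      # i.e. dfs.namenode.name.dir/current
BACKUP_DEST=backup-node.example.com:/backup/hdfs-meta

cd "$NAME_DIR" || exit 1

# Newest finalized image; "fsimage_*" files never change after the rename.
latest=$(ls -t fsimage_* | grep -v '\.md5$' | head -1)

# Verify the checksum before shipping the copy off-host.
md5sum -c "${latest}.md5" || exit 1

scp "$latest" "${latest}.md5" "$BACKUP_DEST/"
```

As an alternative to reading the directory straight off disk, `hdfs dfsadmin -fetchImage <local-dir>` can download the most recent fsimage from the NameNode over HTTP.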
Labels:
- Apache Hadoop
11-09-2015
04:29 PM
@Alex Miller I was able to reproduce, and personally had luck with screen, as well as with placing the "&" inside my test script itself at the end of the sqoop command, rather than trying to background the script at invocation time (i.e. ./sqoop.sh &). The /dev/null redirect was also successful for me with Accumulo in place; both patterns are sketched below. The customer had apparently gone ahead and removed the Accumulo bits before they had a chance to test my suggestions, since they weren't using Accumulo anyway. So I really think there isn't a bug and we are hitting some bash-isms here more than anything else. Thanks, all, for the tips.
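The two patterns that worked for me, sketched; the connection values are placeholders:

```bash
#!/bin/bash
connection_string="jdbc:sqlserver://dbhost.example.com:1433;database=mydb"
user_name="myuser"
db_password="mypassword"

# Pattern 1: background the sqoop command inside the script itself (note the
# trailing "&"), then wait, instead of backgrounding the whole script:
sqoop list-databases --connect "$connection_string" --username "$user_name" --password "$db_password" &
wait

# Pattern 2, run from the shell instead: detach the script from the terminal:
#   ./sqoop.sh &> /dev/null &
```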
11-06-2015
10:12 PM
@Artem Ervits Turns out that the factor was having the Accumulo client installed on the machine alongside sqoop. With the Accumulo client in the mix, the sqoop script, if invoked to run in the background, would go into a Stopped state and could only resume if the script were foregrounded using the "fg" command. Uninstalling the Accumulo client was what ultimately worked around / fixed the issue. Not sure if this is a bug or due to the fact that sqoop itself is a bash script that calls another script that sources the configure-sqoop script. Thanks for your help.
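For anyone who hits the same symptom and can't remove the Accumulo client: my working theory (an assumption, not confirmed) is standard shell job control – a backgrounded job that tries to read from the terminal receives SIGTTIN and is stopped until foregrounded. If something sourced by configure-sqoop reads stdin, detaching stdin should sidestep it:

```bash
# If the Stopped state is SIGTTIN (a background job reading the tty),
# redirecting stdin away from the terminal should avoid the stop:
./sample_sqoop.sh < /dev/null &

# Check the job's process state; STAT "T" means stopped:
ps -o pid,stat,cmd -p $!
```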
11-06-2015
12:58 AM
@Artem Ervits The reason behind the backgrounding is that there are quite a few tables with 2+ million records, and they would like to start a sqoop job in the background after hours (there has to be a better way to do this, in my opinion).

Foregrounded:

-bash-4.1$ ./sample_sqoop.sh
whoami
sboddu
hostname
node2.example.com
2015-11-05 19:15:03,027 INFO - [main:] ~ Running Sqoop version: 1.4.6.2.3.0.0-2557 (Sqoop:92)
2015-11-05 19:15:03,043 WARN - [main:] ~ Setting your password on the command-line is insecure. Consider using -P instead. (BaseSqoopTool:1021)
2015-11-05 19:15:03,248 INFO - [main:] ~ Using default fetchSize of 1000 (SqlManager:98)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.0.0-2557/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.0.0-2557/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.0.0-2557/accumulo/lib/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
master
tempdb
model
msdb
ReportServer
Idistrict_Distributer
idistrict
iDistrict_Audit
iDistrict_SlimDB
iDistrict_Reports
ReportServerTempDB
Idistrict_Replication
iDistrict_Attachment
FTL

Backgrounded:

-bash-4.1$ ./sample_sqoop.sh &
[2] 33320
-bash-4.1$ whoami
sboddu
hostname
node2.example.com
[2]+ Stopped ./sample_sqoop.sh
11-03-2015
09:41 PM
@Neeraj - No other jobs are waiting to finish, and we can run this pretty much at will in the foreground without things seemingly getting stuck.
11-03-2015
09:12 PM
Considering the following bash script for Sqoop:

#!/bin/sh
connection_string="jdbc:sqlserver://remoteserver.somehwere.location-asp.com:1433;database=idistrict"
user_name="OPSUser"
db_password="OPSReader"
sqoop_cmd="list-databases"
sqoop $sqoop_cmd --connect $connection_string --username $user_name --password $db_password

We can run this just fine in the foreground, i.e.:

./sqoop_test.sh

But when running it in the background like so:

./sqoop_test.sh &

the script appears to 'hang' when kicking off the actual sqoop command, i.e. nothing happens at all. Using -x on the #!/bin/sh line shows that we end up at the last line of the script and then nothing. We have tried all kinds of iterations of different commands, like:

nohup bash sqoop.sh > results.txt 2>&1 &
./sqoop.sh &> /dev/null &

and switched to #!/bin/bash. Any ideas? The odd thing is that the same exact script works fine both foregrounded and backgrounded on a different cluster. /etc/profile and .bash_profile don't look to have any major differences.
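One way to narrow down whether the backgrounded script is actually stopped (as opposed to genuinely hung) is to inspect its process state; a sketch:

```bash
./sqoop_test.sh &
pid=$!
sleep 5
# STAT "T" would indicate the job was stopped by a signal (e.g. SIGTTIN);
# "S"/"R" would suggest it is actually sleeping or running.
ps -o pid,stat,wchan,cmd -p "$pid"
```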
Labels:
- Apache Sqoop
10-28-2015
09:03 PM
I know the HDP-2.3 security guide is in progress, but I had two questions on enabling SSL for HiveServer2. In this use case there are no plans to deploy Kerberos or Knox at this time, but the desire is to have some sort of encrypted traffic for HiveServer2: 1) Will these docs suffice?: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.5/bk_Security_Guide/content/ch_wire-hiveserver2.html 2) Does HiveServer2 SSL work only in http mode, or can it be turned on for binary mode too?
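For reference, my rough understanding of what a binary-mode SSL setup would involve; the property names are as I understand them from the Hive docs, and the hosts, paths, and passwords below are placeholders:

```bash
# hive-site.xml settings (shown as comments; values are placeholders):
#   hive.server2.use.SSL           = true
#   hive.server2.keystore.path     = /etc/hive/conf/hiveserver2.jks
#   hive.server2.keystore.password = changeit

# Connecting over SSL with beeline while HiveServer2 is in binary transport mode:
beeline -u "jdbc:hive2://hs2-host.example.com:10000/default;ssl=true;sslTrustStore=/etc/pki/truststore.jks;trustStorePassword=changeit"
```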
Labels:
- Apache Hive
10-08-2015
10:56 PM
1 Kudo
Re: a security audit on the following items for the Knox and Ambari web servers. A report snippet is below. Do we have a way of disabling these things for the given components?

Issue types that this task fixes:
- Browser Exploit Against SSL/TLS (a.k.a. BEAST): remove support for SSLv3/TLS 1.0 cipher suites with CBC. For more information, see: http://disablessl3.com/
- RC4 cipher suites were detected: adapt your server so that it supports the following cipher suites ([1]):

ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:\
ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:\
ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:\
ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:\
ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:\
DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:\
DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:\
AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:\
!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:\
!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA

[1] https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility
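In case it helps frame the question, this is where I'd expect such restrictions to be applied; the property names are my assumption from the Ambari and Knox docs and are worth verifying for the versions in use:

```bash
# Ambari server: append to ambari.properties, then restart ambari-server.
# Property names/format are assumptions to verify against the release docs.
cat >> /etc/ambari-server/conf/ambari.properties <<'EOF'
security.server.disabled.protocols=SSLv2|SSLv3
security.server.disabled.ciphers=SSL_RSA_WITH_RC4_128_MD5|SSL_RSA_WITH_RC4_128_SHA
EOF

# Knox: gateway-site.xml (shown as a comment; the cipher list is illustrative):
#   <property>
#     <name>ssl.exclude.ciphers</name>
#     <value>SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA</value>
#   </property>
```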
Labels:
- Apache Ambari
- Apache Knox
10-08-2015
10:50 PM
A customer asked if the stats and information provided out of the box in Ambari Metrics cover everything needed to represent cluster health and performance at a process level (for example, Hive/HBase/HDFS, etc.). I figured the stock metrics we display out of the box are 'good enough', and I know that Ambari Metrics will allow us to add even more custom metrics for a given service if required. Does anyone have recommendations or suggestions on any additional metrics that they've found useful or helpful for customers? If there are, I figured Ambari Metrics would be the place to add those metrics rather than building out some sort of external dashboard. Thanks.
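For example, a custom datapoint can be pushed straight to the Metrics Collector's timeline endpoint; a sketch, where the collector host/port and the metric/app names are placeholders:

```bash
# Post one datapoint for a hypothetical custom metric to the AMS collector.
now=$(date +%s%3N)   # epoch milliseconds (GNU date)
curl -s -H "Content-Type: application/json" -X POST \
  "http://ams-collector.example.com:6188/ws/v1/timeline/metrics" \
  -d "{
        \"metrics\": [{
          \"metricname\": \"custom.hive.query_latency_ms\",
          \"appid\": \"customapp\",
          \"hostname\": \"node1.example.com\",
          \"starttime\": ${now},
          \"metrics\": { \"${now}\": 123.4 }
        }]
      }"
```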
Labels:
- Apache Ambari