Member since: 09-25-2015
Posts: 17
Kudos Received: 20
Solutions: 2

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 6663 | 11-09-2015 04:29 PM
 | 2523 | 09-30-2015 02:59 PM
02-05-2016 10:36 PM
2 Kudos
As per a Support note: "You can use the Move NameNode wizard in Ambari.
This will move the NameNode but only according to Ambari. After
this has been successfully completed (with the NameNode down) then you
should move all the files in the old namenode edits directory
(dfs.namenode.name.dir) to the new NameNode in the directory configured.
The permissions of these files will be hdfs:hadoop (by default) but the
owner should be the user who runs your NameNode & the group will be
the hadoop primary group. After this is done, then the NameNode is ready to start." The most important thing is to ensure that you have a backup of all the images and edits in dfs.namenode.name.dir. Then, if anything happens, you can revert to that backup.
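For example, a minimal backup sketch, run while the NameNode is stopped (the /hadoop/hdfs/namenode path is illustrative; use the actual dfs.namenode.name.dir value from your hdfs-site.xml):

tar czf /tmp/namenode-meta-backup-$(date +%F).tar.gz -C /hadoop/hdfs/namenode .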
04-24-2017 04:44 PM
Hi @Kent Baxley, it looks like the doc is missing a plan for backing up the VERSION file.
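For example, a one-liner along these lines could cover it (path illustrative; VERSION lives under the current/ subdirectory of dfs.namenode.name.dir):

cp /hadoop/hdfs/namenode/current/VERSION /tmp/VERSION.bak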
11-01-2017 03:37 AM
I too had this problem. My bash script worked fine in my DEV environment both as a background job and as a foreground job. However, in my TEST environment the job would only run as a foreground job: when run under nohup, it seemed to stop at the point where my Sqoop step was called. Ultimately I came across this thread, which pointed me in the right direction. Essentially, you can emulate nohup by "daemonizing" your script:
setsid ./sqoop.sh </dev/null &>myLog.out &
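Here setsid starts the script in a new session with no controlling terminal, </dev/null detaches stdin, &>myLog.out redirects both stdout and stderr to a log file, and the trailing & backgrounds the job, which together give the same survive-the-logout behavior as nohup.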
11-16-2015 03:12 PM
@Deepesh Thank you! I'm creating a Jira to make sure this info gets incorporated into the new Security guide.
10-13-2015 03:12 PM
3 Kudos
For Knox, SSLv3 is disabled by default, and this can be further configured through the ssl.exclude.protocols parameter in gateway-site.xml to exclude additional protocols (or set to "none" to exclude nothing). This can be done directly in the file or from within Ambari. Knox does not have a configurable means to disable specific algorithms; however, you can use the Java JSSE networking properties to do this. In fact, this will apply to all applications running in that particular JVM, which is better than having to track it down for each application. You should be able to find the relevant section in $JRE_HOME/lib/security/java.security:

# Algorithm restrictions for Secure Socket Layer/Transport Layer Security
# (SSL/TLS) processing
#
# In some environments, certain algorithms or key lengths may be undesirable
# when using SSL/TLS. This section describes the mechanism for disabling
# algorithms during SSL/TLS security parameters negotiation, including cipher
# suites selection, peer authentication and key exchange mechanisms.
#
# For PKI-based peer authentication and key exchange mechanisms, this list
# of disabled algorithms will also be checked during certification path
# building and validation, including algorithms used in certificates, as
# well as revocation information such as CRLs and signed OCSP Responses.
# This is in addition to the jdk.certpath.disabledAlgorithms property above.
#
# See the specification of "jdk.certpath.disabledAlgorithms" for the
# syntax of the disabled algorithm string.
#
# Note: This property is currently used by Oracle's JSSE implementation.
# It is not guaranteed to be examined and used by other implementations.
#
# Example:
#   jdk.tls.disabledAlgorithms=MD5, SHA1, DSA, RSA keySize < 2048
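For reference, the gateway-site.xml entry might look like the following sketch (the protocol list here is illustrative; SSLv3 is the default exclusion):

<property>
    <name>ssl.exclude.protocols</name>
    <value>SSLv3,TLSv1</value>
</property>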
10-10-2015 02:03 PM
Kent, there are numerous metrics collected in AMS beyond what's exposed at the top level. There is also an option to add derived metric charts (e.g. sum, avg, math expressions). For example, try HBase -> Add widget (the big plus sign in the UI) and play with the available metrics in the drop-down/search.
05-25-2016 10:40 AM
I haven't seen a full document that covers sanity checking an entire cluster; this is often performed by the PS team during customer engagements. Side note: the most important individual component test I use to smoke test a cluster is Hive-TestBench.
01-27-2017 06:31 PM
@dstreever, I am using the oozie.wf.workflow.notification.url property but I am facing an issue. My URL is: https://myknoxurl:8443/gateway/default/myproject/services/job/?user.name=tejwinder_singh I am using Knox to connect to a remote Tomcat (which will capture Oozie's statuses). How do I add my password to the URL? Or is there a better solution to do this via Knox?
07-06-2016 09:33 AM
I don't understand how to configure HA mode when using HdfsBolt. Can you give me some information on how to load hdfs-site.xml or core-site.xml, or some other way to make the bolt understand HA mode?

HdfsBolt bolt = new HdfsBolt()
        .withFsUrl("hdfs://ha-cluster")
        .withFileNameFormat(fileNameFormat)
        .withRecordFormat(format)
        .withRotationPolicy(rotationPolicy)
        .withSyncPolicy(syncPolicy);

When I use .withFsUrl("hdfs://pn2:9000") my topology works, but when I use .withFsUrl("hdfs://ha-cluster") it does not, failing with:

java.lang.RuntimeException: Error preparing HdfsBolt: java.net.UnknownHostException: ha-cluster
at org.apache.storm.hdfs.bolt.AbstractHdfsBolt.prepare(AbstractHdfsBolt.java:109) ~[stormjar.jar:na]
at backtype.storm.daemon.executor$fn__3439$fn__3451.invoke(executor.clj:699) ~[storm-core-0.9.6.jar:0.9.6]
at backtype.storm.util$async_loop$fn__460.invoke(util.clj:461) ~[storm-core-0.9.6.jar:0.9.6]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: ha-cluster
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378) ~[stormjar.jar:na]
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310) ~[stormjar.jar:na]
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176) ~[stormjar.jar:na]
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:678) ~[stormjar.jar:na]
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619) ~[stormjar.jar:na]
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149) ~[stormjar.jar:na]
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653) ~[stormjar.jar:na]
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92) ~[stormjar.jar:na]
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687) ~[stormjar.jar:na]
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669) ~[stormjar.jar:na]
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371) ~[stormjar.jar:na]
at org.apache.storm.hdfs.bolt.HdfsBolt.doPrepare(HdfsBolt.java:86) ~[stormjar.jar:na]
at org.apache.storm.hdfs.bolt.AbstractHdfsBolt.prepare(AbstractHdfsBolt.java:105) ~[stormjar.jar:na]
... 4 common frames omitted
09-28-2015 02:23 PM
1 Kudo
PROBLEM: The dfs.namenode.accesstime.precision value cannot be set from Ambari 2.1.0 when using the HDP-2.2 stack. Symptoms include:
- Searching for this config using the 'Filter...' box in Ambari shows 'No properties to display.'
- Adding the property to 'Custom hdfs-site' (in the 'Add Property' popup) tells the user that 'This property is already defined', and clicking the 'Find property' link again says 'No properties to display.'
- The property is visible on all cluster nodes in /etc/hadoop/conf/hdfs-site.xml, but it is set to 0.
- Editing this property manually on each node does not stick; the setting reverts to 0 whenever HDFS is restarted.
ROOT CAUSE: Known bug:
https://issues.apache.org/jira/browse/AMBARI-13006
https://hortonworks.jira.com/browse/BUG-43478
Ambari 2.1.0 added the ability to manage HDFS NFS Gateway settings, but this only works for HDP stacks running at version 2.3.0 and higher.
RESOLUTION: A fix will be included in Ambari 2.1.2.
WORKAROUND: Ambari's configs.sh script will allow users to adjust the dfs.namenode.accesstime.precision property. The syntax is below:

/var/lib/ambari-server/resources/scripts/configs.sh -u AMBARI_USER -p AMBARI_PASS set AMBARI_HOST CLUSTER_NAME hdfs-site dfs.namenode.accesstime.precision "<VALUE>"

Substitute AMBARI_USER, AMBARI_PASS, AMBARI_HOST, CLUSTER_NAME and VALUE with values appropriate for the environment in use. For example:

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set ambarinode.example.com examplecluster hdfs-site dfs.namenode.accesstime.precision "360000"
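To verify the change took effect, the same script can read the configuration back with the get action, which dumps the current hdfs-site properties:

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get ambarinode.example.com examplecluster hdfs-site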
KEYWORDS / TAGS:
ambari dfs.namenode.accesstime.precision configs.sh