Member since: 09-29-2015
Posts: 286
Kudos Received: 601
Solutions: 60
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 12684 | 03-21-2017 07:34 PM |
 | 3663 | 11-16-2016 04:18 AM |
 | 2084 | 10-18-2016 03:57 PM |
 | 4983 | 09-12-2016 03:36 PM |
 | 8178 | 08-25-2016 09:01 PM |
02-10-2016
12:46 AM
1 Kudo
I found the documentation on how to do this without downtime: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#DataNode_Hot_Swap_Drive The only challenge I encountered was the port in the command: it comes from the dfs.datanode.ipc.address parameter in hdfs-site.xml. My full command looked like this:
su - hdfs -c "hdfs dfsadmin -reconfig datanode sandbox.hortonworks.com:8010 start"
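As a follow-up, a minimal sketch of how to check on the reconfiguration afterwards (the host:port is the sandbox value from above; substitute your own datanode's dfs.datanode.ipc.address):
# Report whether the datanode reconfiguration has finished
su - hdfs -c "hdfs dfsadmin -reconfig datanode sandbox.hortonworks.com:8010 status"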
10-27-2015
05:38 AM
Refer to the following documentation for Host Config Groups: http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.2.0/bk_Ambari_Users_Guide/content/_using_host_config_groups.html
10-23-2015
12:58 AM
1 Kudo
You have generated a key and self-signed certificate: openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout test.key -out test.pem
After deploying the key, you try to ssh into the instance but get prompted for a password: ssh -vvv -i test.pem <user>@<host>
This is an issue with an updated openssl version:
> openssl version
OpenSSL 1.0.1k 8 Jan 2015
The newer openssl does not write the key with "RSA" in the BEGIN/END markers, so you have to convert the key file back to the traditional format with a separate command: openssl rsa -in test.key -out test_new.key
Once that is done, use the new file for ssh: ssh -vv -i test_new.key <user>@<host>
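A quick sanity check, as a minimal sketch assuming standard shell tooling (the file name matches the conversion command above):
# The converted key should now start with the traditional RSA header
head -1 test_new.key
# Expected: -----BEGIN RSA PRIVATE KEY-----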
03-10-2016
05:34 PM
@Sean Roberts Good idea. Let me convert this into an article.
05-07-2017
04:04 PM
I faced the same problem when trying to run a Hive query with either hive.execution.engine=mr or hive.execution.engine=tez. The error looks like:
Vertex failed, vertexName=Map 1, vertexId=vertex_1494168504267_0002_2_00, diagnostics=[Task failed, taskId=task_1494168504267_0002_2_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1494168504267_0002_2_00_000000_0:org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/attempt_1494168504267_0002_2_00_000000_0_10002_0/file.out
at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at org.apache.tez.runtime.library.common.task.local.output.TezTaskOutputFiles.getSpillFileForWrite(TezTaskOutputFiles.java:207)
at org.apache.tez.runtime.library.common.sort.impl.PipelinedSorter.spill(PipelinedSorter.java:544)
The problem was solved by setting the following parameters:
In hadoop/conf/core-site.xml: hadoop.tmp.dir
In hadoop/conf/tez-site.xml: tez.runtime.framework.local.dirs
In hadoop/conf/yarn-site.xml: yarn.nodemanager.local-dirs
In hadoop/conf/mapred-site.xml: mapreduce.cluster.local.dir
Point each at a valid directory with sufficient free space and the query will execute (see the sketch below).
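A minimal sketch of the directory side of that fix on one node (the /data01 path is only an assumption; use whatever local disk has room, then reference it from the four parameters above in their respective *-site.xml files):
# Create a local directory with enough free space and confirm the capacity
mkdir -p /data01/hadoop/local
df -h /data01/hadoop/local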
09-05-2016
02:39 PM
1 Kudo
As of HDP 2.5, SafeNet Luna is supported: https://cwiki.apache.org/confluence/display/RANGER/Ranger+KMS+Luna+HSM+Support
12-14-2018
02:43 PM
Hi @Kishore Jannu, it's better to create a new thread for this one; the original question was about a very old Ambari version. When you raise the new thread, please post the exception in code format, for example: I am code format
11-04-2015
06:12 PM
Sounds right. @rvenkatesh@hortonworks.com @bdurai@hortonworks.com can you confirm?
01-07-2019
12:01 PM
Hi @Ancil McBarnett, can you please point us to the right documentation for the Ranger KMS High Availability setup?
02-01-2018
11:18 PM
Never mind! I found a method to pull the HiveServer2 out of the ZooKeeper ensemble by deleting its znode using the ZooKeeper client. Here's the info from MapR's website.
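For reference, a minimal sketch of what that looks like on HDP (assumptions: the default /hiveserver2 registration namespace, an illustrative ZooKeeper host and znode name; list the registered instances first and delete only the stale one):
# Connect with the ZooKeeper client shipped with HDP (host:port is your ZK quorum)
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server zk-host:2181
# Inside the client shell: list the registered HiveServer2 instances, then delete the stale znode
ls /hiveserver2
delete /hiveserver2/serverUri=hs2-host:10000;version=1.2.1000;sequence=0000000005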