Member since: 09-25-2015
Posts: 109
Kudos Received: 36
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2879 | 04-03-2018 09:08 PM
 | 4060 | 03-14-2018 04:01 PM
 | 11274 | 03-14-2018 03:22 PM
 | 3237 | 10-30-2017 04:29 PM
 | 1643 | 10-17-2017 04:49 PM
06-07-2016
09:06 PM
For misconfigurations like the cases above, you will find an INFO-level log message like the one below: "The configured checkpoint interval is 0 minutes. Using an interval of XX (e.g., 60) minutes that is used for deletion instead"
04-06-2016
11:04 PM
1 Kudo
Steps to build a release version of a Hortonworks component.

Component details in question:
Component Name = Hadoop
Release Version = 2.3.4.0

Tools needed:
Maven = 3.1.1
Java = 1.7+

Steps:
1. Ensure Java and Maven are on the PATH.
2. Get the code to your local Linux machine, say into a workspace:
$ git clone git@github.com:hortonworks/hadoop-release.git -b HDP-2.3.4.0-tag
Hortonworks maintains <component>-release tags for all the public releases.
3. Point Maven's settings.xml to http://repo.hortonworks.com/content/groups/public/
4. Go to the synced workspace and issue the command below:
$ mvn clean install -DskipTests -DskipITs

Cheers, -Vijay.
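Step 3 above refers to the Maven settings file. As a minimal sketch, assuming a profile-based setup (the "hwx" profile and repository ids here are illustrative names, not required ones), the relevant settings.xml fragment could look like this:

<settings>
  <profiles>
    <profile>
      <id>hwx</id>
      <repositories>
        <repository>
          <!-- Hortonworks public repository from step 3 -->
          <id>hortonworks-public</id>
          <url>http://repo.hortonworks.com/content/groups/public/</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>hwx</activeProfile>
  </activeProfiles>
</settings>

With this in ~/.m2/settings.xml, Maven can resolve the HDP-specific artifacts that the release build depends on.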
03-04-2016
12:03 AM
3 Kudos
@Artem Ervits is right. There is no purge of snapshots, by design. Snapshots are used for referring to data at a later time. Taking and keeping a snapshot are explicit decisions that depend only on operations. HBase will never delete a snapshot without an explicit command, as sketched below.
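For reference, a minimal sketch of the explicit snapshot lifecycle from the HBase shell ('mytable' and 'mysnap' are placeholder names):

# create a snapshot of a table
hbase> snapshot 'mytable', 'mysnap'
# list existing snapshots
hbase> list_snapshots
# a snapshot persists until this explicit command is issued
hbase> delete_snapshot 'mysnap'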
04-05-2016
02:01 AM
This can be solved by prefixing the properties with "oozie.launcher."; Oozie strips this prefix and passes the remaining properties through to the launcher job's configuration.
<configuration>
<property>
<name>oozie.launcher.dfs.nameservices</name>
<value>${nameService1},${nameService2}</value>
</property>
<property>
<name>oozie.launcher.dfs.ha.namenodes.${nameService2}</name>
<value>${nn21},${nn22}</value>
</property>
<property>
<name>oozie.launcher.dfs.namenode.rpc-address.${nameService2}.${nn21}</name>
<value>${nn21_fqdn}:8020</value>
</property>
<property>
<name>oozie.launcher.dfs.namenode.rpc-address.${nameService2}.${nn22}</name>
<value>${nn22_fqdn}:8020</value>
</property>
<property>
<name>oozie.launcher.dfs.client.failover.proxy.provider.${nameService2}</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>oozie.launcher.mapreduce.job.hdfs-servers</name>
<value>${defaultFS1},${defaultFS2}</value>
</property>
</configuration>
06-05-2017
04:24 PM
Do we have a way to generate the DDLs for Hive databases?
11-04-2015
05:06 PM
1 Kudo
I don't see a supported, permanent way of doing this in Oozie. If you want to make a temporary change, you can configure Tomcat to listen on a specific IP address. I made a change to /usr/hdp/2.3.0.0-2557/etc/oozie/tomcat-deployment.http/conf/server.xml, adding address="#.#.#.#" to the <Connector /> element, and that worked. See the source documentation for the Connector attributes.
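As a sketch of that change, assuming Oozie's default HTTP port of 11000 and keeping the bind address as a placeholder, the modified element could look like:

<Connector port="11000" address="#.#.#.#"
           protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443" />

The address attribute restricts the connector to a single local IP; without it, Tomcat binds to all addresses.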
10-14-2015
06:27 PM
The ODBC Driver v2.0.5 (released with HDP 2.3) is tested and compatible with Windows 2012, and it is also backward compatible with HDP 2.2. The driver supports Hive 0.11, 0.12, 0.13, 0.14, 1.0, 1.1, and 1.2. http://hortonworks.com/products/releases/hdp-2-3/#add_ons
10-09-2015
09:14 AM
Looking at the source, it's almost possible to do, but the checks are hidden in some private/protected code. It's something that could be made public; why not file a JIRA on the Apache server?
08-02-2016
06:59 PM
I found this thread while looking to create two HiveServer2 instances on the same host with different authentication. I can create them manually, but there will be maintainability issues. Is there a trick to do this through Ambari 2.2.1?
10-29-2015
05:08 PM
1 Kudo
Does this question refer to Hadoop Service Level Authorization? http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/ServiceLevelAuth.html

If so, then there is no need to restart the NameNode for changes in service-level ACLs to take effect. Instead, an admin can run this command:

hdfs dfsadmin -refreshServiceAcl

More documentation on this command is available here: http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#dfsadmin

There is similar functionality for YARN too: http://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YarnCommands.html#rmadmin

Another way to manage this is to declare a single "hadoopaccess" group for use in the service-level ACL definitions, as sketched below. Whenever a new set of users needs access, they would be added to this group. This shifts the management effort to an AD/LDAP administrator. Different IT shops would likely make a different trade-off between managing access that way or managing it in the service-level authorization policy files. Both approaches are valid, and it depends on the operator's preference.
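A minimal sketch of that approach in hadoop-policy.xml, assuming the "hadoopaccess" group convention suggested above (the ACL value format is "users groups", so a leading space means no individual users, groups only):

<property>
  <name>security.client.protocol.acl</name>
  <!-- no individual users; access is granted only via the hadoopaccess group -->
  <value> hadoopaccess</value>
</property>

After editing the file, the hdfs dfsadmin -refreshServiceAcl command above applies the change without a restart.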