Member since: 09-23-2015
Posts: 70
Kudos Received: 87
Solutions: 7
My Accepted Solutions
Views | Posted |
---|---|
4378 | 09-20-2016 09:11 AM |
3619 | 05-17-2016 11:58 AM |
2474 | 04-18-2016 07:27 PM |
2391 | 04-14-2016 08:25 AM |
2523 | 03-24-2016 07:16 PM |
04-18-2016
08:46 PM
@Benjamin Leonhardi Try this https://github.com/hkropp/vagrant-hdp/blob/master/bin/ambari-shell.jar and run it with % java -jar ambari-shell.jar --ambari.host=
04-18-2016
07:27 PM
1 Kudo
No, Ranger does not have a shell, but that is an interesting idea. If you are interested, Ambari does have a shell for its REST API; it uses Spring Shell. Check out https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Shell and http://docs.spring.io/spring-shell/docs/current/reference/htmlsingle/
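A minimal launch sketch, assuming the jar linked in the post above; --ambari.port, --ambari.user and --ambari.password are the additional options described in the Ambari Shell wiki, and the values shown here are placeholders for the default install that you should adjust to your environment:
# connect the shell to the Ambari REST API of your server
% java -jar ambari-shell.jar --ambari.host=<ambari_server_fqdn> --ambari.port=8080 --ambari.user=admin --ambari.password=admin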
04-14-2016
08:25 AM
1. First make sure that your non-root user has sufficient sudoers rights. Please check this document: http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.1/bk_Ambari_Security_Guide/content/_configuring_ambari_for_non-root.html
2. Next, I suspect you set up passwordless SSH for the root user, not the non-root user? I always prefer manual agent registration, but that is just me. Please check: http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.1.1/bk_ambari_reference_guide/content/_install_the_ambari_agents_manually.html A simple sed helps with the manual registration (see also the start/verify sketch below):
$ sed -i 's/hostname=localhost/hostname=<ambari_server_fqdn>/' /etc/ambari-agent/conf/ambari-agent.ini
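After the hostname is set, a hedged sketch of the remaining steps on each agent host, assuming the ambari-agent package is already installed (run with sudo or with the non-root setup from the security guide above):
# start the agent so it registers with the Ambari server named in ambari-agent.ini
$ ambari-agent start
# confirm the agent process is up; the registered host should then appear in the Ambari UI
$ ambari-agent status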
04-12-2016
05:52 PM
The workaround provided by @Alessio Ubaldi seems to work, but you should also try upgrading to 2.3.7 first.
04-11-2016
02:40 PM
1 Kudo
Can I configure two authentication providers for Knox, and if so, how would that work?
Labels:
- Apache Knox
04-06-2016
09:51 AM
3 Kudos
The error message in /var/log/knox/gateway.log says that the certificate used by Knox only becomes valid at a point in the future:
Failed to start gateway: org.apache.hadoop.gateway.services.ServiceLifecycleException: Gateway SSL Certificate is not yet valid. Server will not start. -> "not yet valid"
Knox refuses to start because using such a certificate would result in an SSL exception for almost any client. You will need to check the certificate you are using for Knox. It is stored as gateway-identity in gateway.jks under /var/lib/knox/data*/keystore. Please refer to this: http://knox.apache.org/books/knox-0-6-0/user-guide.html#Management+of+Security+Artifacts
What should also work: if you simply remove the gateway-identity entry from the keystore, Knox should create a self-signed certificate for you on the next start.
Could you share how the certificate was generated? Did you change it after the install? Are you using NTP?
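A quick way to check the validity window is keytool (a hedged sketch; the exact data directory varies by version and the keystore password is typically the Knox master secret):
# list the gateway-identity entry and compare its "Valid from ... until ..." dates with the system clock
$ keytool -list -v -keystore /var/lib/knox/data*/keystore/gateway.jks -alias gateway-identity
$ date
# if you want Knox to regenerate a self-signed certificate, remove the entry first
$ keytool -delete -alias gateway-identity -keystore /var/lib/knox/data*/keystore/gateway.jks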
04-05-2016
02:24 PM
1 Kudo
Can you also provide what you find in /var/log/knox/gateway.log?
04-02-2016
09:32 PM
1 Kudo
Actually this does not quite answer the question, but it gives a good hint about dfs.internal.nameservices. That parameter is needed to distinguish the local nameservice from the other configured nameservices, but it does not by itself enable distcp between two HA clusters. dfs.internal.nameservices is, for example, relevant for DataNodes so they don't register with the other cluster.
To support distcp between multiple HA clusters you simply have to define multiple nameservices, for example like this:
<configuration>
<!-- services -->
<property>
<name>dfs.nameservices</name>
<value>serviceId1,serviceId2</value>
</property>
<!-- serviceId2 properties -->
<property>
<name>dfs.client.failover.proxy.provider.serviceId2</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.namenodes.serviceId2</name>
<value>nn201,nn202</value>
</property>
<property>
<name>dfs.namenode.rpc-address.serviceId2.nn201</name>
<value>nn201.pro.net:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.serviceId2.nn201</name>
<value>nn201.pro.net:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.serviceId2.nn201</name>
<value>nn201.pro.net:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.serviceId2.nn201</name>
<value>nn201.pro.net:50470</value>
</property>
<property>
<name>dfs.namenode.rpc-address.serviceId2.nn202</name>
<value>nn202.pro.net:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.serviceId2.nn202</name>
<value>nn202.pro.net:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.serviceId2.nn202</name>
<value>nn202.pro.net:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.serviceId2.nn202</name>
<value>nn202.pro.net:50470</value>
</property>
<!-- serviceId1 properties -->
<property>
<name>dfs.client.failover.proxy.provider.serviceId1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.namenodes.serviceId1</name>
<value>nn101,nn102</value>
</property>
<property>
<name>dfs.namenode.rpc-address.serviceId1.nn101</name>
<value>nn101.poc.net:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.serviceId1.nn101</name>
<value>nn101.poc.net:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.serviceId1.nn101</name>
<value>nn101.poc.net:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.serviceId1.nn101</name>
<value>nn101.poc.net:50470</value>
</property>
<property>
<name>dfs.namenode.rpc-address.serviceId1.nn102</name>
<value>nn102.poc.net:8020</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.serviceId1.nn102</name>
<value>nn102.poc.net:54321</value>
</property>
<property>
<name>dfs.namenode.http-address.serviceId1.nn102</name>
<value>nn102.poc.net:50070</value>
</property>
<property>
<name>dfs.namenode.https-address.serviceId1.nn102</name>
<value>nn102.poc.net:50470</value>
</property>
</configuration>
Adding this to the hdfs-site.xml config makes both nameservices, serviceId1 and serviceId2, available.
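With both nameservices defined, a hedged usage sketch (the paths and copy direction are placeholders to adjust):
# copy between the two HA clusters using the logical nameservice URIs,
# so no active NameNode host has to be hard-coded
$ hadoop distcp hdfs://serviceId1/path/on/source hdfs://serviceId2/path/on/target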
03-24-2016
07:16 PM
1 Kudo
HCatalog does not support writing into a bucketed table. HCat explicitly checks whether a table is bucketed and, if so, disables storing to it, to avoid writing to the table in a destructive way. From HCatOutputFormat:
if (sd.getBucketCols() != null && !sd.getBucketCols().isEmpty()) {
  throw new HCatException(ErrorType.ERROR_NOT_SUPPORTED,
      "Store into a partition with bucket definition from Pig/Mapreduce is not supported");
}
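A common workaround is to let Pig/HCatStorer write into an unbucketed staging table and have Hive perform the bucketed insert. A hedged sketch, where staging_table and bucketed_table are hypothetical names:
# enforce bucketing on insert and let Hive handle the bucketed write
$ hive -e "SET hive.enforce.bucketing=true; INSERT INTO TABLE bucketed_table SELECT * FROM staging_table;"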
03-22-2016
07:22 PM
1 Kudo
+1 for the point about reusing the Spark code itself.