Member since: 09-15-2015
Posts: 294
Kudos Received: 764
Solutions: 81
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1589 | 07-27-2017 04:12 PM |
| | 4328 | 06-29-2017 10:50 PM |
| | 2015 | 06-21-2017 06:29 PM |
| | 2274 | 06-20-2017 06:22 PM |
| | 2064 | 06-16-2017 06:46 PM |
03-15-2017
04:38 AM
3 Kudos
@Abhishek Kalekar - I think you accidentally posted the same question twice: https://community.hortonworks.com/questions/88673/cosmetic-what-it-should-be-exactly-fit-or-it-kindl.html It should be 'fit': Ambari already sets up the recommended configurations, and it asks the user to customize the services as the user deems 'fit'.
03-15-2017
03:50 AM
1 Kudo
Please involve Hortonworks Support if you are a customer; Hortonworks can release a patch for you if it's technically possible, based on the specifics of the JIRAs. If you don't have support, then I would suggest getting Hortonworks Support. Thanks!
03-15-2017
03:22 AM
2 Kudos
Yeah, I can see that this has not yet been backported into HDP 2.3.
03-15-2017
03:09 AM
6 Kudos
@Xiaojie Ma - Which version of HDP are you using? I can see that this is already part of the latest HDP 2.5.
03-14-2017
10:21 PM
1 Kudo
Please try what the post below suggests: https://community.hortonworks.com/questions/9142/getting-virtualbox-error-while-importing-virtual-a.html
03-14-2017
08:57 PM
1 Kudo
The post below has a pretty good explanation: https://community.hortonworks.com/questions/2408/ranger-implementation-hive-impersonation-false.html Hope this helps.
03-14-2017
08:50 PM
13 Kudos
Proxy user - Superusers Acting On Behalf Of Other Users

A superuser with username 'super' wants to submit a job and access HDFS on behalf of a user joe. The superuser has Kerberos credentials, but user joe doesn't have any. The tasks are required to run as user joe, and any file accesses on the NameNode are required to be done as user joe. It is required that user joe can connect to the NameNode or JobTracker on a connection authenticated with super's Kerberos credentials. In other words, super is impersonating the user joe. Some products, such as Apache Oozie, need this.

Configurations

You can configure a proxy user using the property hadoop.proxyuser.$superuser.hosts along with either or both of hadoop.proxyuser.$superuser.groups and hadoop.proxyuser.$superuser.users. By specifying the following in core-site.xml, the superuser named super can connect only from host1 and host2 to impersonate a user belonging to group1 or group2:

<property>
  <name>hadoop.proxyuser.super.hosts</name>
  <value>host1,host2</value>
</property>
<property>
  <name>hadoop.proxyuser.super.groups</name>
  <value>group1,group2</value>
</property>

If these configurations are not present, impersonation will not be allowed and the connection will fail. If more lax security is preferred, the wildcard value * may be used to allow impersonation from any host or of any user. For example, by specifying the following in core-site.xml, a user named oozie accessing from any host can impersonate any user belonging to any group:

<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
</property>

More details are in the Apache documentation: https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/Superusers.html
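To show the client side of impersonation, here is a minimal Java sketch using Hadoop's UserGroupInformation API. It assumes a Kerberos login for 'super' already exists and that the proxyuser rules above are in core-site.xml; the path /user/joe is only an example.

import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserExample {
  public static void main(String[] args) throws Exception {
    // The real, Kerberos-authenticated superuser ('super').
    UserGroupInformation superUgi = UserGroupInformation.getLoginUser();
    // A proxy UGI for 'joe'; the connection still authenticates as 'super'.
    UserGroupInformation joeUgi = UserGroupInformation.createProxyUser("joe", superUgi);
    // From the NameNode's point of view, everything inside doAs() runs as 'joe'.
    joeUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      FileSystem fs = FileSystem.get(new Configuration());
      for (FileStatus st : fs.listStatus(new Path("/user/joe"))) {
        System.out.println(st.getPath());
      }
      return null;
    });
  }
}

If the proxyuser rules are missing or don't match the calling host, the doAs() call fails with an authorization error rather than silently falling back to 'super'.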
03-14-2017
04:17 PM
1 Kudo
Yes, as Ayub mentioned below, as of now Sqoop exports are not automatic.
03-14-2017
04:12 PM
1 Kudo
You can follow this doc to install Falcon using the Ambari UI or through the command line: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_data-movement-and-integration/content/ch_falcon_install_upgrade.html
03-14-2017
06:27 AM
1 Kudo
As per the documentation: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0-Win/bk_HDP_Install_Win/content/ref-d4ba8d91-cfe7-4655-8181-0168cc6d2681.1.html

Safemode is a state where no changes can be made to the blocks. The HDFS cluster is in safemode during startup because the cluster needs to validate all the blocks and their locations. Once validated, safemode is disabled. The options for the safemode command are:

hdfs dfsadmin -safemode [enter | leave | get]

Please see the following commands:

root@mycluster:~# su - hdfs
hdfs@mycluster:~$ hdfs dfsadmin -safemode enter
Safe mode is ON
hdfs@mycluster:~$ hdfs dfsadmin -safemode get
Safe mode is ON
hdfs@mycluster:~$ hdfs dfsadmin -safemode leave
Safe mode is OFF
hdfs@mycluster:~$
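If you need the same check from code rather than the CLI, here is a minimal Java sketch against the Hadoop 2.x client API; treat it as illustrative, since the HdfsConstants package location can vary between Hadoop releases.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public class SafemodeCheck {
  public static void main(String[] args) throws Exception {
    // FileSystem.get() returns a DistributedFileSystem when fs.defaultFS points at HDFS.
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    // SAFEMODE_GET only queries the current state; SAFEMODE_ENTER / SAFEMODE_LEAVE change it.
    boolean on = dfs.setSafeMode(SafeModeAction.SAFEMODE_GET);
    System.out.println("Safe mode is " + (on ? "ON" : "OFF"));
  }
}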