Member since: 05-17-2016
Posts: 46
Kudos Received: 22
Solutions: 13

My Accepted Solutions
Title | Views | Posted
---|---|---
| 3242 | 06-01-2018 11:40 AM
| 1279 | 06-30-2017 10:12 AM
| 1540 | 06-30-2017 10:09 AM
| 943 | 06-30-2017 10:04 AM
| 960 | 06-30-2017 10:03 AM
03-20-2017
05:11 PM
Yes, one-way SSL is possible from HDP 2.5.0: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_release-notes/content/patch_ranger.html
03-20-2017
04:57 PM
1 Kudo
Labels:
- Apache YARN
01-16-2017
01:02 PM
1 Kudo
Hi @Karan Alang, you may have multiple topology files under '/etc/knox/conf/topologies/'. You can place multiple topology files in that location in the form <topology_name>.xml, and Knox will automatically pick up the new topology definition files. The URL for each service (curl call) then changes from /gateway/default/HIVE to /gateway/<topology_name>/HIVE for the topology named <topology_name>.xml. Perhaps you have only configured the default.xml topology file, which is why it works.
NOTE: These custom topology files need to be managed and maintained outside of Ambari, since Ambari only manages the default topology. Because of this, any changes made to the default topology via Ambari will need to be manually propagated to the customized topology files.
Hope this helps!
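A minimal sketch, assuming a Knox host of knox-host.example.com on the default gateway port 8443 and a new topology named 'sandbox' (all hypothetical):

# Create a new topology by copying the default definition; Knox
# automatically deploys any <topology_name>.xml in this directory.
cp /etc/knox/conf/topologies/default.xml /etc/knox/conf/topologies/sandbox.xml
# The service URL then changes from /gateway/default/HIVE to /gateway/sandbox/HIVE:
curl -iku user:password 'https://knox-host.example.com:8443/gateway/sandbox/HIVE'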
01-03-2017
01:47 PM
@nyadav
You can change this default value of 300 seconds via the clockskew setting in the [libdefaults] section of the krb5.conf file. But for security reasons, do not increase the clock skew beyond 300 seconds.
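For illustration, the relevant krb5.conf fragment (the realm name is a placeholder; clockskew is given in seconds):

[libdefaults]
    default_realm = EXAMPLE.COM
    # Maximum tolerated clock skew, in seconds (default 300)
    clockskew = 300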
01-03-2017
12:35 PM
2 Kudos
The default value for maximum clock skew is 300 seconds, or five minutes. By default, the Kerberos server refuses to issue tickets only if the clocks are out of sync by more than five minutes; within that acceptable skew you can still access services and renew tickets.
Reference: MIT Kerberos documentation on clock skew.
12-31-2016
07:06 AM
PROBLEM
Currently in the Ranger Audit UI (HDP 2.4), there is no feature or search filter to pull a report answering the question "who made a change to a particular policy" without scrolling through all the pages of the audit. A search by Policy ID / Policy Name would solve that.
RESOLUTION
An internal feature request has been raised to track this.
12-22-2016
03:13 PM
PROBLEM
Running a sqoop import command in direct mode against a Netezza data warehouse appliance hangs at 100% map:
sqoop import --options-file sqoop_opts_file.opt
.
.
.
INFO mapreduce.Job: Running job: job_1465914632244_0005
INFO mapreduce.Job: Job job_1465914632244_0005 running in uber mode : false
INFO mapreduce.Job: map 0% reduce 0%
INFO mapreduce.Job: map 25% reduce 0%
INFO mapreduce.Job: map 50% reduce 0%
INFO mapreduce.Job: map 100% reduce 0%
The sqoop_opts_file.opt file had the following options:
--connect
jdbc:netezza://xxxxxxxxxxxxxxxxxxxxxx:5480/
--username
XXXX
--password
***************
--direct
--direct-split-size
1000000
--compress
--table
table_name
--target-dir
/user/root/table_name
--verbose
YARN logs show the below errors:
ERROR [Thread-14] org.apache.sqoop.mapreduce.db.netezza.NetezzaJDBCStatementRunner: Unable to execute external table export
org.netezza.error.NzSQLException: ERROR: found delim ',' in a data field, specify escapeChar '\' option in the external table definition
RESOLUTION
Add the --input-escaped-by '\' parameter to the sqoop command and re-run it.
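For example, the parameter can be appended directly on the command line (or added, one token per line, to the options file):

sqoop import --options-file sqoop_opts_file.opt --input-escaped-by '\'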
12-22-2016
03:09 PM
PROBLEM
When setting Bind Anonymous in Ambari's Ranger Usersync tab, the Ambari usersync service logs complain that ranger.usersync.ldap.binddn and its bind password cannot be empty, with the below stack trace:
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py", line 44, in ranger
setup_usersync(upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py", line 319, in setup_usersync
password_validation(params.ranger_usersync_ldap_ldapbindpassword)
File "/var/lib/ambari-agent/cache/common-services/RANGER/0.4.0/package/scripts/setup_ranger_xml.py", line 249, in password_validation
raise Fail("Blank password is not allowed for Bind user. Please enter valid password.")
resource_management.core.exceptions.Fail: Blank password is not allowed for Bind user. Please enter valid password.
ROOT CAUSE
The 'Bind Anonymous' option is currently not supported in Ranger. Hortonworks internal BUG-68578 has been filed to disable this option in Ambari.
RESOLUTION
Use a binddn and the bind password for Ranger Usersync.
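As an illustrative sketch, the two Usersync properties involved (both values below are placeholders; use your own LDAP bind DN and password):

# Placeholder values only; substitute your LDAP bind credentials
ranger.usersync.ldap.binddn = cn=ldapadmin,dc=example,dc=com
ranger.usersync.ldap.ldapbindpassword = <bind password>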
12-22-2016
03:05 PM
PROBLEM
The example workflow submitted by users was failing with the below ClassNotFoundException:
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.oozie.example.SampleMapper not found
ROOT CAUSE
These classes live in the oozie-examples-<version>.jar file, which is expected to be present in the lib folder of the job.
RESOLUTION
Add the oozie-examples-<version>.jar file to the lib folder of the job and then resubmit the job.
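For illustration, a sketch of the resolution from the command line (the HDFS application path and Oozie URL are hypothetical; keep <version> as it appears in your distribution):

# Place the examples jar into the workflow's lib directory on HDFS
hadoop fs -put oozie-examples-<version>.jar /user/root/examples/apps/map-reduce/lib/
# Resubmit the job
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run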
12-22-2016
01:58 PM
2 Kudos
PROBLEM
When we query HBase tables through Hive, it always creates a fetch task instead of running an MR task. The parameter hive.fetch.task.conversion.threshold controls whether a fetch task or a MapReduce job runs: if hive.fetch.task.conversion.threshold is less than the table size, Hive will use a MapReduce job.
The default value of the above parameter is 1 GB. Create an 'hbase_hive' external table in Hive and make sure the underlying HBase table is more than 1 GB:
[root@node1 ~]# hadoop fs -du -s -h /apps/hbase/data/data/default/hbase-hive
3.4 G /apps/hbase/data/data/default/hbase-hive
From beeline, analyze the explain plan, which launches a fetch task instead of a MapReduce job even when the size of the table is more than 1 GB:
0: jdbc:hive2://node1.hwxblr.com:10000/> explain select * from hbase_hive where key = '111111A111111' ;
+----------------------------------------------------------------------------------------------------------+--+
| Explain |
+----------------------------------------------------------------------------------------------------------+--+
| STAGE DEPENDENCIES: |
| Stage-0 is a root stage |
| |
| STAGE PLANS: |
| Stage: Stage-0 |
| Fetch Operator |
| limit: -1 |
ROOT CAUSE
Fetch task conversion means initiating a local task inside the client itself instead of submitting a job to the cluster. A Hive-on-HBase table has no statistics, so the estimated size is always below the fetch task conversion threshold, and the query launches a local fetch task on the client side.
RESOLUTION
Set hive.fetch.task.conversion to 'minimal' before executing queries against Hive HBase tables. Do not set this property to 'minimal' permanently in hive-site.xml; apply it per session.
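For example, in the same beeline session as above, the setting can be applied before re-running the explain; with 'minimal', the plan should show a MapReduce stage rather than a bare Fetch Operator:

0: jdbc:hive2://node1.hwxblr.com:10000/> set hive.fetch.task.conversion=minimal;
0: jdbc:hive2://node1.hwxblr.com:10000/> explain select * from hbase_hive where key = '111111A111111' ;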