Member since: 04-04-2016
Posts: 166
Kudos Received: 168
Solutions: 29
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3448 | 01-04-2018 01:37 PM
 | 5862 | 08-01-2017 05:06 PM
 | 1989 | 07-26-2017 01:04 AM
 | 9608 | 07-21-2017 08:59 PM
 | 3023 | 07-20-2017 08:59 PM
05-18-2017
03:27 PM
Hi All, I am facing an issue with the Knox gateway. When we pass a URL like:

https://app1.company.com:9443/company/opt/opentext/common/jslib/themes/default/athena.css;jsessionid=6F4614AEAE088D5DD74915C45DDAC39D

the rewrite rule URL-encodes it before sending it to the Tomcat server, and Tomcat then throws a not-found error. I noticed that ';' and '=' are being converted to their percent-encoded forms (;jsessionid= becomes %3Bjsessionid%3D), so the URL passed on becomes:

http://ip-172-1-1-1:9090/opentext/common/jslib/themes/default/athena.css%3Bjsessionid%3D6F4614AEAE088D5DD74915C45DDAC39D

I have two questions on this:

1. Is there a way to tell the rewrite rule not to encode the URL, or is there another way to avoid encoding the incoming URL before forwarding it to the Tomcat server?

2. We tried specifying another dispatch (PassAllHeadersDispatch) in the service.xml, but it never gets picked up, as shown in the debug logs. How can we change the default dispatch other than by putting it in the service.xml? We did verify that PassAllHeadersDispatch is supported in the Knox version (0.9) we are using.

Given below are the topology, service.xml, rewrite.xml, and the relevant portion of the log.
Knox topology:

cat opt.xml
<topology>
  <gateway>
    <provider>
      <role>webappsec</role>
      <name>AppWebAppSec</name>
      <enabled>false</enabled>
      <param>
        <name>ve.bodylength.enabled</name>
        <value>true</value>
      </param>
      <param>
        <name>ve.gzip.enabled</name>
        <value>true</value>
      </param>
      <param>
        <name>ve.header.enabled</name>
        <value>true</value>
      </param>
    </provider>
    <provider>
      <role>identity-assertion</role>
      <name>Default</name>
      <enabled>false</enabled>
    </provider>
  </gateway>
  <service>
    <role>OPENTEXT</role>
    <url>http://ip-172-1-1-1:9090</url>
  </service>
</topology>

cat rewrite.xml
<rules>
  <rule dir="IN" name="OPENTEXT/opentext/inbound" pattern="*://*:*/**/opentext/{**}">
    <rewrite template="{$serviceUrl[OPENTEXT]}/opentext/{**}"/>
  </rule>
  <rule dir="OUT" name="OPENTEXT/opentext/outbound" pattern="/opentext/{**}">
    <rewrite template="{$frontend[url]}/opentext/{**}"/>
  </rule>
</rules>

cat service.xml
<service role="OPENTEXT" name="opentext" version="0.0.1">
  <routes>
    <route path="/opentext/**"/>
    <route path="/opentext/"/>
  </routes>
  <dispatch classname="org.apache.hadoop.gateway.dispatch.PassAllHeadersDispatch"/>
</service>

Logs:

2017-05-11 15:34:25,692 DEBUG hadoop.gateway (GatewayFilter.java:doFilter(116)) - Received request: GET /opentext/common/jslib/themes/default/athena.css
2017-05-11 15:34:25,697 DEBUG hadoop.gateway (UrlRewriteProcessor.java:rewrite(164)) - Rewrote URL: https://app1.company.com:9443/company/opt/opentext/common/jslib/themes/default/athena.css;jsessionid=6F4614AEAE088D5DD74915C45DDAC39D, direction: IN via implicit rule: OPENTEXT/opentext/inbound1 to URL: http://ip-172-1-1-1:9090/opentext/common/jslib/themes/default/athena.css;jsessionid=6F4614AEAE088D5DD74915C45DDAC39D
2017-05-11 15:34:25,698 DEBUG hadoop.gateway (DefaultDispatch.java:executeOutboundRequest(120)) - Dispatch request: GET http://ip-172-1-1-1:9090/opentext/common/jslib/themes/default/athena.css%3Bjsessionid%3D6F4614AEAE088D5DD74915C45DDAC39D
2017-05-11 15:34:25,708 DEBUG hadoop.gateway (DefaultDispatch.java:executeOutboundRequest(133)) - Dispatch response status: 404
2017-05-11 15:34:25,708 DEBUG hadoop.gateway (DefaultDispatch.java:getInboundResponseContentType(202)) - Using explicit character set UTF-8 for entity of type text/html
2017-05-11 15:34:25,708 DEBUG hadoop.gateway (DefaultDispatch.java:getInboundResponseContentType(210)) - Inbound response entity content type: text/html; charset=utf-8
2017-05-11 15:34:25,709 DEBUG hadoop.gateway (UrlRewriteProcessor.java:rewrite(164)) - Rewrote URL: /opentext/common/jslib/themes/default/athena.css;jsessionid=6F4614AEAE088D5DD74915C45DDAC39D, direction: OUT via implicit rule: OPENTEXT/opentext/outbound2 to URL: https://app1.company.com:9443/company/opt/opentext/common/jslib/themes/default/athena.css;jsessionid=6F4614AEAE088D5DD74915C45DDAC39D

Appreciate your help and time. Thanks & regards, Raj
Labels:
- Apache Knox
05-08-2017
04:51 PM
1 Kudo
Scenario: Trying to add new columns to an already partitioned Hive table.

Problem: The newly added columns show up as NULL values for the data present in existing partitions.

Solution: One workaround is copying/moving the data to a temporary location, dropping the partition, adding back the data, and then adding back the partition. This works and the new columns pick up the values, but for big tables it is not a viable solution.

Best approach: Construct the ALTER statement to add columns with the CASCADE option as follows:

ALTER TABLE default.test_table ADD COLUMNS (column1 string, column2 string) CASCADE;

From the Hive documentation:
“ALTER TABLE CHANGE COLUMN with CASCADE command changes the columns of a table's metadata, and cascades the same change to all the partition metadata. RESTRICT is the default, limiting column change only to table metadata.”
This option does not seem to be widely used at all, and I hope it helps anyone who faces this situation. Thanks
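To make the difference concrete, here is a minimal sketch (the table, column, and partition names are illustrative, not taken from a real cluster):

```sql
-- A partitioned table with data already loaded into some partitions:
CREATE TABLE default.test_table (column0 STRING)
PARTITIONED BY (dt STRING);

-- Without CASCADE (RESTRICT is the default), only the table-level metadata
-- changes, so rows in pre-existing partitions read back NULL for the new
-- columns even when the underlying files contain the extra fields:
-- ALTER TABLE default.test_table ADD COLUMNS (column1 STRING, column2 STRING);

-- With CASCADE, the change is also applied to every existing partition's
-- metadata, so the new columns pick up the values present in the data:
ALTER TABLE default.test_table ADD COLUMNS (column1 STRING, column2 STRING) CASCADE;
```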
04-04-2017
05:20 PM
1 Kudo
Problem: Trying to use sudo to run commands (as root) across a cluster gives the error:

"Stderr: Pseudo-terminal will not be allocated because stdin is not a terminal. sudo: sorry, you must have a tty to run sudo"

This is a common situation where you log in to a terminal as some other user and then sudo to root to perform the task, and you can only ssh as the user you logged in as, not as root.

Solution:

Note: You have to install pssh if it is not already installed in your cluster. Keep all host names in a file, say all_hosts.

This kind of execution will fail:

pssh -i -h all_hosts "sudo whoami;hostname -f"

with the error:

Stderr: sudo: sorry, you must have a tty to run sudo

This variant will also fail:

pssh -i -h all_hosts -x "-t" "sudo whoami;hostname -f"

with the error:

Stderr: Pseudo-terminal will not be allocated because stdin is not a terminal. sudo: sorry, you must have a tty to run sudo

The correct command passes -t twice to force tty allocation:

pssh -i -h all_hosts -x "-t -t" "sudo whoami;hostname -f"

You can safely ignore this error:

"Stderr: tcgetattr: Invalid argument"

Your command will run the way you intend it, and you will be able to manage your cluster from one terminal. Hope this helps. Thanks
03-15-2017
07:34 PM
1 Kudo
PROBLEM: Hive queries were hanging on both the MR and Tez engines while selecting from a table containing a few CSV files. The query worked fine for some CSV files, but for others it just hung, with nothing in the logs either. I was using Hive 1.2.1 on HDP 2.5.3.0.

After some investigation, I found that those files had empty values '' in the fields where an rpad function was being used. You can easily reproduce the issue by running:

select rpad('',1,'');

You will see that the query just hangs; the reason is that it goes into an infinite loop. More details here: HIVE-15792

RESOLUTION: nvl will not work in this case, since the empty string is not NULL:

select nvl('','D'); --will return ''

I resolved it using a query like this:

SELECT rpad(CASE WHEN LENGTH(nvl(COLUMN_NAME,null)) > 0 THEN COLUMN_NAME ELSE null END, 1, '');

In this case the query returns NULL for both NULL and empty-string values occurring in COLUMN_NAME. Hope this helps. Thanks,
Rajdeep
03-05-2017
06:17 PM
Found in version Ambari 2.4.2.0
Generation of a SmartSense bundle fails with the following error:

ERROR 2017-02-24 10:52:53,284 shell.py:95 - Execution of command returned 1. Exception in thread "main" java.lang.IllegalArgumentException: Illegal group reference
at java.util.regex.Matcher.appendReplacement(Matcher.java:857)
at java.util.regex.Matcher.replaceAll(Matcher.java:955)
at java.lang.String.replaceAll(String.java:2223)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.handleApplyGroupPatternKeyValue(BundleAnonymizer.java:690)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.applyGroupPattern(BundleAnonymizer.java:673)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.applyPropertyRule(BundleAnonymizer.java:612)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.applyRules(BundleAnonymizer.java:393)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.anonymizeFolder(BundleAnonymizer.java:291)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.anonymizeFolder(BundleAnonymizer.java:259)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.anonymizeFolder(BundleAnonymizer.java:224)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.anonymize(BundleAnonymizer.java:160)
at com.hortonworks.smartsense.anonymization.Main.run(Main.java:82)
at com.hortonworks.smartsense.anonymization.Main.start(Main.java:210)
at com.hortonworks.smartsense.anonymization.Main.main(Main.java:294)
ERROR 2017-02-24 10:52:53,284 anonymize.py:67 - Execution of script /usr/java/default/bin/java -Xmx2048m -Xms1024m -Dlog.file.name=anonymization.log -Djava.io.tmpdir=/hadoop/smartsense/hst-agent/data/tmp -cp :/etc/hst/conf/:/usr/hdp/share/hst/hst-common/lib/* com.hortonworks.smartsense.anonymization.Main -m /hadoop/smartsense/hst-agent/data/tmp/master001.dev.company.com-a-00027129-c-00065260_comhdpdev_0_2017-02-24_10-52-04 -c /etc/hst/conf/hst-agent.ini failed
ERROR 2017-02-24 10:52:53,284 anonymize.py:68 - Execution of command returned 1. Exception in thread "main" java.lang.IllegalArgumentException: Illegal group reference
at java.util.regex.Matcher.appendReplacement(Matcher.java:857)
at java.util.regex.Matcher.replaceAll(Matcher.java:955)
at java.lang.String.replaceAll(String.java:2223)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.handleApplyGroupPatternKeyValue(BundleAnonymizer.java:690)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.applyGroupPattern(BundleAnonymizer.java:673)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.applyPropertyRule(BundleAnonymizer.java:612)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.applyRules(BundleAnonymizer.java:393)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.anonymizeFolder(BundleAnonymizer.java:291)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.anonymizeFolder(BundleAnonymizer.java:259)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.anonymizeFolder(BundleAnonymizer.java:224)
at com.hortonworks.smartsense.anonymization.BundleAnonymizer.anonymize(BundleAnonymizer.java:160)
at com.hortonworks.smartsense.anonymization.Main.run(Main.java:82)
at com.hortonworks.smartsense.anonymization.Main.start(Main.java:210)
at com.hortonworks.smartsense.anonymization.Main.main(Main.java:294)
ERROR 2017-02-24 10:52:53,285 AnonymizeBundleCommand.py:62 - Anonymization failed. Please check logs.
Traceback (most recent call last):
File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/command/AnonymizeBundleCommand.py", line 58, in execute
context['bundle_dir'] = anonymizer.anonymize(bundle_dir)
File "/usr/hdp/share/hst/hst-agent/lib/hst_agent/anonymize.py", line 69, in anonymize
raise Exception("Anonymization failed.")
Exception: Anonymization failed.
Cause:
A regex group exception occurs when the constant REPLACE_PROPERTY_VALUE_PATTERN regex pattern is not able to properly group-search the parameter patternStr.

Resolution for the above error:

Option 1 (preferred): Upgrade Ambari and follow the post-upgrade procedures as per the Hortonworks docs. NOTE: Make sure that the current Ambari version is lower than 2.4.2.8-2.

Option 2:
1. Stop SmartSense from Ambari
2. Uninstall smartsense-hst rpm on all nodes
rpm -e smartsense-hst
3. Install smartsense-hst rpm on all nodes
For Centos/Redhat 7:
rpm -ivh http://private-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.4.2.8-2/smartsense/smartsense-hst-1.3.1.0-2.x86_64.rpm
For Centos/Redhat 6:
rpm -ivh http://private-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.4.2.8-2/smartsense/smartsense-hst-1.3.1.0-2.x86_64.rpm
4. On Ambari Server host run
hst add-to-ambari
5. Restart Ambari Server
6. Delete SmartSense service from Ambari if already there
7. Add SmartSense service through Ambari's add service wizard
Many thanks to @sheetal for providing the solution.
12-16-2016
04:10 PM
@Eyad Garelnabi Thank you. One question: did you test the GetHDFS processor for fetching from an encrypted zone?
12-14-2016
10:53 PM
@vperiasamy
Thank you. I am going to accept this answer, since there really is no built-in way to bulk-load them. I had to do some custom scripting to call the REST API to get the policies, change the repository, and post them back. Not complicated, but not the cleanest way either, since you have to hop from one cluster to the other :)
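For reference, a minimal sketch of the kind of script I mean (this is not my exact script: the service names are placeholders, and the endpoint paths follow the Ranger public v2 REST API, so adjust for your Ranger version):

```python
import json
import urllib.request


def retarget_policies(policies, new_service):
    """Strip the instance-specific fields Ranger assigns on creation and
    point each policy at the target cluster's service, so the result can
    be POSTed to the DR cluster as a brand-new policy."""
    out = []
    for p in policies:
        q = dict(p)  # shallow copy; leave the source list untouched
        for key in ("id", "guid", "createTime", "updateTime", "version"):
            q.pop(key, None)
        q["service"] = new_service
        out.append(q)
    return out


def copy_policies(src_base, dst_base, src_service, dst_service, auth_header):
    """Pull every policy of src_service from the PROD Ranger admin and push
    the rewritten copies to the DR Ranger admin. auth_header is e.g. a
    'Basic ...' value for a Ranger admin user."""
    # GET all policies for the source service.
    req = urllib.request.Request(
        "%s/service/public/v2/api/service/%s/policy" % (src_base, src_service),
        headers={"Authorization": auth_header})
    with urllib.request.urlopen(req) as resp:
        policies = json.load(resp)
    # POST each rewritten policy to the DR cluster, one by one.
    for policy in retarget_policies(policies, dst_service):
        req = urllib.request.Request(
            "%s/service/public/v2/api/policy" % dst_base,
            data=json.dumps(policy).encode("utf-8"),
            headers={"Authorization": auth_header,
                     "Content-Type": "application/json"})
        urllib.request.urlopen(req)
```

It is still one REST call per policy on the write side, but a single script run moves everything, which was good enough for a one-time PROD-to-DR copy.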
12-12-2016
09:13 PM
@Prashobh Balasundaram this thread is a really interesting discussion, and it works. However, I am looking for a bulk dump, not copying one by one.
12-12-2016
08:51 PM
1 Kudo
Hi, Is there a way to bulk dump Ranger policies using the REST API? The use case here is a one-time copy from PROD to DR. I understand the process of copying them one by one using REST. Thanks, Raj
Labels:
- Apache Ranger