Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2712 | 04-27-2020 03:48 AM |
| | 5269 | 04-26-2020 06:18 PM |
| | 4440 | 04-26-2020 06:05 PM |
| | 3563 | 04-13-2020 08:53 PM |
| | 5366 | 03-31-2020 02:10 AM |
04-07-2017
06:20 PM
1 Kudo
@Shigeru Takehara From your logs we see that it is an OutOfMemoryError of type "GC overhead limit exceeded".
The detail message "GC overhead limit exceeded" indicates that the garbage collector is running all the time and the Java program is making very slow progress. If, after a garbage collection, the Java process is spending more than approximately 98% of its time doing garbage collection, is recovering less than 2% of the heap, and has been doing so for the last 5 (a compile-time constant) consecutive garbage collections, then a java.lang.OutOfMemoryError is thrown. This exception is typically thrown because the amount of live data barely fits into the Java heap, leaving little free space for new allocations. So initially you should try increasing the "-Xmx" (heap size) of Zeppelin from the Ambari UI.
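As a minimal sketch (the sizes below are placeholders, not recommendations), raising the heap from Ambari ultimately amounts to changing the memory flags that Zeppelin is started with in zeppelin-env:

```shell
# Example only: raise the maximum heap (-Xmx); size it to the host's available RAM.
export ZEPPELIN_MEM="-Xms1024m -Xmx4096m"
```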
04-07-2017
06:11 PM
1 Kudo
@Shigeru Takehara There are various kinds of OutOfMemory errors (for example: OutOfMemory in native space, OutOfMemory in heap space, OutOfMemory due to GC overhead, etc.). As you said "I see the out of memory error with -XX:MaxPermSize=512m", can you please let us know which kind of OutOfMemory error you are getting? Sharing the log would be even more useful. Note that if you are using Java 8, the "-XX:MaxPermSize=512m" option is simply ignored (PermGen was removed in Java 8). If you want to change the Zeppelin memory settings, log in to the Ambari UI and navigate to:
"Zeppelin Notebook" --> "Configs" (tab) --> "Advanced zeppelin-env", then find "zeppelin_env_content"; there you will find "export ZEPPELIN_MEM", which you can edit.
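After restarting Zeppelin, you can confirm the running JVM picked up the new flag by grepping its command line. A sketch (the `sample` line below is a hypothetical stand-in for real `ps -ef` output, so no live Zeppelin is needed to try it):

```shell
# Stand-in for one line of `ps -ef` output from a running Zeppelin server:
sample="java -Dfile.encoding=UTF-8 -Xms1024m -Xmx4096m org.apache.zeppelin.server.ZeppelinServer"
# Extract the max-heap flag; on a live host you would pipe `ps -ef` in instead.
printf '%s\n' "$sample" | grep -o '\-Xmx[0-9]*[mg]'   # prints: -Xmx4096m
```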
04-07-2017
02:03 PM
@Jay Goebel In addition to my previous comment, if you are using Ambari 2.4 or later then you might want to refer to the following option as well: https://community.hortonworks.com/articles/90768/how-to-fix-ambari-hive-view-15-result-fetch-timed.html
With Hive View 1.5 we can set the read timeout more efficiently, on a per-instance basis, using views.ambari.hive.<HIVE_VIEW_INSTANCE_NAME>.result.fetch.timeout=120000
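For example, the per-instance override in /etc/ambari-server/conf/ambari.properties would look like the fragment below (the instance-name placeholder must be replaced with your actual Hive View instance name):

```properties
# Replace <HIVE_VIEW_INSTANCE_NAME> with your Hive View instance name
views.ambari.hive.<HIVE_VIEW_INSTANCE_NAME>.result.fetch.timeout=120000
```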
04-07-2017
01:58 PM
1 Kudo
@Jay Goebel Additionally, if you find that your Hive View queries are taking a long time to read data from Hive, then you might want to increase the Ambari View read timeout to see if it helps fix the "read timeout" error. Increase the values of the following parameters in "/etc/ambari-server/conf/ambari.properties", then restart the Ambari server:
views.ambari.request.read.timeout.millis=300000
views.request.read.timeout.millis=300000
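The edit can be scripted; a sketch (writing to a scratch copy here so it can be tried safely — on a real server the target is /etc/ambari-server/conf/ambari.properties, followed by `ambari-server restart`):

```shell
# Append the two read-timeout overrides (300000 ms = 5 minutes).
# /tmp path is a stand-in; use /etc/ambari-server/conf/ambari.properties on the server.
PROPS=/tmp/ambari.properties
cat >> "$PROPS" <<'EOF'
views.ambari.request.read.timeout.millis=300000
views.request.read.timeout.millis=300000
EOF
grep 'timeout' "$PROPS"
```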
04-07-2017
07:21 AM
@rajdip chaudhuri In that case please do not use the "-i" option in your curl GET command, and redirect the output to a file using the "-o" option, as follows:
curl -u admin:admin -H "Content-Type: application/json" -X GET http://xx.xx.xx.207:6080/service/plugins/policies/10 -o /tmp/10_2.json
That way you get only the desired data, without the response metadata (headers).
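A local illustration of the behavior (using a file:// URL as a stand-in for the Ranger REST endpoint, so no server is needed): with `-o` and without `-i`, only the response body lands in the file.

```shell
# Create a stand-in response body and fetch it with -o (no -i, so no headers saved).
printf '{"id":10}' > /tmp/policy_src.json
curl -s "file:///tmp/policy_src.json" -o /tmp/10_2.json
cat /tmp/10_2.json   # prints: {"id":10}
```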
04-07-2017
07:09 AM
@rajdip chaudhuri I suspect that your JSON file "/tmp/10_2.json" also contains the following HTTP response headers, which is not right; you should remove them:
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: RANGERADMINSESSIONID=EDCBDAFF124C9802A79BFD945662BC1A; Path=/; HttpOnly
X-Frame-Options: DENY
Content-Type: application/json
Transfer-Encoding: chunked
Date: Fri, 07 Apr 2017 07:00:49 GMT
Your modified JSON file "/tmp/10_2.json" should contain only the JSON data part (no other extra lines); it looks like you have some additional data in it:
{"id":10,"guid":"c8afaae2-a4cc-4c25-b4b2-75ae9b0227eb","isEnabled":false,"createdBy":"Admin","updatedBy":"Admin","createTime":1491448221000,"updateTime":1491448221000,"version":1,"service":"TCSGEINTERNALCLUSTER_hive","name":"tcs_ge_user data masking test 2","policyType":1,"description":"tcs_ge_user data masking test 2","resourceSignature":"2cb6661609e66abfd9fbceaeac2be9d0","isAuditEnabled":true,"resources":{"database":{"values":["wells_fargo_poc"],"isExcludes":false,"isRecursive":false},"column":{"values":["card_number"],"isExcludes":false,"isRecursive":false},"table":{"values":["test_masked_2"],"isExcludes":false,"isRecursive":false}},"policyItems":[],"denyPolicyItems":[],"allowExceptions":[],"denyExceptions":[],"dataMaskPolicyItems":[{"accesses":[{"type":"select","isAllowed":true}],"users":["tcs_ge_user"],"groups":["tcs_ge_user"],"conditions":[],"delegateAdmin":false,"dataMaskInfo":{"dataMaskType":"MASK_HASH"}}],"rowFilterPolicyItems":[]}
04-07-2017
03:06 AM
1 Kudo
@Jeffrey Barr
The steps that you performed are good and are suggested as per the doc: http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.18/bk_ambari-reference/content/_using_hive_with_postgresql.html
ambari-server setup --jdbc-db=postgres --jdbc-driver=/usr/share/java/postgresql94-jdbc.jar
However, if you are still facing the issue, check the output of "lsof" to verify which JAR is actually loaded by your Hive process:
lsof -p $PID | grep postgresql
If you find that the old jar is still being used, try restarting Hive and see if it works. As a last option, perform the following steps on the HiveServer2/Metastore host:
- Run: find / -name 'postgre*.jar' -ls
- Remove the old postgresql jar from the Agent's tmp directory and from /usr/hdp/<version>/hive/lib
- Remove the old postgresql jar from the Agent's cache directory
- Replace /usr/hdp/<version>/hadoop/lib/postgresql-jdbc.jar with the newer version, if it exists
- Restart ambari-agent
- Restart Hive (HiveServer2/Metastore) from Ambari
- Run the find command again and verify the version is correct by checking the file size
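The find step above can be rehearsed locally; a sketch using a scratch directory standing in for `/` (the jar here is an empty placeholder file):

```shell
# Lay out a fake HDP tree with a placeholder jar, then scan for postgres jars.
mkdir -p /tmp/jarscan/usr/hdp/current/hive/lib
: > /tmp/jarscan/usr/hdp/current/hive/lib/postgresql-jdbc.jar
find /tmp/jarscan -name 'postgre*.jar' -ls   # the size column helps distinguish versions
```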
04-07-2017
02:29 AM
@Rahul Jain
Great to hear that your issue is resolved as suggested. It will be very helpful for others as well if you can mark this HCC thread as "Accepted" (answered).
04-06-2017
05:36 PM
@Shashant Panwar Even with the newer Ambari 2.5.0.3 you can regenerate the keytabs for the whole cluster's services, or only for selected hosts that are missing keytabs. From the 2.5.0.3 doc: "You can regenerate key tabs for only those hosts that are missing key tabs: for example, hosts that were not online or available from Ambari when enabling Kerberos." http://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-operations/content/how_to_regenerate_keytabs.html
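The same regeneration can also be triggered through Ambari's REST API. A sketch (host, credentials, and cluster name are placeholders; to the best of my knowledge the `regenerate_keytabs=missing` query parameter is what the UI option maps to, but please verify against your Ambari version's documentation):

```shell
# Regenerate keytabs only for hosts that are missing them (placeholders throughout).
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"Clusters": {"security_type": "KERBEROS"}}' \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME?regenerate_keytabs=missing"
```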
04-06-2017
04:49 PM
1 Kudo
@Theyaa Matti I do not see such an option in Ambari Blueprints. The whole purpose of a blueprint is to provide a declarative definition of a cluster: with a blueprint you specify a stack, the component layout, and the configurations needed to materialize a Hadoop cluster instance (via a REST API) without having to use the Ambari Cluster Install Wizard. A requirement like creating "a few HDFS directories" is a post-cluster-setup task. Please refer to the following link to learn more about what can be achieved with Ambari Blueprints: https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-Introduction
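For reference, a minimal blueprint illustrating the declarative shape (stack version, names, and component layout below are illustrative placeholders; note there is no field for creating HDFS directories):

```json
{
  "Blueprints": { "stack_name": "HDP", "stack_version": "2.5" },
  "host_groups": [
    {
      "name": "host_group_1",
      "cardinality": "1",
      "components": [ { "name": "NAMENODE" }, { "name": "DATANODE" } ]
    }
  ]
}
```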