Member since: 12-14-2018
Posts: 11
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 417 | 06-14-2019 06:26 PM
10-20-2020
01:20 PM
I'm working with a client that needs to limit the MapReduce job port range due to security constraints. I'm running Ambari 2.7.3.0 and HDP 3.1.0 (MapReduce2 3.0.0.3.1). I created a custom setting through Ambari > MapReduce2 > Configs > Advanced > Custom mapred-site: yarn.app.mapreduce.am.job.client.port-range, set to 50000-55000. The setting doesn't seem to have any effect: I still see the MapReduce jobs trying to connect on ports outside that range. The jobs do eventually succeed, but only after many retries. Other posts on this topic point to the same setting, yet it doesn't work for me, so I'm reposting in the hope that someone can spot what I'm overlooking. From MAPREDUCE-6338 (linked below), I gather this port-range handling was fixed before the version I'm running. Any ideas or suggestions are very much appreciated, thank you!

https://issues.apache.org/jira/browse/MAPREDUCE-6338
https://github.com/apache/hadoop/blob/release-3.1.1-RC0/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java#L150
https://hadoop.apache.org/docs/r3.1.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
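One sanity check worth doing (a sketch; it assumes the standard HDP client-config path /etc/hadoop/conf) is to confirm the custom property actually landed in the mapred-site.xml deployed to the node submitting the job:

# Verify Ambari pushed the custom property out to the client configs
grep -A1 "yarn.app.mapreduce.am.job.client.port-range" /etc/hadoop/conf/mapred-site.xml
# Expect the following line to show: <value>50000-55000</value>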
07-21-2020
08:52 AM
For anyone interested: I found the debug statements in the /var/log/hbase/hbase-hbase-regionserver-*.log files. That makes sense in hindsight, since the filter is evaluated server-side on the RegionServers, so its log output lands there rather than in the mapper/reducer syslogs.
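A quick way to watch for them (a sketch; the grep pattern is whatever your filter class logs — MyCustomFilter here is a hypothetical name):

# Follow the RegionServer logs and pull out the filter's log lines
tail -f /var/log/hbase/hbase-hbase-regionserver-*.log | grep "MyCustomFilter"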
07-20-2020
12:53 PM
Thanks, this is helpful, but I'm still curious: if there are multiple versions of Postgres installed on the system, how does ambari-server choose which one to run with? I actually have another system where I'm seeing the opposite behavior; there, postgresql-10 is running and postgresql (9.2) is not active after a restart. I'm just trying to better understand how it chooses which Postgres service to start.
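On a systemd host, the first thing I'd check (a sketch; the unit names postgresql and postgresql-10 assume the stock RHEL/CentOS packaging) is which units are enabled to start at boot versus which is actually running:

# List Postgres units and their boot-time enablement
systemctl list-unit-files | grep -i postgres
# Check which one is active right now
systemctl is-active postgresql postgresql-10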
07-20-2020
08:10 AM
Hello, I'm running into an issue on a system that has two different versions of Postgres installed. The system is running Ambari 2.7.3.0, and my intention is to run Ambari Server with Postgres v10. However, I recently noticed it was running with version 9.2: the postgresql (9.2) service was active while the postgresql-10 service was inactive. Back when I upgraded the system to Ambari 2.7.3.0, I confirmed it was up and running with Postgres 10, but after rebooting the machine it's now running with version 9.2. Is there an Ambari configuration/setting that tells it which version of Postgres to choose/run with?
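If it turns out that systemd, rather than Ambari, is picking the database at boot, pinning the host to v10 might look like this (a sketch; unit names again assume the stock packaging, and you'd want Ambari stopped first):

# Keep 9.2 from coming up at boot and make sure v10 does
sudo systemctl disable --now postgresql
sudo systemctl enable --now postgresql-10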
04-22-2020
11:23 AM
I have a custom filter that extends org.apache.hadoop.hbase.filter.FilterBase and overrides public boolean filterRowKey(byte[] data, int offset, int length) {...}. I have INFO-level log statements in this method, and I'm trying to find where those logs are written when the MapReduce job runs. Does anyone happen to know where they would appear? Typically I look at the JobHistory Server logs for the mappers/reducers (syslog), but I don't see my log entries there. I think that makes sense, because those are logs written when the map()/reduce() methods are called, whereas the logs I'm looking for occur earlier, at the point where records are filtered out of the table. I'm trying to understand why my filter isn't working and was hoping some debug logs would clarify what's going on. Any thoughts are appreciated!
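One place worth checking (a sketch; the application ID and the MyCustomFilter grep pattern are hypothetical) is the aggregated YARN container logs — though if the filter is evaluated server-side, its output lands in the RegionServer logs instead, as the follow-up post above notes:

# Pull all container logs for the job and search for the filter's output
yarn logs -applicationId application_1234567890123_0001 | grep "MyCustomFilter"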
- Tags:
- Filter Logs
- HBase
02-27-2020
07:19 AM
Thanks for your response, and sorry for the late reply, but I wanted to pass this along in case someone else runs into this issue. I did some more testing, and what I found is that I had an incompatible postgresql-jdbc driver. I was upgrading from 9.6 to 10.x, and the JDBC driver I was using was quite old and incompatible with version 10.x. Once I upgraded the postgresql-jdbc driver, everything worked fine. Here is what I did to resolve the issue:

wget https://jdbc.postgresql.org/download/postgresql-42.2.9.jar
mv /usr/share/java/postgresql-jdbc.jar /usr/share/java/postgresql-jdbc.jar.old   # back up the old driver
cp ~/postgresql-42.2.9.jar /usr/share/java/postgresql-jdbc.jar
ambari-server setup --jdbc-db=postgres --jdbc-driver=/usr/share/java/postgresql-jdbc.jar
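If anyone follows these steps, a restart afterwards lets Ambari pick up the new driver (standard ambari-server subcommand):

ambari-server restart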
12-02-2019
01:11 PM
I've upgraded Ambari to 2.7.3.0, and along with Ambari I upgraded Postgres from v9 to v10. Since the Postgres upgrade, I can no longer start the Ranger Admin service from Ambari. I keep seeing the notification: "Ranger admin service is not reachable, please restart the service." When I try to restart the service, I see the following in /var/log/ranger/admin/catalina.out (stack trace below).
Any ideas on what is going on here? I suspect a conflict between the v9 and v10 Postgres databases, but I'm not entirely sure.
Dec 02, 2019 2:11:13 PM org.apache.catalina.core.StandardContext listenerStart
SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kmsKeyMgr': Injection of autowired dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Could not autowire field: org.apache.ranger.biz.ServiceDBStore org.apache.ranger.biz.KmsKeyMgr.svcStore; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'serviceDBStore': Invocation of init method failed; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20131113-a7346c6): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Connections could not be acquired from the underlying database!
Error Code: 0
	at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:287)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1106)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:517)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:456)
	at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:294)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:225)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:291)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:193)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:605)
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:925)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:472)
	at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:383)
	at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:283)
	at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111)
	at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5068)
	at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5584)
	at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
	at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1572)
	at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1562)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: org.apache.ranger.biz.ServiceDBStore org.apache.ranger.biz.KmsKeyMgr.svcStore; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'serviceDBStore': Invocation of init method failed; nested exception is org.springframework.transaction.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20131113-a7346c6): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLException: Connections could not be acquired from the underlying database!
Error Code: 0
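This looks like the thread resolved in the 02-27-2020 reply above (an incompatible postgresql-jdbc driver). A useful first check for anyone hitting the same "Connections could not be acquired" error is which JDBC driver jar is in place (a sketch; the path is the one used elsewhere in this thread):

# Inspect the Postgres JDBC driver jar and its version
ls -l /usr/share/java/postgresql-jdbc.jar
unzip -p /usr/share/java/postgresql-jdbc.jar META-INF/MANIFEST.MF | grep -i version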
06-14-2019
06:26 PM
For those who might find this down the road: I just realized I wasn't connecting to the rangerkms database as the correct user. To back up the database, I used the rangerkms user (instead of rangeradmin, as in my original post):

sudo pg_dump -U rangerkms rangerkms > rangerkms.sql
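And the matching restore (a sketch; it assumes the same rangerkms role and an existing, empty target database):

psql -U rangerkms -d rangerkms -f rangerkms.sql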
06-13-2019
09:32 PM
Hi, can anyone provide the procedure for backing up Ranger KMS? I've backed up the ranger and ranger audit databases using the Postgres dump command (pg_dump), but I'm running into an issue when I try the same procedure for the rangerkms database. I tried the command below and got the error shown. To access Ranger KMS I have to log in as the keyadmin user, so I was wondering whether that's my issue — that I'm trying to back up this database as the wrong user? Any thoughts or suggestions are appreciated, as I haven't been able to find any support articles on this. Thanks in advance...

pg_dump -U rangeradmin rangerkms > rangerkms.sql
Password:
pg_dump: [archiver (db)] query failed: ERROR: permission denied for relation ranger_masterkey
pg_dump: [archiver (db)] query was: LOCK TABLE public.ranger_masterkey IN ACCESS SHARE MODE
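A quick way to see which role can actually dump that table (a sketch; it assumes you can connect as the postgres superuser) is to check its owner:

# The Owner column shows which role to run pg_dump as
psql -U postgres -d rangerkms -c "\dt public.ranger_masterkey"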
- Tags:
- ranger-kms
- solutions
12-14-2018
06:25 PM
I'm trying to determine the correct YARN memory settings for our Hadoop cluster, whose nodes have 32GB of RAM. We applied the settings recommended (via Ambari) by the yarn-utils script referenced in the "Determining HDP Memory Configuration Settings" article (link below). It seems to work fine most of the time, but I started to question the settings when one of my more experienced team members thought they looked too high. I noticed the Reserved Memory Allocations table has no recommended values for nodes with 32GB of RAM. Is there a specific reason for this? I also looked at the yarn-utils Python script: it uses two tables to calculate the memory reserved for the OS, HBase, etc., and there are no 32GB entries in those tables either. Has anyone else come across this, and what settings did you apply for 32GB of RAM? Any suggestions are very much appreciated! 🙂 Thanks!

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_command-line-installation/content/determine-hdp-memory-config.html

Here are the outputs from running the yarn-utils script (note: I modified it slightly to add some log statements):

$ python yarn-utils.py -c 4 -m 32 -d 1 -k True
Using cores=4 memory=32GB disks=1 hbase=True
Min Container Size: 2048
Reserved Stack Memory: 1
Reserved HBase Memory: 2
Profile: cores=4 memory=29696MB reserved=3GB usableMem=29GB disks=1
Num Container=3
Container Ram=9728MB
Used Ram=28GB
Unused Ram=3GB
yarn.scheduler.minimum-allocation-mb=9728
yarn.scheduler.maximum-allocation-mb=29184
yarn.nodemanager.resource.memory-mb=29184
mapreduce.map.memory.mb=9728
mapreduce.map.java.opts=-Xmx7782m
mapreduce.reduce.memory.mb=9728
mapreduce.reduce.java.opts=-Xmx7782m
yarn.app.mapreduce.am.resource.mb=9728
yarn.app.mapreduce.am.command-opts=-Xmx7782m
mapreduce.task.io.sort.mb=3891
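Reading that output back into arithmetic (a sketch based only on the printed values; I haven't verified the script's formulas, and the three-container count looks like it may be a floor built into the script rather than a derived value):

# reserved    = 1 GB (stack) + 2 GB (HBase)    -> 3 GB
# usable      = 32 GB - 3 GB                   -> 29 GB (29696 MB)
# containers  = 3
# per node    = 3 x 9728 MB = 29184 MB         -> matches yarn.nodemanager.resource.memory-mb
# JVM heap    = 0.8 x 9728 MB ~ 7782 MB        -> matches -Xmx7782m
# io.sort.mb  = 0.4 x 9728 MB ~ 3891 MB        -> matches mapreduce.task.io.sort.mb

A 9728MB scheduler minimum on a 32GB node is unusually coarse, which may be exactly what looked "too high" to your colleague.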