Member since
02-18-2021
13 Posts
0 Kudos Received
1 Solution
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 7152 | 04-15-2021 06:35 AM
11-07-2021
10:37 PM
When I use a shell or Spark action, things work as expected. This is the contents of the workflow.xml:
<workflow-app name="Test Java Action" xmlns="uri:oozie:workflow:0.5">
<start to="java-b4b4"/>
<kill name="Kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<action name="java-b4b4">
<java>
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<main-class>za.co.sanlam.App</main-class>
<file>/user/g983797/test-1.0-SNAPSHOT.jar#test-1.0-SNAPSHOT.jar</file>
</java>
<ok to="End"/>
<error to="Kill"/>
</action>
<end name="End"/>
</workflow-app>
Yes, agreed, that's what I expected: I expected to see some logging showing that the main class was invoked. Given how simple the setup of the action is, I am very confused about how to make it do anything else. It makes no sense what it's actually doing or why it's not running.
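Since shell actions do work for me, one cross-check I may try is launching the same main class through a shell action, just to confirm that the jar and class resolve on the cluster nodes. This is only a sketch: the action name is made up, the paths mirror the java action above, and it assumes java is on the PATH of the NodeManager user.
<action name="shell-check">
    <shell xmlns="uri:oozie:shell-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <!-- run the same main class directly from the shipped jar -->
        <exec>java</exec>
        <argument>-cp</argument>
        <argument>test-1.0-SNAPSHOT.jar</argument>
        <argument>za.co.sanlam.App</argument>
        <file>/user/g983797/test-1.0-SNAPSHOT.jar#test-1.0-SNAPSHOT.jar</file>
    </shell>
    <ok to="End"/>
    <error to="Kill"/>
</action>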
11-04-2021
12:58 PM
I am trying to use the Oozie Java action to run a hello-world Java program to test the functionality, but no matter what I do, Oozie seems to do something and passes, yet it never runs my Java program. I confirmed this by entering garbage in the 'Main class' field: it still passes and doesn't do anything useful. So what is Oozie actually doing, and how do I get it to run the jar? (It is uploaded to HDFS.) These are the logs, which show no attempt at running my jar:
Log Type: stdout
Log Upload Time: Thu Nov 04 21:57:01 +0200 2021
Log Length: 3809
log4j: Trying to find [container-log4j.properties] using context classloader sun.misc.Launcher$AppClassLoader@5e2de80c.
log4j: Using URL [jar:file:/opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p48.16116877/jars/hadoop-yarn-server-nodemanager-3.1.1.7.1.4.48-1.jar!/container-log4j.properties] for automatic log4j configuration.
log4j: Reading configuration from URL jar:file:/opt/cloudera/parcels/CDH-7.1.4-1.cdh7.1.4.p48.16116877/jars/hadoop-yarn-server-nodemanager-3.1.1.7.1.4.48-1.jar!/container-log4j.properties
log4j: Hierarchy threshold set to [ALL].
log4j: Parsing for [root] with value=[INFO,CLA, EventCounter].
log4j: Level token is [INFO].
log4j: Category root set to INFO
log4j: Parsing appender named "CLA".
log4j: Parsing layout options for "CLA".
log4j: Setting property [conversionPattern] to [%d{ISO8601} %p [%t] %c: %m%n].
log4j: End of parsing for "CLA".
log4j: Setting property [containerLogFile] to [syslog].
log4j: Setting property [totalLogFileSize] to [1048576].
log4j: Setting property [containerLogDir] to [/data/06/yarn/container-logs/application_1635807677896_29253/container_e33_1635807677896_29253_01_000001].
log4j: setFile called: /data/06/yarn/container-logs/application_1635807677896_29253/container_e33_1635807677896_29253_01_000001/syslog, true
log4j: setFile ended
log4j: Parsed "CLA" options.
log4j: Parsing appender named "EventCounter".
log4j: Parsed "EventCounter" options.
log4j: Parsing for [org.apache.hadoop.mapreduce.task.reduce] with value=[INFO,CLA].
log4j: Level token is [INFO].
log4j: Category org.apache.hadoop.mapreduce.task.reduce set to INFO
log4j: Parsing appender named "CLA".
log4j: Appender "CLA" was already parsed.
log4j: Handling log4j.additivity.org.apache.hadoop.mapreduce.task.reduce=[false]
log4j: Setting additivity for "org.apache.hadoop.mapreduce.task.reduce" to false
log4j: Parsing for [org.apache.hadoop.mapred.Merger] with value=[INFO,CLA].
log4j: Level token is [INFO].
log4j: Category org.apache.hadoop.mapred.Merger set to INFO
log4j: Parsing appender named "CLA".
log4j: Appender "CLA" was already parsed.
log4j: Handling log4j.additivity.org.apache.hadoop.mapred.Merger=[false]
log4j: Setting additivity for "org.apache.hadoop.mapred.Merger" to false
log4j: Finished configuring.
Launcher AM configuration loaded
Executing Oozie Launcher with tokens:
Kind: YARN_AM_RM_TOKEN, Service: , Ident: (appAttemptId { application_id { id: 29253 cluster_timestamp: 1635807677896 } attemptId: 1 } keyId: -581583983)
Kind: RM_DELEGATION_TOKEN, Service: 192.168.80.67:8032,192.168.80.68:8032, Ident: (owner=g983797, renewer=yarn, realUser=oozie/srv009066.mud.internal.co.za@ANDROMEDA.CLOUDERA, issueDate=1636055816551, maxDate=1636660616551, sequenceNumber=455083, masterKeyId=395)
Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (token for g983797: owner=g983797, renewer=yarn, realUser=oozie/srv009066.mud.internal.co.za@ANDROMEDA.CLOUDERA, issueDate=1636055816526, maxDate=1636660616526, sequenceNumber=23258625, masterKeyId=1399)
Kind: MR_DELEGATION_TOKEN, Service: 192.168.80.68:10020, Ident: (owner=g983797, renewer=yarn, realUser=oozie/srv009066.mud.internal.co.za@ANDROMEDA.CLOUDERA, issueDate=1636055816543, maxDate=1636660616543, sequenceNumber=5124, masterKeyId=4)
Oozie Launcher, uploading action data to HDFS sequence file: hdfs://nameservice1/user/g983797/oozie-oozi/0485072-211028195417634-oozie-oozi-W/java-b4b4--java/action-data.seq
Stopping AM
Callback notification attempts left 0
Callback notification trying http://cloudera.sanlam.co.za:11000/oozie/callback?id=0485072-211028195417634-oozie-oozi-W@java-b4b4&status=SUCCEEDED
Callback notification to http://cloudera.sanlam.co.za:11000/oozie/callback?id=0485072-211028195417634-oozie-oozi-W@java-b4b4&status=SUCCEEDED succeeded
Callback notification succeeded
My Java program does this for now:
package za.co.sanlam;
/**
* Hello world!
*
*/
public class App
{
public static void main( String[] args )
{
System.out.println( "Hello World!" );
System.out.println( "Hello World!" );
System.out.println( "Hello World!" );
System.out.println( "Hello World!" );
System.out.println( "Hello World!" );
System.out.println( "Hello World!" );
System.out.println( "Hello World!" );
System.out.println( "Hello World!" );
System.out.println( "Hello World!" );
}
}
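One debugging idea, offered as an assumption rather than a known fix: pass a JVM flag through <java-opts> so that the launcher's stdout at least shows whether za.co.sanlam.App is ever loaded. A sketch of the action with that flag added:
<action name="java-b4b4">
    <java>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <main-class>za.co.sanlam.App</main-class>
        <!-- -verbose:class makes the JVM print every class it loads to stdout -->
        <java-opts>-verbose:class</java-opts>
        <file>/user/g983797/test-1.0-SNAPSHOT.jar#test-1.0-SNAPSHOT.jar</file>
    </java>
    <ok to="End"/>
    <error to="Kill"/>
</action>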
Labels:
- Apache Oozie
06-23-2021
01:28 AM
I am using Hive metastore version 3.1.0 in remote mode with Postgres as the underlying database. I recently upgraded from using embedded mode and noticed the following errors:
2021-06-22 11:44:04
Caused by: org.postgresql.util.PSQLException: ERROR: insert or update on table "KEY_CONSTRAINTS" violates foreign key constraint "KEY_CONSTRAINTS_FK3" .. 2021-06-22 09:44:08,978 INFO [pool-6-thread-84] HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(347)) - ugi=hadoop ip=10.1.5.132 cmd=source:10.1.5.132 get_databases: #
2021-06-22 09:44:08,978 INFO [pool-6-thread-84] metastore.HiveMetaStore (HiveMetaStore.java:logInfo(895)) - 84: source:10.1.5.132 get_databases: #
2021-06-22 09:44:08,952 INFO [pool-6-thread-84] HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(347)) - ugi=hadoop ip=10.1.5.132 cmd=source:10.1.5.132 get_foreign_keys : parentdb=null parenttbl=null foreigndb=default foreigntbl=hiveserver_runtime_stats
2021-06-22 09:44:08,951 INFO [pool-6-thread-84] metastore.HiveMetaStore (HiveMetaStore.java:logInfo(895)) - 84: source:10.1.5.132 get_foreign_keys : parentdb=null parenttbl=null foreigndb=default foreigntbl=hiveserver_runtime_stats
2021-06-22 09:44:08,945 INFO [pool-6-thread-84] HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(347)) - ugi=hadoop ip=10.1.5.132 cmd=source:10.1.5.132 get_unique_constraints : tbl=hive.default.hiveserver_runtime_stats
2021-06-22 09:44:08,945 INFO [pool-6-thread-84] metastore.HiveMetaStore (HiveMetaStore.java:logInfo(895)) - 84: source:10.1.5.132 get_unique_constraints : tbl=hive.default.hiveserver_runtime_stats
2021-06-22 09:44:08,940 INFO [pool-6-thread-84] HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(347)) - ugi=hadoop ip=10.1.5.132 cmd=source:10.1.5.132 get_primary_keys : tbl=hive.default.hiveserver_runtime_stats
2021-06-22 09:44:08,939 INFO [pool-6-thread-84] metastore.HiveMetaStore (HiveMetaStore.java:logInfo(895)) - 84: source:10.1.5.132 get_primary_keys : tbl=hive.default.hiveserver_runtime_stats
2021-06-22 09:44:08,933 INFO [pool-6-thread-84] HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(347)) - ugi=hadoop ip=10.1.5.132 cmd=source:10.1.5.132 get_primary_keys : tbl=hive.default.hiveserver_runtime_stats
2021-06-22 09:44:08,933 INFO [pool-6-thread-84] metastore.HiveMetaStore (HiveMetaStore.java:logInfo(895)) - 84: source:10.1.5.132 get_primary_keys : tbl=hive.default.hiveserver_runtime_stats
2021-06-22 09:44:08,927 INFO [pool-6-thread-84] HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(347)) - ugi=hadoop ip=10.1.5.132 cmd=source:10.1.5.132 get_not_null_constraints : tbl=hive.default.hiveserver_runtime_stats
2021-06-22 09:44:08,926 INFO [pool-6-thread-84] metastore.HiveMetaStore (HiveMetaStore.java:logInfo(895)) - 84: source:10.1.5.132 get_not_null_constraints : tbl=hive.default.hiveserver_runtime_stats
2021-06-22 09:44:08,870 INFO [pool-6-thread-84] HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(347)) - ugi=hadoop ip=10.1.5.132 cmd=source:10.1.5.132 get_table : tbl=hive.default.hiveserver_runtime_stats
2021-06-22 09:44:08,870 INFO [pool-6-thread-84] metastore.HiveMetaStore (HiveMetaStore.java:logInfo(895)) - 84: source:10.1.5.132 get_table : tbl=hive.default.hiveserver_runtime_stats
2021-06-22 09:44:08,855 INFO [pool-6-thread-84] txn.TxnHandler (TxnHandler.java:openTxns(644)) - Added entries to MIN_HISTORY_LEVEL for current txns: ([1210814]) with min_open_txn: 1210814
2021-06-22 09:44:08,719 INFO [pool-6-thread-84] metastore.ObjectStore (ObjectStore.java:setConf(396)) - Initialized ObjectStore
2021-06-22 09:44:08,719 INFO [pool-6-thread-84] metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:<init>(186)) - Using direct SQL, underlying DB is POSTGRES
2021-06-22 09:44:08,705 INFO [pool-6-thread-84] metastore.ObjectStore (ObjectStore.java:initializeHelper(481)) - ObjectStore, initialize called
2021-06-22 09:44:08,704 INFO [pool-6-thread-84] metastore.HiveMetaStore (HiveMetaStore.java:newRawStoreForConf(717)) - 84: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2021-06-22 09:44:08,704 INFO [pool-6-thread-84] HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(347)) - ugi=hadoop ip=10.1.5.132 cmd=source:10.1.5.132 get_database: #default
2021-06-22 09:44:08,704 INFO [pool-6-thread-84] metastore.HiveMetaStore (HiveMetaStore.java:logInfo(895)) - 84: source:10.1.5.132 get_database: #default
2021-06-22 09:44:04,989 INFO [pool-6-thread-110] txn.TxnHandler (TxnHandler.java:commitTxn(1015)) - Removed committed transaction: (1210813) from MIN_HISTORY_LEVEL
2021-06-22 09:44:04,982 INFO [pool-6-thread-110] txn.TxnHandler (TxnHandler.java:commitTxn(1000)) - Expected to move at least one record from txn_components to completed_txn_components when committing txn! txnid:1210813
... 43 more
at org.datanucleus.store.rdbms.table.TableImpl.validateConstraints(TableImpl.java:394)
at org.datanucleus.store.rdbms.table.TableImpl.validateForeignKeys(TableImpl.java:468)
at org.datanucleus.store.rdbms.table.TableImpl.createForeignKeys(TableImpl.java:522)
at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:879)
at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
Detail: Key (PARENT_CD_ID)=(1450) is not present in table "CDS".
Caused by: org.postgresql.util.PSQLException: ERROR: insert or update on table "KEY_CONSTRAINTS" violates foreign key constraint "KEY_CONSTRAINTS_FK3" at java.lang.Thread.run(Thread.java:748)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:119)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
at javax.security.auth.Subject.doAs(Subject.java:422)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:111)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:15052)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:15068)
at com.sun.proxy.$Proxy26.drop_table_with_environment_context(Unknown Source)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:2697)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:2526)
at com.sun.proxy.$Proxy25.dropTable(Unknown Source)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at org.apache.hadoop.hive.metastore.ObjectStore.dropTable(ObjectStore.java:1407)
at org.apache.hadoop.hive.metastore.ObjectStore.listAllTableConstraintsWithOptionalConstraintName(ObjectStore.java:1487)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:228)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:368)
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1744)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1816)
at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:347)
at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:865)
at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidates(RDBMSQueryUtils.java:425)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:672)
at org.datanucleus.store.rdbms.RDBMSStoreManager.manageClasses(RDBMSStoreManager.java:1627)
at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:119)
at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2896)
at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3471)
at org.datanucleus.store.rdbms.table.ClassTable.validateConstraints(ClassTable.java:3576)
at org.datanucleus.store.rdbms.table.TableImpl.validateConstraints(TableImpl.java:395)
at org.datanucleus.store.rdbms.table.TableImpl.validateIndices(TableImpl.java:566)
at org.datanucleus.store.rdbms.table.TableImpl.getExistingIndices(TableImpl.java:1116)
at org.datanucleus.store.rdbms.schema.RDBMSSchemaHandler.getSchemaData(RDBMSSchemaHandler.java:333)
at org.datanucleus.store.rdbms.schema.RDBMSSchemaHandler.getRDBMSTableIndexInfoForTable(RDBMSSchemaHandler.java:783)
at org.datanucleus.store.rdbms.schema.RDBMSSchemaHandler.getRDBMSTableIndexInfoForTable(RDBMSSchemaHandler.java:813)
at org.apache.commons.dbcp.DelegatingDatabaseMetaData.getIndexInfo(DelegatingDatabaseMetaData.java:327)
at org.postgresql.jdbc.PgDatabaseMetaData.getIndexInfo(PgDatabaseMetaData.java:2401)
at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:224)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:270)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:293)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:307)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block
Nested Throwables StackTrace:
... 43 more
at org.datanucleus.store.rdbms.table.TableImpl.validateConstraints(TableImpl.java:394)
at org.datanucleus.store.rdbms.table.TableImpl.validateForeignKeys(TableImpl.java:468)
at org.datanucleus.store.rdbms.table.TableImpl.createForeignKeys(TableImpl.java:522)
at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:879)
at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:266)
An analysis of the error shows that some process is writing to the table "KEY_CONSTRAINTS" but not adding the corresponding primary key to the table "CDS". I attempted to manually add the missing keys to the CDS table, but a couple of hours later the problem returned, only with a new set of keys. At this stage I'm not sure what is causing this issue: perhaps the move to remote mode, or a previous upgrade from 3.0.0 to 3.1.0? And how do I fix it? Here we can see the tables affected. My questions are:
- Is there any documentation to help understand each of these tables? More specifically, what is the function of the "KEY_CONSTRAINTS" and "CDS" tables?
- What processes write to these two tables? I need to understand why there is a discrepancy in the keys between them.
I would really appreciate any comments, answers to the above questions, or any useful information to help me debug further.
Labels:
- Apache Hive
06-16-2021
11:00 PM
Thank you, this worked. I have another question: this particular job has now failed again with OOM, but what's weird is that if I run it with the same driver memory settings via spark-submit it uses considerably less memory, whereas using Oozie I keep getting failures even though I continue to increase the AM container size. Is there any particular reason why the AM container uses much more memory under Oozie?
06-15-2021
08:39 AM
Hi, I have been struggling to get my Oozie jobs that run a single Spark action to complete without an error. I am using client mode. When I run the Spark job via the command line with spark-submit there is no issue, and from what I understand this is because I am able to set the driver memory settings; via Oozie, however, the launcher hosts the driver, so this launcher container now needs to match the memory settings of my driver or it will fail. There are many blogs and articles I have read, but none of them have helped:
https://stackoverflow.com/questions/24262896/oozie-shell-action-memory-limit
https://community.cloudera.com/t5/Support-Questions/Oozie-Spark-Action-getting-out-java-lang-OutOfMemoryError/td-p/39732
http://www.openkb.info/2016/07/memory-allocation-for-oozie-launcher-job.html
https://stackoverflow.com/questions/42785649/oozie-workflow-with-spark-application-reports-out-of-memory
Part of the issue is that I don't know where in the XML these settings should go, and since we create the XML via the UI, where in the UI should I enter these settings? Possible places are: here in the Spark settings? Or perhaps here under the workflow settings? Or maybe the settings themselves are wrong? The Oozie docs are not clear. When the Oozie workflow fails, the error looks like this:
Application application_1623355676175_49420 failed 2 times due to AM Container for appattempt_1623355676175_49420_000002 exited with exitCode: -104
Failing this attempt. Diagnostics: [2021-06-15 16:38:17.747]Container [pid=1475386,containerID=container_e09_1623355676175_49420_02_000001] is running 5484544B beyond the 'PHYSICAL' memory limit. Current usage: 2.0 GB of 2 GB physical memory used; 4.4 GB of 4.2 GB virtual memory used. Killing container. ....
I tried to make sense of the error, and my Oozie XML file looks like this:
<workflow-app name="WF - Spark Matching Job" xmlns="uri:oozie:workflow:0.5">
<global>
<configuration>
<property>
<name>oozie.launcher.yarn.app.mapreduce.am.resource.mb</name>
<value>5120</value>
</property>
<property>
<name>oozie.launcher.mapreduce.map.memory.mb</name>
<value>5120</value>
</property>
<property>
<name>oozie.launcher.mapreduce.map.java.opts</name>
<value>-Xmx5120m</value>
</property>
</configuration>
</global>
<credentials>
<credential name="hcat" type="hcat">
<property>
<name>hcat.metastore.uri</name>
<value>thrift://XXXXXX:9083</value>
</property>
<property>
<name>hcat.metastore.principal</name>
<value>hive/srv006121.mud.internal.co.za@ANDROMEDA.CLOUDERA</value>
</property>
</credential>
<credential name="hive2" type="hive2">
<property>
<name>hive2.jdbc.url</name>
<value>jdbc:hive2://XXXXXXXX:10000/default</value>
</property>
<property>
<name>hive2.server.principal</name>
<value>XXXXX</value>
</property>
</credential>
</credentials>
<start to="spark-9223"/>
<kill name="Kill">
<message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<action name="spark-9223" cred="hive2,hcat">
<spark xmlns="uri:oozie:spark-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<master>yarn</master>
<mode>client</mode>
<name></name>
<class>za.co.sanlam.custhub.spark.SparkApp</class>
<jar>party-matching-rules-1.0.0-SNAPSHOT-jar-with-dependencies.jar</jar>
<spark-opts>--driver-memory 6g --driver-cores 2 --executor-memory 16g --executor-cores 5 --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:custom-spark-log4j.properties" --py-files hive-warehouse-connector-assembly-1.0.0.7.1.4.20-2.jar</spark-opts>
<arg>-c</arg>
<arg>hdfs:///user/g983797/spark-tests/andromeda/runs/sprint_12_1/metadata.json</arg>
<arg>-e</arg>
<arg>dev</arg>
<arg>-v</arg>
<arg>version-1.0.0</arg>
<arg>-r</arg>
<arg>ruleset1</arg>
<arg>-dt</arg>
<arg>1900-01-01</arg>
<file>/user/g983797/spark-tests/andromeda/runs/sprint_12_1/party-matching-rules-1.0.0-SNAPSHOT-jar-with-dependencies.jar#party-matching-rules-1.0.0-SNAPSHOT-jar-with-dependencies.jar</file>
<file>/user/g983797/spark-tests/hive-warehouse-connector-assembly-1.0.0.7.1.4.20-2.jar#hive-warehouse-connector-assembly-1.0.0.7.1.4.20-2.jar</file>
<file>/user/g983797/spark-tests/andromeda/runs/sprint_12_1/spark-defaults.conf#spark-defaults.conf</file>
<file>/user/g983797/spark-tests/andromeda/runs/sprint_12_1/custom-spark-log4j.properties#custom-spark-log4j.properties</file>
</spark>
<ok to="email-3828"/>
<error to="email-879a"/>
</action>
<action name="email-3828">
<email xmlns="uri:oozie:email-action:0.2">
<to>XXXXXX</to>
<subject>Matching Job Finished Successfully</subject>
<body>The matching job finished</body>
<content_type>text/plain</content_type>
</email>
<ok to="End"/>
<error to="Kill"/>
</action>
<action name="email-879a">
<email xmlns="uri:oozie:email-action:0.2">
<to>XXXXXXX</to>
<subject>Matching Job Failed</subject>
<body>Matching Job Failed</body>
<content_type>text/plain</content_type>
</email>
<ok to="End"/>
<error to="Kill"/>
</action>
<end name="End"/>
</workflow-app>
I have tried a number of different configuration settings and values in different places, but no matter what, the Oozie launcher is launched with a memory limit of 2 GB. Could somebody who has been able to get around this please assist?
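For reference, a sketch of where I would try action-level launcher settings. The property names here (oozie.launcher.memory.mb and oozie.launcher.vcores) are an assumption about the Oozie 5 launcher that ships with CDP, not something I have confirmed, so verify them against the Oozie version in use. The configuration block would sit inside the spark action, after <name-node> and before <master>:
<!-- Sketch only: assumed Oozie 5 launcher sizing; in client mode the launcher hosts the driver -->
<configuration>
    <property>
        <name>oozie.launcher.memory.mb</name>
        <value>8192</value>
    </property>
    <property>
        <name>oozie.launcher.vcores</name>
        <value>2</value>
    </property>
</configuration>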
Labels:
- Apache Oozie
- Apache Spark
04-15-2021
06:35 AM
For anybody facing this issue, I found that it was the corporate firewall/proxy blocking access to the Cloudera repository via Maven. The solution is to add the Cloudera repository to a corporate Nexus or Artifactory proxy group, as one is likely already in use and it is trusted by Maven. You must remove the Cloudera repository reference from your pom and from the settings.xml, or it still fails.
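As a sketch of what that looks like in practice (the mirror id and URL below are placeholders for whatever your internal Nexus/Artifactory group exposes), the settings.xml ends up with a single corporate mirror and no direct Cloudera repository entry:
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <mirrors>
    <mirror>
      <!-- placeholder id and URL: point this at the proxy group that fronts Central and Cloudera -->
      <id>corporate-group</id>
      <name>Corporate proxy group (Central + Cloudera)</name>
      <url>https://nexus.example.internal/repository/maven-all/</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>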
04-12-2021
08:20 AM
I have done a test, and pulling the normal Spark dependencies from Maven Central works fine. So this works:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>XXXXX</groupId>
<artifactId>XXXXXXXX</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>jar</packaging>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<java.version>1.8</java.version>
<scala.version>2.12</scala.version>
<spark.version>3.0.0</spark.version>
<hwc.version>1.0.0.7.1.4.9-11</hwc.version>
<maven-compiler-plugin.version>3.8.0</maven-compiler-plugin.version>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_${scala.version}</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_${scala.version}</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
<scope>test</scope>
</dependency>
</dependencies>
<repositories>
<repository>
<id>central</id>
<name>Maven Plugin Repository</name>
<url>https://repo1.maven.org/maven2</url>
<layout>default</layout>
<snapshots>
<enabled>false</enabled>
</snapshots>
<releases>
<updatePolicy>never</updatePolicy>
</releases>
</repository>
</repositories>
<!-- Build -->
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven-compiler-plugin.version}</version>
<configuration>
<source>${java.version}</source>
<target>${java.version}</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
But this gives an error:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>XXXXX</groupId>
<artifactId>XXXXXXXX</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>jar</packaging>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<java.version>1.8</java.version>
<scala.version>2.11</scala.version>
<spark.version>2.4.0.7.1.4.9-1</spark.version>
<spark.scope>provided</spark.scope>
<hwc.version>1.0.0.7.1.4.9-11</hwc.version>
<maven-compiler-plugin.version>3.8.0</maven-compiler-plugin.version>
</properties>
<dependencies>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>${spark.version}</version>
<scope>${spark.scope}</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>${spark.version}</version>
<scope>${spark.scope}</scope>
</dependency>
<!--We are using HIVE 3.1.3-->
<dependency>
<groupId>org.apache.hive</groupId>
<artifactId>hive-jdbc</artifactId>
<version>3.1.3000.7.1.4.9-1</version>
<scope>${spark.scope}</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-hive_2.11</artifactId>
<version>${spark.version}</version>
<scope>${spark.scope}</scope>
</dependency>
<dependency>
<groupId>com.hortonworks.hive</groupId>
<artifactId>hive-warehouse-connector_2.11</artifactId>
<version>1.0.0.7.1.4.9-1</version>
<scope>${spark.scope}</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
<scope>test</scope>
</dependency>
</dependencies>
<repositories>
<repository>
<id>cloudera</id>
<name>Cloudera public repo</name>
<url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>
<repository>
<id>central</id>
<name>Maven Plugin Repository</name>
<url>https://repo1.maven.org/maven2</url>
<layout>default</layout>
</repository>
</repositories>
<!-- Build -->
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven-compiler-plugin.version}</version>
<configuration>
<source>${java.version}</source>
<target>${java.version}</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
The error again is:
[INFO] Building XXXXXXXX 1.0.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
Downloading from internal-repository: https://repo1.maven.org/maven2/org/apache/spark/spark-core_2.11/2.4.0.7.1.4.9-1/spark-core_2.11-2.4.0.7.1.4.9-1.pom
Downloading from internal-repository: https://repo1.maven.org/maven2/org/apache/spark/spark-sql_2.11/2.4.0.7.1.4.9-1/spark-sql_2.11-2.4.0.7.1.4.9-1.pom
Downloading from internal-repository: https://repo1.maven.org/maven2/org/apache/hive/hive-jdbc/3.1.3000.7.1.4.9-1/hive-jdbc-3.1.3000.7.1.4.9-1.pom
Downloading from internal-repository: https://repo1.maven.org/maven2/org/apache/spark/spark-hive_2.11/2.4.0.7.1.4.9-1/spark-hive_2.11-2.4.0.7.1.4.9-1.pom
Downloading from internal-repository: https://repo1.maven.org/maven2/com/hortonworks/hive/hive-warehouse-connector_2.11/1.0.0.7.1.4.9-1/hive-warehouse-connector_2.11-1.0.0.7.1.4.9-1.pom
[INFO] ------------------------------------------------------------------------
/repo1.maven.org/maven2): Transfer failed for https://repo1.maven.org/maven2/org/apache/spark/spark-core_2.11/2.4.0.7.1.4.9-1/spark-core_2.11-2.4.0.7.1.4.9-1.pom ProxyInfo{host='proxysouth.mud.internal.co.za', userName='null', port=8080, type='http', nonProxyHosts='null'}: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
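Looking at the download URLs above, the Cloudera-versioned artifacts are being requested from repo1.maven.org through a mirror named internal-repository, which suggests a settings.xml mirror with mirrorOf set to * is overriding the cloudera repository declared in the pom. If that is the case, one option (a sketch; the URL is whatever the existing mirror already points at) would be to exclude the cloudera repository id from that mirror:
<mirror>
  <id>internal-repository</id>
  <url>https://repo1.maven.org/maven2</url>
  <!-- route everything through the mirror except the repository with id "cloudera" -->
  <mirrorOf>*,!cloudera</mirrorOf>
</mirror>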
03-31-2021
10:42 AM
I am trying to build a new Maven Spark project, but I cannot seem to pull the Spark Maven dependencies. I keep getting the following error:
Could not resolve dependencies for project za.co.sanlam.custhub.spark:party-matching-rules:jar:1.0.0-SNAPSHOT: Failed to collect dependencies at org.apache.spark:spark-core_2.11:jar:2.4.0.7.1.4.9-1 -> org.apache.avro:avro:jar:1.8.2.7.1.4.9-1: Failed to read artifact descriptor for org.apache.avro:avro:jar:1.8.2.7.1.4.9-1: Could not transfer artifact org.apache.avro:avro:pom:1.8.2.7.1.4.9-1 from/to cloudera (https://repository.cloudera.com/artifactory/cloudera-repos/): Transfer failed for https://repository.cloudera.com/artifactory/cloudera-repos/org/apache/avro/avro/1.8.2.7.1.4.9-1/avro-1.8.2.7.1.4.9-1.pom ProxyInfo{host='10.0.0.132', userName='null', port=8080, type='http', nonProxyHosts='null'}: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target ->
This happens if I do a mvn clean compile or any Maven command, as it fails while downloading the dependencies. In my pom I have (nothing fancy):
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>za.co.my.project</groupId>
<artifactId>My-Project</artifactId>
<version>1.0.${revision}</version>
<name>My-Project</name>
<url>http://party-matching-rules</url>
<properties>
<revision>0-SNAPSHOT</revision> <!--Default if not given via ci-->
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
<java.version>1.8</java.version>
<scala.version>2.11</scala.version>
<spark.version>2.4.0.7.1.4.9-1</spark.version>
<spark.scope>provided</spark.scope>
<hwc.version>1.0.0.7.1.4.9-11</hwc.version>
<maven-compiler-plugin.version>3.8.0</maven-compiler-plugin.version>
<maven-shade-plugin.version>3.2.3</maven-shade-plugin.version>
</properties>
<dependencies>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>${spark.version}</version>
<scope>${spark.scope}</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>${spark.version}</version>
<scope>${spark.scope}</scope>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.hive</groupId>
<artifactId>hive-jdbc</artifactId>
<version>3.1.3000.7.1.4.9-1</version>
<scope>${spark.scope}</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-hive_2.11</artifactId>
<version>${spark.version}</version>
<scope>${spark.scope}</scope>
</dependency>
<dependency>
<groupId>com.hortonworks.hive</groupId>
<artifactId>hive-warehouse-connector_2.11</artifactId>
<version>1.0.0.7.1.4.9-1</version>
<scope>${spark.scope}</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-mllib_2.11</artifactId>
<version>${spark.version}</version>
<scope>${spark.scope}</scope>
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
</exclusion>
</exclusions>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven-compiler-plugin.version}</version>
<configuration>
<source>${java.version}</source>
<target>${java.version}</target>
</configuration>
</plugin>
</plugins>
</build>
<repositories>
<repository>
<id>cloudera</id>
<name>Cloudera public repo</name>
<url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>
<repository>
<id>central</id>
<name>Maven Plugin Repository</name>
<url>https://repo1.maven.org/maven2</url>
<layout>default</layout>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>central</id>
<name>Central Repository</name>
<url>https://repo.maven.apache.org/maven2</url>
<layout>default</layout>
</pluginRepository>
</pluginRepositories>
</project>
I have read that this means Java doesn't trust the SSL certificate from https://repository.cloudera.com. I am not quite sure how to proceed; I can't get past this point. Any help would be appreciated.
Labels:
- Apache Spark
02-22-2021
06:28 AM
If I add the hive-jdbc jar at compile time:
<dependency>
<groupId>org.apache.hive</groupId>
<artifactId>hive-jdbc</artifactId>
<version>3.1.3000.7.1.4.9-1</version>
<scope>provided</scope>
</dependency>
I can get around the class error, but then I get the following error:
21/02/22 16:08:21 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.RuntimeException: java.lang.NullPointerException
at com.hortonworks.spark.sql.hive.llap.JdbcInputPartition.createPartitionReader(JdbcInputPartition.java:34)
at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD.compute(DataSourceRDD.scala:42)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1289)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.NullPointerException
at scala.collection.immutable.StringLike$class.stripPrefix(StringLike.scala:156)
at scala.collection.immutable.StringOps.stripPrefix(StringOps.scala:29)
at com.hortonworks.spark.sql.hive.llap.JDBCWrapper.getConnector(HS2JDBCWrapper.scala:423)
at com.hortonworks.spark.sql.hive.llap.DefaultJDBCWrapper.getConnector(HS2JDBCWrapper.scala)
at com.hortonworks.spark.sql.hive.llap.util.QueryExecutionUtil.getConnection(QueryExecutionUtil.java:68)
at com.hortonworks.spark.sql.hive.llap.JdbcInputPartitionReader.getConnection(JdbcInputPartitionReader.java:60)
at com.hortonworks.spark.sql.hive.llap.JdbcInputPartitionReader.<init>(JdbcInputPartitionReader.java:39)
at com.hortonworks.spark.sql.hive.llap.JdbcInputPartition.createPartitionReader(JdbcInputPartition.java:32)
... 20 more
21/02/22 16:08:21 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): java.lang.RuntimeException: java.lang.NullPointerException
at com.hortonworks.spark.sql.hive.llap.JdbcInputPartition.createPartitionReader(JdbcInputPartition.java:34)
at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD.compute(DataSourceRDD.scala:42)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1289)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.NullPointerException
at scala.collection.immutable.StringLike$class.stripPrefix(StringLike.scala:156)
at scala.collection.immutable.StringOps.stripPrefix(StringOps.scala:29)
at com.hortonworks.spark.sql.hive.llap.JDBCWrapper.getConnector(HS2JDBCWrapper.scala:423)
at com.hortonworks.spark.sql.hive.llap.DefaultJDBCWrapper.getConnector(HS2JDBCWrapper.scala)
at com.hortonworks.spark.sql.hive.llap.util.QueryExecutionUtil.getConnection(QueryExecutionUtil.java:68)
at com.hortonworks.spark.sql.hive.llap.JdbcInputPartitionReader.getConnection(JdbcInputPartitionReader.java:60)
at com.hortonworks.spark.sql.hive.llap.JdbcInputPartitionReader.<init>(JdbcInputPartitionReader.java:39)
at com.hortonworks.spark.sql.hive.llap.JdbcInputPartition.createPartitionReader(JdbcInputPartition.java:32)
... 20 more
I can now perform a show databases, which is a win: hive.showDatabases().show(false); but I cannot read from an existing Hive table.
02-22-2021
03:33 AM
Thanks for the response. If I use your code exactly as-is and just change the URLs and config values, I get the exact same error as I previously reported. This is when I debug the Java program locally. When I build the jar and submit it, I get a permission error, which I expect, so I want to say it works. Is this perhaps an issue because I am trying to debug my code as I write it, executing it in a Java debug session? How do I get this to work without having to package and submit in order to test?