Member since: 04-11-2016
Posts: 174
Kudos Received: 29
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3386 | 06-28-2017 12:24 PM |
| | 2556 | 06-09-2017 07:20 AM |
| | 7094 | 08-18-2016 11:39 AM |
| | 5270 | 08-12-2016 09:05 AM |
| | 5422 | 08-09-2016 09:24 AM |
12-04-2017
10:05 AM
That worked 🙂 I never faced that issue in previous versions of the sandbox. Is this a new post-installation step, a sporadic package-related error, or something else?
12-04-2017
08:13 AM
For the host machine config. and other screenshots, please refer to the background thread. The network is 'NAT'. Note that the 'Bridged Adapter' network setting doesn't work: I can access neither Ambari nor the VM via PuTTY. With NAT, I am able to log in to Ambari at http://localhost:8080 and also to the VM via PuTTY:

[root@sandbox-hdp ~]#
[root@sandbox-hdp ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:741159 errors:0 dropped:0 overruns:0 frame:0
TX packets:535534 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:288135779 (274.7 MiB) TX bytes:351577113 (335.2 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:40787371 errors:0 dropped:0 overruns:0 frame:0
TX packets:40787371 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:27903918067 (25.9 GiB) TX bytes:27903918067 (25.9 GiB)

On Ambari log-in, all the services were stopped, including HDFS. I tried starting HDFS but received errors:

stderr: /var/lib/ambari-agent/data/errors-489.txt

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 73, in <module>
HdfsClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 35, in install
import params
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/params.py", line 25, in <module>
from params_linux import *
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py", line 391, in <module>
lzo_packages = get_lzo_packages(stack_version_unformatted)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_lzo_packages.py", line 45, in get_lzo_packages
lzo_packages += [script_instance.format_package_name("hadooplzo_${stack_version}"),
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 538, in format_package_name
raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos))
resource_management.core.exceptions.Fail: Cannot match package for regexp name hadooplzo_${stack_version}. Available packages: ['atlas-metadata_2_6_3_0_235', 'atlas-metadata_2_6_3_0_235-falcon-plugin', 'atlas-metadata_2_6_3_0_235-hive-plugin', 'atlas-metadata_2_6_3_0_235-sqoop-plugin', 'atlas-metadata_2_6_3_0_235-storm-plugin', 'bigtop-jsvc', 'bigtop-tomcat', 'datafu_2_6_3_0_235', 'falcon_2_6_3_0_235', 'flume_2_6_3_0_235', 'hadoop_2_6_3_0_235', 'hadoop_2_6_3_0_235-client', 'hadoop_2_6_3_0_235-hdfs', 'hadoop_2_6_3_0_235-libhdfs', 'hadoop_2_6_3_0_235-mapreduce', 'hadoop_2_6_3_0_235-yarn', 'hbase_2_6_3_0_235', 'hdp-select', 'hive2_2_6_3_0_235', 'hive2_2_6_3_0_235-jdbc', 'hive_2_6_3_0_235', 'hive_2_6_3_0_235-hcatalog', 'hive_2_6_3_0_235-jdbc', 'hive_2_6_3_0_235-webhcat', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka_2_6_3_0_235', 'knox_2_6_3_0_235', 'livy2_2_6_3_0_235', 'oozie_2_6_3_0_235', 'oozie_2_6_3_0_235-client', 'oozie_2_6_3_0_235-common', 'oozie_2_6_3_0_235-sharelib', 'oozie_2_6_3_0_235-sharelib-distcp', 'oozie_2_6_3_0_235-sharelib-hcatalog', 'oozie_2_6_3_0_235-sharelib-hive', 'oozie_2_6_3_0_235-sharelib-hive2', 'oozie_2_6_3_0_235-sharelib-mapreduce-streaming', 'oozie_2_6_3_0_235-sharelib-pig', 'oozie_2_6_3_0_235-sharelib-spark', 'oozie_2_6_3_0_235-sharelib-sqoop', 'oozie_2_6_3_0_235-webapp', 'phoenix_2_6_3_0_235', 'pig_2_6_3_0_235', 'ranger_2_6_3_0_235-admin', 'ranger_2_6_3_0_235-atlas-plugin', 'ranger_2_6_3_0_235-hbase-plugin', 'ranger_2_6_3_0_235-hdfs-plugin', 'ranger_2_6_3_0_235-hive-plugin', 'ranger_2_6_3_0_235-kafka-plugin', 'ranger_2_6_3_0_235-kms', 'ranger_2_6_3_0_235-knox-plugin', 'ranger_2_6_3_0_235-solr-plugin', 'ranger_2_6_3_0_235-storm-plugin', 'ranger_2_6_3_0_235-tagsync', 'ranger_2_6_3_0_235-usersync', 'ranger_2_6_3_0_235-yarn-plugin', 'shc_2_6_3_0_235', 'slider_2_6_3_0_235', 'spark2_2_6_3_0_235', 'spark2_2_6_3_0_235-python', 'spark2_2_6_3_0_235-yarn-shuffle', 'spark_2_6_3_0_235', 'spark_2_6_3_0_235-python', 'spark_2_6_3_0_235-yarn-shuffle', 'spark_llap_2_6_3_0_235', 'sqoop_2_6_3_0_235', 'storm_2_6_3_0_235', 'storm_2_6_3_0_235-slider-client', 'tez_2_6_3_0_235', 'tez_hive2_2_6_3_0_235', 'zeppelin_2_6_3_0_235', 'zookeeper_2_6_3_0_235', 'zookeeper_2_6_3_0_235-server', 'extjs']

stdout: /var/lib/ambari-agent/data/output-489.txt

2017-12-04 08:01:38,824 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2017-12-04 08:01:38,825 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2017-12-04 08:01:38,826 - Group['livy'] {}
2017-12-04 08:01:38,829 - Group['spark'] {}
2017-12-04 08:01:38,830 - Group['ranger'] {}
2017-12-04 08:01:38,830 - Group['hdfs'] {}
2017-12-04 08:01:38,830 - Group['zeppelin'] {}
2017-12-04 08:01:38,830 - Group['hadoop'] {}
2017-12-04 08:01:38,830 - Group['users'] {}
2017-12-04 08:01:38,831 - Group['knox'] {}
2017-12-04 08:01:38,831 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,832 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,835 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,835 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,836 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-12-04 08:01:38,837 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,839 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-12-04 08:01:38,840 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger'], 'uid': None}
2017-12-04 08:01:38,841 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-12-04 08:01:38,842 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None}
2017-12-04 08:01:38,843 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,844 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,847 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-12-04 08:01:38,848 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,849 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,850 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2017-12-04 08:01:38,852 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,853 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,853 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,857 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,860 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,861 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,864 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-04 08:01:38,868 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-12-04 08:01:38,891 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-12-04 08:01:38,891 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-12-04 08:01:38,892 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-04 08:01:38,896 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-04 08:01:38,897 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-12-04 08:01:38,919 - call returned (0, '1002')
2017-12-04 08:01:38,920 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1002'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-12-04 08:01:38,940 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1002'] due to not_if
2017-12-04 08:01:38,940 - Group['hdfs'] {}
2017-12-04 08:01:38,941 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hdfs']}
2017-12-04 08:01:38,941 - FS Type:
2017-12-04 08:01:38,941 - Directory['/etc/hadoop'] {'mode': 0755}
2017-12-04 08:01:38,963 - File['/usr/hdp/2.6.3.0-235/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-12-04 08:01:38,963 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-12-04 08:01:38,985 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2017-12-04 08:01:38,997 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-12-04 08:01:38,999 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2017-12-04 08:01:39,000 - Repository['HDP-UTILS-1.1.0.21-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2017-12-04 08:01:39,006 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-1]\nname=HDP-UTILS-1.1.0.21-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-12-04 08:01:39,007 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2017-12-04 08:01:39,007 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-04 08:01:39,200 - Skipping installation of existing package unzip
2017-12-04 08:01:39,200 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-04 08:01:39,316 - Skipping installation of existing package curl
2017-12-04 08:01:39,316 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-04 08:01:39,420 - Skipping installation of existing package hdp-select
2017-12-04 08:01:39,422 - The repository with version 2.6.3.0-235 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2017-12-04 08:01:39,739 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2017-12-04 08:01:39,750 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2017-12-04 08:01:39,754 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2017-12-04 08:01:39,760 - Command repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2017-12-04 08:01:39,760 - Applicable repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2017-12-04 08:01:39,769 - Looking for matching packages in the following repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2017-12-04 08:02:41,328 - No package found for hadooplzo_${stack_version}(hadooplzo_(\d|_)+$)
2017-12-04 08:02:41,329 - The repository with version 2.6.3.0-235 for this command has been marked as resolved. It will be used to report the version of the component which was installed
Command failed after 1 tries
Labels: Apache Ambari
11-24-2017
02:08 PM
Kerberized HDP-2.6.3.0. The table has 'Id' as its primary key and 'DepartmentId' as the foreign key:

+---------------------------------------------------------+-------------------------------------------------------------------------+-----------------------------+--+
| col_name | data_type | comment |
+---------------------------------------------------------+-------------------------------------------------------------------------+-----------------------------+--+
| # col_name | data_type | comment |
| | NULL | NULL |
| id | int | Surrogate PK is not fun |
| firstname | string | |
| lastname | string | |
| dob | date | |
| departmentid | int | |
| | NULL | NULL |
| # Detailed Table Information | NULL | NULL |
| Database: | group_hadoopdeveloper | NULL |
| Owner: | ojoqcu | NULL |
| CreateTime: | Fri Nov 24 13:39:02 UTC 2017 | NULL |
| LastAccessTime: | UNKNOWN | NULL |
| Retention: | 0 | NULL |
| Location: | hdfs://devhadoop/apps/hive/warehouse/group_hadoopdeveloper.db/employee | NULL |
| Table Type: | MANAGED_TABLE | NULL |
| Table Parameters: | NULL | NULL |
| | COLUMN_STATS_ACCURATE | {\"BASIC_STATS\":\"true\"} |
| | numFiles | 0 |
| | numRows | 0 |
| | rawDataSize | 0 |
| | totalSize | 0 |
| | transient_lastDdlTime | 1511530742 |
| | NULL | NULL |
| # Storage Information | NULL | NULL |
| SerDe Library: | org.apache.hadoop.hive.ql.io.orc.OrcSerde | NULL |
| InputFormat: | org.apache.hadoop.hive.ql.io.orc.OrcInputFormat | NULL |
| OutputFormat: | org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat | NULL |
| Compressed: | No | NULL |
| Num Buckets: | -1 | NULL |
| Bucket Columns: | [] | NULL |
| Sort Columns: | [] | NULL |
| Storage Desc Params: | NULL | NULL |
| | serialization.format | 1 |
| | NULL | NULL |
| # Constraints | NULL | NULL |
| | NULL | NULL |
| # Primary Key | NULL | NULL |
| Table: | group_hadoopdeveloper.employee | NULL |
| Constraint Name: | pk_203923149_1511530742932_0 | NULL |
| Column Names: | id | |
| | NULL | NULL |
| # Foreign Keys | NULL | NULL |
| Table: | group_hadoopdeveloper.employee | NULL |
| Constraint Name: | fk_employee_department | NULL |
| Parent Column Name:group_hadoopdeveloper.department.id | Column Name:departmentid | Key Sequence:1 |
| | NULL | NULL |
+---------------------------------------------------------+-------------------------------------------------------------------------+-----------------------------+--+

I tried the following ways to retrieve the constraints:
- Hive JDBC: I can retrieve the primary keys, but the foreign keys are not implemented in the driver.
- WebHCat: http://l4283t.sss.se.com:50111/templeton/v1/ddl/database/group_hadoopdeveloper/table/employee returns {"columns":[{"name":"id","type":"int","comment":"Surrogate PK is not fun"},{"name":"firstname","type":"string"},{"name":"lastname","type":"string"},{"name":"dob","type":"date"},{"name":"departmentid","type":"int"}],"database":"group_hadoopdeveloper","table":"employee"}
- I am still struggling with the HiveMetaStoreClient implementation to retrieve the foreign keys (see the sketch below).

I have the following questions:
1. Where/how are these constraints stored in the Hive metastore?
2. Is there any way to retrieve these constraints (not from beeline; they need to be available to external programs)?
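Regarding the HiveMetaStoreClient route, here is a rough, untested sketch based on the Hive 2.x metastore Thrift API; the metastore URI is a placeholder and the Kerberos login (keytab/ticket cache) is assumed to already be in place:

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.ForeignKeysRequest;
import org.apache.hadoop.hive.metastore.api.PrimaryKeysRequest;
import org.apache.hadoop.hive.metastore.api.SQLForeignKey;
import org.apache.hadoop.hive.metastore.api.SQLPrimaryKey;

public class ConstraintDump {
    public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf();
        // Placeholder metastore URI; adjust to the actual metastore host.
        conf.set("hive.metastore.uris", "thrift://<metastore-host>:9083");
        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);

        // Primary key(s) of group_hadoopdeveloper.employee
        for (SQLPrimaryKey pk : client.getPrimaryKeys(new PrimaryKeysRequest("group_hadoopdeveloper", "employee"))) {
            System.out.println("PK column: " + pk.getColumn_name() + ", constraint: " + pk.getPk_name());
        }

        // Foreign key(s): parent table = department, foreign (child) table = employee
        for (SQLForeignKey fk : client.getForeignKeys(new ForeignKeysRequest(
                "group_hadoopdeveloper", "department", "group_hadoopdeveloper", "employee"))) {
            System.out.println("FK column: " + fk.getFkcolumn_name()
                    + " -> " + fk.getPktable_name() + "." + fk.getPkcolumn_name()
                    + ", constraint: " + fk.getFk_name());
        }
        client.close();
    }
}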
Labels: Apache Hive
11-10-2017
04:53 PM
Yeah, I have tried that approach as well. The ODI documentation mentions using its WebLogic Hive JDBC driver, but one can use other drivers as well. The question I have asked here is about the standard (Apache) JDBC driver.
11-10-2017
01:51 PM
Environment: ODI Studio 12.2.1.2.6, Kerberized HDP 2.6.3.0, Windows 7, hive-jdbc-2.1.0.2.6.3.0-235-standalone.jar (available under /usr/hdp/2.6.3.0-235/hive2/jdbc).

I simply get a 'Connection refused'; also, I didn't find any answers to the following exception:
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
The complete error:
java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: Could not establish connection to jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: org.apache.hive.org.apache.http.client.ClientProtocolException
at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.doGetConnection(LoginTimeoutDatasourceAdapter.java:144)
at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.getConnection(LoginTimeoutDatasourceAdapter.java:73)
at com.sunopsis.sql.SnpsConnection.testConnection(SnpsConnection.java:1258)
at com.sunopsis.graphical.dialog.SnpsDialogTestConnet.getLocalConnect(SnpsDialogTestConnet.java:204)
at com.sunopsis.graphical.dialog.SnpsDialogTestConnet.access$500(SnpsDialogTestConnet.java:62)
at com.sunopsis.graphical.dialog.SnpsDialogTestConnet$6.doInBackground(SnpsDialogTestConnet.java:402)
at com.sunopsis.graphical.dialog.SnpsDialogTestConnet$6.doInBackground(SnpsDialogTestConnet.java:398)
at oracle.odi.ui.framework.AbsUIRunnableTask.run(AbsUIRunnableTask.java:258)
at oracle.ide.dialogs.ProgressBar.run(ProgressBar.java:961)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: Could not establish connection to jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: org.apache.hive.org.apache.http.client.ClientProtocolException
at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.doGetConnection(LoginTimeoutDatasourceAdapter.java:144)
at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.getConnection(LoginTimeoutDatasourceAdapter.java:73)
at oracle.odi.core.datasource.dwgobject.support.OnConnectOnDisconnectDataSourceAdapter.getConnection(OnConnectOnDisconnectDataSourceAdapter.java:87)
at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter$ConnectionProcessor.run(LoginTimeoutDatasourceAdapter.java:228)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: Could not establish connection to jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: org.apache.hive.org.apache.http.client.ClientProtocolException
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:211)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:412)
at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:385)
at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:352)
at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:331)
... 6 more
Caused by: java.sql.SQLException: Could not establish connection to jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: org.apache.hive.org.apache.http.client.ClientProtocolException
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:589)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:188)
... 11 more
Caused by: org.apache.hive.org.apache.thrift.transport.TTransportException: org.apache.hive.org.apache.http.client.ClientProtocolException
at org.apache.hive.org.apache.thrift.transport.THttpClient.flushUsingHttpClient(THttpClient.java:297)
at org.apache.hive.org.apache.thrift.transport.THttpClient.flush(THttpClient.java:313)
at org.apache.hive.org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
at org.apache.hive.org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.send_OpenSession(TCLIService.java:162)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.OpenSession(TCLIService.java:154)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:578)
... 12 more
Caused by: org.apache.hive.org.apache.http.client.ClientProtocolException
at org.apache.hive.org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:186)
at org.apache.hive.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:117)
at org.apache.hive.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at org.apache.hive.org.apache.thrift.transport.THttpClient.flushUsingHttpClient(THttpClient.java:251)
... 18 more
Caused by: org.apache.hive.org.apache.http.HttpException: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
at org.apache.hive.jdbc.HttpRequestInterceptorBase.process(HttpRequestInterceptorBase.java:86)
at org.apache.hive.org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:132)
at org.apache.hive.org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:182)
at org.apache.hive.org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.hive.org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.hive.org.apache.http.impl.execchain.ServiceUnavailableRetryExec.execute(ServiceUnavailableRetryExec.java:84)
at org.apache.hive.org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
... 21 more
Caused by: org.apache.hive.org.apache.http.HttpException: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
at org.apache.hive.jdbc.HttpKerberosRequestInterceptor.addHttpAuthHeader(HttpKerberosRequestInterceptor.java:68)
at org.apache.hive.jdbc.HttpRequestInterceptorBase.process(HttpRequestInterceptorBase.java:74)
... 27 more
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2273)
at org.apache.hadoop.security.Groups.<init>(Groups.java:99)
at org.apache.hadoop.security.Groups.<init>(Groups.java:95)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:420)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:324)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:291)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:846)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:816)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge.getCurrentUGIWithConf(HadoopThriftAuthBridge.java:122)
at org.apache.hive.service.auth.HttpAuthUtils.getKerberosServiceTicket(HttpAuthUtils.java:81)
at org.apache.hive.jdbc.HttpKerberosRequestInterceptor.addHttpAuthHeader(HttpKerberosRequestInterceptor.java:62)
... 28 more
Caused by: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2267)
... 39 more
I have added the necessary configs in the odi.conf file:

AddVMOption -Djava.security.krb5.conf=C:\Oracle\Middleware\Oracle_Home\oracle_common\modules\datadirect\kerb5.conf
AddVMOption -Djavax.security.auth.useSubjectCredsOnly=false

I am sure that the JDBC driver, my Kerberos ticket cache etc. are in place; the below stand-alone Java class is working fine:

package com.my;

import java.sql.*;

/**
 * Hello world!
 */
public class App {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        System.out.println("Hello World!");
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
        System.setProperty("java.security.krb5.conf", "C:\\Oracle\\Middleware\\Oracle_Home\\oracle_common\\modules\\datadirect\\kerb5.conf");
        //System.setProperty("sun.security.krb5.debug", "true");
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        System.out.println("getting connection");
        Connection con = DriverManager.getConnection("jdbc:hive2://l4284t.sss.se.scania.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.SCANIA.COM;httpPath=cliservice");
        System.out.println("Connected");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("show tables");
        while (rs.next()) {
            System.out.println("table name : " + rs.getString(1));
        }
        stmt.close();
        con.close();
    }
}
Labels: Apache Hive
06-28-2017
12:24 PM
additivity="false" is essential. Complete answer on StackOverflow.
06-22-2017
12:47 PM
NiFi 1.2.0

I have a custom processor and I wish to have a dedicated log file for it. Accordingly, I have configured the com.datalake.processors.SQLServerCDCProcessor class to use an appender named 'SQLSERVER-CDC'. Following is the logback.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="30 seconds">
<contextListener>
<resetJUL>true</resetJUL>
</contextListener>
<appender name="APP_FILE">
<file>/var/log/nifi/nifi-app.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'app_%d.log'.
For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy>
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- keep 30 log files worth of history -->
<maxHistory>3</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
<immediateFlush>true</immediateFlush>
</appender>
<appender name="USER_FILE">
<file>/var/log/nifi/nifi-user.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/nifi-user_%d.log</fileNamePattern>
<!-- keep 30 log files worth of history -->
<maxHistory>3</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="BOOTSTRAP_FILE">
<file>/var/log/nifi/nifi-bootstrap.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/nifi-bootstrap_%d.log</fileNamePattern>
<!-- keep 5 log files worth of history -->
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="CONSOLE">
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<!-- Start : Added for log for custom processor -->
<appender name="SQLSERVER-CDC">
<file>/var/log/nifi/sqlserver-cdc.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'app_%d.log'.
For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/sqlserver-cdc_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy>
<maxFileSize>25MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- keep 30 log files worth of history -->
<maxHistory>3</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
<immediateFlush>true</immediateFlush>
</encoder>
</appender>
<!-- End : Added for log for custom processor -->
<!-- valid logging levels: TRACE, DEBUG, INFO, WARN, ERROR -->
<logger name="org.apache.nifi" level="INFO"/>
<logger name="org.apache.nifi.processors" level="WARN"/>
<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO"/>
<logger name="org.apache.nifi.controller.repository.StandardProcessSession" level="WARN" />
<logger name="org.apache.zookeeper.ClientCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxnFactory" level="ERROR" />
<logger name="org.apache.zookeeper.server.quorum" level="ERROR" />
<logger name="org.apache.zookeeper.ZooKeeper" level="ERROR" />
<logger name="org.apache.zookeeper.server.PrepRequestProcessor" level="ERROR" />
<logger name="org.apache.calcite.runtime.CalciteException" level="OFF" />
<logger name="org.apache.curator.framework.recipes.leader.LeaderSelector" level="OFF" />
<logger name="org.apache.curator.ConnectionState" level="OFF" />
<!-- Logger for managing logging statements for nifi clusters. -->
<logger name="org.apache.nifi.cluster" level="INFO"/>
<!-- Logger for logging HTTP requests received by the web server. -->
<logger name="org.apache.nifi.server.JettyServer" level="INFO"/>
<!-- Logger for managing logging statements for jetty -->
<logger name="org.eclipse.jetty" level="INFO"/>
<!-- Suppress non-error messages due to excessive logging by class or library -->
<logger name="com.sun.jersey.spi.container.servlet.WebComponent" level="ERROR"/>
<logger name="com.sun.jersey.spi.spring" level="ERROR"/>
<logger name="org.springframework" level="ERROR"/>
<!-- Suppress non-error messages due to known warning about redundant path annotation (NIFI-574) -->
<logger name="com.sun.jersey.spi.inject.Errors" level="ERROR"/>
<!--
Logger for capturing user events. We do not want to propagate these
log events to the root logger. These messages are only sent to the
user-log appender.
-->
<logger name="org.apache.nifi.web.security" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.api.config" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.cluster.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.filter.RequestLogger" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<!--
Logger for capturing Bootstrap logs and NiFi's standard error and standard out.
-->
<logger name="org.apache.nifi.bootstrap" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<logger name="org.apache.nifi.bootstrap.Command" level="INFO" additivity="false">
<appender-ref ref="CONSOLE" />
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Out will be logged with the logger org.apache.nifi.StdOut at INFO level -->
<logger name="org.apache.nifi.StdOut" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Error will be logged with the logger org.apache.nifi.StdErr at ERROR level -->
<logger name="org.apache.nifi.StdErr" level="ERROR" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Start : Added for log for custom processor -->
<logger name="com.datalake.processors.SQLServerCDCProcessor" level="DEBUG" >
<appender-ref ref="SQLSERVER-CDC"/>
</logger>
<!-- End : Added for log for custom processor -->
<root level="INFO">
<appender-ref ref="APP_FILE"/>
</root>
</configuration>

The strange fact is that the custom processor's debug statements are written to both 'nifi-app.log' and 'sqlserver-cdc.log', but I want these statements to be written only to the latter ('sqlserver-cdc.log'). What am I missing?
Labels: Apache NiFi
06-16-2017
02:10 PM
NiFi 1.2.0

I have a custom processor whose pom files look as follows.

processor/pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor
license agreements. See the NOTICE file distributed with this work for additional
information regarding copyright ownership. The ASF licenses this file to
You under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of
the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required
by applicable law or agreed to in writing, software distributed under the
License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License. -->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.datalake</groupId>
<artifactId>CDCNiFi</artifactId>
<version>1.0-SNAPSHOT</version>
</parent>
<artifactId>nifi-NiFiCDCPoC-processors</artifactId>
<packaging>jar</packaging>
<dependencies>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-api</artifactId>
</dependency>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-utils</artifactId>
</dependency>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-dbcp-service-api</artifactId>
</dependency>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-processor-utils</artifactId>
</dependency>
<!-- Third-party -->
<!-- <dependency> <groupId>com.microsoft.sqlserver</groupId> <artifactId>mssql-jdbc</artifactId>
<version>6.1.0.jre8</version> </dependency> -->
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-mapper-asl</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.8.7</version>
</dependency>
<!-- Testing & Cross-cutting concerns -->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-mock</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
</project>

nar/pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor
license agreements. See the NOTICE file distributed with this work for additional
information regarding copyright ownership. The ASF licenses this file to
You under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of
the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required
by applicable law or agreed to in writing, software distributed under the
License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License. -->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.datalake</groupId>
<artifactId>CDCNiFi</artifactId>
<version>1.0-SNAPSHOT</version>
</parent>
<artifactId>nifi-NiFiCDCPoC-nar</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>nar</packaging>
<properties>
<maven.javadoc.skip>true</maven.javadoc.skip>
<source.skip>true</source.skip>
</properties>
<dependencies>
<dependency>
<groupId>com.datalake</groupId>
<artifactId>nifi-NiFiCDCPoC-processors</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-standard-services-api-nar</artifactId>
<type>nar</type>
</dependency>
<!-- <dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>mssql-jdbc</artifactId>
<version>6.1.0.jre8</version>
<scope>runtime</scope>
</dependency>-->
</dependencies>
</project>

Now, I wish to do some HDFS file operations from the processor, like read/write a SequenceFile, retrieve files from an HDFS directory and so on. I had a look at existing processors like PutHDFS, which has some pom entries as follows:

<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-hadoop-utils</artifactId>
</dependency>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-flowfile-packager</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<scope>provided</scope>
</dependency>

In the lib dir. of NiFi, I could see files like nifi-hadoop-libraries-nar-1.2.0.nar and nifi-hadoop-nar-1.2.0.nar. The inclusion of the above entries in my pom.xml didn't work; what shall I do to use the HDFS file system API within my custom processor?
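For reference only (this is not how the bundled HDFS processors or nifi-hadoop-libraries NAR are wired up): assuming hadoop-common and hadoop-hdfs actually end up on the NAR's classpath, e.g. as compile-scoped dependencies rather than provided, the plain Hadoop FileSystem API can be called from processor code roughly like the sketch below. The config file locations and the /tmp path are illustrative assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAccessSketch {
    public static void listDirectory() throws Exception {
        Configuration conf = new Configuration();
        // Illustrative: point the client at the cluster config files copied onto the NiFi node.
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

        FileSystem fs = FileSystem.get(conf);
        for (FileStatus status : fs.listStatus(new Path("/tmp"))) {
            System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
        }
        fs.close();
    }
}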
Labels: Apache NiFi
06-09-2017
07:20 AM
I got the answer at StackOverflow. I am able to create a log file on my local Windows machine, but the same config is not working in the Linux environment; maybe I have missed something. Below is the logback.xml used on my local machine:

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="30 seconds">
<contextListener>
<resetJUL>true</resetJUL>
</contextListener>
<appender name="APP_FILE">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'app_%d.log'.
For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<maxFileSize>100MB</maxFileSize>
<!-- keep 30 log files worth of history -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<immediateFlush>true</immediateFlush>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="USER_FILE">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-user.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-user_%d.log</fileNamePattern>
<!-- keep 30 log files worth of history -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="BOOTSTRAP_FILE">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-bootstrap.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-bootstrap_%d.log</fileNamePattern>
<!-- keep 5 log files worth of history -->
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="CONSOLE">
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<!-- Start : Added for log for custom processor -->
<appender name="SQLSERVER-CDC">
<file>${org.apache.nifi.bootstrap.config.log.dir}/sqlserver-cdc.log</file>
<rollingPolicy>
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/sqlserver-cdc_%d.log</fileNamePattern>
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<!-- End : Added for log for custom processor -->
<!-- valid logging levels: TRACE, DEBUG, INFO, WARN, ERROR -->
<logger name="org.apache.nifi" level="INFO"/>
<logger name="org.apache.nifi.processors" level="WARN"/>
<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO"/>
<logger name="org.apache.nifi.controller.repository.StandardProcessSession" level="WARN" />
<logger name="org.apache.zookeeper.ClientCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxnFactory" level="ERROR" />
<logger name="org.apache.zookeeper.server.quorum" level="ERROR" />
<logger name="org.apache.zookeeper.ZooKeeper" level="ERROR" />
<logger name="org.apache.zookeeper.server.PrepRequestProcessor" level="ERROR" />
<logger name="org.apache.calcite.runtime.CalciteException" level="OFF" />
<logger name="org.apache.curator.framework.recipes.leader.LeaderSelector" level="OFF" />
<logger name="org.apache.curator.ConnectionState" level="OFF" />
<!-- Logger for managing logging statements for nifi clusters. -->
<logger name="org.apache.nifi.cluster" level="INFO"/>
<!-- Logger for logging HTTP requests received by the web server. -->
<logger name="org.apache.nifi.server.JettyServer" level="INFO"/>
<!-- Logger for managing logging statements for jetty -->
<logger name="org.eclipse.jetty" level="INFO"/>
<!-- Suppress non-error messages due to excessive logging by class or library -->
<logger name="com.sun.jersey.spi.container.servlet.WebComponent" level="ERROR"/>
<logger name="com.sun.jersey.spi.spring" level="ERROR"/>
<logger name="org.springframework" level="ERROR"/>
<!-- Suppress non-error messages due to known warning about redundant path annotation (NIFI-574) -->
<logger name="com.sun.jersey.spi.inject.Errors" level="ERROR"/>
<!--
Logger for capturing user events. We do not want to propagate these
log events to the root logger. These messages are only sent to the
user-log appender.
-->
<logger name="org.apache.nifi.web.security" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.api.config" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.cluster.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.filter.RequestLogger" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<!--
Logger for capturing Bootstrap logs and NiFi's standard error and standard out.
-->
<logger name="org.apache.nifi.bootstrap" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<logger name="org.apache.nifi.bootstrap.Command" level="INFO" additivity="false">
<appender-ref ref="CONSOLE" />
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Out will be logged with the logger org.apache.nifi.StdOut at INFO level -->
<logger name="org.apache.nifi.StdOut" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Error will be logged with the logger org.apache.nifi.StdErr at ERROR level -->
<logger name="org.apache.nifi.StdErr" level="ERROR" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Start : Added for log for custom processor -->
<logger name="com.datalake.processors.SQLServerCDCProcessor" level="DEBUG" >
<appender-ref ref="SQLSERVER-CDC"/>
</logger>
<!-- End : Added for log for custom processor -->
<root level="info">
<appender-ref ref="APP_FILE"/>
</root>
</configuration>
06-07-2017
06:29 AM
NiFi 1.2.0

I need to create a separate log file, say customprocessor.log, besides the app.log file created by NiFi. I went through some interesting existing threads like this one; however, I am unable to figure out how to make it work in the code. Following is the existing logback.xml:

<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<configuration scan="true" scanPeriod="30 seconds">
<contextListener>
<resetJUL>true</resetJUL>
</contextListener>
<appender name="APP_FILE">
<file>/var/log/nifi/nifi-app.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'app_%d.log'.
For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy>
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- keep 30 log files worth of history -->
<maxHistory>3</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
<immediateFlush>true</immediateFlush>
</appender>
<appender name="USER_FILE">
<file>/var/log/nifi/nifi-user.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/nifi-user_%d.log</fileNamePattern>
<!-- keep 30 log files worth of history -->
<maxHistory>3</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="BOOTSTRAP_FILE">
<file>/var/log/nifi/nifi-bootstrap.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/nifi-bootstrap_%d.log</fileNamePattern>
<!-- keep 5 log files worth of history -->
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="CONSOLE">
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<!-- valid logging levels: TRACE, DEBUG, INFO, WARN, ERROR -->
<logger name="org.apache.nifi" level="INFO"/>
<logger name="org.apache.nifi.processors" level="WARN"/>
<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO"/>
<logger name="org.apache.nifi.controller.repository.StandardProcessSession" level="WARN" />
<logger name="org.apache.zookeeper.ClientCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxnFactory" level="ERROR" />
<logger name="org.apache.zookeeper.server.quorum" level="ERROR" />
<logger name="org.apache.zookeeper.ZooKeeper" level="ERROR" />
<logger name="org.apache.zookeeper.server.PrepRequestProcessor" level="ERROR" />
<logger name="org.apache.calcite.runtime.CalciteException" level="OFF" />
<logger name="org.apache.curator.framework.recipes.leader.LeaderSelector" level="OFF" />
<logger name="org.apache.curator.ConnectionState" level="OFF" />
<!-- Logger for managing logging statements for nifi clusters. -->
<logger name="org.apache.nifi.cluster" level="INFO"/>
<!-- Logger for logging HTTP requests received by the web server. -->
<logger name="org.apache.nifi.server.JettyServer" level="INFO"/>
<!-- Logger for managing logging statements for jetty -->
<logger name="org.eclipse.jetty" level="INFO"/>
<!-- Suppress non-error messages due to excessive logging by class or library -->
<logger name="com.sun.jersey.spi.container.servlet.WebComponent" level="ERROR"/>
<logger name="com.sun.jersey.spi.spring" level="ERROR"/>
<logger name="org.springframework" level="ERROR"/>
<!-- Suppress non-error messages due to known warning about redundant path annotation (NIFI-574) -->
<logger name="com.sun.jersey.spi.inject.Errors" level="ERROR"/>
<!--
Logger for capturing user events. We do not want to propagate these
log events to the root logger. These messages are only sent to the
user-log appender.
-->
<logger name="org.apache.nifi.web.security" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.api.config" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.cluster.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.filter.RequestLogger" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<!--
Logger for capturing Bootstrap logs and NiFi's standard error and standard out.
-->
<logger name="org.apache.nifi.bootstrap" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<logger name="org.apache.nifi.bootstrap.Command" level="INFO" additivity="false">
<appender-ref ref="CONSOLE" />
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Out will be logged with the logger org.apache.nifi.StdOut at INFO level -->
<logger name="org.apache.nifi.StdOut" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Error will be logged with the logger org.apache.nifi.StdErr at ERROR level -->
<logger name="org.apache.nifi.StdErr" level="ERROR" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<root level="DEBUG">
<appender-ref ref="APP_FILE"/>
</root>
</configuration>

Now, I can add a new appender for the custom log file:

<!-- Start : Separate log file for custom processor -->
<appender name="CUSTOM_FILE">
<file>/var/log/nifi/custom-processor.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'app_%d.log'.
For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/custom-processor_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy>
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- keep 30 log files worth of history -->
<maxHistory>3</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
<immediateFlush>true</immediateFlush>
</appender>
<!-- End : Separate log file for custom processor -->
<!-- Start : Separate log file for custom processor -->
<logger name="com.nifi.CustomLog" level="DEBUG" additivity="false">
<appender-ref ref="CUSTOM_FILE" />
</logger>
<!-- End : Separate log file for custom processor -->
I have the following questions:
1. Are the entries that I am adding correct?
2. In the code, I use the following snippet to get the logger; however, I didn't find a method/constructor to get my custom logger in the code. How shall I do that?

import org.apache.nifi.logging.ComponentLog;
...
final ComponentLog logger = getLogger();
logger.debug("...");
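One possible way to get a handle on the custom logger (a sketch, assuming the SLF4J/Logback stack that NiFi ships with; the class name is hypothetical and the logger name simply has to match the <logger name="com.nifi.CustomLog"> entry above) is to request a named SLF4J logger directly instead of the ComponentLog:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CustomLoggingSketch {
    // Name must match the logback <logger name="com.nifi.CustomLog"> entry.
    // Using LoggerFactory.getLogger(MyCustomProcessor.class) together with a logger entry
    // for that class/package would work the same way (MyCustomProcessor is a placeholder).
    private static final Logger CUSTOM_LOG = LoggerFactory.getLogger("com.nifi.CustomLog");

    public void doWork() {
        // With additivity="false" on the logger, this should end up only in custom-processor.log.
        CUSTOM_LOG.debug("message for the dedicated custom-processor log");
    }
}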
Labels: Apache NiFi