Member since: 04-11-2016
Posts: 174
Kudos Received: 29
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1890 | 06-28-2017 12:24 PM |
|  | 1258 | 06-09-2017 07:20 AM |
|  | 4348 | 08-18-2016 11:39 AM |
|  | 2464 | 08-12-2016 09:05 AM |
|  | 2830 | 08-09-2016 09:24 AM |
12-04-2017 02:45 PM
I am running HDP sandbox 2.6.3 on my local Windows 7 machine. From the Beeline shell / the Hive view in Ambari, I am trying to create a table with a primary key:

create table department(Id int COMMENT 'Surrogate PK is not fun', Description string, Code string, primary key(Id) disable novalidate);

The above query works fine on our dev/prod Hive set-up, but when I execute it on my local sandbox:

0: jdbc:hive2://sandbox-hdp.hortonworks.com:2> create table department(Id int COMMENT 'Surrogate PK is not fun', Description string, Code string, primary key(Id) disable novalidate);
Error: Error while compiling statement: FAILED: ParseException line 1:107 cannot recognize input near 'key' '(' 'Id' in column type (state=42000,code=40000)

Do I need to change any Hive config (via Ambari) for the PK and FK constraints to work?
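For context: this may be version-dependent, since informational PRIMARY KEY / FOREIGN KEY constraints with DISABLE NOVALIDATE were only introduced in Hive 2.1.0. One plausible explanation is that the sandbox connection is hitting a Hive 1.x HiveServer2 while dev/prod runs Hive 2.x. A minimal sketch of what I would test; the 10500 port for HiveServer2 Interactive is an assumption based on the sandbox's default port forwardings, not something I have verified:

```sh
# Sketch: run the same DDL against both HiveServer2 endpoints.
# Port 10000 = classic HiveServer2 (Hive 1.2.x on HDP 2.6);
# port 10500 = HiveServer2 Interactive (Hive 2.1.x), assumed enabled.
DDL="create table department(Id int COMMENT 'Surrogate PK is not fun',
Description string, Code string, primary key(Id) disable novalidate)"

beeline -u "jdbc:hive2://sandbox-hdp.hortonworks.com:10000" -e "$DDL"  # expected to fail with ParseException on Hive 1.x
beeline -u "jdbc:hive2://sandbox-hdp.hortonworks.com:10500" -e "$DDL"  # should parse on Hive 2.1+
```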
Labels: Apache Hive
12-04-2017 10:05 AM
That worked 🙂 I never faced that issue in the previous versions of the sandbox. Is this a new post-installation step, a sporadic package-related error, or something else?
12-04-2017 08:13 AM
For the host machine config and other screenshots, please refer to the background thread. The network is 'NAT'. Note that the 'Bridged Adapter' network setting doesn't work: I can access neither Ambari nor the VM via PuTTY. With NAT, I am able to log in to Ambari at http://localhost:8080, and also to the VM via PuTTY:

[root@sandbox-hdp ~]#
[root@sandbox-hdp ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:741159 errors:0 dropped:0 overruns:0 frame:0
TX packets:535534 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:288135779 (274.7 MiB) TX bytes:351577113 (335.2 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:40787371 errors:0 dropped:0 overruns:0 frame:0
TX packets:40787371 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:27903918067 (25.9 GiB) TX bytes:27903918067 (25.9 GiB)

On logging in to Ambari, all the services were stopped, including HDFS. I tried starting HDFS but received errors:

stderr: /var/lib/ambari-agent/data/errors-489.txt

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 73, in <module>
HdfsClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_client.py", line 35, in install
import params
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/params.py", line 25, in <module>
from params_linux import *
File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py", line 391, in <module>
lzo_packages = get_lzo_packages(stack_version_unformatted)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_lzo_packages.py", line 45, in get_lzo_packages
lzo_packages += [script_instance.format_package_name("hadooplzo_${stack_version}"),
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 538, in format_package_name
raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos))
resource_management.core.exceptions.Fail: Cannot match package for regexp name hadooplzo_${stack_version}. Available packages: ['atlas-metadata_2_6_3_0_235', 'atlas-metadata_2_6_3_0_235-falcon-plugin', 'atlas-metadata_2_6_3_0_235-hive-plugin', 'atlas-metadata_2_6_3_0_235-sqoop-plugin', 'atlas-metadata_2_6_3_0_235-storm-plugin', 'bigtop-jsvc', 'bigtop-tomcat', 'datafu_2_6_3_0_235', 'falcon_2_6_3_0_235', 'flume_2_6_3_0_235', 'hadoop_2_6_3_0_235', 'hadoop_2_6_3_0_235-client', 'hadoop_2_6_3_0_235-hdfs', 'hadoop_2_6_3_0_235-libhdfs', 'hadoop_2_6_3_0_235-mapreduce', 'hadoop_2_6_3_0_235-yarn', 'hbase_2_6_3_0_235', 'hdp-select', 'hive2_2_6_3_0_235', 'hive2_2_6_3_0_235-jdbc', 'hive_2_6_3_0_235', 'hive_2_6_3_0_235-hcatalog', 'hive_2_6_3_0_235-jdbc', 'hive_2_6_3_0_235-webhcat', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka_2_6_3_0_235', 'knox_2_6_3_0_235', 'livy2_2_6_3_0_235', 'oozie_2_6_3_0_235', 'oozie_2_6_3_0_235-client', 'oozie_2_6_3_0_235-common', 'oozie_2_6_3_0_235-sharelib', 'oozie_2_6_3_0_235-sharelib-distcp', 'oozie_2_6_3_0_235-sharelib-hcatalog', 'oozie_2_6_3_0_235-sharelib-hive', 'oozie_2_6_3_0_235-sharelib-hive2', 'oozie_2_6_3_0_235-sharelib-mapreduce-streaming', 'oozie_2_6_3_0_235-sharelib-pig', 'oozie_2_6_3_0_235-sharelib-spark', 'oozie_2_6_3_0_235-sharelib-sqoop', 'oozie_2_6_3_0_235-webapp', 'phoenix_2_6_3_0_235', 'pig_2_6_3_0_235', 'ranger_2_6_3_0_235-admin', 'ranger_2_6_3_0_235-atlas-plugin', 'ranger_2_6_3_0_235-hbase-plugin', 'ranger_2_6_3_0_235-hdfs-plugin', 'ranger_2_6_3_0_235-hive-plugin', 'ranger_2_6_3_0_235-kafka-plugin', 'ranger_2_6_3_0_235-kms', 'ranger_2_6_3_0_235-knox-plugin', 'ranger_2_6_3_0_235-solr-plugin', 'ranger_2_6_3_0_235-storm-plugin', 'ranger_2_6_3_0_235-tagsync', 'ranger_2_6_3_0_235-usersync', 'ranger_2_6_3_0_235-yarn-plugin', 'shc_2_6_3_0_235', 'slider_2_6_3_0_235', 'spark2_2_6_3_0_235', 'spark2_2_6_3_0_235-python', 'spark2_2_6_3_0_235-yarn-shuffle', 'spark_2_6_3_0_235', 'spark_2_6_3_0_235-python', 'spark_2_6_3_0_235-yarn-shuffle', 'spark_llap_2_6_3_0_235', 'sqoop_2_6_3_0_235', 'storm_2_6_3_0_235', 'storm_2_6_3_0_235-slider-client', 'tez_2_6_3_0_235', 'tez_hive2_2_6_3_0_235', 'zeppelin_2_6_3_0_235', 'zookeeper_2_6_3_0_235', 'zookeeper_2_6_3_0_235-server', 'extjs']

stdout: /var/lib/ambari-agent/data/output-489.txt

2017-12-04 08:01:38,824 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2017-12-04 08:01:38,825 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2017-12-04 08:01:38,826 - Group['livy'] {}
2017-12-04 08:01:38,829 - Group['spark'] {}
2017-12-04 08:01:38,830 - Group['ranger'] {}
2017-12-04 08:01:38,830 - Group['hdfs'] {}
2017-12-04 08:01:38,830 - Group['zeppelin'] {}
2017-12-04 08:01:38,830 - Group['hadoop'] {}
2017-12-04 08:01:38,830 - Group['users'] {}
2017-12-04 08:01:38,831 - Group['knox'] {}
2017-12-04 08:01:38,831 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,832 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,835 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,835 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,836 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-12-04 08:01:38,837 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,839 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-12-04 08:01:38,840 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger'], 'uid': None}
2017-12-04 08:01:38,841 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-12-04 08:01:38,842 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['zeppelin', 'hadoop'], 'uid': None}
2017-12-04 08:01:38,843 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,844 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,847 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users'], 'uid': None}
2017-12-04 08:01:38,848 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,849 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,850 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2017-12-04 08:01:38,852 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,853 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,853 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,857 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,860 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,861 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2017-12-04 08:01:38,864 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-04 08:01:38,868 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-12-04 08:01:38,891 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-12-04 08:01:38,891 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-12-04 08:01:38,892 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-04 08:01:38,896 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-12-04 08:01:38,897 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-12-04 08:01:38,919 - call returned (0, '1002')
2017-12-04 08:01:38,920 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1002'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-12-04 08:01:38,940 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1002'] due to not_if
2017-12-04 08:01:38,940 - Group['hdfs'] {}
2017-12-04 08:01:38,941 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hdfs']}
2017-12-04 08:01:38,941 - FS Type:
2017-12-04 08:01:38,941 - Directory['/etc/hadoop'] {'mode': 0755}
2017-12-04 08:01:38,963 - File['/usr/hdp/2.6.3.0-235/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-12-04 08:01:38,963 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-12-04 08:01:38,985 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2017-12-04 08:01:38,997 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-12-04 08:01:38,999 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2017-12-04 08:01:39,000 - Repository['HDP-UTILS-1.1.0.21-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2017-12-04 08:01:39,006 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.3.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-1]\nname=HDP-UTILS-1.1.0.21-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-12-04 08:01:39,007 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2017-12-04 08:01:39,007 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-04 08:01:39,200 - Skipping installation of existing package unzip
2017-12-04 08:01:39,200 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-04 08:01:39,316 - Skipping installation of existing package curl
2017-12-04 08:01:39,316 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-12-04 08:01:39,420 - Skipping installation of existing package hdp-select
2017-12-04 08:01:39,422 - The repository with version 2.6.3.0-235 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2017-12-04 08:01:39,739 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2017-12-04 08:01:39,750 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2017-12-04 08:01:39,754 - Using hadoop conf dir: /usr/hdp/2.6.3.0-235/hadoop/conf
2017-12-04 08:01:39,760 - Command repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2017-12-04 08:01:39,760 - Applicable repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2017-12-04 08:01:39,769 - Looking for matching packages in the following repositories: HDP-2.6-repo-1, HDP-UTILS-1.1.0.21-repo-1
2017-12-04 08:02:41,328 - No package found for hadooplzo_${stack_version}(hadooplzo_(\d|_)+$)
2017-12-04 08:02:41,329 - The repository with version 2.6.3.0-235 for this command has been marked as resolved. It will be used to report the version of the component which was installed
Command failed after 1 tries
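The Fail above says no hadooplzo package can be matched, and it is indeed absent from the listed packages. A quick way to confirm this from the sandbox shell, plus one workaround I have seen suggested for this symptom (the core-site edit is an assumption from similar reports, not something I have verified here):

```sh
# Confirm the hadooplzo package really is absent from the enabled repos.
yum clean all
yum repolist enabled
yum search hadooplzo   # "No matches found" would be consistent with the error above

# Workaround suggested elsewhere (assumption, not verified on this sandbox):
# in Ambari > HDFS > Configs > Advanced core-site, remove
# com.hadoop.compression.lzo.LzoCodec from io.compression.codecs and
# delete the io.compression.codec.lzo.class property, then retry the start.
```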
Labels: Apache Ambari
11-29-2017 01:08 PM
I have already verified the checksum. The VM settings have more than 14 GB of memory (see virtualbox-sandbox-config.jpg).
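For completeness, the checksum verification was along these lines on the Windows host (a sketch; the .ova filename is illustrative, not the exact one used):

```sh
# Hash the downloaded image and compare with the value published
# on the download page. The filename below is illustrative.
certutil -hashfile "HDP_2.6.3_virtualbox.ova" SHA256
```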
11-28-2017 07:58 AM
1 Kudo
I tried running sandbox 2.6.3 on both VMware and VirtualBox with RAM > 14 GB, after referring to the following threads:

https://community.hortonworks.com/questions/103239/hdp-sanbbox-26-not-working-in-windows-10.html
https://community.hortonworks.com/questions/103261/unable-to-start-hdp-26-sandbox.html
https://community.hortonworks.com/questions/135835/cant-start-sandbox-261.html
https://community.hortonworks.com/questions/57757/hdp-25-sandbox-not-starting.html

It's stuck at 'Starting Sandbox container'. The VirtualBox log (VBox.log):

VirtualBox VM 5.1.30 r118389 win.amd64 (Oct 16 2017 10:47:00) release log
00:00:00.846301 Log opened 2017-11-28T07:40:53.297635200Z
00:00:00.846302 Build Type: release
00:00:00.846304 OS Product: Windows 7
00:00:00.846306 OS Release: 6.1.7601
00:00:00.846307 OS Service Pack: 1
00:00:00.910929 DMI Product Name: Precision 3510
00:00:00.913338 DMI Product Version:
00:00:00.913344 Host RAM: 16046MB (15.6GB) total, 7470MB (7.2GB) available
00:00:00.913348 Executable: C:\Program Files\Oracle\VirtualBox\VirtualBox.exe
00:00:00.913348 Process ID: 9412
00:00:00.913349 Package type: WINDOWS_64BITS_GENERIC
00:00:00.913904 Installed Extension Packs:
00:00:00.913944 None installed!
00:00:00.914658 Console: Machine state changed to 'Starting'
00:00:00.914854 Qt version: 5.6.2
00:00:00.915539 VRDE: VirtualBox Remote Desktop Extension is not available.
00:00:00.916653 GUI: UIMediumEnumerator: Medium-enumeration finished!
00:00:01.125401 SUP: Loaded VMMR0.r0 (C:\Program Files\Oracle\VirtualBox\VMMR0.r0) at 0xXXXXXXXXXXXXXXXX - ModuleInit at XXXXXXXXXXXXXXXX and ModuleTerm at XXXXXXXXXXXXXXXX using the native ring-0 loader
00:00:01.125430 SUP: VMMR0EntryEx located at XXXXXXXXXXXXXXXX and VMMR0EntryFast at XXXXXXXXXXXXXXXX
00:00:01.125435 SUP: windbg> .reload /f C:\Program Files\Oracle\VirtualBox\VMMR0.r0=0xXXXXXXXXXXXXXXXX
00:00:01.128704 Guest OS type: 'RedHat_64'
00:00:01.129367 fHMForced=true - Lots of RAM
00:00:01.129376 fHMForced=true - SMP
00:00:01.129380 fHMForced=true - 64-bit guest
00:00:01.130563 File system of 'E:\Omkar\Development\Software\Virtualization\VirtualBoxVM\Hortonworks Docker Sandbox HDP\Snapshots' (snapshots) is unknown
00:00:01.130579 File system of 'E:\Omkar\Development\Software\Virtualization\VirtualBoxVM\Hortonworks Docker Sandbox HDP\Hortonworks Docker Sandbox HDP-disk1.vmdk' is ntfs
00:00:01.159494 Shared clipboard service loaded
00:00:01.159511 Shared clipboard mode: Off
00:00:01.183303 Drag and drop service loaded
00:00:01.183322 Drag and drop mode: Off
00:00:01.230906 Guest Control service loaded
00:00:01.232679 ************************* CFGM dump *************************
00:00:01.232681 [/] (level 0)
00:00:01.232685 CSAMEnabled <integer> = 0x0000000000000001 (1)
00:00:01.232687 CpuExecutionCap <integer> = 0x0000000000000064 (100)
00:00:01.232688 EnablePAE <integer> = 0x0000000000000001 (1)
00:00:01.232689 HMEnabled <integer> = 0x0000000000000001 (1)
00:00:01.232690 MemBalloonSize <integer> = 0x0000000000000000 (0)
00:00:01.232690 Name <string> = "Hortonworks Docker Sandbox HDP" (cb=31)
00:00:01.232691 NumCPUs <integer> = 0x0000000000000004 (4)
00:00:01.232692 PATMEnabled <integer> = 0x0000000000000001 (1)
00:00:01.232693 PageFusionAllowed <integer> = 0x0000000000000000 (0)
00:00:01.232694 RamHoleSize <integer> = 0x0000000020000000 (536 870 912, 512 MB)
00:00:01.232696 RamSize <integer> = 0x000000037dc00000 (14 994 636 800, 14 300 MB, 13 GB)
00:00:01.232698 RawR0Enabled <integer> = 0x0000000000000001 (1)
00:00:01.232698 RawR3Enabled <integer> = 0x0000000000000001 (1)
00:00:01.232699 TimerMillies <integer> = 0x000000000000000a (10)
00:00:01.232700 UUID <bytes> = "aa 65 5f 19 b4 25 6e 4f 90 dd 62 ec c2 b2 86 78" (cb=16)
00:00:01.232702
00:00:01.232703 [/CPUM/] (level 1)
00:00:01.232704 GuestCpuName <string> = "host" (cb=5)
00:00:01.232705 PortableCpuIdLevel <integer> = 0x0000000000000000 (0)
00:00:01.232705
00:00:01.232705 [/DBGC/] (level 1)
00:00:01.232706 GlobalInitScript <string> = "C:\Users\ojoqcu\.VirtualBox/dbgc-init" (cb=38)
00:00:01.232707 HistoryFile <string> = "C:\Users\ojoqcu\.VirtualBox/dbgc-history" (cb=41)
00:00:01.232708 LocalInitScript <string> = "E:\Omkar\Development\Software\Virtualization\VirtualBoxVM\Hortonworks Docker Sandbox HDP/dbgc-init" (cb=99)
00:00:01.232709
00:00:01.232709 [/DBGF/] (level 1)
00:00:01.232709 Path <string> = "E:\Omkar\Development\Software\Virtualization\VirtualBoxVM\Hortonworks Docker Sandbox HDP/debug/;E:\Omkar\Development\Software\Virtualization\VirtualBoxVM\Hortonworks Docker Sandbox HDP/;C:\Users\ojoqcu\" (cb=203)
00:00:01.232710
00:00:01.232711 [/Devices/] (level 1)
00:00:01.232711
00:00:01.232712 [/Devices/8237A/] (level 2)
00:00:01.232712
00:00:01.232713 [/Devices/8237A/0/] (level 3)
00:00:01.232714 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.232714
00:00:01.232715 [/Devices/GIMDev/] (level 2)
00:00:01.232715
00:00:01.232716 [/Devices/GIMDev/0/] (level 3)
00:00:01.232716 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.232717
00:00:01.232717 [/Devices/VMMDev/] (level 2)
00:00:01.232718
00:00:01.232718 [/Devices/VMMDev/0/] (level 3)
00:00:01.232719 PCIBusNo <integer> = 0x0000000000000000 (0)
00:00:01.232720 PCIDeviceNo <integer> = 0x0000000000000004 (4)
00:00:01.232721 PCIFunctionNo <integer> = 0x0000000000000000 (0)
00:00:01.232721 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.232722
00:00:01.232722 [/Devices/VMMDev/0/Config/] (level 4)
00:00:01.232723 GuestCoreDumpDir <string> = "E:\Omkar\Development\Software\Virtualization\VirtualBoxVM\Hortonworks Docker Sandbox HDP\Snapshots" (cb=99)
00:00:01.232724
00:00:01.232724 [/Devices/VMMDev/0/LUN#0/] (level 4)
00:00:01.232725 Driver <string> = "HGCM" (cb=5)
00:00:01.232726
00:00:01.232726 [/Devices/VMMDev/0/LUN#0/Config/] (level 5)
00:00:01.232727 Object <integer> = 0x0000000004273b10 (69 679 888)
00:00:01.232728
00:00:01.232728 [/Devices/VMMDev/0/LUN#999/] (level 4)
00:00:01.232729 Driver <string> = "MainStatus" (cb=11)
00:00:01.232730
00:00:01.232730 [/Devices/VMMDev/0/LUN#999/Config/] (level 5)
00:00:01.232731 First <integer> = 0x0000000000000000 (0)
00:00:01.232732 Last <integer> = 0x0000000000000000 (0)
00:00:01.232732 papLeds <integer> = 0x0000000003fdf4e8 (66 974 952)
00:00:01.232733
00:00:01.232733 [/Devices/acpi/] (level 2)
00:00:01.232734
00:00:01.232734 [/Devices/acpi/0/] (level 3)
00:00:01.232735 PCIBusNo <integer> = 0x0000000000000000 (0)
00:00:01.232736 PCIDeviceNo <integer> = 0x0000000000000007 (7)
00:00:01.232737 PCIFunctionNo <integer> = 0x0000000000000000 (0)
00:00:01.232737 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.232738
00:00:01.232738 [/Devices/acpi/0/Config/] (level 4)
00:00:01.232739 CpuHotPlug <integer> = 0x0000000000000000 (0)
00:00:01.232740 FdcEnabled <integer> = 0x0000000000000000 (0)
00:00:01.232741 HostBusPciAddress <integer> = 0x0000000000000000 (0)
00:00:01.232741 HpetEnabled <integer> = 0x0000000000000000 (0)
00:00:01.232742 IOAPIC <integer> = 0x0000000000000001 (1)
00:00:01.232743 IocPciAddress <integer> = 0x0000000000010000 (65 536)
00:00:01.232744 NumCPUs <integer> = 0x0000000000000004 (4)
00:00:01.232744 Parallel0IoPortBase <integer> = 0x0000000000000000 (0)
00:00:01.232745 Parallel0Irq <integer> = 0x0000000000000000 (0)
00:00:01.232746 Parallel1IoPortBase <integer> = 0x0000000000000000 (0)
00:00:01.232746 Parallel1Irq <integer> = 0x0000000000000000 (0)
00:00:01.232747 Serial0IoPortBase <integer> = 0x0000000000000000 (0)
00:00:01.232748 Serial0Irq <integer> = 0x0000000000000000 (0)
00:00:01.232748 Serial1IoPortBase <integer> = 0x0000000000000000 (0)
00:00:01.232749 Serial1Irq <integer> = 0x0000000000000000 (0)
00:00:01.232749 ShowCpu <integer> = 0x0000000000000001 (1)
00:00:01.232750 ShowRtc <integer> = 0x0000000000000000 (0)
00:00:01.232751 SmcEnabled <integer> = 0x0000000000000000 (0)
00:00:01.232751
00:00:01.232752 [/Devices/acpi/0/LUN#0/] (level 4)
00:00:01.232753 Driver <string> = "ACPIHost" (cb=9)
00:00:01.232753
00:00:01.232753 [/Devices/acpi/0/LUN#0/Config/] (level 5)
00:00:01.232754
00:00:01.232755 [/Devices/acpi/0/LUN#1/] (level 4)
00:00:01.232755 Driver <string> = "ACPICpu" (cb=8)
00:00:01.232756
00:00:01.232756 [/Devices/acpi/0/LUN#1/Config/] (level 5)
00:00:01.232757
00:00:01.232757 [/Devices/acpi/0/LUN#2/] (level 4)
00:00:01.232758 Driver <string> = "ACPICpu" (cb=8)
00:00:01.232759
00:00:01.232759 [/Devices/acpi/0/LUN#2/Config/] (level 5)
00:00:01.232760
00:00:01.232760 [/Devices/acpi/0/LUN#3/] (level 4)
00:00:01.232761 Driver <string> = "ACPICpu" (cb=8)
00:00:01.232761
00:00:01.232761 [/Devices/acpi/0/LUN#3/Config/] (level 5)
00:00:01.232762
00:00:01.232763 [/Devices/apic/] (level 2)
00:00:01.232763
00:00:01.232764 [/Devices/apic/0/] (level 3)
00:00:01.232764 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.232765
00:00:01.232765 [/Devices/apic/0/Config/] (level 4)
00:00:01.232766 IOAPIC <integer> = 0x0000000000000001 (1)
00:00:01.232767 Mode <integer> = 0x0000000000000003 (3)
00:00:01.232767 NumCPUs <integer> = 0x0000000000000004 (4)
00:00:01.232768
00:00:01.232768 [/Devices/e1000/] (level 2)
00:00:01.232769
00:00:01.232769 [/Devices/e1000/0/] (level 3)
00:00:01.232770 PCIBusNo <integer> = 0x0000000000000000 (0)
00:00:01.232771 PCIDeviceNo <integer> = 0x0000000000000003 (3)
00:00:01.232772 PCIFunctionNo <integer> = 0x0000000000000000 (0)
00:00:01.232772 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.232773
00:00:01.232773 [/Devices/e1000/0/Config/] (level 4)
00:00:01.232775 AdapterType <integer> = 0x0000000000000000 (0)
00:00:01.232776 CableConnected <integer> = 0x0000000000000001 (1)
00:00:01.232776 LineSpeed <integer> = 0x0000000000000000 (0)
00:00:01.232777 MAC <bytes> = "08 00 27 21 8e c8" (cb=6)
00:00:01.232778
00:00:01.232778 [/Devices/e1000/0/LUN#0/] (level 4)
00:00:01.232779 Driver <string> = "NAT" (cb=4)
00:00:01.232780
00:00:01.232780 [/Devices/e1000/0/LUN#0/Config/] (level 5)
00:00:01.232781 AliasMode <integer> = 0x0000000000000000 (0)
00:00:01.232782 BootFile <string> = "Hortonworks Docker Sandbox HDP.pxe" (cb=35)
00:00:01.232783 DNSProxy <integer> = 0x0000000000000000 (0)
00:00:01.232783 Network <string> = "10.0.2.0/24" (cb=12)
00:00:01.232784 PassDomain <integer> = 0x0000000000000001 (1)
00:00:01.232785 TFTPPrefix <string> = "C:\Users\ojoqcu\.VirtualBox\TFTP" (cb=33)
00:00:01.232802 UseHostResolver <integer> = 0x0000000000000000 (0)
00:00:01.232802
00:00:01.232803 [/Devices/e1000/0/LUN#0/Config/PortForwarding/] (level 6)
00:00:01.232804
00:00:01.232804 [/Devices/e1000/0/LUN#0/Config/PortForwarding/0/] (level 7)
00:00:01.232806 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232806 GuestPort <integer> = 0x000000000000c3af (50 095)
00:00:01.232807 HostPort <integer> = 0x000000000000c3af (50 095)
00:00:01.232808 Name <string> = "Accumulo" (cb=9)
00:00:01.232809 Protocol <string> = "TCP" (cb=4)
00:00:01.232809
00:00:01.232810 [/Devices/e1000/0/LUN#0/Config/PortForwarding/1/] (level 7)
00:00:01.232811 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232811 GuestPort <integer> = 0x00000000000022b6 (8 886)
00:00:01.232812 HostPort <integer> = 0x00000000000022b6 (8 886)
00:00:01.232829 Name <string> = "AmbariInfra" (cb=12)
00:00:01.232829 Protocol <string> = "TCP" (cb=4)
00:00:01.232830
00:00:01.232830 [/Devices/e1000/0/LUN#0/Config/PortForwarding/10/] (level 7)
00:00:01.232831 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232832 GuestPort <integer> = 0x0000000000003a9a (15 002)
00:00:01.232833 HostPort <integer> = 0x0000000000003a9a (15 002)
00:00:01.232834 Name <string> = "Custom15" (cb=9)
00:00:01.232834 Protocol <string> = "TCP" (cb=4)
00:00:01.232835
00:00:01.232835 [/Devices/e1000/0/LUN#0/Config/PortForwarding/11/] (level 7)
00:00:01.232836 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232837 GuestPort <integer> = 0x000000000000006f (111)
00:00:01.232838 HostPort <integer> = 0x000000000000006f (111)
00:00:01.232838 Name <string> = "Custom16" (cb=9)
00:00:01.232839 Protocol <string> = "TCP" (cb=4)
00:00:01.232839
00:00:01.232840 [/Devices/e1000/0/LUN#0/Config/PortForwarding/12/] (level 7)
00:00:01.232841 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232841 GuestPort <integer> = 0x0000000000000801 (2 049)
00:00:01.232842 HostPort <integer> = 0x0000000000000801 (2 049)
00:00:01.232843 Name <string> = "Custom17" (cb=9)
00:00:01.232843 Protocol <string> = "TCP" (cb=4)
00:00:01.232844
00:00:01.232844 [/Devices/e1000/0/LUN#0/Config/PortForwarding/13/] (level 7)
00:00:01.232845 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232846 GuestPort <integer> = 0x0000000000001092 (4 242)
00:00:01.232846 HostPort <integer> = 0x0000000000001092 (4 242)
00:00:01.232847 Name <string> = "Custom18" (cb=9)
00:00:01.232848 Protocol <string> = "TCP" (cb=4)
00:00:01.232848
00:00:01.232849 [/Devices/e1000/0/LUN#0/Config/PortForwarding/14/] (level 7)
00:00:01.232850 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232850 GuestPort <integer> = 0x000000000000c39f (50 079)
00:00:01.232851 HostPort <integer> = 0x000000000000c39f (50 079)
00:00:01.232852 Name <string> = "Custom19" (cb=9)
00:00:01.232852 Protocol <string> = "TCP" (cb=4)
00:00:01.232853
00:00:01.232853 [/Devices/e1000/0/LUN#0/Config/PortForwarding/15/] (level 7)
00:00:01.232854 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232855 GuestPort <integer> = 0x0000000000000bb8 (3 000)
00:00:01.232855 HostPort <integer> = 0x0000000000000bb8 (3 000)
00:00:01.232856 Name <string> = "Custom20" (cb=9)
00:00:01.232856 Protocol <string> = "TCP" (cb=4)
00:00:01.232857
00:00:01.232857 [/Devices/e1000/0/LUN#0/Config/PortForwarding/16/] (level 7)
00:00:01.232858 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232859 GuestPort <integer> = 0x0000000000003e80 (16 000)
00:00:01.232860 HostPort <integer> = 0x0000000000003e80 (16 000)
00:00:01.232860 Name <string> = "Custom21" (cb=9)
00:00:01.232861 Protocol <string> = "TCP" (cb=4)
00:00:01.232861
00:00:01.232862 [/Devices/e1000/0/LUN#0/Config/PortForwarding/17/] (level 7)
00:00:01.232863 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232863 GuestPort <integer> = 0x0000000000003e94 (16 020)
00:00:01.232864 HostPort <integer> = 0x0000000000003e94 (16 020)
00:00:01.232865 Name <string> = "Custom22" (cb=9)
00:00:01.232865 Protocol <string> = "TCP" (cb=4)
00:00:01.232866
00:00:01.232866 [/Devices/e1000/0/LUN#0/Config/PortForwarding/18/] (level 7)
00:00:01.232867 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232867 GuestPort <integer> = 0x0000000000003c8c (15 500)
00:00:01.232868 HostPort <integer> = 0x0000000000003c8c (15 500)
00:00:01.232869 Name <string> = "Custom23" (cb=9)
00:00:01.232869 Protocol <string> = "TCP" (cb=4)
00:00:01.232870
00:00:01.232870 [/Devices/e1000/0/LUN#0/Config/PortForwarding/19/] (level 7)
00:00:01.232871 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232871 GuestPort <integer> = 0x0000000000003c8d (15 501)
00:00:01.232872 HostPort <integer> = 0x0000000000003c8d (15 501)
00:00:01.232873 Name <string> = "Custom24" (cb=9)
00:00:01.232873 Protocol <string> = "TCP" (cb=4)
00:00:01.232874
00:00:01.232874 [/Devices/e1000/0/LUN#0/Config/PortForwarding/2/] (level 7)
00:00:01.232875 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232876 GuestPort <integer> = 0x0000000000001068 (4 200)
00:00:01.232876 HostPort <integer> = 0x0000000000001068 (4 200)
00:00:01.232877 Name <string> = "AmbariShell" (cb=12)
00:00:01.232878 Protocol <string> = "TCP" (cb=4)
00:00:01.232878
00:00:01.232879 [/Devices/e1000/0/LUN#0/Config/PortForwarding/20/] (level 7)
00:00:01.232880 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232880 GuestPort <integer> = 0x0000000000003c8e (15 502)
00:00:01.232881 HostPort <integer> = 0x0000000000003c8e (15 502)
00:00:01.232882 Name <string> = "Custom25" (cb=9)
00:00:01.232882 Protocol <string> = "TCP" (cb=4)
00:00:01.232883
00:00:01.232883 [/Devices/e1000/0/LUN#0/Config/PortForwarding/21/] (level 7)
00:00:01.232884 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232885 GuestPort <integer> = 0x0000000000003c8f (15 503)
00:00:01.232885 HostPort <integer> = 0x0000000000003c8f (15 503)
00:00:01.232886 Name <string> = "Custom26" (cb=9)
00:00:01.232887 Protocol <string> = "TCP" (cb=4)
00:00:01.232887
00:00:01.232887 [/Devices/e1000/0/LUN#0/Config/PortForwarding/22/] (level 7)
00:00:01.232888 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232889 GuestPort <integer> = 0x0000000000003c90 (15 504)
00:00:01.232889 HostPort <integer> = 0x0000000000003c90 (15 504)
00:00:01.232890 Name <string> = "Custom27" (cb=9)
00:00:01.232891 Protocol <string> = "TCP" (cb=4)
00:00:01.232891
00:00:01.232891 [/Devices/e1000/0/LUN#0/Config/PortForwarding/23/] (level 7)
00:00:01.232893 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232893 GuestPort <integer> = 0x0000000000003c91 (15 505)
00:00:01.232894 HostPort <integer> = 0x0000000000003c91 (15 505)
00:00:01.232895 Name <string> = "Custom28" (cb=9)
00:00:01.232895 Protocol <string> = "TCP" (cb=4)
00:00:01.232895
00:00:01.232896 [/Devices/e1000/0/LUN#0/Config/PortForwarding/24/] (level 7)
00:00:01.232897 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232897 GuestPort <integer> = 0x000000000000223d (8 765)
00:00:01.232898 HostPort <integer> = 0x000000000000223d (8 765)
00:00:01.232899 Name <string> = "Custom3" (cb=8)
00:00:01.232899 Protocol <string> = "TCP" (cb=4)
00:00:01.232900
00:00:01.232900 [/Devices/e1000/0/LUN#0/Config/PortForwarding/25/] (level 7)
00:00:01.232901 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232902 GuestPort <integer> = 0x0000000000001f9a (8 090)
00:00:01.232902 HostPort <integer> = 0x0000000000001f9a (8 090)
00:00:01.232903 Name <string> = "Custom4" (cb=8)
00:00:01.232904 Protocol <string> = "TCP" (cb=4)
00:00:01.232904
00:00:01.232904 [/Devices/e1000/0/LUN#0/Config/PortForwarding/26/] (level 7)
00:00:01.232906 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232906 GuestPort <integer> = 0x0000000000001f9b (8 091)
00:00:01.232907 HostPort <integer> = 0x0000000000001f9b (8 091)
00:00:01.232907 Name <string> = "Custom5" (cb=8)
00:00:01.232908 Protocol <string> = "TCP" (cb=4)
00:00:01.232908
00:00:01.232909 [/Devices/e1000/0/LUN#0/Config/PortForwarding/27/] (level 7)
00:00:01.232910 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232910 GuestPort <integer> = 0x0000000000001f45 (8 005)
00:00:01.232911 HostPort <integer> = 0x0000000000001f45 (8 005)
00:00:01.232912 Name <string> = "Custom6" (cb=8)
00:00:01.232912 Protocol <string> = "TCP" (cb=4)
00:00:01.232913
00:00:01.232913 [/Devices/e1000/0/LUN#0/Config/PortForwarding/28/] (level 7)
00:00:01.232914 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232915 GuestPort <integer> = 0x0000000000001f96 (8 086)
00:00:01.232916 HostPort <integer> = 0x0000000000001f96 (8 086)
00:00:01.232916 Name <string> = "Custom7" (cb=8)
00:00:01.232917 Protocol <string> = "TCP" (cb=4)
00:00:01.232917
00:00:01.232918 [/Devices/e1000/0/LUN#0/Config/PortForwarding/29/] (level 7)
00:00:01.232919 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232919 GuestPort <integer> = 0x0000000000001f92 (8 082)
00:00:01.232920 HostPort <integer> = 0x0000000000001f92 (8 082)
00:00:01.232921 Name <string> = "Custom8" (cb=8)
00:00:01.232921 Protocol <string> = "TCP" (cb=4)
00:00:01.232922
00:00:01.232922 [/Devices/e1000/0/LUN#0/Config/PortForwarding/3/] (level 7)
00:00:01.232923 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232924 GuestPort <integer> = 0x0000000000005208 (21 000)
00:00:01.232924 HostPort <integer> = 0x0000000000005208 (21 000)
00:00:01.232925 Name <string> = "Atlas" (cb=6)
00:00:01.232926 Protocol <string> = "TCP" (cb=4)
00:00:01.232926
00:00:01.232926 [/Devices/e1000/0/LUN#0/Config/PortForwarding/30/] (level 7)
00:00:01.232927 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232928 GuestPort <integer> = 0x00000000000046a1 (18 081)
00:00:01.232929 HostPort <integer> = 0x00000000000046a1 (18 081)
00:00:01.232929 Name <string> = "Custom9" (cb=8)
00:00:01.232930 Protocol <string> = "TCP" (cb=4)
00:00:01.232930
00:00:01.232931 [/Devices/e1000/0/LUN#0/Config/PortForwarding/31/] (level 7)
00:00:01.232932 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232932 GuestPort <integer> = 0x000000000000c39b (50 075)
00:00:01.232933 HostPort <integer> = 0x000000000000c39b (50 075)
00:00:01.232934 Name <string> = "Datanode" (cb=9)
00:00:01.232934 Protocol <string> = "TCP" (cb=4)
00:00:01.232935
00:00:01.232935 [/Devices/e1000/0/LUN#0/Config/PortForwarding/32/] (level 7)
00:00:01.232936 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232936 GuestPort <integer> = 0x00000000000008ae (2 222)
00:00:01.232937 HostPort <integer> = 0x00000000000008ae (2 222)
00:00:01.232938 Name <string> = "DockerSSH" (cb=10)
00:00:01.232938 Protocol <string> = "TCP" (cb=4)
00:00:01.232939
00:00:01.232939 [/Devices/e1000/0/LUN#0/Config/PortForwarding/33/] (level 7)
00:00:01.232940 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232940 GuestPort <integer> = 0x0000000000003a98 (15 000)
00:00:01.232941 HostPort <integer> = 0x0000000000003a98 (15 000)
00:00:01.232942 Name <string> = "Falcon" (cb=7)
00:00:01.232942 Protocol <string> = "TCP" (cb=4)
00:00:01.232943
00:00:01.232943 [/Devices/e1000/0/LUN#0/Config/PortForwarding/34/] (level 7)
00:00:01.232944 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232945 GuestPort <integer> = 0x0000000000003e8a (16 010)
00:00:01.232945 HostPort <integer> = 0x0000000000003e8a (16 010)
00:00:01.232946 Name <string> = "HBaseMaster" (cb=12)
00:00:01.232947 Protocol <string> = "TCP" (cb=4)
00:00:01.232947
00:00:01.232947 [/Devices/e1000/0/LUN#0/Config/PortForwarding/35/] (level 7)
00:00:01.232948 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232949 GuestPort <integer> = 0x0000000000003e9e (16 030)
00:00:01.232950 HostPort <integer> = 0x0000000000003e9e (16 030)
00:00:01.232950 Name <string> = "HBaseRegion" (cb=12)
00:00:01.232951 Protocol <string> = "TCP" (cb=4)
00:00:01.232951
00:00:01.232952 [/Devices/e1000/0/LUN#0/Config/PortForwarding/36/] (level 7)
00:00:01.232953 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232953 GuestPort <integer> = 0x0000000000002710 (10 000)
00:00:01.232954 HostPort <integer> = 0x0000000000002710 (10 000)
00:00:01.232955 Name <string> = "HS2" (cb=4)
00:00:01.232955 Protocol <string> = "TCP" (cb=4)
00:00:01.232956
00:00:01.232956 [/Devices/e1000/0/LUN#0/Config/PortForwarding/37/] (level 7)
00:00:01.232957 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232957 GuestPort <integer> = 0x0000000000002711 (10 001)
00:00:01.232958 HostPort <integer> = 0x0000000000002711 (10 001)
00:00:01.232959 Name <string> = "HS2Http" (cb=8)
00:00:01.232959 Protocol <string> = "TCP" (cb=4)
00:00:01.232960
00:00:01.232960 [/Devices/e1000/0/LUN#0/Config/PortForwarding/38/] (level 7)
00:00:01.232961 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232962 GuestPort <integer> = 0x0000000000002904 (10 500)
00:00:01.232962 HostPort <integer> = 0x0000000000002904 (10 500)
00:00:01.232963 Name <string> = "HS2v2" (cb=6)
00:00:01.232963 Protocol <string> = "TCP" (cb=4)
00:00:01.232964
00:00:01.232964 [/Devices/e1000/0/LUN#0/Config/PortForwarding/39/] (level 7)
00:00:01.232965 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232966 GuestPort <integer> = 0x0000000000002328 (9 000)
00:00:01.232967 HostPort <integer> = 0x0000000000002328 (9 000)
00:00:01.232967 Name <string> = "HST" (cb=4)
00:00:01.232968 Protocol <string> = "TCP" (cb=4)
00:00:01.232968
00:00:01.232969 [/Devices/e1000/0/LUN#0/Config/PortForwarding/4/] (level 7)
00:00:01.232970 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232970 GuestPort <integer> = 0x000000000000ea60 (60 000)
00:00:01.232971 HostPort <integer> = 0x000000000000ea60 (60 000)
00:00:01.232972 Name <string> = "Custom1" (cb=8)
00:00:01.232972 Protocol <string> = "TCP" (cb=4)
00:00:01.232973
00:00:01.232973 [/Devices/e1000/0/LUN#0/Config/PortForwarding/40/] (level 7)
00:00:01.232974 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232975 GuestPort <integer> = 0x0000000000000016 (22)
00:00:01.232975 HostPort <integer> = 0x000000000000084a (2 122)
00:00:01.232976 Name <string> = "HostSSH" (cb=8)
00:00:01.232976 Protocol <string> = "TCP" (cb=4)
00:00:01.232977
00:00:01.232977 [/Devices/e1000/0/LUN#0/Config/PortForwarding/41/] (level 7)
00:00:01.232978 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232979 GuestPort <integer> = 0x0000000000004db0 (19 888)
00:00:01.232979 HostPort <integer> = 0x0000000000004db0 (19 888)
00:00:01.232980 Name <string> = "JobHistory" (cb=11)
00:00:01.232981 Protocol <string> = "TCP" (cb=4)
00:00:01.232981
00:00:01.232981 [/Devices/e1000/0/LUN#0/Config/PortForwarding/42/] (level 7)
00:00:01.232983 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232983 GuestPort <integer> = 0x00000000000022b9 (8 889)
00:00:01.232984 HostPort <integer> = 0x00000000000022b9 (8 889)
00:00:01.232984 Name <string> = "Jupyter" (cb=8)
00:00:01.232985 Protocol <string> = "TCP" (cb=4)
00:00:01.232985
00:00:01.232986 [/Devices/e1000/0/LUN#0/Config/PortForwarding/43/] (level 7)
00:00:01.232987 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232987 GuestPort <integer> = 0x00000000000020fb (8 443)
00:00:01.232988 HostPort <integer> = 0x00000000000020fb (8 443)
00:00:01.232989 Name <string> = "Knox" (cb=5)
00:00:01.232989 Protocol <string> = "TCP" (cb=4)
00:00:01.232990
00:00:01.232990 [/Devices/e1000/0/LUN#0/Config/PortForwarding/44/] (level 7)
00:00:01.232991 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232992 GuestPort <integer> = 0x0000000000002382 (9 090)
00:00:01.232992 HostPort <integer> = 0x0000000000002382 (9 090)
00:00:01.232993 Name <string> = "Nifi" (cb=5)
00:00:01.232994 Protocol <string> = "TCP" (cb=4)
00:00:01.232994
00:00:01.232994 [/Devices/e1000/0/LUN#0/Config/PortForwarding/45/] (level 7)
00:00:01.232995 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.232996 GuestPort <integer> = 0x0000000000001f6a (8 042)
00:00:01.232997 HostPort <integer> = 0x0000000000001f6a (8 042)
00:00:01.232997 Name <string> = "NodeManager" (cb=12)
00:00:01.232998 Protocol <string> = "TCP" (cb=4)
00:00:01.232998
00:00:01.232999 [/Devices/e1000/0/LUN#0/Config/PortForwarding/46/] (level 7)
00:00:01.233000 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233000 GuestPort <integer> = 0x0000000000002af8 (11 000)
00:00:01.233001 HostPort <integer> = 0x0000000000002af8 (11 000)
00:00:01.233002 Name <string> = "Oozie" (cb=6)
00:00:01.233002 Protocol <string> = "TCP" (cb=4)
00:00:01.233002
00:00:01.233003 [/Devices/e1000/0/LUN#0/Config/PortForwarding/47/] (level 7)
00:00:01.233004 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233004 GuestPort <integer> = 0x0000000000001f60 (8 032)
00:00:01.233005 HostPort <integer> = 0x0000000000001f60 (8 032)
00:00:01.233006 Name <string> = "RM" (cb=3)
00:00:01.233006 Protocol <string> = "TCP" (cb=4)
00:00:01.233007
00:00:01.233007 [/Devices/e1000/0/LUN#0/Config/PortForwarding/48/] (level 7)
00:00:01.233008 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233008 GuestPort <integer> = 0x0000000000002321 (8 993)
00:00:01.233009 HostPort <integer> = 0x0000000000002321 (8 993)
00:00:01.233010 Name <string> = "Solr" (cb=5)
00:00:01.233010 Protocol <string> = "TCP" (cb=4)
00:00:01.233011
00:00:01.233011 [/Devices/e1000/0/LUN#0/Config/PortForwarding/49/] (level 7)
00:00:01.233012 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233013 GuestPort <integer> = 0x0000000000002317 (8 983)
00:00:01.233013 HostPort <integer> = 0x0000000000002317 (8 983)
00:00:01.233014 Name <string> = "SolrAdmin" (cb=10)
00:00:01.233015 Protocol <string> = "TCP" (cb=4)
00:00:01.233015
00:00:01.233015 [/Devices/e1000/0/LUN#0/Config/PortForwarding/5/] (level 7)
00:00:01.233017 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233017 GuestPort <integer> = 0x000000000000271f (10 015)
00:00:01.233018 HostPort <integer> = 0x000000000000271f (10 015)
00:00:01.233018 Name <string> = "Custom10" (cb=9)
00:00:01.233019 Protocol <string> = "TCP" (cb=4)
00:00:01.233019
00:00:01.233020 [/Devices/e1000/0/LUN#0/Config/PortForwarding/50/] (level 7)
00:00:01.233021 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233021 GuestPort <integer> = 0x0000000000000fc8 (4 040)
00:00:01.233022 HostPort <integer> = 0x0000000000000fc8 (4 040)
00:00:01.233023 Name <string> = "Spark" (cb=6)
00:00:01.233023 Protocol <string> = "TCP" (cb=4)
00:00:01.233024
00:00:01.233024 [/Devices/e1000/0/LUN#0/Config/PortForwarding/51/] (level 7)
00:00:01.233025 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233026 GuestPort <integer> = 0x00000000000046a0 (18 080)
00:00:01.233026 HostPort <integer> = 0x00000000000046a0 (18 080)
00:00:01.233027 Name <string> = "SparkHistoryServer" (cb=19)
00:00:01.233028 Protocol <string> = "TCP" (cb=4)
00:00:01.233028
00:00:01.233028 [/Devices/e1000/0/LUN#0/Config/PortForwarding/52/] (level 7)
00:00:01.233029 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233030 GuestPort <integer> = 0x0000000000002228 (8 744)
00:00:01.233031 HostPort <integer> = 0x0000000000002228 (8 744)
00:00:01.233031 Name <string> = "StormUI" (cb=8)
00:00:01.233032 Protocol <string> = "TCP" (cb=4)
00:00:01.233032
00:00:01.233032 [/Devices/e1000/0/LUN#0/Config/PortForwarding/53/] (level 7)
00:00:01.233034 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233034 GuestPort <integer> = 0x00000000000022b8 (8 888)
00:00:01.233035 HostPort <integer> = 0x00000000000022b8 (8 888)
00:00:01.233035 Name <string> = "Tutorials" (cb=10)
00:00:01.233036 Protocol <string> = "TCP" (cb=4)
00:00:01.233036
00:00:01.233037 [/Devices/e1000/0/LUN#0/Config/PortForwarding/54/] (level 7)
00:00:01.233038 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233038 GuestPort <integer> = 0x000000000000eab0 (60 080)
00:00:01.233039 HostPort <integer> = 0x000000000000eab0 (60 080)
00:00:01.233040 Name <string> = "WebHBase" (cb=9)
00:00:01.233040 Protocol <string> = "TCP" (cb=4)
00:00:01.233041
00:00:01.233041 [/Devices/e1000/0/LUN#0/Config/PortForwarding/55/] (level 7)
00:00:01.233042 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233043 GuestPort <integer> = 0x000000000000c3bf (50 111)
00:00:01.233043 HostPort <integer> = 0x000000000000c3bf (50 111)
00:00:01.233044 Name <string> = "WebHcat" (cb=8)
00:00:01.233045 Protocol <string> = "TCP" (cb=4)
00:00:01.233045
00:00:01.233045 [/Devices/e1000/0/LUN#0/Config/PortForwarding/56/] (level 7)
00:00:01.233046 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233047 GuestPort <integer> = 0x000000000000c396 (50 070)
00:00:01.233048 HostPort <integer> = 0x000000000000c396 (50 070)
00:00:01.233048 Name <string> = "WebHdfs" (cb=8)
00:00:01.233049 Protocol <string> = "TCP" (cb=4)
00:00:01.233049
00:00:01.233050 [/Devices/e1000/0/LUN#0/Config/PortForwarding/57/] (level 7)
00:00:01.233051 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233051 GuestPort <integer> = 0x00000000000017c0 (6 080)
00:00:01.233052 HostPort <integer> = 0x00000000000017c0 (6 080)
00:00:01.233053 Name <string> = "XASecure" (cb=9)
00:00:01.233053 Protocol <string> = "TCP" (cb=4)
00:00:01.233054
00:00:01.233054 [/Devices/e1000/0/LUN#0/Config/PortForwarding/58/] (level 7)
00:00:01.233055 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233055 GuestPort <integer> = 0x0000000000001ffc (8 188)
00:00:01.233056 HostPort <integer> = 0x0000000000001ffc (8 188)
00:00:01.233057 Name <string> = "YarnATS" (cb=8)
00:00:01.233057 Protocol <string> = "TCP" (cb=4)
00:00:01.233058
00:00:01.233058 [/Devices/e1000/0/LUN#0/Config/PortForwarding/59/] (level 7)
00:00:01.233059 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233060 GuestPort <integer> = 0x0000000000001f98 (8 088)
00:00:01.233060 HostPort <integer> = 0x0000000000001f98 (8 088)
00:00:01.233061 Name <string> = "YarnRM" (cb=7)
00:00:01.233062 Protocol <string> = "TCP" (cb=4)
00:00:01.233062
00:00:01.233062 [/Devices/e1000/0/LUN#0/Config/PortForwarding/6/] (level 7)
00:00:01.233063 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233064 GuestPort <integer> = 0x0000000000002720 (10 016)
00:00:01.233065 HostPort <integer> = 0x0000000000002720 (10 016)
00:00:01.233065 Name <string> = "Custom11" (cb=9)
00:00:01.233066 Protocol <string> = "TCP" (cb=4)
00:00:01.233066
00:00:01.233067 [/Devices/e1000/0/LUN#0/Config/PortForwarding/60/] (level 7)
00:00:01.233068 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233068 GuestPort <integer> = 0x000000000000270b (9 995)
00:00:01.233069 HostPort <integer> = 0x000000000000270b (9 995)
00:00:01.233069 Name <string> = "Zeppelin1" (cb=10)
00:00:01.233070 Protocol <string> = "TCP" (cb=4)
00:00:01.233070
00:00:01.233071 [/Devices/e1000/0/LUN#0/Config/PortForwarding/61/] (level 7)
00:00:01.233072 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233072 GuestPort <integer> = 0x000000000000270c (9 996)
00:00:01.233073 HostPort <integer> = 0x000000000000270c (9 996)
00:00:01.233073 Name <string> = "Zeppelin2" (cb=10)
00:00:01.233074 Protocol <string> = "TCP" (cb=4)
00:00:01.233074
00:00:01.233075 [/Devices/e1000/0/LUN#0/Config/PortForwarding/62/] (level 7)
00:00:01.233076 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233076 GuestPort <integer> = 0x0000000000000885 (2 181)
00:00:01.233077 HostPort <integer> = 0x0000000000000885 (2 181)
00:00:01.233078 Name <string> = "Zookeeper" (cb=10)
00:00:01.233078 Protocol <string> = "TCP" (cb=4)
00:00:01.233078
00:00:01.233079 [/Devices/e1000/0/LUN#0/Config/PortForwarding/63/] (level 7)
00:00:01.233080 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233080 GuestPort <integer> = 0x0000000000001f90 (8 080)
00:00:01.233081 HostPort <integer> = 0x0000000000001f90 (8 080)
00:00:01.233082 Name <string> = "ambari" (cb=7)
00:00:01.233082 Protocol <string> = "TCP" (cb=4)
00:00:01.233083
00:00:01.233083 [/Devices/e1000/0/LUN#0/Config/PortForwarding/64/] (level 7)
00:00:01.233084 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233084 GuestPort <integer> = 0x0000000000000050 (80)
00:00:01.233085 HostPort <integer> = 0x000000000000a460 (42 080)
00:00:01.233086 Name <string> = "apache" (cb=7)
00:00:01.233086 Protocol <string> = "TCP" (cb=4)
00:00:01.233087
00:00:01.233087 [/Devices/e1000/0/LUN#0/Config/PortForwarding/65/] (level 7)
00:00:01.233088 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233089 GuestPort <integer> = 0x0000000000001f54 (8 020)
00:00:01.233089 HostPort <integer> = 0x0000000000001f54 (8 020)
00:00:01.233090 Name <string> = "hdfs" (cb=5)
00:00:01.233090 Protocol <string> = "TCP" (cb=4)
00:00:01.233091
00:00:01.233091 [/Devices/e1000/0/LUN#0/Config/PortForwarding/66/] (level 7)
00:00:01.233092 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233093 GuestPort <integer> = 0x000000000000a47f (42 111)
00:00:01.233093 HostPort <integer> = 0x000000000000a47f (42 111)
00:00:01.233094 Name <string> = "nfs" (cb=4)
00:00:01.233095 Protocol <string> = "TCP" (cb=4)
00:00:01.233095
00:00:01.233095 [/Devices/e1000/0/LUN#0/Config/PortForwarding/67/] (level 7)
00:00:01.233096 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233097 GuestPort <integer> = 0x0000000000001f68 (8 040)
00:00:01.233098 HostPort <integer> = 0x0000000000001f68 (8 040)
00:00:01.233098 Name <string> = "nodemanager" (cb=12)
00:00:01.233099 Protocol <string> = "TCP" (cb=4)
00:00:01.233099
00:00:01.233099 [/Devices/e1000/0/LUN#0/Config/PortForwarding/7/] (level 7)
00:00:01.233101 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233101 GuestPort <integer> = 0x0000000000002906 (10 502)
00:00:01.233102 HostPort <integer> = 0x0000000000002906 (10 502)
00:00:01.233102 Name <string> = "Custom12" (cb=9)
00:00:01.233103 Protocol <string> = "TCP" (cb=4)
00:00:01.233103
00:00:01.233104 [/Devices/e1000/0/LUN#0/Config/PortForwarding/8/] (level 7)
00:00:01.233105 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233105 GuestPort <integer> = 0x0000000000008311 (33 553)
00:00:01.233106 HostPort <integer> = 0x0000000000008311 (33 553)
00:00:01.233107 Name <string> = "Custom13" (cb=9)
00:00:01.233107 Protocol <string> = "TCP" (cb=4)
00:00:01.233108
00:00:01.233108 [/Devices/e1000/0/LUN#0/Config/PortForwarding/9/] (level 7)
00:00:01.233109 BindIP <string> = "127.0.0.1" (cb=10)
00:00:01.233110 GuestPort <integer> = 0x00000000000099fb (39 419)
00:00:01.233111 HostPort <integer> = 0x00000000000099fb (39 419)
00:00:01.233111 Name <string> = "Custom14" (cb=9)
00:00:01.233112 Protocol <string> = "TCP" (cb=4)
00:00:01.233112
00:00:01.233113 [/Devices/e1000/0/LUN#999/] (level 4)
00:00:01.233113 Driver <string> = "MainStatus" (cb=11)
00:00:01.233114
00:00:01.233114 [/Devices/e1000/0/LUN#999/Config/] (level 5)
00:00:01.233115 First <integer> = 0x0000000000000000 (0)
00:00:01.233116 Last <integer> = 0x0000000000000000 (0)
00:00:01.233117 papLeds <integer> = 0x0000000003fdf3c8 (66 974 664)
00:00:01.233118
00:00:01.233118 [/Devices/i8254/] (level 2)
00:00:01.233119
00:00:01.233119 [/Devices/i8254/0/] (level 3)
00:00:01.233120
00:00:01.233120 [/Devices/i8254/0/Config/] (level 4)
00:00:01.233121
00:00:01.233121 [/Devices/i8259/] (level 2)
00:00:01.233122
00:00:01.233122 [/Devices/i8259/0/] (level 3)
00:00:01.233123 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.233123
00:00:01.233124 [/Devices/i8259/0/Config/] (level 4)
00:00:01.233124
00:00:01.233125 [/Devices/ichac97/] (level 2)
00:00:01.233125
00:00:01.233126 [/Devices/ichac97/0/] (level 3)
00:00:01.233126 PCIBusNo <integer> = 0x0000000000000000 (0)
00:00:01.233127 PCIDeviceNo <integer> = 0x0000000000000005 (5)
00:00:01.233128 PCIFunctionNo <integer> = 0x0000000000000000 (0)
00:00:01.233128 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.233129
00:00:01.233129 [/Devices/ichac97/0/AudioConfig/] (level 4)
00:00:01.233130
00:00:01.233130 [/Devices/ichac97/0/Config/] (level 4)
00:00:01.233131 Codec <string> = "STAC9700" (cb=9)
00:00:01.233132
00:00:01.233132 [/Devices/ichac97/0/LUN#0/] (level 4)
00:00:01.233133 Driver <string> = "AUDIO" (cb=6)
00:00:01.233133
00:00:01.233134 [/Devices/ichac97/0/LUN#0/AttachedDriver/] (level 5)
00:00:01.233135 Driver <string> = "DSoundAudio" (cb=12)
00:00:01.233135
00:00:01.233135 [/Devices/ichac97/0/LUN#0/AttachedDriver/Config/] (level 6)
00:00:01.233136 StreamName <string> = "Hortonworks Docker Sandbox HDP" (cb=31)
00:00:01.233137
00:00:01.233137 [/Devices/ichac97/0/LUN#0/Config/] (level 5)
00:00:01.233138
00:00:01.233139 [/Devices/ichac97/0/LUN#1/] (level 4)
00:00:01.233139 Driver <string> = "AUDIO" (cb=6)
00:00:01.233140
00:00:01.233140 [/Devices/ichac97/0/LUN#1/AttachedDriver/] (level 5)
00:00:01.233141 Driver <string> = "AudioVRDE" (cb=10)
00:00:01.233142
00:00:01.233142 [/Devices/ichac97/0/LUN#1/AttachedDriver/Config/] (level 6)
00:00:01.233143 AudioDriver <string> = "AudioVRDE" (cb=10)
00:00:01.233143 Object <integer> = 0x000000000402e820 (67 299 360)
00:00:01.233144 ObjectVRDPServer <integer> = 0x000000000402fc40 (67 304 512)
00:00:01.233145 StreamName <string> = "Hortonworks Docker Sandbox HDP" (cb=31)
00:00:01.233146
00:00:01.233146 [/Devices/ioapic/] (level 2)
00:00:01.233147
00:00:01.233147 [/Devices/ioapic/0/] (level 3)
00:00:01.233148 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.233149
00:00:01.233149 [/Devices/ioapic/0/Config/] (level 4)
00:00:01.233150 NumCPUs <integer> = 0x0000000000000004 (4)
00:00:01.233150
00:00:01.233150 [/Devices/mc146818/] (level 2)
00:00:01.233151
00:00:01.233152 [/Devices/mc146818/0/] (level 3)
00:00:01.233152
00:00:01.233153 [/Devices/mc146818/0/Config/] (level 4)
00:00:01.233154 UseUTC <integer> = 0x0000000000000001 (1)
00:00:01.233154
00:00:01.233154 [/Devices/parallel/] (level 2)
00:00:01.233155
00:00:01.233155 [/Devices/pcarch/] (level 2)
00:00:01.233156
00:00:01.233156 [/Devices/pcarch/0/] (level 3)
00:00:01.233157 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.233158
00:00:01.233158 [/Devices/pcarch/0/Config/] (level 4)
00:00:01.233159
00:00:01.233159 [/Devices/pcbios/] (level 2)
00:00:01.233160
00:00:01.233160 [/Devices/pcbios/0/] (level 3)
00:00:01.233161 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.233161
00:00:01.233162 [/Devices/pcbios/0/Config/] (level 4)
00:00:01.233163 APIC <integer> = 0x0000000000000001 (1)
00:00:01.233164 BootDevice0 <string> = "IDE" (cb=4)
00:00:01.233164 BootDevice1 <string> = "DVD" (cb=4)
00:00:01.233165 BootDevice2 <string> = "NONE" (cb=5)
00:00:01.233165 BootDevice3 <string> = "NONE" (cb=5)
00:00:01.233166 FloppyDevice <string> = "i82078" (cb=7)
00:00:01.233166 HardDiskDevice <string> = "piix3ide" (cb=9)
00:00:01.233167 IOAPIC <integer> = 0x0000000000000001 (1)
00:00:01.233168 McfgBase <integer> = 0x0000000000000000 (0)
00:00:01.233168 McfgLength <integer> = 0x0000000000000000 (0)
00:00:01.233169 NumCPUs <integer> = 0x0000000000000004 (4)
00:00:01.233169 PXEDebug <integer> = 0x0000000000000000 (0)
00:00:01.233170 UUID <bytes> = "aa 65 5f 19 b4 25 6e 4f 90 dd 62 ec c2 b2 86 78" (cb=16)
00:00:01.233172
00:00:01.233172 [/Devices/pcbios/0/Config/NetBoot/] (level 5)
00:00:01.233173
00:00:01.233173 [/Devices/pcbios/0/Config/NetBoot/0/] (level 6)
00:00:01.233174 NIC <integer> = 0x0000000000000000 (0)
00:00:01.233175 PCIBusNo <integer> = 0x0000000000000000 (0)
00:00:01.233176 PCIDeviceNo <integer> = 0x0000000000000003 (3)
00:00:01.233176 PCIFunctionNo <integer> = 0x0000000000000000 (0)
00:00:01.233177
00:00:01.233177 [/Devices/pci/] (level 2)
00:00:01.233178
00:00:01.233178 [/Devices/pci/0/] (level 3)
00:00:01.233179 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.233179
00:00:01.233180 [/Devices/pci/0/Config/] (level 4)
00:00:01.233180 IOAPIC <integer> = 0x0000000000000001 (1)
00:00:01.233181
00:00:01.233181 [/Devices/pckbd/] (level 2)
00:00:01.233182
00:00:01.233182 [/Devices/pckbd/0/] (level 3)
00:00:01.233183 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.233183
00:00:01.233184 [/Devices/pckbd/0/Config/] (level 4)
00:00:01.233184
00:00:01.233185 [/Devices/pckbd/0/LUN#0/] (level 4)
00:00:01.233185 Driver <string> = "KeyboardQueue" (cb=14)
00:00:01.233186
00:00:01.233186 [/Devices/pckbd/0/LUN#0/AttachedDriver/] (level 5)
00:00:01.233187 Driver <string> = "MainKeyboard" (cb=13)
00:00:01.233188
00:00:01.233188 [/Devices/pckbd/0/LUN#0/AttachedDriver/Config/] (level 6)
00:00:01.233189 Object <integer> = 0x0000000003b87d80 (62 422 400)
00:00:01.233190
00:00:01.233190 [/Devices/pckbd/0/LUN#0/Config/] (level 5)
00:00:01.233191 QueueSize <integer> = 0x0000000000000040 (64)
00:00:01.233192
00:00:01.233192 [/Devices/pckbd/0/LUN#1/] (level 4)
00:00:01.233193 Driver <string> = "MouseQueue" (cb=11)
00:00:01.233194
00:00:01.233194 [/Devices/pckbd/0/LUN#1/AttachedDriver/] (level 5)
00:00:01.233195 Driver <string> = "MainMouse" (cb=10)
00:00:01.233195
00:00:01.233195 [/Devices/pckbd/0/LUN#1/AttachedDriver/Config/] (level 6)
00:00:01.233197 Object <integer> = 0x000000000402f690 (67 303 056)
00:00:01.233197
00:00:01.233198 [/Devices/pckbd/0/LUN#1/Config/] (level 5)
00:00:01.233199 QueueSize <integer> = 0x0000000000000080 (128)
00:00:01.233199
00:00:01.233200 [/Devices/pcnet/] (level 2)
00:00:01.233200
00:00:01.233201 [/Devices/piix3ide/] (level 2)
00:00:01.233201
00:00:01.233202 [/Devices/piix3ide/0/] (level 3)
00:00:01.233203 PCIBusNo <integer> = 0x0000000000000000 (0)
00:00:01.233203 PCIDeviceNo <integer> = 0x0000000000000001 (1)
00:00:01.233204 PCIFunctionNo <integer> = 0x0000000000000001 (1)
00:00:01.233204 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.233205
00:00:01.233205 [/Devices/piix3ide/0/Config/] (level 4)
00:00:01.233206 Type <string> = "PIIX4" (cb=6)
00:00:01.233207
00:00:01.233207 [/Devices/piix3ide/0/Config/PrimaryMaster/] (level 5)
00:00:01.233208 NonRotationalMedium <integer> = 0x0000000000000000 (0)
00:00:01.233209
00:00:01.233209 [/Devices/piix3ide/0/LUN#0/] (level 4)
00:00:01.233210 Driver <string> = "VD" (cb=3)
00:00:01.233211
00:00:01.233211 [/Devices/piix3ide/0/LUN#0/Config/] (level 5)
00:00:01.233212 Format <string> = "VMDK" (cb=5)
00:00:01.233213 Mountable <integer> = 0x0000000000000000 (0)
00:00:01.233213 Path <string> = "E:\Omkar\Development\Software\Virtualization\VirtualBoxVM\Hortonworks Docker Sandbox HDP\Hortonworks Docker Sandbox HDP-disk1.vmdk" (cb=131)
00:00:01.233214 Type <string> = "HardDisk" (cb=9)
00:00:01.233215
00:00:01.233215 [/Devices/piix3ide/0/LUN#999/] (level 4)
00:00:01.233216 Driver <string> = "MainStatus" (cb=11)
00:00:01.233216
00:00:01.233217 [/Devices/piix3ide/0/LUN#999/Config/] (level 5)
00:00:01.233218 DeviceInstance <string> = "piix3ide/0" (cb=11)
00:00:01.233218 First <integer> = 0x0000000000000000 (0)
00:00:01.233219 Last <integer> = 0x0000000000000003 (3)
00:00:01.233220 pConsole <integer> = 0x0000000003fdece0 (66 972 896)
00:00:01.233221 papLeds <integer> = 0x0000000003fdf0c8 (66 973 896)
00:00:01.233222 pmapMediumAttachments <integer> = 0x0000000003fdf508 (66 974 984)
00:00:01.233223
00:00:01.233223 [/Devices/serial/] (level 2)
00:00:01.233224
00:00:01.233224 [/Devices/usb-ohci/] (level 2)
00:00:01.233225
00:00:01.233225 [/Devices/usb-ohci/0/] (level 3)
00:00:01.233226 PCIBusNo <integer> = 0x0000000000000000 (0)
00:00:01.233226 PCIDeviceNo <integer> = 0x0000000000000006 (6)
00:00:01.233227 PCIFunctionNo <integer> = 0x0000000000000000 (0)
00:00:01.233228 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.233228
00:00:01.233228 [/Devices/usb-ohci/0/Config/] (level 4)
00:00:01.233229
00:00:01.233229 [/Devices/usb-ohci/0/LUN#0/] (level 4)
00:00:01.233230 Driver <string> = "VUSBRootHub" (cb=12)
00:00:01.233231
00:00:01.233231 [/Devices/usb-ohci/0/LUN#0/Config/] (level 5)
00:00:01.233232
00:00:01.233232 [/Devices/usb-ohci/0/LUN#999/] (level 4)
00:00:01.233233 Driver <string> = "MainStatus" (cb=11)
00:00:01.233234
00:00:01.233234 [/Devices/usb-ohci/0/LUN#999/Config/] (level 5)
00:00:01.233235 First <integer> = 0x0000000000000000 (0)
00:00:01.233236 Last <integer> = 0x0000000000000000 (0)
00:00:01.233236 papLeds <integer> = 0x0000000003fdf4f0 (66 974 960)
00:00:01.233237
00:00:01.233237 [/Devices/vga/] (level 2)
00:00:01.233238
00:00:01.233238 [/Devices/vga/0/] (level 3)
00:00:01.233239 PCIBusNo <integer> = 0x0000000000000000 (0)
00:00:01.233240 PCIDeviceNo <integer> = 0x0000000000000002 (2)
00:00:01.233240 PCIFunctionNo <integer> = 0x0000000000000000 (0)
00:00:01.233241 Trusted <integer> = 0x0000000000000001 (1)
00:00:01.233241
00:00:01.233242 [/Devices/vga/0/Config/] (level 4)
00:00:01.233243 CustomVideoModes <integer> = 0x0000000000000000 (0)
00:00:01.233244 FadeIn <integer> = 0x0000000000000001 (1)
00:00:01.233244 FadeOut <integer> = 0x0000000000000001 (1)
00:00:01.233245 HeightReduction <integer> = 0x0000000000000000 (0)
00:00:01.233245 LogoFile <string> = "" (cb=1)
00:00:01.233246 LogoTime <integer> = 0x0000000000000000 (0)
00:00:01.233247 MonitorCount <integer> = 0x0000000000000001 (1)
00:00:01.233247 ShowBootMenu <integer> = 0x0000000000000002 (2)
00:00:01.233248 VRamSize <integer> = 0x0000000000800000 (8 388 608, 8 MB)
00:00:01.233249
00:00:01.233250 [/Devices/vga/0/LUN#0/] (level 4)
00:00:01.233250 Driver <string> = "MainDisplay" (cb=12)
00:00:01.233251
00:00:01.233251 [/Devices/vga/0/LUN#0/Config/] (level 5)
00:00:01.233252 Object <integer> = 0x0000000003fe2490 (66 987 152)
00:00:01.233253
00:00:01.233253 [/Devices/vga/0/LUN#999/] (level 4)
00:00:01.233254 Driver <string> = "MainStatus" (cb=11)
00:00:01.233255
00:00:01.233255 [/Devices/vga/0/LUN#999/Config/] (level 5)
00:00:01.233256 First <integer> = 0x0000000000000000 (0)
00:00:01.233256 Last <integer> = 0x0000000000000000 (0)
00:00:01.233257 papLeds <integer> = 0x0000000003fdf500 (66 974 976)
00:00:01.233258
00:00:01.233258 [/Devices/virtio-net/] (level 2)
00:00:01.233259
00:00:01.233259 [/EM/] (level 1)
00:00:01.233260 TripleFaultReset <integer> = 0x0000000000000000 (0)
00:00:01.233260
00:00:01.233261 [/GIM/] (level 1)
00:00:01.233261 Provider <string> = "KVM" (cb=4)
00:00:01.233262
00:00:01.233262 [/HM/] (level 1)
00:00:01.233263 64bitEnabled <integer> = 0x0000000000000001 (1)
00:00:01.233263 EnableLargePages <integer> = 0x0000000000000000 (0)
00:00:01.233264 EnableNestedPaging <integer> = 0x0000000000000001 (1)
00:00:01.233264 EnableUX <integer> = 0x0000000000000001 (1)
00:00:01.233265 EnableVPID <integer> = 0x0000000000000001 (1)
00:00:01.233266 Exclusive <integer> = 0x0000000000000000 (0)
00:00:01.233325 HMForced <integer> = 0x0000000000000001 (1)
00:00:01.233326
00:00:01.233327 [/MM/] (level 1)
00:00:01.233327 CanUseLargerHeap <integer> = 0x0000000000000000 (0)
00:00:01.233328
00:00:01.233328 [/PDM/] (level 1)
00:00:01.233329
00:00:01.233329 [/PDM/AsyncCompletion/] (level 2)
00:00:01.233330
00:00:01.233330 [/PDM/AsyncCompletion/File/] (level 3)
00:00:01.233331
00:00:01.233331 [/PDM/AsyncCompletion/File/BwGroups/] (level 4)
00:00:01.233332
00:00:01.233332 [/PDM/BlkCache/] (level 2)
00:00:01.233333 CacheSize <integer> = 0x0000000000500000 (5 242 880, 5 MB)
00:00:01.233334
00:00:01.233334 [/PDM/Devices/] (level 2)
00:00:01.233335
00:00:01.233335 [/PDM/Drivers/] (level 2)
00:00:01.233336
00:00:01.233336 [/PDM/Drivers/VBoxC/] (level 3)
00:00:01.233337 Path <string> = "VBoxC" (cb=6)
00:00:01.233338
00:00:01.233338 [/PDM/NetworkShaper/] (level 2)
00:00:01.233339
00:00:01.233339 [/PDM/NetworkShaper/BwGroups/] (level 3)
00:00:01.233340
00:00:01.233340 [/TM/] (level 1)
00:00:01.233341 UTCOffset <integer> = 0x0000000000000000 (0)
00:00:01.233342
00:00:01.233342 [/USB/] (level 1)
00:00:01.233343
00:00:01.233343 [/USB/HidMouse/] (level 2)
00:00:01.233344
00:00:01.233344 [/USB/HidMouse/0/] (level 3)
00:00:01.233345
00:00:01.233345 [/USB/HidMouse/0/Config/] (level 4)
00:00:01.233346 Mode <string> = "absolute" (cb=9)
00:00:01.233346
00:00:01.233347 [/USB/HidMouse/0/LUN#0/] (level 4)
00:00:01.233347 Driver <string> = "MouseQueue" (cb=11)
00:00:01.233348
00:00:01.233348 [/USB/HidMouse/0/LUN#0/AttachedDriver/] (level 5)
00:00:01.233349 Driver <string> = "MainMouse" (cb=10)
00:00:01.233350
00:00:01.233350 [/USB/HidMouse/0/LUN#0/AttachedDriver/Config/] (level 6)
00:00:01.233351 Object <integer> = 0x000000000402f690 (67 303 056)
00:00:01.233367
00:00:01.233368 [/USB/HidMouse/0/LUN#0/Config/] (level 5)
00:00:01.233369 QueueSize <integer> = 0x0000000000000080 (128)
00:00:01.233369
00:00:01.233370 [/USB/USBProxy/] (level 2)
00:00:01.233370
00:00:01.233370 [/USB/USBProxy/GlobalConfig/] (level 3)
00:00:01.233371
00:00:01.233372 ********************* End of CFGM dump **********************
00:00:01.233431 VM: fHMEnabled=true (configured) fRecompileUser=false fRecompileSupervisor=false
00:00:01.233432 VM: fRawRing1Enabled=false CSAM=true PATM=true
00:00:01.233644 HM: HMR3Init: VT-x w/ nested paging and unrestricted guest execution hw support
00:00:01.233779 MM: cbHyperHeap=0x140000 (1310720)
00:00:01.235117 CPUM: fXStateHostMask=0x7; initial: 0x7; host XCR0=0x7
00:00:01.238015 CPUM: Matched host CPU INTEL 0x6/0x5e/0x3 Intel_Core7_Skylake with CPU DB entry 'Intel Core i7-6700K' (INTEL 0x6/0x5e/0x3 Intel_Core7_Skylake)
00:00:01.238234 CPUM: MXCSR_MASK=0xffff (host: 0xffff)
00:00:01.238277 CPUM: Microcode revision 0x0000009E
00:00:01.238356 CPUM: SetGuestCpuIdFeature: Enabled PAE
00:00:01.238376 CPUM: VCPU 0: Cached APIC base MSR = 0x0
00:00:01.238382 CPUM: VCPU 1: Cached APIC base MSR = 0x0
00:00:01.238387 CPUM: VCPU 2: Cached APIC base MSR = 0x0
00:00:01.238391 CPUM: VCPU 3: Cached APIC base MSR = 0x0
00:00:01.239786 PGM: HCPhysInterPD=000000008cc22000 HCPhysInterPaePDPT=000000008cc1d000 HCPhysInterPaePML4=000000008cc1b000
00:00:01.239811 PGM: apInterPTs={000000008cc21000,000000008cc20000} apInterPaePTs={000000003bbed000,00000002119f9000} apInterPaePDs={00000001643fa000,00000000610fb000,00000001ce40d000,000000012eccb000} pInterPaePDPT64=000000008cc1c000
00:00:01.239869 PGM: Host paging mode: AMD64+PGE+NX
00:00:01.239942 PGM: PGMPool: cMaxPages=7200 (u64MaxPages=7195)
00:00:01.239942 PGM: pgmR3PoolInit: cMaxPages=0x1c20 cMaxUsers=0x3840 cMaxPhysExts=0x2000 fCacheEnable=true
00:00:01.326850 TM: GIP - u32Mode=3 (Invariant) u32UpdateHz=100 u32UpdateIntervalNS=10000000 enmUseTscDelta=2 (Pratically Zero) fGetGipCpu=0x1 cCpus=8
00:00:01.326892 TM: GIP - u64CpuHz=2 718 944 806 (0xa20fce26) SUPGetCpuHzFromGip => 2 718 944 806
00:00:01.326899 TM: GIP - CPU: iCpuSet=0x0 idCpu=0x0 idApic=0x0 iGipCpu=0x0 i64TSCDelta=0 enmState=3 u64CpuHz=2718944806(*) cErrors=0
00:00:01.326905 TM: GIP - CPU: iCpuSet=0x1 idCpu=0x1 idApic=0x1 iGipCpu=0x1 i64TSCDelta=0 enmState=3 u64CpuHz=2392559689(*) cErrors=0
00:00:01.326909 TM: GIP - CPU: iCpuSet=0x2 idCpu=0x2 idApic=0x2 iGipCpu=0x7 i64TSCDelta=0 enmState=3 u64CpuHz=2728334143(*) cErrors=0
00:00:01.326914 TM: GIP - CPU: iCpuSet=0x3 idCpu=0x3 idApic=0x3 iGipCpu=0x2 i64TSCDelta=0 enmState=3 u64CpuHz=2712037041(*) cErrors=0
00:00:01.326918 TM: GIP - CPU: iCpuSet=0x4 idCpu=0x4 idApic=0x4 iGipCpu=0x5 i64TSCDelta=0 enmState=3 u64CpuHz=2749400563(*) cErrors=0
00:00:01.326922 TM: GIP - CPU: iCpuSet=0x5 idCpu=0x5 idApic=0x5 iGipCpu=0x3 i64TSCDelta=0 enmState=3 u64CpuHz=2712010054(*) cErrors=0
00:00:01.326927 TM: GIP - CPU: iCpuSet=0x6 idCpu=0x6 idApic=0x6 iGipCpu=0x6 i64TSCDelta=0 enmState=3 u64CpuHz=2732084806(*) cErrors=0
00:00:01.326931 TM: GIP - CPU: iCpuSet=0x7 idCpu=0x7 idApic=0x7 iGipCpu=0x4 i64TSCDelta=0 enmState=3 u64CpuHz=2750459509(*) cErrors=0
00:00:01.327018 TM: cTSCTicksPerSecond=2 718 944 806 (0xa20fce26) enmTSCMode=1 (VirtTscEmulated)
00:00:01.327020 TM: TSCTiedToExecution=false TSCNotTiedToHalt=false
00:00:01.328766 VMM: CoreCode: R3=00000000039f0000 R0=XXXXXXXXXXXXXXXX RC=a10c2000 Phys=000000008cbfe000 cb=0x1000
00:00:01.329154 IEM: TargetCpu=CURRENT, Microarch=Intel_Core7_Skylake
00:00:01.329365 GIM: Using provider 'KVM' (Implementation version: 0)
00:00:01.329387 CPUM: SetGuestCpuIdFeature: Enabled Hypervisor Present bit
00:00:01.329548 AIOMgr: Default manager type is 'Async'
00:00:01.329555 AIOMgr: Default file backend is 'NonBuffered'
00:00:01.329942 BlkCache: Cache successfully initialized. Cache size is 5242880 bytes
00:00:01.329942 BlkCache: Cache commit interval is 10000 ms
00:00:01.329942 BlkCache: Cache commit threshold is 2621440 bytes
00:00:01.573442 PcBios: [SMP] BIOS with 4 CPUs
00:00:01.573545 PcBios: Using the 386+ BIOS image.
00:00:01.573876 PcBios: MPS table at 000e1300
00:00:01.576482 PcBios: fCheckShutdownStatusForSoftReset=true fClearShutdownStatusOnHardReset=true
00:00:01.606964 SUP: Loaded VBoxDDR0.r0 (C:\Program Files\Oracle\VirtualBox\VBoxDDR0.r0) at 0xXXXXXXXXXXXXXXXX - ModuleInit at XXXXXXXXXXXXXXXX and ModuleTerm at XXXXXXXXXXXXXXXX using the native ring-0 loader
00:00:01.606993 SUP: windbg> .reload /f C:\Program Files\Oracle\VirtualBox\VBoxDDR0.r0=0xXXXXXXXXXXXXXXXX
00:00:01.607831 CPUM: SetGuestCpuIdFeature: Enabled xAPIC
00:00:01.607842 CPUM: SetGuestCpuIdFeature: Enabled x2APIC
00:00:01.608396 IOAPIC: Using implementation 2.0!
00:00:01.608548 PIT: mode=3 count=0x10000 (65536) - 18.20 Hz (ch=0)
00:00:01.635548 Shared Folders service loaded
00:00:01.642595 VGA: Using the 386+ BIOS image.
00:00:01.647569 DrvVD: Flushes will be ignored
00:00:01.647590 DrvVD: Async flushes will be passed to the disk
00:00:01.647861 VD: VDInit finished
00:00:01.652981 VD: Opening the disk took 5365477 ns
00:00:01.653219 PIIX3 ATA: LUN#0: disk, PCHS=16383/16/63, total number of sectors 102400000
00:00:01.653233 PIIX3 ATA: LUN#1: no unit
00:00:01.653605 PIIX3 ATA: LUN#2: no unit
00:00:01.653620 PIIX3 ATA: LUN#3: no unit
00:00:01.653671 PIIX3 ATA: Ctl#0: finished processing RESET
00:00:01.653764 PIIX3 ATA: Ctl#1: finished processing RESET
00:00:01.653889 E1000#0 Chip=82540EM LinkUpDelay=5000ms EthernetCRC=on GSO=enabled Itr=enabled ItrRx=enabled R0=enabled GC=enabled
00:00:01.661390 NAT: Guest address guess set to 10.0.2.15 by initialization
00:00:01.689942 NAT: DNS#0: 138.106.13.128
00:00:01.689942 NAT: DNS#1: 138.106.14.37
00:00:01.689942 NAT: DNS#2: 192.168.43.1
00:00:01.689942 NAT: DNS#3: 192.168.18.2
00:00:01.712037 NAT: Set redirect TCP 127.0.0.1:50095 -> 0.0.0.0:50095
00:00:01.712294 NAT: Set redirect TCP 127.0.0.1:8886 -> 0.0.0.0:8886
00:00:01.712541 NAT: Set redirect TCP 127.0.0.1:15002 -> 0.0.0.0:15002
00:00:01.712737 NAT: Set redirect TCP 127.0.0.1:111 -> 0.0.0.0:111
00:00:01.712898 NAT: Set redirect TCP 127.0.0.1:2049 -> 0.0.0.0:2049
00:00:01.713061 NAT: Set redirect TCP 127.0.0.1:4242 -> 0.0.0.0:4242
00:00:01.713164 NAT: Set redirect TCP 127.0.0.1:50079 -> 0.0.0.0:50079
00:00:01.713242 NAT: Set redirect TCP 127.0.0.1:3000 -> 0.0.0.0:3000
00:00:01.713320 NAT: Set redirect TCP 127.0.0.1:16000 -> 0.0.0.0:16000
00:00:01.713407 NAT: Set redirect TCP 127.0.0.1:16020 -> 0.0.0.0:16020
00:00:01.713485 NAT: Set redirect TCP 127.0.0.1:15500 -> 0.0.0.0:15500
00:00:01.713561 NAT: Set redirect TCP 127.0.0.1:15501 -> 0.0.0.0:15501
00:00:01.713636 NAT: Set redirect TCP 127.0.0.1:4200 -> 0.0.0.0:4200
00:00:01.713711 NAT: Set redirect TCP 127.0.0.1:15502 -> 0.0.0.0:15502
00:00:01.713788 NAT: Set redirect TCP 127.0.0.1:15503 -> 0.0.0.0:15503
00:00:01.713864 NAT: Set redirect TCP 127.0.0.1:15504 -> 0.0.0.0:15504
00:00:01.713942 NAT: Set redirect TCP 127.0.0.1:15505 -> 0.0.0.0:15505
00:00:01.714354 NAT: Set redirect TCP 127.0.0.1:8765 -> 0.0.0.0:8765
00:00:01.714445 NAT: Set redirect TCP 127.0.0.1:8090 -> 0.0.0.0:8090
00:00:01.714524 NAT: Set redirect TCP 127.0.0.1:8091 -> 0.0.0.0:8091
00:00:01.714599 NAT: Set redirect TCP 127.0.0.1:8005 -> 0.0.0.0:8005
00:00:01.714674 NAT: Set redirect TCP 127.0.0.1:8086 -> 0.0.0.0:8086
00:00:01.714748 NAT: Set redirect TCP 127.0.0.1:8082 -> 0.0.0.0:8082
00:00:01.714824 NAT: Set redirect TCP 127.0.0.1:21000 -> 0.0.0.0:21000
00:00:01.714900 NAT: Set redirect TCP 127.0.0.1:18081 -> 0.0.0.0:18081
00:00:01.714975 NAT: Set redirect TCP 127.0.0.1:50075 -> 0.0.0.0:50075
00:00:01.715083 NAT: Set redirect TCP 127.0.0.1:2222 -> 0.0.0.0:2222
00:00:01.715189 NAT: Set redirect TCP 127.0.0.1:15000 -> 0.0.0.0:15000
00:00:01.715265 NAT: Set redirect TCP 127.0.0.1:16010 -> 0.0.0.0:16010
00:00:01.715374 NAT: Set redirect TCP 127.0.0.1:16030 -> 0.0.0.0:16030
00:00:01.715489 NAT: Set redirect TCP 127.0.0.1:10000 -> 0.0.0.0:10000
00:00:01.715582 NAT: Set redirect TCP 127.0.0.1:10001 -> 0.0.0.0:10001
00:00:01.715663 NAT: Set redirect TCP 127.0.0.1:10500 -> 0.0.0.0:10500
00:00:01.715741 NAT: Set redirect TCP 127.0.0.1:9000 -> 0.0.0.0:9000
00:00:01.715816 NAT: Set redirect TCP 127.0.0.1:60000 -> 0.0.0.0:60000
00:00:01.715889 NAT: Set redirect TCP 127.0.0.1:2122 -> 0.0.0.0:22
00:00:01.715964 NAT: Set redirect TCP 127.0.0.1:19888 -> 0.0.0.0:19888
00:00:01.716039 NAT: Set redirect TCP 127.0.0.1:8889 -> 0.0.0.0:8889
00:00:01.716118 NAT: Set redirect TCP 127.0.0.1:8443 -> 0.0.0.0:8443
00:00:01.716195 NAT: Set redirect TCP 127.0.0.1:9090 -> 0.0.0.0:9090
00:00:01.716270 NAT: Set redirect TCP 127.0.0.1:8042 -> 0.0.0.0:8042
00:00:01.716345 NAT: Set redirect TCP 127.0.0.1:11000 -> 0.0.0.0:11000
00:00:01.716421 NAT: Set redirect TCP 127.0.0.1:8032 -> 0.0.0.0:8032
00:00:01.716503 NAT: Set redirect TCP 127.0.0.1:8993 -> 0.0.0.0:8993
00:00:01.716579 NAT: Set redirect TCP 127.0.0.1:8983 -> 0.0.0.0:8983
00:00:01.716654 NAT: Set redirect TCP 127.0.0.1:10015 -> 0.0.0.0:10015
00:00:01.716730 NAT: Set redirect TCP 127.0.0.1:4040 -> 0.0.0.0:4040
00:00:01.716806 NAT: Set redirect TCP 127.0.0.1:18080 -> 0.0.0.0:18080
00:00:01.716881 NAT: Set redirect TCP 127.0.0.1:8744 -> 0.0.0.0:8744
00:00:01.716955 NAT: Set redirect TCP 127.0.0.1:8888 -> 0.0.0.0:8888
00:00:01.717029 NAT: Set redirect TCP 127.0.0.1:60080 -> 0.0.0.0:60080
00:00:01.717104 NAT: Set redirect TCP 127.0.0.1:50111 -> 0.0.0.0:50111
00:00:01.717179 NAT: Set redirect TCP 127.0.0.1:50070 -> 0.0.0.0:50070
00:00:01.717258 NAT: Set redirect TCP 127.0.0.1:6080 -> 0.0.0.0:6080
00:00:01.717335 NAT: Set redirect TCP 127.0.0.1:8188 -> 0.0.0.0:8188
00:00:01.717409 NAT: Set redirect TCP 127.0.0.1:8088 -> 0.0.0.0:8088
00:00:01.717484 NAT: Set redirect TCP 127.0.0.1:10016 -> 0.0.0.0:10016
00:00:01.717575 NAT: Set redirect TCP 127.0.0.1:9995 -> 0.0.0.0:9995
00:00:01.717665 NAT: Set redirect TCP 127.0.0.1:9996 -> 0.0.0.0:9996
00:00:01.717739 NAT: Set redirect TCP 127.0.0.1:2181 -> 0.0.0.0:2181
00:00:01.717813 NAT: Set redirect TCP 127.0.0.1:8080 -> 0.0.0.0:8080
00:00:01.717889 NAT: Set redirect TCP 127.0.0.1:42080 -> 0.0.0.0:80
00:00:01.717963 NAT: Set redirect TCP 127.0.0.1:8020 -> 0.0.0.0:8020
00:00:01.718038 NAT: Set redirect TCP 127.0.0.1:42111 -> 0.0.0.0:42111
00:00:01.718112 NAT: Set redirect TCP 127.0.0.1:8040 -> 0.0.0.0:8040
00:00:01.718187 NAT: Set redirect TCP 127.0.0.1:10502 -> 0.0.0.0:10502
00:00:01.718263 NAT: Set redirect TCP 127.0.0.1:33553 -> 0.0.0.0:33553
00:00:01.718338 NAT: Set redirect TCP 127.0.0.1:39419 -> 0.0.0.0:39419
00:00:01.719725 Audio: Initializing DirectSound audio driver
00:00:01.822624 DSound: Output: GUID: {9F8EA98D-B246-4B07-BB17-69BDD99DF445} [Headset Earphone (2- Microsoft LifeChat LX-6000)] (Module: {0.0.0.00000000}.{9f8ea98d-b246-4b07-bb17-69bdd99df445})
00:00:01.822646 DSound: Output: GUID: {EECC4F2B-7ED9-4CF8-AFFB-71A70944DF20} [Speakers / Headphones (Realtek High Definition Audio)] (Module: {0.0.0.00000000}.{eecc4f2b-7ed9-4cf8-affb-71a70944df20})
00:00:01.949942 DSound: Input: GUID: {613F31C7-684F-4F46-954F-8F495AB29314} [Microphone (HD Pro Webcam C920)] (Module: {0.0.1.00000000}.{613f31c7-684f-4f46-954f-8f495ab29314})
00:00:01.949942 DSound: Input: GUID: {454316F8-E0A8-436B-836D-7E149B151774} [Microphone Array (Realtek High Definition Audio)] (Module: {0.0.1.00000000}.{454316f8-e0a8-436b-836d-7e149b151774})
00:00:01.949942 DSound: Input: GUID: {E29E26D9-DB66-4905-8ED7-B32C30E8B594} [Headset Microphone (2- Microsoft LifeChat LX-6000)] (Module: {0.0.1.00000000}.{e29e26d9-db66-4905-8ed7-b32c30e8b594})
00:00:01.949942 DSound: Found 2 host playback devices
00:00:01.949942 DSound: Found 3 host capturing devices
00:00:01.951284 Audio: Host audio backend supports 2 output streams and 3 input streams at once
00:00:01.951308 Audio: Initializing VRDE driver
00:00:01.951317 Audio: Host audio backend supports 1 output streams and 2 input streams at once
00:00:01.951362 AC97: Reset
00:00:01.951896 DSound: Guest "Microphone In" is using host device with GUID: {613F31C7-684F-4F46-954F-8F495AB29314}
00:00:01.967020 VUSB: Attached 'HidMouse' to port 1
00:00:01.967297 PGM: The CPU physical address width is 39 bits
00:00:01.967309 PGM: PGMR3InitFinalize: 4 MB PSE mask 0000007fffffffff
00:00:01.967493 TM: TMR3InitFinalize: fTSCModeSwitchAllowed=true
00:00:02.007550 VMM: Thread-context hooks unavailable
00:00:02.013188 HM: Using VT-x implementation 2.0
00:00:02.013189 HM: Host CR4 = 0x406f8
00:00:02.013191 HM: Host EFER = 0xd01
00:00:02.013192 HM: MSR_IA32_SMM_MONITOR_CTL = 0x0
00:00:02.013192 HM: MSR_IA32_FEATURE_CONTROL = 0x5
00:00:02.013193 HM: MSR_IA32_VMX_BASIC_INFO = 0xda040000000004
00:00:02.013194 HM: VMCS id = 0x4
00:00:02.013194 HM: VMCS size = 1024 bytes
00:00:02.013195 HM: VMCS physical address limit = None
00:00:02.013196 HM: VMCS memory type = 0x6
00:00:02.013196 HM: Dual-monitor treatment support = true
00:00:02.013197 HM: OUTS & INS instruction-info = true
00:00:02.013197 HM: Max resume loops = 1024
00:00:02.013198 HM: MSR_IA32_VMX_PINBASED_CTLS = 0x7f00000016
00:00:02.013199 HM: EXT_INT_EXIT
00:00:02.013199 HM: NMI_EXIT
00:00:02.013199 HM: VIRTUAL_NMI
00:00:02.013200 HM: PREEMPT_TIMER
00:00:02.013200 HM: POSTED_INTR (must be cleared)
00:00:02.013200 HM: MSR_IA32_VMX_PROCBASED_CTLS = 0xfff9fffe0401e172
00:00:02.013201 HM: INT_WINDOW_EXIT
00:00:02.013202 HM: USE_TSC_OFFSETTING
00:00:02.013202 HM: HLT_EXIT
00:00:02.013202 HM: INVLPG_EXIT
00:00:02.013229 HM: MWAIT_EXIT
00:00:02.013230 HM: RDPMC_EXIT
00:00:02.013230 HM: RDTSC_EXIT
00:00:02.013230 HM: CR3_LOAD_EXIT (must be set)
00:00:02.013231 HM: CR3_STORE_EXIT (must be set)
00:00:02.013231 HM: CR8_LOAD_EXIT
00:00:02.013231 HM: CR8_STORE_EXIT
00:00:02.013232 HM: USE_TPR_SHADOW
00:00:02.013232 HM: NMI_WINDOW_EXIT
00:00:02.013232 HM: MOV_DR_EXIT
00:00:02.013232 HM: UNCOND_IO_EXIT
00:00:02.013233 HM: USE_IO_BITMAPS
00:00:02.013233 HM: MONITOR_TRAP_FLAG
00:00:02.013233 HM: USE_MSR_BITMAPS
00:00:02.013234 HM: MONITOR_EXIT
00:00:02.013234 HM: PAUSE_EXIT
00:00:02.013234 HM: USE_SECONDARY_EXEC_CTRL
00:00:02.013235 HM: MSR_IA32_VMX_PROCBASED_CTLS2 = 0x1ffcff00000000
00:00:02.013236 HM: VIRT_APIC
00:00:02.013236 HM: EPT
00:00:02.013236 HM: DESCRIPTOR_TABLE_EXIT
00:00:02.013236 HM: RDTSCP
00:00:02.013237 HM: VIRT_X2APIC
00:00:02.013237 HM: VPID
00:00:02.013237 HM: WBINVD_EXIT
00:00:02.013238 HM: UNRESTRICTED_GUEST
00:00:02.013238 HM: APIC_REG_VIRT (must be cleared)
00:00:02.013238 HM: VIRT_INTR_DELIVERY (must be cleared)
00:00:02.013239 HM: PAUSE_LOOP_EXIT
00:00:02.013239 HM: RDRAND_EXIT
00:00:02.013239 HM: INVPCID
00:00:02.013240 HM: VMFUNC
00:00:02.013240 HM: VMCS_SHADOWING
00:00:02.013240 HM: ENCLS_EXIT
00:00:02.013240 HM: RDSEED_EXIT
00:00:02.013241 HM: PML
00:00:02.013241 HM: EPT_VE
00:00:02.013241 HM: CONCEAL_FROM_PT
00:00:02.013242 HM: XSAVES_XRSTORS
00:00:02.013242 HM: TSC_SCALING (must be cleared)
00:00:02.013255 HM: MSR_IA32_VMX_ENTRY_CTLS = 0x3ffff000011ff
00:00:02.013256 HM: LOAD_DEBUG (must be set)
00:00:02.013256 HM: IA32E_MODE_GUEST
00:00:02.013257 HM: ENTRY_SMM
00:00:02.013257 HM: DEACTIVATE_DUALMON
00:00:02.013257 HM: LOAD_GUEST_PERF_MSR
00:00:02.013257 HM: LOAD_GUEST_PAT_MSR
00:00:02.013258 HM: LOAD_GUEST_EFER_MSR
00:00:02.013258 HM: MSR_IA32_VMX_EXIT_CTLS = 0x1ffffff00036dff
00:00:02.013259 HM: SAVE_DEBUG (must be set)
00:00:02.013272 HM: HOST_ADDR_SPACE_SIZE
00:00:02.013273 HM: LOAD_PERF_MSR
00:00:02.013273 HM: ACK_EXT_INT
00:00:02.013273 HM: SAVE_GUEST_PAT_MSR
00:00:02.013273 HM: LOAD_HOST_PAT_MSR
00:00:02.013274 HM: SAVE_GUEST_EFER_MSR
00:00:02.013274 HM: LOAD_HOST_EFER_MSR
00:00:02.013274 HM: SAVE_VMX_PREEMPT_TIMER
00:00:02.013275 HM: MSR_IA32_VMX_EPT_VPID_CAP = 0xf0106334141
00:00:02.013275 HM: RWX_X_ONLY
00:00:02.013276 HM: PAGE_WALK_LENGTH_4
00:00:02.013276 HM: EMT_UC
00:00:02.013276 HM: EMT_WB
00:00:02.013277 HM: PDE_2M
00:00:02.013277 HM: PDPTE_1G
00:00:02.013277 HM: INVEPT
00:00:02.013277 HM: EPT_ACCESS_DIRTY
00:00:02.013278 HM: INVEPT_SINGLE_CONTEXT
00:00:02.013278 HM: INVEPT_ALL_CONTEXTS
00:00:02.013278 HM: INVVPID
00:00:02.013279 HM: INVVPID_INDIV_ADDR
00:00:02.013279 HM: INVVPID_SINGLE_CONTEXT
00:00:02.013279 HM: INVVPID_ALL_CONTEXTS
00:00:02.013280 HM: INVVPID_SINGLE_CONTEXT_RETAIN_GLOBALS
00:00:02.013280 HM: MSR_IA32_VMX_MISC = 0x7004c1e7
00:00:02.013281 HM: PREEMPT_TSC_BIT = 0x7
00:00:02.013281 HM: STORE_EFERLMA_VMEXIT = true
00:00:02.013282 HM: ACTIVITY_STATES = 0x7
00:00:02.013282 HM: CR3_TARGET = 0x4
00:00:02.013282 HM: MAX_MSR = 512
00:00:02.013283 HM: RDMSR_SMBASE_MSR_SMM = true
00:00:02.013283 HM: SMM_MONITOR_CTL_B2 = true
00:00:02.013284 HM: VMWRITE_VMEXIT_INFO = true
00:00:02.013284 HM: MSEG_ID = 0x0
00:00:02.013285 HM: MSR_IA32_VMX_CR0_FIXED0 = 0x80000021
00:00:02.013285 HM: MSR_IA32_VMX_CR0_FIXED1 = 0xffffffff
00:00:02.013286 HM: MSR_IA32_VMX_CR4_FIXED0 = 0x2000
00:00:02.013287 HM: MSR_IA32_VMX_CR4_FIXED1 = 0x3767ff
00:00:02.013287 HM: MSR_IA32_VMX_VMCS_ENUM = 0x2e
00:00:02.013288 HM: HIGHEST_INDEX = 0x17
00:00:02.013288 HM: MSR_IA32_VMX_VMFUNC = 0x1
00:00:02.013289 HM: EPTP_SWITCHING
00:00:02.013289 HM: APIC-access page physaddr = 0x000000008cbfd000
00:00:02.013290 HM: VCPU 0: MSR bitmap physaddr = 0x000000008cbfa000
00:00:02.013291 HM: VCPU 0: VMCS physaddr = 0x000000008cbfc000
00:00:02.013292 HM: VCPU 1: MSR bitmap physaddr = 0x000000008cbf4000
00:00:02.013293 HM: VCPU 1: VMCS physaddr = 0x000000008cbf7000
00:00:02.013293 HM: VCPU 2: MSR bitmap physaddr = 0x000000008cbee000
00:00:02.013294 HM: VCPU 2: VMCS physaddr = 0x000000008cbf0000
00:00:02.013295 HM: VCPU 3: MSR bitmap physaddr = 0x000000008cbe7000
00:00:02.013295 HM: VCPU 3: VMCS physaddr = 0x000000008cbea000
00:00:02.013296 HM: Guest support: 32-bit and 64-bit
00:00:02.013302 HM: Supports VMCS EFER fields = true
00:00:02.013303 HM: Enabled VMX
00:00:02.013305 CPUM: SetGuestCpuIdFeature: Enabled SYSENTER/EXIT
00:00:02.013306 CPUM: SetGuestCpuIdFeature: Enabled PAE
00:00:02.013306 CPUM: SetGuestCpuIdFeature: Enabled LONG MODE
00:00:02.013306 CPUM: SetGuestCpuIdFeature: Enabled SYSCALL/RET
00:00:02.013307 CPUM: SetGuestCpuIdFeature: Enabled LAHF/SAHF
00:00:02.013307 CPUM: SetGuestCpuIdFeature: Enabled NX
00:00:02.013308 HM: Enabled nested paging
00:00:02.013308 HM: EPT flush type = VMXFLUSHEPT_SINGLE_CONTEXT
00:00:02.013308 HM: Enabled unrestricted guest execution
00:00:02.013309 HM: Enabled VPID
00:00:02.013309 HM: VPID flush type = VMXFLUSHVPID_SINGLE_CONTEXT
00:00:02.013309 HM: Enabled VMX-preemption timer (cPreemptTimerShift=7)
00:00:02.013310 HM: VT-x/AMD-V init method: LOCAL
00:00:02.013325 CPUM: VCPU 0: Cached APIC base MSR = 0xfee00900
00:00:02.013374 CPUM: VCPU 1: Cached APIC base MSR = 0xfee00800
00:00:02.013382 CPUM: VCPU 2: Cached APIC base MSR = 0xfee00800
00:00:02.013386 CPUM: VCPU 3: Cached APIC base MSR = 0xfee00800
00:00:02.013390 VMM: fUsePeriodicPreemptionTimers=false
00:00:02.013440 CPUM: Logical host processors: 8 present, 8 max, 8 online, online mask: 00000000000000ff
00:00:02.013442 CPUM: Physical host cores: 4
00:00:02.013442 ************************* CPUID dump ************************
00:00:02.013472 Raw Standard CPUID Leaves
00:00:02.013472 Leaf/sub-leaf eax ebx ecx edx
00:00:02.013492 Gst: 00000000/0000 00000016 756e6547 6c65746e 49656e69
00:00:02.013493 Hst: 00000016 756e6547 6c65746e 49656e69
00:00:02.013494 Gst: 00000001/0000 000506e3 00040800 d6f82203 178bfbff
00:00:02.013495 Hst: 000506e3 00100800 7ffafbff bfebfbff
00:00:02.013496 Gst: 00000002/0000 76036301 00f0b5ff 00000000 00c30000
00:00:02.013497 Hst: 76036301 00f0b5ff 00000000 00c30000
00:00:02.013498 Gst: 00000003/0000 00000000 00000000 00000000 00000000
00:00:02.013499 Hst: 00000000 00000000 00000000 00000000
00:00:02.013500 Gst: 00000004/0000 0c000121 01c0003f 0000003f 00000000
00:00:02.013501 Hst: 1c004121 01c0003f 0000003f 00000000
00:00:02.013502 Gst: 00000004/0001 0c000122 01c0003f 0000003f 00000000
00:00:02.013503 Hst: 1c004122 01c0003f 0000003f 00000000
00:00:02.013503 Gst: 00000004/0002 0c000143 00c0003f 000003ff 00000000
00:00:02.013504 Hst: 1c004143 00c0003f 000003ff 00000000
00:00:02.013505 Gst: 00000004/0003 0c000163 03c0003f 00001fff 00000006
00:00:02.013506 Hst: 1c03c163 03c0003f 00001fff 00000006
00:00:02.013507 Gst: 00000004/0004 0c000000 00000000 00000000 00000000
00:00:02.013508 Hst: 00000000 00000000 00000000 00000000
00:00:02.013508 Gst: 00000005/0000 00000000 00000000 00000000 00000000
00:00:02.013509 Hst: 00000040 00000040 00000003 11142120
00:00:02.013510 Gst: 00000006/0000 00000000 00000000 00000000 00000000
00:00:02.013511 Hst: 000027f7 00000002 00000009 00000000
00:00:02.013511 Gst: 00000007/0000 00000000 00842000 00000000 00000000
00:00:02.013512 Hst: 00000000 029c6fbf 00000000 00000000
00:00:02.013513 Gst: 00000007/0001 00000000 00000000 00000000 00000000
00:00:02.013513 Hst: 00000000 00000000 00000000 00000000
00:00:02.013514 Gst: 00000008/0000 00000000 00000000 00000000 00000000
00:00:02.013515 Hst: 00000000 00000000 00000000 00000000
00:00:02.013515 Gst: 00000009/0000 00000000 00000000 00000000 00000000
00:00:02.013516 Hst: 00000000 00000000 00000000 00000000
00:00:02.013516 Gst: 0000000a/0000 00000000 00000000 00000000 00000000
00:00:02.013517 Hst: 07300404 00000000 00000000 00000603
00:00:02.013518 Gst: 0000000b/0000 00000000 00000001 00000100 00000000
00:00:02.013518 Hst: 00000001 00000002 00000100 00000000
00:00:02.013519 Gst: 0000000b/0001 00000002 00000004 00000201 00000000
00:00:02.013520 Hst: 00000004 00000008 00000201 00000000
00:00:02.013520 Gst: 0000000b/0002 00000000 00000000 00000002 00000000
00:00:02.013521 Hst: 00000000 00000000 00000002 00000000
00:00:02.013522 Gst: 0000000c/0000 00000000 00000000 00000000 00000000
00:00:02.013522 Hst: 00000000 00000000 00000000 00000000
00:00:02.013523 Gst: 0000000d/0000 00000007 00000340 00000440 00000000
00:00:02.013523 Hst: 0000001f 00000340 00000440 00000000
00:00:02.013524 Gst: 0000000d/0001 00000000 00000340 00000000 00000000
00:00:02.013525 Hst: 0000000f 00000340 00000100 00000000
00:00:02.013526 Gst: 0000000d/0002 00000100 00000240 00000000 00000000
00:00:02.013526 Hst: 00000100 00000240 00000000 00000000
00:00:02.013527 Gst: 0000000d/0003 00000000 00000000 00000000 00000000
00:00:02.013527 Hst: 00000040 000003c0 00000000 00000000
00:00:02.013528 Gst: 0000000d/0004 00000000 00000000 00000000 00000000
00:00:02.013529 Hst: 00000040 00000400 00000000 00000000
00:00:02.013529 Gst: 0000000d/0005 00000000 00000000 00000000 00000000
00:00:02.013530 Hst: 00000000 00000000 00000000 00000000
00:00:02.013531 Gst: 0000000d/0006 00000000 00000000 00000000 00000000
00:00:02.013531 Hst: 00000000 00000000 00000000 00000000
00:00:02.013532 Gst: 0000000d/0007 00000000 00000000 00000000 00000000
00:00:02.013532 Hst: 00000000 00000000 00000000 00000000
00:00:02.013533 Gst: 0000000d/0008 00000000 00000000 00000000 00000000
00:00:02.013534 Hst: 00000080 00000000 00000001 00000000
00:00:02.013534 Gst: 0000000d/0009 00000000 00000000 00000000 00000000
00:00:02.013535 Hst: 00000000 00000000 00000000 00000000
00:00:02.013555 Gst: 0000000e/0000 00000000 00000000 00000000 00000000
00:00:02.013556 Hst: 00000000 00000000 00000000 00000000
00:00:02.013556 Gst: 0000000f/0000 00000000 00000000 00000000 00000000
00:00:02.013557 Hst: 00000000 00000000 00000000 00000000
00:00:02.013557 Gst: 00000010/0000 00000000 00000000 00000000 00000000
00:00:02.013558 Hst: 00000000 00000000 00000000 00000000
00:00:02.013559 Gst: 00000011/0000 00000000 00000000 00000000 00000000
00:00:02.013559 Hst: 00000000 00000000 00000000 00000000
00:00:02.013560 Gst: 00000012/0000 00000000 00000000 00000000 00000000
00:00:02.013561 Hst: 00000000 00000000 00000000 00000000
00:00:02.013561 Gst: 00000013/0000 00000000 00000000 00000000 00000000
00:00:02.013562 Hst: 00000000 00000000 00000000 00000000
00:00:02.013562 Gst: 00000014/0000 00000000 00000000 00000000 00000000
00:00:02.013563 Hst: 00000001 0000000f 00000007 00000000
00:00:02.013564 Hst: 00000015/0000 00000002 000000e2 00000000 00000000
00:00:02.013565 Hst: 00000016/0000 00000a8c 00000e10 00000064 00000000
00:00:02.013566 Name: GenuineIntel
00:00:02.013567 Supports: 0x00000000-0x00000016
00:00:02.013593 Family: 6 Extended: 0 Effective: 6
00:00:02.013595 Model: 14 Extended: 5 Effective: 94
00:00:02.013596 Stepping: 3
00:00:02.013596 Type: 0 (primary)
00:00:02.013597 APIC ID: 0x00
00:00:02.013598 Logical CPUs: 4
00:00:02.013598 CLFLUSH Size: 8
00:00:02.013599 Brand ID: 0x00
00:00:02.013600 Features
00:00:02.013600 Mnemonic - Description = guest (host)
00:00:02.013601 FPU - x87 FPU on Chip = 1 (1)
00:00:02.013602 VME - Virtual 8086 Mode Enhancements = 1 (1)
00:00:02.013603 DE - Debugging extensions = 1 (1)
00:00:02.013604 PSE - Page Size Extension = 1 (1)
00:00:02.013605 TSC - Time Stamp Counter = 1 (1)
00:00:02.013606 MSR - Model Specific Registers = 1 (1)
00:00:02.013607 PAE - Physical Address Extension = 1 (1)
00:00:02.013608 MCE - Machine Check Exception = 1 (1)
00:00:02.013608 CX8 - CMPXCHG8B instruction = 1 (1)
00:00:02.013609 APIC - APIC On-Chip = 1 (1)
00:00:02.013610 SEP - SYSENTER and SYSEXIT Present = 1 (1)
00:00:02.013611 MTRR - Memory Type Range Registers = 1 (1)
00:00:02.013612 PGE - PTE Global Bit = 1 (1)
00:00:02.013613 MCA - Machine Check Architecture = 1 (1)
00:00:02.013614 CMOV - Conditional Move instructions = 1 (1)
00:00:02.013614 PAT - Page Attribute Table = 1 (1)
00:00:02.013615 PSE-36 - 36-bit Page Size Extension = 1 (1)
00:00:02.013616 PSN - Processor Serial Number = 0 (0)
00:00:02.013617 CLFSH - CLFLUSH instruction = 1 (1)
00:00:02.013618 DS - Debug Store = 0 (1)
00:00:02.013619 ACPI - Thermal Mon. & Soft. Clock Ctrl. = 0 (1)
00:00:02.013620 MMX - Intel MMX Technology = 1 (1)
00:00:02.013621 FXSR - FXSAVE and FXRSTOR instructions = 1 (1)
00:00:02.013621 SSE - SSE support = 1 (1)
00:00:02.013622 SSE2 - SSE2 support = 1 (1)
00:00:02.013623 SS - Self Snoop = 0 (1)
00:00:02.013624 HTT - Hyper-Threading Technology = 1 (1)
00:00:02.013625 TM - Therm. Monitor = 0 (1)
00:00:02.013626 PBE - Pending Break Enabled = 0 (1)
00:00:02.013627 SSE3 - SSE3 support = 1 (1)
00:00:02.013628 PCLMUL - PCLMULQDQ support (for AES-GCM) = 1 (1)
00:00:02.013629 DTES64 - DS Area 64-bit Layout = 0 (1)
00:00:02.013629 MONITOR - MONITOR/MWAIT instructions = 0 (1)
00:00:02.013630 CPL-DS - CPL Qualified Debug Store = 0 (1)
00:00:02.013631 VMX - Virtual Machine Extensions = 0 (1)
00:00:02.013632 SMX - Safer Mode Extensions = 0 (1)
00:00:02.013633 EST - Enhanced SpeedStep Technology = 0 (1)
00:00:02.013633 TM2 - Terminal Monitor 2 = 0 (1)
00:00:02.013634 SSSE3 - Supplemental Streaming SIMD Extensions 3 = 1 (1)
00:00:02.013635 CNTX-ID - L1 Context ID = 0 (0)
00:00:02.013636 SDBG - Silicon Debug interface = 0 (1)
00:00:02.013637 FMA - Fused Multiply Add extensions = 0 (1)
00:00:02.013638 CX16 - CMPXCHG16B instruction = 1 (1)
00:00:02.013638 TPRUPDATE - xTPR Update Control = 0 (1)
00:00:02.013639 PDCM - Perf/Debug Capability MSR = 0 (1)
00:00:02.013640 PCID - Process Context Identifiers = 0 (1)
00:00:02.013641 DCA - Direct Cache Access = 0 (0)
00:00:02.013642 SSE4_1 - SSE4_1 support = 1 (1)
00:00:02.013643 SSE4_2 - SSE4_2 support = 1 (1)
00:00:02.013644 X2APIC - x2APIC support = 1 (1)
00:00:02.013644 MOVBE - MOVBE instruction = 1 (1)
00:00:02.013645 POPCNT - POPCNT instruction = 1 (1)
00:00:02.013646 TSCDEADL - Time Stamp Counter Deadline = 0 (1)
00:00:02.013647 AES - AES instructions = 1 (1)
00:00:02.013648 XSAVE - XSAVE instruction = 1 (1)
00:00:02.013649 OSXSAVE - OSXSAVE instruction = 0 (1)
00:00:02.013649 AVX - AVX support = 1 (1)
00:00:02.013650 F16C - 16-bit floating point conversion instructions = 0 (1)
00:00:02.013665 RDRAND - RDRAND instruction = 1 (1)
00:00:02.013666 HVP - Hypervisor Present (we're a guest) = 1 (0)
00:00:02.013667 Structured Extended Feature Flags Enumeration (leaf 7):
00:00:02.013667 Mnemonic - Description = guest (host)
00:00:02.013668 FSGSBASE - RDFSBASE/RDGSBASE/WRFSBASE/WRGSBASE instr. = 0 (1)
00:00:02.013668 TSCADJUST - Supports MSR_IA32_TSC_ADJUST = 0 (1)
00:00:02.013669 SGX - Supports Software Guard Extensions = 0 (1)
00:00:02.013670 BMI1 - Advanced Bit Manipulation extension 1 = 0 (1)
00:00:02.013671 HLE - Hardware Lock Elision = 0 (1)
00:00:02.013672 AVX2 - Advanced Vector Extensions 2 = 0 (1)
00:00:02.013672 FDP_EXCPTN_ONLY - FPU DP only updated on exceptions = 0 (0)
00:00:02.013673 SMEP - Supervisor Mode Execution Prevention = 0 (1)
00:00:02.013674 BMI2 - Advanced Bit Manipulation extension 2 = 0 (1)
00:00:02.013674 ERMS - Enhanced REP MOVSB/STOSB instructions = 0 (1)
00:00:02.013675 INVPCID - INVPCID instruction = 0 (1)
00:00:02.013676 RTM - Restricted Transactional Memory = 0 (1)
00:00:02.013677 PQM - Platform Quality of Service Monitoring = 0 (0)
00:00:02.013677 DEPFPU_CS_DS - Deprecates FPU CS, FPU DS values if set = 1 (1)
00:00:02.013678 MPE - Intel Memory Protection Extensions = 0 (1)
00:00:02.013679 PQE - Platform Quality of Service Enforcement = 0 (0)
00:00:02.013680 AVX512F - AVX512 Foundation instructions = 0 (0)
00:00:02.013680 RDSEED - RDSEED instruction = 1 (1)
00:00:02.013681 ADX - ADCX/ADOX instructions = 0 (1)
00:00:02.013682 SMAP - Supervisor Mode Access Prevention = 0 (1)
00:00:02.013683 CLFLUSHOPT - CLFLUSHOPT (Cache Line Flush) instruction = 1 (1)
00:00:02.013683 INTEL_PT - Intel Processor Trace = 0 (1)
00:00:02.013684 AVX512PF - AVX512 Prefetch instructions = 0 (0)
00:00:02.013685 AVX512ER - AVX512 Exponential & Reciprocal instructions = 0 (0)
00:00:02.013686 AVX512CD - AVX512 Conflict Detection instructions = 0 (0)
00:00:02.013686 SHA - Secure Hash Algorithm extensions = 0 (0)
00:00:02.013687 PREFETCHWT1 - PREFETCHWT1 instruction = 0 (0)
00:00:02.013688 PKU - Protection Key for Usermode pages = 0 (0)
00:00:02.013689 OSPKU - CR4.PKU mirror = 0 (0)
00:00:02.013704 Processor Extended State Enumeration (leaf 0xd):
00:00:02.013705 XSAVE area cur/max size by XCR0, guest: 0x340/0x440
00:00:02.013705 XSAVE area cur/max size by XCR0, host: 0x340/0x440
00:00:02.013706 Valid XCR0 bits, guest: 0x00000000`00000007 ( x87 SSE YMM_Hi128 )
00:00:02.013708 Valid XCR0 bits, host: 0x00000000`0000001f ( x87 SSE YMM_Hi128 BNDREGS BNDCSR )
00:00:02.013710 XSAVE features, guest:
00:00:02.013710 XSAVE features, host: XSAVEOPT XSAVEC XGETBC1 XSAVES
00:00:02.013712 XSAVE area cur size XCR0|XSS, guest: 0x340
00:00:02.013712 XSAVE area cur size XCR0|XSS, host: 0x340
00:00:02.013713 Valid IA32_XSS bits, guest: 0x00000000`00000000
00:00:02.013713 Valid IA32_XSS bits, host: 0x00000100`00000000 ( 40 )
00:00:02.013715 State #2, guest: off=0x0240, cb=0x0100 IA32_XSS-bit -- YMM_Hi128
00:00:02.013716 State #2, host: off=0x0240, cb=0x0100 IA32_XSS-bit -- YMM_Hi128
00:00:02.013718 State #3, host: off=0x03c0, cb=0x0040 IA32_XSS-bit -- BNDREGS
00:00:02.013719 State #4, host: off=0x0400, cb=0x0040 IA32_XSS-bit -- BNDCSR
00:00:02.013720 State #8, host: off=0x0000, cb=0x0080 XCR0-bit -- 8
00:00:02.013730 Unknown CPUID Leaves
00:00:02.013731 Leaf/sub-leaf eax ebx ecx edx
00:00:02.013731 Gst: 00000014/0001 00000000 00000000 00000000 00000000
00:00:02.013732 Hst: 02490002 003f3fff 00000000 00000000
00:00:02.013733 Gst: 00000014/0002 00000000 00000000 00000000 00000000
00:00:02.013734 Hst: 00000000 00000000 00000000 00000000
00:00:02.013734 Gst: 00000015/0000 00000000 00000000 00000000 00000000
00:00:02.013735 Hst: 00000002 000000e2 00000000 00000000
00:00:02.013736 Gst: 00000016/0000 00000000 00000000 00000000 00000000
00:00:02.013737 Hst: 00000a8c 00000e10 00000064 00000000
00:00:02.013738 Raw Hypervisor CPUID Leaves
00:00:02.013738 Leaf/sub-leaf eax ebx ecx edx
00:00:02.013739 Gst: 40000000/0000 40000001 4b4d564b 564b4d56 0000004d
00:00:02.013740 Hst: 00000a8c 00000e10 00000064 00000000
00:00:02.013741 Gst: 40000001/0000 01000089 00000000 00000000 00000000
00:00:02.013742 Hst: 00000a8c 00000e10 00000064 00000000
00:00:02.013743 Raw Extended CPUID Leaves
00:00:02.013743 Leaf/sub-leaf eax ebx ecx edx
00:00:02.013743 Gst: 80000000/0000 80000008 00000000 00000000 00000000
00:00:02.013744 Hst: 80000008 00000000 00000000 00000000
00:00:02.013745 Gst: 80000001/0000 00000000 00000000 00000121 28100800
00:00:02.013746 Hst: 00000000 00000000 00000121 2c100800
00:00:02.013747 Gst: 80000002/0000 65746e49 2952286c 726f4320 4d542865
00:00:02.013748 Hst: 65746e49 2952286c 726f4320 4d542865
00:00:02.013750 Gst: 80000003/0000 37692029 3238362d 20514830 20555043
00:00:02.013751 Hst: 37692029 3238362d 20514830 20555043
00:00:02.013752 Gst: 80000004/0000 2e322040 48473037 0000007a 00000000
00:00:02.013753 Hst: 2e322040 48473037 0000007a 00000000
00:00:02.013754 Gst: 80000005/0000 00000000 00000000 00000000 00000000
00:00:02.013754 Hst: 00000000 00000000 00000000 00000000
00:00:02.013755 Gst: 80000006/0000 00000000 00000000 01006040 00000000
00:00:02.013756 Hst: 00000000 00000000 01006040 00000000
00:00:02.013757 Gst: 80000007/0000 00000000 00000000 00000000 00000100
00:00:02.013757 Hst: 00000000 00000000 00000000 00000100
00:00:02.013758 Gst: 80000008/0000 00003027 00000000 00000000 00000000
00:00:02.013759 Hst: 00003027 00000000 00000000 00000000
00:00:02.013759 Ext Name:
00:00:02.013760 Ext Supports: 0x80000000-0x80000008
00:00:02.013761 Family: 0 Extended: 0 Effective: 0
00:00:02.013761 Model: 0 Extended: 0 Effective: 0
00:00:02.013762 Stepping: 0
00:00:02.013762 Brand ID: 0x000
00:00:02.013763 Ext Features
00:00:02.013763 Mnemonic - Description = guest (host)
00:00:02.013763 FPU - x87 FPU on Chip = 0 (0)
00:00:02.013764 VME - Virtual 8086 Mode Enhancements = 0 (0)
00:00:02.013765 DE - Debugging extensions = 0 (0)
00:00:02.013766 PSE - Page Size Extension = 0 (0)
00:00:02.013767 TSC - Time Stamp Counter = 0 (0)
00:00:02.013768 MSR - K86 Model Specific Registers = 0 (0)
00:00:02.013769 PAE - Physical Address Extension = 0 (0)
00:00:02.013769 MCE - Machine Check Exception = 0 (0)
00:00:02.013770 CX8 - CMPXCHG8B instruction = 0 (0)
00:00:02.013771 APIC - APIC On-Chip = 0 (0)
00:00:02.013772 SEP - SYSCALL/SYSRET = 1 (1)
00:00:02.013773 MTRR - Memory Type Range Registers = 0 (0)
00:00:02.013774 PGE - PTE Global Bit = 0 (0)
00:00:02.013775 MCA - Machine Check Architecture = 0 (0)
00:00:02.013775 CMOV - Conditional Move instructions = 0 (0)
00:00:02.013776 PAT - Page Attribute Table = 0 (0)
00:00:02.013777 PSE-36 - 36-bit Page Size Extension = 0 (0)
00:00:02.013778 NX - No-Execute/Execute-Disable = 1 (1)
00:00:02.013779 AXMMX - AMD Extensions to MMX instructions = 0 (0)
00:00:02.013779 MMX - Intel MMX Technology = 0 (0)
00:00:02.013780 FXSR - FXSAVE and FXRSTOR Instructions = 0 (0)
00:00:02.013781 FFXSR - AMD fast FXSAVE and FXRSTOR instructions = 0 (0)
00:00:02.013782 Page1GB - 1 GB large page = 0 (1)
00:00:02.013782 RDTSCP - RDTSCP instruction = 1 (1)
00:00:02.013783 LM - AMD64 Long Mode = 1 (1)
00:00:02.013784 3DNOWEXT - AMD Extensions to 3DNow = 0 (0)
00:00:02.013785 3DNOW - AMD 3DNow = 0 (0)
00:00:02.013786 LahfSahf - LAHF/SAHF support in 64-bit mode = 1 (1)
00:00:02.013787 CmpLegacy - Core multi-processing legacy mode = 0 (0)
00:00:02.013787 SVM - AMD Secure Virtual Machine extensions = 0 (0)
00:00:02.013788 EXTAPIC - AMD Extended APIC registers = 0 (0)
00:00:02.013789 CR8L - AMD LOCK MOV CR0 means MOV CR8 = 0 (0)
00:00:02.013790 ABM - AMD Advanced Bit Manipulation = 1 (1)
00:00:02.013790 SSE4A - SSE4A instructions = 0 (0)
00:00:02.013791 MISALIGNSSE - AMD Misaligned SSE mode = 0 (0)
00:00:02.013792 3DNOWPRF - AMD PREFETCH and PREFETCHW instructions = 1 (1)
00:00:02.013793 OSVW - AMD OS Visible Workaround = 0 (0)
00:00:02.013793 IBS - Instruct Based Sampling = 0 (0)
00:00:02.013794 XOP - Extended Operation support = 0 (0)
00:00:02.013795 SKINIT - SKINIT, STGI, and DEV support = 0 (0)
00:00:02.013796 WDT - AMD Watchdog Timer support = 0 (0)
00:00:02.013797 LWP - Lightweight Profiling support = 0 (0)
00:00:02.013797 FMA4 - Four operand FMA instruction support = 0 (0)
00:00:02.013798 NodeId - NodeId in MSR C001_100C = 0 (0)
00:00:02.013799 TBM - Trailing Bit Manipulation instructions = 0 (0)
00:00:02.013800 TOPOEXT - Topology Extensions = 0 (0)
00:00:02.013801 Full Name: "Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz"
00:00:02.013801 TLB 2/4M Instr/Uni: res0 0 entries
00:00:02.013802 TLB 2/4M Data: res0 0 entries
00:00:02.013803 TLB 4K Instr/Uni: res0 0 entries
00:00:02.013803 TLB 4K Data: res0 0 entries
00:00:02.013804 L1 Instr Cache Line Size: 0 bytes
00:00:02.013804 L1 Instr Cache Lines Per Tag: 0
00:00:02.013804 L1 Instr Cache Associativity: res0
00:00:02.013805 L1 Instr Cache Size: 0 KB
00:00:02.013805 L1 Data Cache Line Size: 0 bytes
00:00:02.013805 L1 Data Cache Lines Per Tag: 0
00:00:02.013806 L1 Data Cache Associativity: res0
00:00:02.013806 L1 Data Cache Size: 0 KB
00:00:02.013807 L2 TLB 2/4M Instr/Uni: off 0 entries
00:00:02.013807 L2 TLB 2/4M Data: off 0 entries
00:00:02.013808 L2 TLB 4K Instr/Uni: off 0 entries
00:00:02.013808 L2 TLB 4K Data: off 0 entries
00:00:02.013809 L2 Cache Line Size: 0 bytes
00:00:02.013809 L2 Cache Lines Per Tag: 0
00:00:02.013809 L2 Cache Associativity: off
00:00:02.013810 L2 Cache Size: 0 KB
00:00:02.013810 APM Features: TscInvariant
00:00:02.013811 Host Invariant-TSC support: true
00:00:02.013812 Physical Address Width: 39 bits
00:00:02.013812 Virtual Address Width: 48 bits
00:00:02.013812 Guest Physical Address Width: 0 bits
00:00:02.013813 Physical Core Count: 1
00:00:02.013814
00:00:02.013814 ******************** End of CPUID dump **********************
00:00:02.035410 PcBios: ATA LUN#0 LCHS=1024/255/63
00:00:02.035476 APIC: fPostedIntrsEnabled=false fVirtApicRegsEnabled=false fSupportsTscDeadline=false
00:00:02.035552 VMEmt: Halt method global1 (5)
00:00:02.035627 VMEmt: HaltedGlobal1 config: cNsSpinBlockThresholdCfg=125000
00:00:02.035664 Changing the VM state from 'CREATING' to 'CREATED'
00:00:02.036708 Changing the VM state from 'CREATED' to 'POWERING_ON'
00:00:02.036857 Changing the VM state from 'POWERING_ON' to 'RUNNING'
00:00:02.036878 Console: Machine state changed to 'Running'
00:00:02.042232 ERROR [COM]: aRC=VBOX_E_IPRT_ERROR (0x80bb0005) aIID={02326f63-bcb3-4481-96e0-30d1c2ee97f6} aComponent={DisplayWrap} aText={Could not take a screenshot (VERR_NOT_SUPPORTED)}, preserve=false aResultDetail=0
00:00:02.045068 VMMDev: Guest Log: BIOS: VirtualBox 5.1.30
00:00:02.046284 PIT: mode=2 count=0x10000 (65536) - 18.20 Hz (ch=0)
00:00:02.052853 ERROR [COM]: aRC=VBOX_E_IPRT_ERROR (0x80bb0005) aIID={02326f63-bcb3-4481-96e0-30d1c2ee97f6} aComponent={DisplayWrap} aText={Could not take a screenshot (VERR_NOT_SUPPORTED)}, preserve=false aResultDetail=0
00:00:02.057378 Display::handleDisplayResize: uScreenId=0 pvVRAM=0000000000000000 w=720 h=400 bpp=0 cbLine=0x0 flags=0x0
00:00:02.057478 GUI: UIFrameBufferPrivate::NotifyChange: Screen=0, Origin=0x0, Size=720x400, Sending to async-handler
00:00:02.057628 GUI: UIMachineView::sltHandleNotifyChange: Screen=0, Size=720x400
00:00:02.057690 GUI: UIFrameBufferPrivate::handleNotifyChange: Size=720x400
00:00:02.057700 GUI: UIFrameBufferPrivate::performResize: Size=720x400, Directly using source bitmap content
00:00:02.058379 PIIX3 ATA: Ctl#0: RESET, DevSel=0 AIOIf=0 CmdIf0=0x00 (-1 usec ago) CmdIf1=0x00 (-1 usec ago)
00:00:02.058413 PIIX3 ATA: Ctl#0: finished processing RESET
00:00:02.058738 VMMDev: Guest Log: BIOS: ata0-0: PCHS=16383/16/63 LCHS=1024/255/63
00:00:02.059770 PIIX3 ATA: Ctl#0: RESET, DevSel=1 AIOIf=0 CmdIf0=0xec (-1 usec ago) CmdIf1=0x00 (-1 usec ago)
00:00:02.059942 PIIX3 ATA: Ctl#0: finished processing RESET
00:00:02.067062 PIT: mode=2 count=0x48d3 (18643) - 64.00 Hz (ch=0)
00:00:02.081192 Display::handleDisplayResize: uScreenId=0 pvVRAM=0000000012c60000 w=640 h=480 bpp=32 cbLine=0xA00 flags=0x0
00:00:02.081270 GUI: UIFrameBufferPrivate::NotifyChange: Screen=0, Origin=0x0, Size=640x480, Sending to async-handler
00:00:02.081403 GUI: UIMachineView::sltHandleNotifyChange: Screen=0, Size=640x480
00:00:02.081421 GUI: UIFrameBufferPrivate::handleNotifyChange: Size=640x480
00:00:02.081518 GUI: UIFrameBufferPrivate::performResize: Size=640x480, Directly using source bitmap content
00:00:02.346525 GUI: UIMachineViewNormal::resendSizeHint: Restoring guest size-hint for screen 0 to 800x600
00:00:02.346646 VMMDev: SetVideoModeHint: Got a video mode hint (800x600x32)@(0x0),(1;0) at 0
00:00:02.349942 GUI: 2D video acceleration is disabled
00:00:02.349942 GUI: HID LEDs sync is enabled
00:00:02.349942 GUI: UIMachineLogicNormal::sltCheckForRequestedVisualStateType: Requested-state=0, Machine-state=5
00:00:04.557413 Display::handleDisplayResize: uScreenId=0 pvVRAM=0000000000000000 w=720 h=400 bpp=0 cbLine=0x0 flags=0x0
00:00:04.557603 GUI: UIFrameBufferPrivate::NotifyChange: Screen=0, Origin=0x0, Size=720x400, Sending to async-handler
00:00:04.557708 GUI: UIMachineView::sltHandleNotifyChange: Screen=0, Size=720x400
00:00:04.557747 GUI: UIFrameBufferPrivate::handleNotifyChange: Size=720x400
00:00:04.557776 GUI: UIFrameBufferPrivate::performResize: Size=720x400, Directly using source bitmap content
00:00:04.563415 PIT: mode=2 count=0x10000 (65536) - 18.20 Hz (ch=0)
00:00:04.563826 VMMDev: Guest Log: BIOS: Boot : bseqnr=1, bootseq=0032
00:00:04.568770 VMMDev: Guest Log: BIOS: Booting from Hard Disk...
00:00:04.569711 PIIX3 ATA: Ctl#0: RESET, DevSel=0 AIOIf=0 CmdIf0=0xc4 (-1 usec ago) CmdIf1=0x00 (-1 usec ago)
00:00:04.569785 PIIX3 ATA: Ctl#0: finished processing RESET
00:00:09.912603 VMMDev: Guest Log: BIOS: KBD: unsupported int 16h function 03
00:00:09.912885 VMMDev: Guest Log: BIOS: AX=0305 BX=0000 CX=0000 DX=0000
00:00:09.913346 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=81
00:00:09.913677 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=81
00:00:09.914024 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=82
00:00:09.914351 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=82
00:00:09.914702 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=83
00:00:09.914996 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=83
00:00:09.915415 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=84
00:00:09.915835 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=84
00:00:09.916140 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=85
00:00:09.916461 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=85
00:00:09.916759 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=86
00:00:09.917078 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=86
00:00:09.917417 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=87
00:00:09.917695 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=87
00:00:09.918074 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=88
00:00:09.918356 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=88
00:00:09.918699 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=89
00:00:09.918978 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=89
00:00:09.919318 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=8a
00:00:09.919596 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=8a
00:00:09.919926 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=8b
00:00:09.919942 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=8b
00:00:09.919942 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=8c
00:00:09.919942 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=8c
00:00:09.919942 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=8d
00:00:09.920050 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=8d
00:00:09.920348 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=8e
00:00:09.920631 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=8e
00:00:09.920929 VMMDev: Guest Log: int13_harddisk_ext: function 41, unmapped device for ELDL=8f
00:00:09.921207 VMMDev: Guest Log: int13_harddisk: function 02, unmapped device for ELDL=8f
00:00:10.100413 GIM: KVM: VCPU 0: Enabled system-time struct. at 0x000000039db52000 - u32TscScale=0xbc4eefc7 i8TscShift=-1 uVersion=2 fFlags=0x1 uTsc=0x51abec127 uVirtNanoTS=0x1e09b82e1
00:00:10.100554 TM: Switching TSC mode from 'VirtTscEmulated' to 'RealTscOffset'
00:00:10.457505 GIM: KVM: Enabled wall-clock struct. at 0x00000000020c66e8 - u32Sec=1511854862 u32Nano=908505120 uVersion=2
00:00:10.468042 PIT: mode=2 count=0x4a9 (1193) - 1000.15 Hz (ch=0)
00:00:10.499942 APIC0: Switched mode to x2APIC
00:00:10.604577 PIT: mode=0 count=0x10000 (65536) - 18.20 Hz (ch=0)
00:00:10.605408 CPUM: VCPU 1: Cached APIC base MSR = 0xfee00800
00:00:10.605502 APIC1: Switched mode to x2APIC
00:00:10.605537 GIM: KVM: VCPU 1: Enabled system-time struct. at 0x000000039db52040 - u32TscScale=0xbc4eefc7 i8TscShift=-1 uVersion=2 fFlags=0x1 uTsc=0x51abec127 uVirtNanoTS=0x1e09b82e1
00:00:10.608234 CPUM: VCPU 2: Cached APIC base MSR = 0xfee00800
00:00:10.608324 APIC2: Switched mode to x2APIC
00:00:10.608357 GIM: KVM: VCPU 2: Enabled system-time struct. at 0x000000039db52080 - u32TscScale=0xbc4eefc7 i8TscShift=-1 uVersion=2 fFlags=0x1 uTsc=0x51abec127 uVirtNanoTS=0x1e09b82e1
00:00:10.609943 CPUM: VCPU 3: Cached APIC base MSR = 0xfee00800
00:00:10.610015 APIC3: Switched mode to x2APIC
00:00:10.610047 GIM: KVM: VCPU 3: Enabled system-time struct. at 0x000000039db520c0 - u32TscScale=0xbc4eefc7 i8TscShift=-1 uVersion=2 fFlags=0x1 uTsc=0x51abec127 uVirtNanoTS=0x1e09b82e1
00:00:10.811411 OHCI: Software reset
00:00:11.319059 OHCI: USB Reset
00:00:11.370332 OHCI: Software reset
00:00:11.370898 OHCI: USB Operational
00:00:12.000874 PIIX3 ATA: Ctl#0: RESET, DevSel=0 AIOIf=0 CmdIf0=0xc4 (-1 usec ago) CmdIf1=0x00 (-1 usec ago)
00:00:12.000955 PIIX3 ATA: Ctl#0: finished processing RESET
00:00:12.001289 PIIX3 ATA: Ctl#1: RESET, DevSel=0 AIOIf=0 CmdIf0=0x00 (-1 usec ago) CmdIf1=0x00 (-1 usec ago)
00:00:12.001375 PIIX3 ATA: Ctl#1: finished processing RESET
00:00:13.652074 AC97: Reset
00:00:13.652446 AC97: Reset
00:00:13.999528 NAT: Link up
00:00:15.879441 NAT: IPv6 not supported
00:00:15.962698 NAT: DHCP offered IP address 10.0.2.15

The VirtualBox sandbox configuration, the Windows machine configuration, and the actual memory utilization are attached.
... View more
11-24-2017
02:08 PM
Kerberized HDP-2.6.3.0. The table has 'Id' as its primary key and 'DepartmentId' as the foreign key:

+---------------------------------------------------------+-------------------------------------------------------------------------+-----------------------------+--+
| col_name | data_type | comment |
+---------------------------------------------------------+-------------------------------------------------------------------------+-----------------------------+--+
| # col_name | data_type | comment |
| | NULL | NULL |
| id | int | Surrogate PK is not fun |
| firstname | string | |
| lastname | string | |
| dob | date | |
| departmentid | int | |
| | NULL | NULL |
| # Detailed Table Information | NULL | NULL |
| Database: | group_hadoopdeveloper | NULL |
| Owner: | ojoqcu | NULL |
| CreateTime: | Fri Nov 24 13:39:02 UTC 2017 | NULL |
| LastAccessTime: | UNKNOWN | NULL |
| Retention: | 0 | NULL |
| Location: | hdfs://devhadoop/apps/hive/warehouse/group_hadoopdeveloper.db/employee | NULL |
| Table Type: | MANAGED_TABLE | NULL |
| Table Parameters: | NULL | NULL |
| | COLUMN_STATS_ACCURATE | {\"BASIC_STATS\":\"true\"} |
| | numFiles | 0 |
| | numRows | 0 |
| | rawDataSize | 0 |
| | totalSize | 0 |
| | transient_lastDdlTime | 1511530742 |
| | NULL | NULL |
| # Storage Information | NULL | NULL |
| SerDe Library: | org.apache.hadoop.hive.ql.io.orc.OrcSerde | NULL |
| InputFormat: | org.apache.hadoop.hive.ql.io.orc.OrcInputFormat | NULL |
| OutputFormat: | org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat | NULL |
| Compressed: | No | NULL |
| Num Buckets: | -1 | NULL |
| Bucket Columns: | [] | NULL |
| Sort Columns: | [] | NULL |
| Storage Desc Params: | NULL | NULL |
| | serialization.format | 1 |
| | NULL | NULL |
| # Constraints | NULL | NULL |
| | NULL | NULL |
| # Primary Key | NULL | NULL |
| Table: | group_hadoopdeveloper.employee | NULL |
| Constraint Name: | pk_203923149_1511530742932_0 | NULL |
| Column Names: | id | |
| | NULL | NULL |
| # Foreign Keys | NULL | NULL |
| Table: | group_hadoopdeveloper.employee | NULL |
| Constraint Name: | fk_employee_department | NULL |
| Parent Column Name:group_hadoopdeveloper.department.id | Column Name:departmentid | Key Sequence:1 |
| | NULL | NULL |
+---------------------------------------------------------+-------------------------------------------------------------------------+-----------------------------+--+

I tried the following ways to retrieve the constraints:

1. Hive JDBC - it can retrieve the primary keys, but the foreign keys are not implemented in the driver.
2. WebHCat: http://l4283t.sss.se.com:50111/templeton/v1/ddl/database/group_hadoopdeveloper/table/employee returns only the columns, no constraints: {"columns":[{"name":"id","type":"int","comment":"Surrogate PK is not fun"},{"name":"firstname","type":"string"},{"name":"lastname","type":"string"},{"name":"dob","type":"date"},{"name":"departmentid","type":"int"}],"database":"group_hadoopdeveloper","table":"employee"}
3. I am still struggling with the HiveMetaStoreClient implementation to retrieve the foreign keys.

I have the following questions:

1. Where/how are these constraints stored in the Hive metastore?
2. Is there any way to retrieve these constraints programmatically (NOT from beeline - they need to be available to external programs)?
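For question 1, my understanding (from the Hive 2.x metastore schema scripts, not verified on this cluster) is that the constraints land in the KEY_CONSTRAINTS table of the backing RDBMS. For question 2, the thrift metastore API does expose them via getPrimaryKeys/getForeignKeys (added in Hive 2.1). A minimal sketch of what I am trying to get working - the thrift URI below is a placeholder, and it assumes the client can already reach the metastore:

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.ForeignKeysRequest;
import org.apache.hadoop.hive.metastore.api.PrimaryKeysRequest;
import org.apache.hadoop.hive.metastore.api.SQLForeignKey;
import org.apache.hadoop.hive.metastore.api.SQLPrimaryKey;

public class ConstraintLister {
    public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf();
        conf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://metastore-host:9083"); // placeholder URI
        HiveMetaStoreClient client = new HiveMetaStoreClient(conf);
        try {
            // Primary keys of group_hadoopdeveloper.employee
            for (SQLPrimaryKey pk : client.getPrimaryKeys(
                    new PrimaryKeysRequest("group_hadoopdeveloper", "employee"))) {
                System.out.println("PK " + pk.getPk_name() + ": " + pk.getColumn_name());
            }
            // Foreign keys; the request takes (parentDb, parentTbl, foreignDb, foreignTbl)
            // and nulls on the parent side act as wildcards.
            for (SQLForeignKey fk : client.getForeignKeys(
                    new ForeignKeysRequest(null, null, "group_hadoopdeveloper", "employee"))) {
                System.out.println("FK " + fk.getFk_name() + ": " + fk.getFkcolumn_name()
                        + " -> " + fk.getPktable_db() + "." + fk.getPktable_name()
                        + "." + fk.getPkcolumn_name());
            }
        } finally {
            client.close();
        }
    }
}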
... View more
Labels:
- Apache Hive
11-23-2017
02:58 PM
Kerberized HDP-2.6.3.0. I am able to connect to Hive from my Windows machine using the Hive JDBC driver; however, I need to use some methods of the HiveMetaStoreClient. I flipped through the API and wrote test code which I am executing from an IDE:
private static void connectHiveMetastore() throws MetaException, MalformedURLException {
//System.setProperty("javax.security.auth.useSubjectCredsOnly","false");
//System.setProperty("java.security.krb5.conf","C:\\kerb5.conf");
Configuration configuration = new Configuration();
//configuration.addResource("E:\\hdp\\client_config\\HDFS_CLIENT\\core-site.xml");
//configuration.addResource("E:\\hdp\\client_config\\HDFS_CLIENT\\hdfs-site.xml");
HiveConf hiveConf = new HiveConf(configuration,Configuration.class);
//URL url = new File("E:\\hdp\\client_config\\HDFS_CLIENT\\hive-site.xml").toURI().toURL();
//hiveConf.setHiveSiteLocation(url);
//hiveConf.setVar(HiveConf.ConfVars.METASTOREURIS,"thrift://l4283t.sss.com:9083,thrift://l4284t.sss.com:9083");
HiveMetaStoreClient hiveMetaStoreClient = new HiveMetaStoreClient(hiveConf);
}
The dependencies in the pom file:
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hive/hive-metastore -->
<dependency>
<groupId>org.apache.hive</groupId>
<artifactId>hive-metastore</artifactId>
<version>2.3.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hive/hive-exec -->
<dependency>
<groupId>org.apache.hive</groupId>
<artifactId>hive-exec</artifactId>
<version>2.3.2</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common -->
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>2.9.0</version>
</dependency>
</dependencies>
Irrespective of whether I comment or uncomment the lines pertaining to the config and Kerberos, I receive the following exception, which is explained on the Hive wiki:
15:35:27.139 [main] ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler - MetaException(message:Version information not found in metastore. )
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7564)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7542)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
at com.sun.proxy.$Proxy8.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:591)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:584)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:651)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:427)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:79)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:92)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6893)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:164)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:129)
at com.my.App.connectHiveMetastore(App.java:58)
at com.my.App.main(App.java:37)
15:35:27.141 [main] ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler - HMSHandler Fatal error: MetaException(message:Version information not found in metastore. )
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7564)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7542)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
at com.sun.proxy.$Proxy8.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:591)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:584)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:651)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:427)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:79)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:92)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6893)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:164)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:129)
at com.my.App.connectHiveMetastore(App.java:58)
at com.my.App.main(App.java:37)
Exception in thread "main" MetaException(message:Version information not found in metastore. )
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:83)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:92)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:6893)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:164)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:129)
at com.my.App.connectHiveMetastore(App.java:58)
at com.my.App.main(App.java:37)
Caused by: MetaException(message:Version information not found in metastore. )
at org.apache.hadoop.hive.metastore.ObjectStore.checkSchema(ObjectStore.java:7564)
at org.apache.hadoop.hive.metastore.ObjectStore.verifySchema(ObjectStore.java:7542)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
at com.sun.proxy.$Proxy8.verifySchema(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMSForConf(HiveMetaStore.java:591)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:584)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:651)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:427)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:79)
... 6 more
Process finished with exit code 1
I have the following questions/concerns:
1. Is the way I am thinking about and connecting the HiveMetaStoreClient correct? If not, how do I retrieve the metadata information provided by the methods of HiveMetaStoreClient? The code certainly isn't 'reaching' the cluster.
2. Is the above exception related to the dependency versions? If not, what can be the root cause?
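One hedged hypothesis worth ruling out: when hive.metastore.uris is left unset (as in the commented-out line above), HiveMetaStoreClient does not contact the cluster at all but spins up an embedded metastore backed by a local Derby database, whose empty schema has no VERSION table; that would produce exactly the 'Version information not found in metastore' message. A minimal sketch of the remote configuration, reusing the thrift URIs from the commented-out line:
HiveConf hiveConf = new HiveConf();
// Without this property the client falls back to an embedded (local Derby) metastore.
hiveConf.setVar(HiveConf.ConfVars.METASTOREURIS,
        "thrift://l4283t.sss.com:9083,thrift://l4284t.sss.com:9083");
HiveMetaStoreClient hiveMetaStoreClient = new HiveMetaStoreClient(hiveConf);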
... View more
Labels:
- Apache Hive
11-10-2017
04:53 PM
Yeah, I have tried that approach as well. The ODI doc mentions using its WebLogic Hive JDBC driver, but one can use other drivers as well. The question I have raised here is about the standard (Apache) JDBC driver.
... View more
11-10-2017
01:51 PM
ODI Studio 12.2.1.2.6
Kerberized HDP 2.6.3.0
Windows 7
hive-jdbc-2.1.0.2.6.3.0-235-standalone.jar (available under /usr/hdp/2.6.3.0-235/hive2/jdbc)
I simply get a 'Connection refused'; also, I didn't find any answers to the following exception:
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
The complete error:
java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: Could not establish connection to jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: org.apache.hive.org.apache.http.client.ClientProtocolException
at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.doGetConnection(LoginTimeoutDatasourceAdapter.java:144)
at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.getConnection(LoginTimeoutDatasourceAdapter.java:73)
at com.sunopsis.sql.SnpsConnection.testConnection(SnpsConnection.java:1258)
at com.sunopsis.graphical.dialog.SnpsDialogTestConnet.getLocalConnect(SnpsDialogTestConnet.java:204)
at com.sunopsis.graphical.dialog.SnpsDialogTestConnet.access$500(SnpsDialogTestConnet.java:62)
at com.sunopsis.graphical.dialog.SnpsDialogTestConnet$6.doInBackground(SnpsDialogTestConnet.java:402)
at com.sunopsis.graphical.dialog.SnpsDialogTestConnet$6.doInBackground(SnpsDialogTestConnet.java:398)
at oracle.odi.ui.framework.AbsUIRunnableTask.run(AbsUIRunnableTask.java:258)
at oracle.ide.dialogs.ProgressBar.run(ProgressBar.java:961)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: Could not establish connection to jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: org.apache.hive.org.apache.http.client.ClientProtocolException
at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.doGetConnection(LoginTimeoutDatasourceAdapter.java:144)
at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter.getConnection(LoginTimeoutDatasourceAdapter.java:73)
at oracle.odi.core.datasource.dwgobject.support.OnConnectOnDisconnectDataSourceAdapter.getConnection(OnConnectOnDisconnectDataSourceAdapter.java:87)
at oracle.odi.jdbc.datasource.LoginTimeoutDatasourceAdapter$ConnectionProcessor.run(LoginTimeoutDatasourceAdapter.java:228)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: Could not establish connection to jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: org.apache.hive.org.apache.http.client.ClientProtocolException
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:211)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:412)
at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:385)
at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnectionFromDriver(DriverManagerDataSource.java:352)
at oracle.odi.jdbc.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:331)
... 6 more
Caused by: java.sql.SQLException: Could not establish connection to jdbc:hive2://l4284t.sss.se.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.COM;httpPath=cliservice: org.apache.hive.org.apache.http.client.ClientProtocolException
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:589)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:188)
... 11 more
Caused by: org.apache.hive.org.apache.thrift.transport.TTransportException: org.apache.hive.org.apache.http.client.ClientProtocolException
at org.apache.hive.org.apache.thrift.transport.THttpClient.flushUsingHttpClient(THttpClient.java:297)
at org.apache.hive.org.apache.thrift.transport.THttpClient.flush(THttpClient.java:313)
at org.apache.hive.org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
at org.apache.hive.org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.send_OpenSession(TCLIService.java:162)
at org.apache.hive.service.rpc.thrift.TCLIService$Client.OpenSession(TCLIService.java:154)
at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:578)
... 12 more
Caused by: org.apache.hive.org.apache.http.client.ClientProtocolException
at org.apache.hive.org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:186)
at org.apache.hive.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:117)
at org.apache.hive.org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at org.apache.hive.org.apache.thrift.transport.THttpClient.flushUsingHttpClient(THttpClient.java:251)
... 18 more
Caused by: org.apache.hive.org.apache.http.HttpException: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
at org.apache.hive.jdbc.HttpRequestInterceptorBase.process(HttpRequestInterceptorBase.java:86)
at org.apache.hive.org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:132)
at org.apache.hive.org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:182)
at org.apache.hive.org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.hive.org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.hive.org.apache.http.impl.execchain.ServiceUnavailableRetryExec.execute(ServiceUnavailableRetryExec.java:84)
at org.apache.hive.org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
... 21 more
Caused by: org.apache.hive.org.apache.http.HttpException: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
at org.apache.hive.jdbc.HttpKerberosRequestInterceptor.addHttpAuthHeader(HttpKerberosRequestInterceptor.java:68)
at org.apache.hive.jdbc.HttpRequestInterceptorBase.process(HttpRequestInterceptorBase.java:74)
... 27 more
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2273)
at org.apache.hadoop.security.Groups.<init>(Groups.java:99)
at org.apache.hadoop.security.Groups.<init>(Groups.java:95)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:420)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:324)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:291)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:846)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:816)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge.getCurrentUGIWithConf(HadoopThriftAuthBridge.java:122)
at org.apache.hive.service.auth.HttpAuthUtils.getKerberosServiceTicket(HttpAuthUtils.java:81)
at org.apache.hive.jdbc.HttpKerberosRequestInterceptor.addHttpAuthHeader(HttpKerberosRequestInterceptor.java:62)
... 28 more
Caused by: java.lang.RuntimeException: class org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback not org.apache.hadoop.security.GroupMappingServiceProvider
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2267)
... 39 more<br>
I have added the necessary config in the odi.conf file:
AddVMOption -Djava.security.krb5.conf=C:\Oracle\Middleware\Oracle_Home\oracle_common\modules\datadirect\kerb5.conf
AddVMOption -Djavax.security.auth.useSubjectCredsOnly=false
I am sure that the JDBC driver, my Kerberos ticket cache etc. are in place; the below stand-alone Java class is working fine:
package com.my;

import java.sql.*;

public class App {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        System.out.println("Hello World!");
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
        System.setProperty("java.security.krb5.conf", "C:\\Oracle\\Middleware\\Oracle_Home\\oracle_common\\modules\\datadirect\\kerb5.conf");
        //System.setProperty("sun.security.krb5.debug","true");
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        System.out.println("getting connection");
        Connection con = DriverManager.getConnection("jdbc:hive2://l4284t.sss.se.scania.com:10501/;transportMode=http;principal=hive/_HOST@GLOBAL.SCD.SCANIA.COM;httpPath=cliservice");
        System.out.println("Connected");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("show tables");
        while (rs.next()) {
            System.out.println("table name : " + rs.getString(1));
        }
        stmt.close();
        con.close();
    }
}
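For completeness, a hedged diagnostic sketch: a 'class X not Y' error of this kind usually means the interface (GroupMappingServiceProvider) and its implementation were loaded from two different copies of hadoop-common on the classpath, which can happen when ODI Studio ships its own Hadoop jars alongside the standalone Hive JDBC jar. Printing where each class is loaded from can confirm or rule that out:
// Prints the jar each class was actually loaded from; two different
// locations would point to a classpath conflict inside ODI Studio.
System.out.println(org.apache.hadoop.security.GroupMappingServiceProvider.class
        .getProtectionDomain().getCodeSource().getLocation());
System.out.println(org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.class
        .getProtectionDomain().getCodeSource().getLocation());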
... View more
Labels:
- Apache Hive
08-16-2017
08:49 AM
1 Kudo
NiFi 1.2.0. There is a custom processor that reads data from SQL Server and creates a flow file per row. This creates millions of flow files, and the destination queue often gets full. The objective is to prevent the processor from triggering while the destination queue is full. See the background thread about backpressure & throttling. In our dev environment, in spite of setting 'Back Pressure Object Threshold' and 'Back Pressure Data Size Threshold', the processor executed every (scheduled) 5 minutes even when the destination queue held far more files than configured. I am a bit confused by the following statement in the documentation:
Several factors exist that will contribute to when a Processor’s onTrigger method is invoked. First, the Processor will not be triggered unless a user has configured the Processor to run. If a Processor is scheduled to run, the Framework periodically (the period is configured by users in the User Interface) checks if there is work for the Processor to do, as described above. If so, the Framework will check downstream destinations of the Processor. If any of the Processor’s outbound Connections is full, by default, the Processor will not be scheduled to run.
Can any of the below ways prevent the processor from triggering the moment the destination queue fills up? (A guarded sketch of the second idea follows below.)
1. Usage of TriggerWhenAnyDestinationAvailable. Its documentation says:
By default, NiFi will not schedule a Processor to run if any of its outbound queues is full. This allows back-pressure to be applied all the way up a chain of Processors. However, some Processors may need to run even if one of the outbound queues is full. This annotation indicates that the Processor should run if any Relationship is "available." A Relationship is said to be "available" if none of the connections that use that Relationship is full. For example, the DistributeLoad Processor makes use of this annotation. If the "round robin" scheduling strategy is used, the Processor will not run if any outbound queue is full. However, if the "next available" scheduling strategy is used, the Processor will run if any Relationship at all is available and will route FlowFiles only to those relationships that are available.
2. It's a horrible thought, but can ProcessSession.getQueueSize() be used within the processor's code to detect the destination queue size and put the processor to sleep?
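A guarded sketch of the second idea, with a twist: as far as I can tell, ProcessSession.getQueueSize() reports the processor's own incoming queues, so it cannot see the destination connection. ProcessContext.getAvailableRelationships(), on the other hand, excludes any relationship whose connections are full, so the processor can yield instead of sleeping. The fragment below would sit inside the custom processor; REL_SUCCESS is the assumed name of its 'success' relationship:
@Override
public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
    // Skip the expensive SQL Server read while 'success' is back-pressured.
    if (!context.getAvailableRelationships().contains(REL_SUCCESS)) {
        context.yield(); // back off for the configured yield duration instead of sleeping the thread
        return;
    }
    // ...existing per-row read from SQL Server and flow-file creation...
}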
... View more
Labels:
- Apache NiFi
07-03-2017
12:38 PM
NiFi 1.2.0 There is a custom processor that reads data from a database and passes it downstream. In a recent stress test, the 'success' relationship queue was clogged, and so was the downstream flow, as the processor dumped hundreds of thousands of flow files totalling several GBs. Obviously, backpressure was not implemented. I also read an informative post about throttling and backpressure. What I have figured out is that backpressure is something we configure on the relationship queue, and standard processors like ControlRate can help to regulate the data flow.
Question: Is additional coding required (e.g. some interface to be implemented) in the processor to enable it to 'sleep/stop consuming data' for backpressure, or does the NiFi framework handle that once the 'success' relationship of the processor is configured for backpressure?
... View more
Labels:
- Apache NiFi
06-28-2017
12:24 PM
additivity="false" is essential Complete answer on StackOverflow.
... View more
06-22-2017
12:47 PM
NiFi 1.2.0 I have a custom processor and I want a dedicated log file for it. Accordingly, I have configured the com.datalake.processors.SQLServerCDCProcessor class to use an appender named 'SQLSERVER-CDC'. Following is the logback.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="30 seconds">
<contextListener>
<resetJUL>true</resetJUL>
</contextListener>
<appender name="APP_FILE">
<file>/var/log/nifi/nifi-app.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'app_%d.log'.
For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy>
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- keep 30 log files worth of history -->
<maxHistory>3</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
<immediateFlush>true</immediateFlush>
</appender>
<appender name="USER_FILE">
<file>/var/log/nifi/nifi-user.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/nifi-user_%d.log</fileNamePattern>
<!-- keep 30 log files worth of history -->
<maxHistory>3</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="BOOTSTRAP_FILE">
<file>/var/log/nifi/nifi-bootstrap.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/nifi-bootstrap_%d.log</fileNamePattern>
<!-- keep 5 log files worth of history -->
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="CONSOLE">
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<!-- Start : Added for log for custom processor -->
<appender name="SQLSERVER-CDC">
<file>/var/log/nifi/sqlserver-cdc.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'app_%d.log'.
For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/sqlserver-cdc_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy>
<maxFileSize>25MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- keep 30 log files worth of history -->
<maxHistory>3</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
<immediateFlush>true</immediateFlush>
</encoder>
</appender>
<!-- End : Added for log for custom processor -->
<!-- valid logging levels: TRACE, DEBUG, INFO, WARN, ERROR -->
<logger name="org.apache.nifi" level="INFO"/>
<logger name="org.apache.nifi.processors" level="WARN"/>
<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO"/>
<logger name="org.apache.nifi.controller.repository.StandardProcessSession" level="WARN" />
<logger name="org.apache.zookeeper.ClientCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxnFactory" level="ERROR" />
<logger name="org.apache.zookeeper.server.quorum" level="ERROR" />
<logger name="org.apache.zookeeper.ZooKeeper" level="ERROR" />
<logger name="org.apache.zookeeper.server.PrepRequestProcessor" level="ERROR" />
<logger name="org.apache.calcite.runtime.CalciteException" level="OFF" />
<logger name="org.apache.curator.framework.recipes.leader.LeaderSelector" level="OFF" />
<logger name="org.apache.curator.ConnectionState" level="OFF" />
<!-- Logger for managing logging statements for nifi clusters. -->
<logger name="org.apache.nifi.cluster" level="INFO"/>
<!-- Logger for logging HTTP requests received by the web server. -->
<logger name="org.apache.nifi.server.JettyServer" level="INFO"/>
<!-- Logger for managing logging statements for jetty -->
<logger name="org.eclipse.jetty" level="INFO"/>
<!-- Suppress non-error messages due to excessive logging by class or library -->
<logger name="com.sun.jersey.spi.container.servlet.WebComponent" level="ERROR"/>
<logger name="com.sun.jersey.spi.spring" level="ERROR"/>
<logger name="org.springframework" level="ERROR"/>
<!-- Suppress non-error messages due to known warning about redundant path annotation (NIFI-574) -->
<logger name="com.sun.jersey.spi.inject.Errors" level="ERROR"/>
<!--
Logger for capturing user events. We do not want to propagate these
log events to the root logger. These messages are only sent to the
user-log appender.
-->
<logger name="org.apache.nifi.web.security" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.api.config" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.cluster.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.filter.RequestLogger" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<!--
Logger for capturing Bootstrap logs and NiFi's standard error and standard out.
-->
<logger name="org.apache.nifi.bootstrap" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<logger name="org.apache.nifi.bootstrap.Command" level="INFO" additivity="false">
<appender-ref ref="CONSOLE" />
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Out will be logged with the logger org.apache.nifi.StdOut at INFO level -->
<logger name="org.apache.nifi.StdOut" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Error will be logged with the logger org.apache.nifi.StdErr at ERROR level -->
<logger name="org.apache.nifi.StdErr" level="ERROR" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Start : Added for log for custom processor -->
<logger name="com.datalake.processors.SQLServerCDCProcessor" level="DEBUG" >
<appender-ref ref="SQLSERVER-CDC"/>
</logger>
<!-- End : Added for log for custom processor -->
<root level="INFO">
<appender-ref ref="APP_FILE"/>
</root>
</configuration>
The strange fact is that the custom processor debug statements are written to both 'nifi-app.log' and 'sqlserver-cdc.log', but I want these statements written only to the latter ('sqlserver-cdc.log'). What am I missing?
... View more
Labels:
- Apache NiFi
06-16-2017
02:10 PM
NiFi 1.2.0 I have a custom processor whose pom files look like the following.
processor/pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor
license agreements. See the NOTICE file distributed with this work for additional
information regarding copyright ownership. The ASF licenses this file to
You under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of
the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required
by applicable law or agreed to in writing, software distributed under the
License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License. -->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.datalake</groupId>
<artifactId>CDCNiFi</artifactId>
<version>1.0-SNAPSHOT</version>
</parent>
<artifactId>nifi-NiFiCDCPoC-processors</artifactId>
<packaging>jar</packaging>
<dependencies>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-api</artifactId>
</dependency>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-utils</artifactId>
</dependency>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-dbcp-service-api</artifactId>
</dependency>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-processor-utils</artifactId>
</dependency>
<!-- Third-party -->
<!-- <dependency> <groupId>com.microsoft.sqlserver</groupId> <artifactId>mssql-jdbc</artifactId>
<version>6.1.0.jre8</version> </dependency> -->
<dependency>
<groupId>org.codehaus.jackson</groupId>
<artifactId>jackson-mapper-asl</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.8.7</version>
</dependency>
<!-- Testing & Cross-cutting concerns -->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-mock</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
</project>
nar/pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor
license agreements. See the NOTICE file distributed with this work for additional
information regarding copyright ownership. The ASF licenses this file to
You under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of
the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required
by applicable law or agreed to in writing, software distributed under the
License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License. -->
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.datalake</groupId>
<artifactId>CDCNiFi</artifactId>
<version>1.0-SNAPSHOT</version>
</parent>
<artifactId>nifi-NiFiCDCPoC-nar</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>nar</packaging>
<properties>
<maven.javadoc.skip>true</maven.javadoc.skip>
<source.skip>true</source.skip>
</properties>
<dependencies>
<dependency>
<groupId>com.datalake</groupId>
<artifactId>nifi-NiFiCDCPoC-processors</artifactId>
<version>1.0-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-standard-services-api-nar</artifactId>
<type>nar</type>
</dependency>
<!-- <dependency>
<groupId>com.microsoft.sqlserver</groupId>
<artifactId>mssql-jdbc</artifactId>
<version>6.1.0.jre8</version>
<scope>runtime</scope>
</dependency>-->
</dependencies>
</project>
Now, I wish to do some HDFS file operations from the processor, like reading/writing a SequenceFile, retrieving files from an HDFS directory and so on. I had a look at existing processors like PutHDFS, which has some pom entries as follows:
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-hadoop-utils</artifactId>
</dependency>
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-flowfile-packager</artifactId>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-hdfs</artifactId>
<scope>provided</scope>
</dependency>
In the lib dir of NiFi, I could see files like nifi-hadoop-libraries-nar-1.2.0.nar and nifi-hadoop-nar-1.2.0.nar. Including the above entries in my pom.xml didn't work; what shall I do to use the HDFS FileSystem API within my custom processor?
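A hedged observation: PutHDFS can mark the Hadoop jars as 'provided' only because its NAR declares nifi-hadoop-libraries-nar as its parent NAR, which supplies hadoop-common/hadoop-hdfs at runtime; a NAR whose parent is nifi-standard-services-api-nar gets no such jars, so they would likely need to be bundled with compile scope (or the NAR parent changed). Once the classes are on the NAR's classpath, a minimal FileSystem sketch looks like this (the config paths are placeholders):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Assumes core-site.xml/hdfs-site.xml are available at these (hypothetical) locations.
Configuration conf = new Configuration();
conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
try (FileSystem fs = FileSystem.get(conf)) {
    for (FileStatus status : fs.listStatus(new Path("/tmp"))) {
        System.out.println(status.getPath());
    }
}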
... View more
- Tags:
- NiFi
- nifi-processor
Labels:
- Apache NiFi
06-15-2017
11:43 AM
Any suggestions to check if the LSN is persisted and retrieved correctly?
... View more
06-14-2017
06:43 AM
The point is that the error occurs only on the Linux machines, and only after the state is stored. With the state cleared, or on the Windows machine, the JDBC error doesn't occur and the data gets inserted. Attaching the sample processor code.
... View more
06-13-2017
12:45 PM
NiFi 1.2.0, two nodes viz. l4513t.sss.se.com & l4514t.sss.se.com, both RHEL 7. I have a local (Windows 7) NiFi 1.2.0 where I develop and test my custom processor. Following is a state-management code snippet that stores an LSN, a SQL Server binary data type (retrieved and used as a byte array in Java):
final StateManager stateManager = context.getStateManager();
try {
StateMap stateMap = stateManager.getState(Scope.CLUSTER);
final Map<String, String> newStateMapProperties = new HashMap<>();
newStateMapProperties.put(ProcessorConstants.LAST_MAX_LSN, new String(lsnUsedDuringLastLoad));
logger.debug("Persisting stateMap : " + newStateMapProperties);
if (stateMap.getVersion() == -1) {
stateManager.setState(newStateMapProperties, Scope.CLUSTER);
} else {
stateManager.replace(stateMap, newStateMapProperties, Scope.CLUSTER);
}
} catch (IOException ioException) {
logger.error("Error while persisting the state to NiFi", ioException);
throw new ProcessException("The state(LSN) couldn't be persisted", ioException);
}
The below code is used to retrieve the LSN:
final StateManager stateManager = context.getStateManager();
final StateMap stateMap;
final Map<String, String> stateMapProperties;
byte[] lastMaxLSN = null;
try {
stateMap = stateManager.getState(Scope.CLUSTER);
stateMapProperties = new HashMap<>(stateMap.toMap());
logger.debug("Retrieved the statemap : " + stateMapProperties);
lastMaxLSN = (stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN) == null
|| stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN).isEmpty()) ? null
: stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN).getBytes();
} catch (IOException ioe) {
logger.error("Couldn't load the state map", ioe);
throw new ProcessException(ioe);
}
The processor works smoothly on the local Windows 7 machine, and the LSN is stored and retrieved several times without any errors. The problem surfaced when I deployed the processor on the dev environment, which has the above-mentioned two RHEL 7 nodes. The processor executed successfully ONLY ONCE; later, it didn't throw any error but simply didn't get the expected LSN, thus not doing any work. To get a complete picture of the situation, I created a test table which stores every query that executes, along with the LSN. Again, this worked fine on the local Windows 7 machine; following are some rows inserted in the test table. Note: just focus on the LSN field, which is a binary(10) field; the rest can be ignored. On dev, the processor executes EXACTLY ONCE, either when it was deployed the first time or after clearing its state. On the second execution, it retrieves the LSN and tries to store it back in the test table, only to get a JDBC exception:
com.microsoft.sqlserver.jdbc.SQLServerException: String or binary data would be truncated.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:217)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1655)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:440)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:385)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7505)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2445)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:191)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:166)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeUpdate(SQLServerPreparedStatement.java:328)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at com.datalake.processors.SQLServerCDCProcessorCCC.logCTQueryWithParams(SQLServerCDCProcessorCCC.java:62)
at com.datalake.processors.SQLServerCDCProcessor.writeDataFromChangeTablesToFlowFiles(SQLServerCDCProcessor.java:575)
at com.datalake.processors.SQLServerCDCProcessor.onTrigger(SQLServerCDCProcessor.java:193)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1118)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:144)
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I believe that the underlying ZooKeeper is what stores this metadata. I am unable to pinpoint the root cause and the location.
***********Edit-1***********
Out of curiosity/desperation, I tried using the UTF-8 charset explicitly. While storing:
newStateMapProperties.put(ProcessorConstants.LAST_MAX_LSN,
new String(lsnUsedDuringLastLoad, StandardCharsets.UTF_8));
and retrieving the bytes:
lastMaxLSN = (stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN) == null
|| stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN).isEmpty()) ? null
: stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN).getBytes(StandardCharsets.UTF_8);
Now, I get the same exception even on the Windows machine, and the RHEL error remains the same.
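A hedged suggestion, offered as a sketch rather than a verified fix: raw LSN bytes are not valid text, so the new String(bytes)/String.getBytes() round-trip is lossy; the default charset differs between the Windows and RHEL nodes, and UTF-8 decoding replaces invalid byte sequences, which can grow the array past the binary(10) column and would explain the truncation error. Encoding the LSN to Base64 keeps the state-map value a plain ASCII string and restores the exact bytes on read:
import java.util.Base64;

// While storing: the state map holds a pure-ASCII Base64 string.
newStateMapProperties.put(ProcessorConstants.LAST_MAX_LSN,
        Base64.getEncoder().encodeToString(lsnUsedDuringLastLoad));

// While retrieving: decode back to the exact original bytes.
String stored = stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN);
byte[] lastMaxLSN = (stored == null || stored.isEmpty()) ? null
        : Base64.getDecoder().decode(stored);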
... View more
Labels:
- Apache NiFi
06-09-2017
07:20 AM
I got the answer at StackOverflow. I am able to create a log file on the local Windows machine, but the same config is not working in the Linux environment; maybe I have missed something. Below is the logback.xml used on my local machine:
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="30 seconds">
<contextListener>
<resetJUL>true</resetJUL>
</contextListener>
<appender name="APP_FILE">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'app_%d.log'.
For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<maxFileSize>100MB</maxFileSize>
<!-- keep 30 log files worth of history -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<immediateFlush>true</immediateFlush>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="USER_FILE">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-user.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-user_%d.log</fileNamePattern>
<!-- keep 30 log files worth of history -->
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="BOOTSTRAP_FILE">
<file>${org.apache.nifi.bootstrap.config.log.dir}/nifi-bootstrap.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/nifi-bootstrap_%d.log</fileNamePattern>
<!-- keep 5 log files worth of history -->
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="CONSOLE">
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<!-- Start : Added for log for custom processor -->
<appender name="SQLSERVER-CDC">
<file>${org.apache.nifi.bootstrap.config.log.dir}/sqlserver-cdc.log</file>
<rollingPolicy>
<fileNamePattern>${org.apache.nifi.bootstrap.config.log.dir}/sqlserver-cdc_%d.log</fileNamePattern>
<maxHistory>30</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<!-- End : Added for log for custom processor -->
<!-- valid logging levels: TRACE, DEBUG, INFO, WARN, ERROR -->
<logger name="org.apache.nifi" level="INFO"/>
<logger name="org.apache.nifi.processors" level="WARN"/>
<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO"/>
<logger name="org.apache.nifi.controller.repository.StandardProcessSession" level="WARN" />
<logger name="org.apache.zookeeper.ClientCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxnFactory" level="ERROR" />
<logger name="org.apache.zookeeper.server.quorum" level="ERROR" />
<logger name="org.apache.zookeeper.ZooKeeper" level="ERROR" />
<logger name="org.apache.zookeeper.server.PrepRequestProcessor" level="ERROR" />
<logger name="org.apache.calcite.runtime.CalciteException" level="OFF" />
<logger name="org.apache.curator.framework.recipes.leader.LeaderSelector" level="OFF" />
<logger name="org.apache.curator.ConnectionState" level="OFF" />
<!-- Logger for managing logging statements for nifi clusters. -->
<logger name="org.apache.nifi.cluster" level="INFO"/>
<!-- Logger for logging HTTP requests received by the web server. -->
<logger name="org.apache.nifi.server.JettyServer" level="INFO"/>
<!-- Logger for managing logging statements for jetty -->
<logger name="org.eclipse.jetty" level="INFO"/>
<!-- Suppress non-error messages due to excessive logging by class or library -->
<logger name="com.sun.jersey.spi.container.servlet.WebComponent" level="ERROR"/>
<logger name="com.sun.jersey.spi.spring" level="ERROR"/>
<logger name="org.springframework" level="ERROR"/>
<!-- Suppress non-error messages due to known warning about redundant path annotation (NIFI-574) -->
<logger name="com.sun.jersey.spi.inject.Errors" level="ERROR"/>
<!--
Logger for capturing user events. We do not want to propagate these
log events to the root logger. These messages are only sent to the
user-log appender.
-->
<logger name="org.apache.nifi.web.security" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.api.config" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.cluster.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.filter.RequestLogger" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<!--
Logger for capturing Bootstrap logs and NiFi's standard error and standard out.
-->
<logger name="org.apache.nifi.bootstrap" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<logger name="org.apache.nifi.bootstrap.Command" level="INFO" additivity="false">
<appender-ref ref="CONSOLE" />
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Out will be logged with the logger org.apache.nifi.StdOut at INFO level -->
<logger name="org.apache.nifi.StdOut" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Error will be logged with the logger org.apache.nifi.StdErr at ERROR level -->
<logger name="org.apache.nifi.StdErr" level="ERROR" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Start : Added for log for custom processor -->
<logger name="com.datalake.processors.SQLServerCDCProcessor" level="DEBUG" >
<appender-ref ref="SQLSERVER-CDC"/>
</logger>
<!-- End : Added for log for custom processor -->
<root level="info">
<appender-ref ref="APP_FILE"/>
</root>
</configuration>
... View more
06-07-2017
06:29 AM
NiFi 1.2.0 I need to create a separate log file, say customprocessor.log, besides the app.log file created by NiFi. I went through some interesting existing threads like this; however, I am unable to figure out how to make it work in the code. Following is the existing logback.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<configuration scan="true" scanPeriod="30 seconds">
<contextListener>
<resetJUL>true</resetJUL>
</contextListener>
<appender name="APP_FILE">
<file>/var/log/nifi/nifi-app.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'app_%d.log'.
For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/nifi-app_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy>
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- keep 30 log files worth of history -->
<maxHistory>3</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
<immediateFlush>true</immediateFlush>
</appender>
<appender name="USER_FILE">
<file>/var/log/nifi/nifi-user.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/nifi-user_%d.log</fileNamePattern>
<!-- keep 30 log files worth of history -->
<maxHistory>3</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="BOOTSTRAP_FILE">
<file>/var/log/nifi/nifi-bootstrap.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'user_%d.log'.
For hourly rollover, use 'user_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/nifi-bootstrap_%d.log</fileNamePattern>
<!-- keep 5 log files worth of history -->
<maxHistory>5</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<appender name="CONSOLE">
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
</appender>
<!-- valid logging levels: TRACE, DEBUG, INFO, WARN, ERROR -->
<logger name="org.apache.nifi" level="INFO"/>
<logger name="org.apache.nifi.processors" level="WARN"/>
<logger name="org.apache.nifi.processors.standard.LogAttribute" level="INFO"/>
<logger name="org.apache.nifi.controller.repository.StandardProcessSession" level="WARN" />
<logger name="org.apache.zookeeper.ClientCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxn" level="ERROR" />
<logger name="org.apache.zookeeper.server.NIOServerCnxnFactory" level="ERROR" />
<logger name="org.apache.zookeeper.server.quorum" level="ERROR" />
<logger name="org.apache.zookeeper.ZooKeeper" level="ERROR" />
<logger name="org.apache.zookeeper.server.PrepRequestProcessor" level="ERROR" />
<logger name="org.apache.calcite.runtime.CalciteException" level="OFF" />
<logger name="org.apache.curator.framework.recipes.leader.LeaderSelector" level="OFF" />
<logger name="org.apache.curator.ConnectionState" level="OFF" />
<!-- Logger for managing logging statements for nifi clusters. -->
<logger name="org.apache.nifi.cluster" level="INFO"/>
<!-- Logger for logging HTTP requests received by the web server. -->
<logger name="org.apache.nifi.server.JettyServer" level="INFO"/>
<!-- Logger for managing logging statements for jetty -->
<logger name="org.eclipse.jetty" level="INFO"/>
<!-- Suppress non-error messages due to excessive logging by class or library -->
<logger name="com.sun.jersey.spi.container.servlet.WebComponent" level="ERROR"/>
<logger name="com.sun.jersey.spi.spring" level="ERROR"/>
<logger name="org.springframework" level="ERROR"/>
<!-- Suppress non-error messages due to known warning about redundant path annotation (NIFI-574) -->
<logger name="com.sun.jersey.spi.inject.Errors" level="ERROR"/>
<!--
Logger for capturing user events. We do not want to propagate these
log events to the root logger. These messages are only sent to the
user-log appender.
-->
<logger name="org.apache.nifi.web.security" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.api.config" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.cluster.authorization" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<logger name="org.apache.nifi.web.filter.RequestLogger" level="INFO" additivity="false">
<appender-ref ref="USER_FILE"/>
</logger>
<!--
Logger for capturing Bootstrap logs and NiFi's standard error and standard out.
-->
<logger name="org.apache.nifi.bootstrap" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<logger name="org.apache.nifi.bootstrap.Command" level="INFO" additivity="false">
<appender-ref ref="CONSOLE" />
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Out will be logged with the logger org.apache.nifi.StdOut at INFO level -->
<logger name="org.apache.nifi.StdOut" level="INFO" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<!-- Everything written to NiFi's Standard Error will be logged with the logger org.apache.nifi.StdErr at ERROR level -->
<logger name="org.apache.nifi.StdErr" level="ERROR" additivity="false">
<appender-ref ref="BOOTSTRAP_FILE" />
</logger>
<root level="DEBUG">
<appender-ref ref="APP_FILE"/>
</root>
</configuration>
Now, I can add a new appender for the custom log file:
<!-- Start : Separate log file for custom processor -->
<appender name="CUSTOM_FILE">
<file>/var/log/nifi/custom-processor.log</file>
<rollingPolicy>
<!--
For daily rollover, use 'app_%d.log'.
For hourly rollover, use 'app_%d{yyyy-MM-dd_HH}.log'.
To GZIP rolled files, replace '.log' with '.log.gz'.
To ZIP rolled files, replace '.log' with '.log.zip'.
-->
<fileNamePattern>/var/log/nifi/archive/custom-processor_%d{yyyy-MM-dd_HH}.%i.log</fileNamePattern>
<timeBasedFileNamingAndTriggeringPolicy>
<maxFileSize>100MB</maxFileSize>
</timeBasedFileNamingAndTriggeringPolicy>
<!-- keep 30 log files worth of history -->
<maxHistory>3</maxHistory>
</rollingPolicy>
<encoder>
<pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
</encoder>
<immediateFlush>true</immediateFlush>
</appender>
<!-- End : Separate log file for custom processor -->
<!-- Start : Separate log file for custom processor -->
<logger name="com.nifi.CustomLog" level="DEBUG" additivity="false">
<appender-ref ref="CUSTOM_FILE" />
</logger>
<!-- End : Separate log file for custom processor -->
I have the following questions:
1. Are the entries that I am adding correct?
2. In the code, I use the following snippet to get the root logger; however, I didn't find a method/constructor to get my custom logger in the code. How shall I do that?
import org.apache.nifi.logging.ComponentLog;
...
final ComponentLog logger = getLogger();
logger.debug("...");
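A hedged sketch of one way to do this (the logger name 'com.nifi.CustomLog' matches the entry proposed above; the class name is a placeholder): getLogger() always returns the framework-managed ComponentLog, but nothing stops the processor from asking SLF4J for a logger by name, and logback routes it to the CUSTOM_FILE appender by that name:
import org.apache.nifi.processor.AbstractProcessor;
import org.apache.nifi.processor.ProcessContext;
import org.apache.nifi.processor.ProcessSession;
import org.apache.nifi.processor.exception.ProcessException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CustomProcessor extends AbstractProcessor {
    // Name must match the <logger name="com.nifi.CustomLog"> entry in logback.xml.
    private static final Logger CUSTOM_LOG = LoggerFactory.getLogger("com.nifi.CustomLog");

    @Override
    public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
        // With additivity="false", this line goes to custom-processor.log only.
        CUSTOM_LOG.debug("processing started");
    }
}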
... View more
Labels:
- Apache NiFi
06-02-2017
07:10 AM
Added the state-management.xml from both nodes, can you check? There is no 'LOCAL' scope used anywhere in the processor code 🙂
... View more
05-31-2017
11:20 AM
NiFi 1.2.0, two nodes. There is a custom processor which persists some bytes via the State API; the following is the relevant code snippet:

@Override
public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
final StateManager stateManager = context.getStateManager();
try {
StateMap stateMap = stateManager.getState(Scope.CLUSTER);
final Map<String, String> newStateMapProperties = new HashMap<>();
newStateMapProperties.put(ProcessorConstants.LAST_MAX_LSN, new String(lsnUsedDuringLastLoad));
logger.debug("Persisting stateMap : " + newStateMapProperties);
if (stateMap.getVersion() == -1) {
stateManager.setState(newStateMapProperties, Scope.CLUSTER);
} else {
stateManager.replace(stateMap, newStateMapProperties, Scope.CLUSTER);
}
} catch (IOException ioException) {
logger.error("Error while persisting the state to NiFi", ioException);
throw new ProcessException("The state(LSN) couldn't be persisted", ioException);
}
.
.
.
}
private boolean writeDataFromChangeTablesToFlowFiles(ProcessContext context, ProcessSession session) {
.
.
.
final StateManager stateManager = context.getStateManager();
final StateMap stateMap;
final Map<String, String> stateMapProperties;
.
.
.
stateMap = stateManager.getState(Scope.CLUSTER);
stateMapProperties = new HashMap<>(stateMap.toMap());
logger.debug("Retrieved the statemap : " + stateMapProperties);
lastMaxLSN = (stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN) == null
|| stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN).isEmpty()) ? null
: stateMapProperties.get(ProcessorConstants.LAST_MAX_LSN).getBytes();
}

On my local machine, the processor runs fine for days; the data is picked up whenever it's inserted/updated/deleted. When the nar was first deployed, the processor worked fine and fetched the data it was supposed to. After some time (probably the second execution itself), it simply stopped pulling data. I checked the state of the processor and I am unsure whether it looks correct - the LSN bytes should be stored as a single, uniform value accessible to all nodes. Now, if I stop and start the processor and clear the state, it starts fetching the data again. What is the logical mistake that I have committed?

**********Edit-1*********

I configured the processor to run only on the primary node, yet I have the same issue.

**********Edit-2**********

The state-management.xml on l4513t :

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<stateManagement>
<!--
State Provider that stores state locally in a configurable directory. This Provider requires the following properties:
Directory - the directory to store components' state in. If the directory being used is a sub-directory of the NiFi installation, it
is important that the directory be copied over to the new version when upgrading NiFi.
Always Sync - If set to true, any change to the repository will be synchronized to the disk, meaning that NiFi will ask the operating system not to cache the information. This is very
expensive and can significantly reduce NiFi performance. However, if it is false, there could be the potential for data loss if either there is a sudden power loss or the
operating system crashes. The default value is false.
Partitions - The number of partitions.
Checkpoint Interval - The amount of time between checkpoints.
-->
<local-provider>
<id>local-provider</id>
<class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
<property name="Directory">./state/local</property>
<property name="Always Sync">false</property>
<property name="Partitions">16</property>
<property name="Checkpoint Interval">2 mins</property>
</local-provider>
<!--
State Provider that is used to store state in ZooKeeper. This Provider requires the following properties:
Root Node - the root node in ZooKeeper where state should be stored. The default is '/nifi', but it is advisable to change this to a different value if not using
the embedded ZooKeeper server and if multiple NiFi instances may all be using the same ZooKeeper Server.
Connect String - A comma-separated list of host:port pairs to connect to ZooKeeper. For example, myhost.mydomain:2181,host2.mydomain:5555,host3:6666
Session Timeout - Specifies how long this instance of NiFi is allowed to be disconnected from ZooKeeper before creating a new ZooKeeper Session. Default value is "30 seconds"
Access Control - Specifies which Access Controls will be applied to the ZooKeeper ZNodes that are created by this State Provider. This value must be set to one of:
- Open : ZNodes will be open to any ZooKeeper client.
- CreatorOnly : ZNodes will be accessible only by the creator. The creator will have full access to create children, read, write, delete, and administer the ZNodes.
This option is available only if access to ZooKeeper is secured via Kerberos or if a Username and Password are set.
<property name="Connect String">l4513t.sss.se.com:2181,l4514t.sss.se.com:2181</property>
-->
<cluster-provider>
<id>zk-provider</id>
<class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
<property name="Connect String">l4373t.sss.se.com:2181,l4283t.sss.se.com:2181,l4284t.sss.se.com:2181</property>
<property name="Root Node">/nifi</property>
<property name="Session Timeout">10 seconds</property>
<property name="Access Control">Open</property>
</cluster-provider>
</stateManagement> The state-management.xml on l4514t : <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!--
This file provides a mechanism for defining and configuring the State Providers
that should be used for storing state locally and across a NiFi cluster. In order
to use a specific provider, it must be configured here and its identifier
must be specified in the nifi.properties file.
-->
<stateManagement>
<!--
State Provider that stores state locally in a configurable directory. This Provider requires the following properties:
Directory - the directory to store components' state in. If the directory being used is a sub-directory of the NiFi installation, it
is important that the directory be copied over to the new version when upgrading NiFi.
Always Sync - If set to true, any change to the repository will be synchronized to the disk, meaning that NiFi will ask the operating system not to cache the information. This is very
expensive and can significantly reduce NiFi performance. However, if it is false, there could be the potential for data loss if either there is a sudden power loss or the
operating system crashes. The default value is false.
Partitions - The number of partitions.
Checkpoint Interval - The amount of time between checkpoints.
-->
<local-provider>
<id>local-provider</id>
<class>org.apache.nifi.controller.state.providers.local.WriteAheadLocalStateProvider</class>
<property name="Directory">./state/local</property>
<property name="Always Sync">false</property>
<property name="Partitions">16</property>
<property name="Checkpoint Interval">2 mins</property>
</local-provider>
<!--
State Provider that is used to store state in ZooKeeper. This Provider requires the following properties:
Root Node - the root node in ZooKeeper where state should be stored. The default is '/nifi', but it is advisable to change this to a different value if not using
the embedded ZooKeeper server and if multiple NiFi instances may all be using the same ZooKeeper Server.
Connect String - A comma-separated list of host:port pairs to connect to ZooKeeper. For example, myhost.mydomain:2181,host2.mydomain:5555,host3:6666
Session Timeout - Specifies how long this instance of NiFi is allowed to be disconnected from ZooKeeper before creating a new ZooKeeper Session. Default value is "30 seconds"
Access Control - Specifies which Access Controls will be applied to the ZooKeeper ZNodes that are created by this State Provider. This value must be set to one of:
- Open : ZNodes will be open to any ZooKeeper client.
- CreatorOnly : ZNodes will be accessible only by the creator. The creator will have full access to create children, read, write, delete, and administer the ZNodes.
This option is available only if access to ZooKeeper is secured via Kerberos or if a Username and Password are set.
<property name="Connect String">l4513t.sss.se.com:2181,l4514t.sss.se.com:2181</property>
-->
<cluster-provider>
<id>zk-provider</id>
<class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
<property name="Connect String">l4373t.sss.se.com:2181,l4283t.sss.se.com:2181,l4284t.sss.se.com:2181</property>
<property name="Root Node">/nifi</property>
<property name="Session Timeout">10 seconds</property>
<property name="Access Control">Open</property>
</cluster-provider>
</stateManagement>
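For what it's worth, a minimal sketch of the persist step with the result of replace() checked - an assumption worth verifying, since StateManager.replace() returns a boolean and fails silently, storing nothing, when the supplied StateMap is no longer the current version (for example when the other node wrote state in between):

final boolean replaced = stateManager.replace(stateMap, newStateMapProperties, Scope.CLUSTER);
if (!replaced) {
    // The stored version changed between getState() and replace(): nothing was written,
    // so the new LSN is silently dropped unless this case is retried or at least logged
    logger.warn("replace() returned false - state version changed between read and write");
}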
... View more
Labels:
- Labels:
-
Apache NiFi
05-26-2017
02:02 PM
Yes, your guess is right! There was an upgrade from 1.1. The tip seems crucial, I will try that...
... View more
05-26-2017
09:04 AM
1 Kudo
NiFi 1.2.0, two nodes, kerberized. In the previous version, the custom processor executed properly. The process I follow for deployment:

1. Place the nar file in the lib directory of both nodes (l4513t.sss.se.com and l4514t.sss.se.com, NiFi installations)
2. Restart one NiFi instance at a time

I am getting issues when I begin with one of the NiFi nodes (l4513t.sss.se.com). First, I got the following error when I placed the nar file in lib and tried to restart NiFi:

Failed to connect node to cluster because local flow is different than cluster flow.
org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow.
at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:934)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:515)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:790)
at org.apache.nifi.NiFi.<init>(NiFi.java:160)
at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: org.apache.nifi.controller.UninheritableFlowException: Proposed configuration is not inheritable by the flow controller because of flow differences: Found difference in Flows:
Local Fingerprint: 91910-015b-1000-9677-68b017463306com.datalake.processors.SQLServerCDCProcessorNO_VALUEdefaultnifi-NiFiCDCPoC-narunversionedDatabase Connection Pooling Service=fe153649-5193-1a68-ffff-ffffc37686
Cluster Fingerprint: 91910-015b-1000-9677-68b017463306com.datalake.processors.SQLServerCDCProcessorNO_VALUEdefaultunknownunversionedDatabase Connection Pooling Service=fe153649-5193-1a68-ffff-ffffc37686c6containerD
at org.apache.nifi.controller.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:259)
at org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1544)
at org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.load(StandardXMLFlowConfigurationDAO.java:84)
at org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:720)
at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:909)
... 4 common frames omitted
2017-05-26 09:05:06,199 INFO [main] o.a.n.c.c.node.NodeClusterCoordinator l4513t.sss.se.com:9443 requested disconnection from cluster due to org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow.
2017-05-26 09:05:06,199 INFO [main] o.a.n.c.c.node.NodeClusterCoordinator Status of l4513t.sss.se.com:9443 changed from NodeConnectionStatus[nodeId=l4513t.sss.se.com:9443, state=CONNECTING, updateId=368] to NodeConnectionStatus[nodeId=l4513t.sss.se.com:9443, state=DISCONNECTED, Disconnect Code=Node's Flow did not Match Cluster Flow, Disconnect Reason=org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow., updateId=368]
2017-05-26 09:05:06,395 ERROR [main] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for l4513t.sss.se.com:9443 -- Node disconnected from cluster due to org.apache.nifi.controller.UninheritableFlowException: Failed to connect node to cluster because local flow is different than cluster flow.
2017-05-26 09:05:06,395 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager Cannot unregister Leader Election Role 'Primary Node' becuase that role is not registered
2017-05-26 09:05:06,395 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.lang.IllegalStateException: Already closed or has not been started
at com.google.common.base.Preconditions.checkState(Preconditions.java:173)
at org.apache.curator.framework.recipes.leader.LeaderSelector.close(LeaderSelector.java:270)
at org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager.unregister(CuratorLeaderElectionManager.java:151)
at org.apache.nifi.controller.FlowController.setClustered(FlowController.java:3667)
at org.apache.nifi.controller.StandardFlowService.handleConnectionFailure(StandardFlowService.java:554)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:518)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:790)
at org.apache.nifi.NiFi.<init>(NiFi.java:160)
at org.apache.nifi.NiFi.main(NiFi.java:267)

I flipped through several existing threads, deleted the flow.xml.gz from that node and attempted a restart; now I am getting the following:

2017-05-26 09:20:30,803 INFO [Process Cluster Protocol Request-1] o.a.n.c.c.node.NodeClusterCoordinator Status of l4513t.sss.se.com:9443 changed from null to NodeConnectionStatus[nodeId=l4513t.sss.se.com:9443, state=CONNECTING, updateId=370]
2017-05-26 09:20:30,811 INFO [Process Cluster Protocol Request-1] o.a.n.c.p.impl.SocketProtocolListener Finished processing request 25e40b0f-b0ab-4f92-9e7f-440abb116999 (type=NODE_STATUS_CHANGE, length=1071 bytes) from l4514t.sss.se.com in 117 millis
2017-05-26 09:20:30,914 INFO [main] o.a.n.c.c.node.NodeClusterCoordinator Resetting cluster node statuses from {l4513t.sss.se.com:9443=NodeConnectionStatus[nodeId=l4513t.sss.se.com:9443, state=CONNECTING, updateId=370]} to {l4514t.sss.se.com:9443=NodeConnectionStatus[nodeId=l4514t.sss.se.com:9443, state=CONNECTED, updateId=360], l4513t.sss.se.com:9443=NodeConnectionStatus[nodeId=l4513t.sss.se.com:9443, state=CONNECTING, updateId=370]}
2017-05-26 09:20:31,348 ERROR [main] o.a.nifi.controller.StandardFlowService Failed to load flow from cluster due to: org.apache.nifi.controller.MissingBundleException: Failed to connect node to cluster because cluster flow contains bundles that do not exist on the current node
org.apache.nifi.controller.MissingBundleException: Failed to connect node to cluster because cluster flow contains bundles that do not exist on the current node
at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:936)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:515)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:790)
at org.apache.nifi.NiFi.<init>(NiFi.java:160)
at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: org.apache.nifi.controller.MissingBundleException: com.datalake.processors.SQLServerCDCProcessor from default:unknown:unversioned is not known to this NiFi instance.
at org.apache.nifi.controller.StandardFlowSynchronizer.checkBundleCompatibility(StandardFlowSynchronizer.java:445)
at org.apache.nifi.controller.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:253)
at org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1544)
at org.apache.nifi.persistence.StandardXMLFlowConfigurationDAO.load(StandardXMLFlowConfigurationDAO.java:84)
at org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:720)
at org.apache.nifi.controller.StandardFlowService.loadFromConnectionResponse(StandardFlowService.java:909)
... 4 common frames omitted
Caused by: java.lang.IllegalStateException: com.datalake.processors.SQLServerCDCProcessor from default:unknown:unversioned is not known to this NiFi instance.
at org.apache.nifi.util.BundleUtils.findCompatibleBundle(BundleUtils.java:55)
at org.apache.nifi.util.BundleUtils.getBundle(BundleUtils.java:98)
at org.apache.nifi.controller.StandardFlowSynchronizer.checkBundleCompatibility(StandardFlowSynchronizer.java:443)
... 9 common frames omitted
2017-05-26 09:20:31,348 INFO [main] o.a.n.c.c.node.NodeClusterCoordinator l4513t.sss.se.com:9443 requested disconnection from cluster due to org.apache.nifi.controller.MissingBundleException: Failed to connect node to cluster because cluster flow contains bundles that do not exist on the current node
2017-05-26 09:20:31,348 INFO [main] o.a.n.c.c.node.NodeClusterCoordinator Status of l4513t.sss.se.com:9443 changed from NodeConnectionStatus[nodeId=l4513t.sss.se.com:9443, state=CONNECTING, updateId=370] to NodeConnectionStatus[nodeId=l4513t.sss.se.com:9443, state=DISCONNECTED, Disconnect Code=Node was missing bundle used by Cluster Flow, Disconnect Reason=org.apache.nifi.controller.MissingBundleException: Failed to connect node to cluster because cluster flow contains bundles that do not exist on the current node, updateId=370]
2017-05-26 09:20:31,448 INFO [main] o.a.n.c.c.node.NodeClusterCoordinator Event Reported for l4513t.sss.se.com:9443 -- Node disconnected from cluster due to org.apache.nifi.controller.MissingBundleException: Failed to connect node to cluster because cluster flow contains bundles that do not exist on the current node
2017-05-26 09:20:31,448 INFO [main] o.a.n.c.l.e.CuratorLeaderElectionManager Cannot unregister Leader Election Role 'Primary Node' becuase that role is not registered
2017-05-26 09:20:31,448 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
java.lang.IllegalStateException: Already closed or has not been started
at com.google.common.base.Preconditions.checkState(Preconditions.java:173)
at org.apache.curator.framework.recipes.leader.LeaderSelector.close(LeaderSelector.java:270)
at org.apache.nifi.controller.leader.election.CuratorLeaderElectionManager.unregister(CuratorLeaderElectionManager.java:151)
at org.apache.nifi.controller.FlowController.setClustered(FlowController.java:3667)
at org.apache.nifi.controller.StandardFlowService.handleConnectionFailure(StandardFlowService.java:554)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:518)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:790)
at org.apache.nifi.NiFi.<init>(NiFi.java:160)
at org.apache.nifi.NiFi.main(NiFi.java:267)
2017-05-26 09:20:31,450 INFO [Thread-1] org.apache.nifi.NiFi Initiating shutdown of Jetty web server...
2017-05-26 09:20:31,456 INFO [Thread-1] o.eclipse.jetty.server.AbstractConnector Stopped ServerConnector@73bd146c{SSL,[ssl, http/1.1]}{l4513t.sss.se.com:9443}
2017-05-26 09:20:31,456 INFO [Thread-1] org.eclipse.jetty.server.session Stopped scavenging
2017-05-26 09:20:31,457 DEBUG [Thread-1] org.apache.jasper.servlet.JspServlet JspServlet.destroy()

What is it that I am missing?
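For context, an assumption about what the differing fingerprints refer to: in NiFi 1.2.0 each component entry in flow.xml.gz carries the coordinates of the bundle (nar) that provides it, roughly as sketched below (group and version values are illustrative; 'default:unknown:unversioned' in the error means the cluster flow has no resolvable coordinates for the custom processor):

<processor>
    <class>com.datalake.processors.SQLServerCDCProcessor</class>
    <bundle>
        <!-- these must match a nar actually present on every node -->
        <group>com.datalake</group>
        <artifact>nifi-NiFiCDCPoC-nar</artifact>
        <version>1.0-SNAPSHOT</version>
    </bundle>
    ...
</processor>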
... View more
Labels:
- Labels:
-
Apache NiFi
04-26-2017
12:26 PM
HDP-2.5.3.0. A custom processor uses the State API to persist some data; this thread has all the details. The code snippet that uses the State API:

try {
stateMap = stateManager.getState(Scope.CLUSTER);
stateMapProperties = new HashMap<>(stateMap.toMap());
logger.debug("Retrieved the statemap : " + stateMapProperties);
...
...
...
} catch (IOException ioe) {
logger.error("Couldn't load the state map", ioe);
throw new ProcessException(ioe);
}

The processor works fine on my local machine's NiFi, but when I put it on our (kerberized) dev cluster which has 2 NiFi nodes, it fails with the following error:

java.io.IOException: Failed to obtain value from ZooKeeper for component with ID d7fff389-015a-1000-ffff-ffffd04d1279 with exception code NOAUTH
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:420) ~[na:na]
at org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63) ~[na:na]
at com.datalake.processors.SQLServerCDCProcessor.getDataFromChangeTables(SQLServerCDCProcessor.java:480) [nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at com.datalake.processors.SQLServerCDCProcessor.onTrigger(SQLServerCDCProcessor.java:191) [nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.2.jar:1.1.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_112]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
Caused by: org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /nifi/components/d7fff389-015a-1000-ffff-ffffd04d1279
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113) ~[na:na]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[na:na]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155) ~[na:na]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184) ~[na:na]
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403) ~[na:na]
.
.
.
.
.
.
.
.
.
org.apache.nifi.processor.exception.ProcessException: java.io.IOException: Failed to obtain value from ZooKeeper for component with ID d7fff389-015a-1000-ffff-ffffd04d1279 with exception code NOAUTH
at com.datalake.processors.SQLServerCDCProcessor.getDataFromChangeTables(SQLServerCDCProcessor.java:493) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at com.datalake.processors.SQLServerCDCProcessor.onTrigger(SQLServerCDCProcessor.java:191) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.2.jar:1.1.2]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.2.jar:1.1.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_112]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_112]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_112]
Caused by: java.io.IOException: Failed to obtain value from ZooKeeper for component with ID d7fff389-015a-1000-ffff-ffffd04d1279 with exception code NOAUTH
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:420) ~[na:na]
at org.apache.nifi.controller.state.StandardStateManager.getState(StandardStateManager.java:63) ~[na:na]
at com.datalake.processors.SQLServerCDCProcessor.getDataFromChangeTables(SQLServerCDCProcessor.java:480) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
... 13 common frames omitted
Caused by: org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth for /nifi/components/d7fff389-015a-1000-ffff-ffffd04d1279
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113) ~[na:na]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[na:na]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155) ~[na:na]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184) ~[na:na]
at org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider.getState(ZooKeeperStateProvider.java:403) ~[na:na]
... 15 common frames omitted

Attached is a bit more of the NiFi app log: statemap-zk-error.txt

Following are the entries in the state-management.xml:

<cluster-provider>
<id>zk-provider</id>
<class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
<property name="Connect String">l4373t.sss.se.com:2181,l4283t.sss.se.com:2181,l4284t.sss.se.com:2181</property>
<property name="Root Node">/nifi</property>
<property name="Session Timeout">10 seconds</property>
<property name="Access Control">CreatorOnly</property>
</cluster-provider>
</stateManagement>

Any ideas?

*****Edit-2*****

Providing the existing kafka-jaas.conf:

bash-4.2$ cat kafka-jaas.conf
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
renewTicket=true
useTicketCache=true
serviceName="kafka"
keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
useTicketCache=true
renewTicket=true
serviceName="kafka"
keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
useTicketCache=true
serviceName="kafka"
keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};

*****Edit-1*****

As per the NiFi state management doc, I added the ZooKeeper JAAS configuration; still the issue persists.

bash-4.2$ cat zookeeper-jaas.conf
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
storeKey=true
useTicketCache=true
principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};
The entry (as 'java.arg.16') in the bootstrap.conf file:

bash-4.2$ vi bootstrap.conf
#
# Java command to use when running NiFi
java=java
# Username to use when running NiFi. This value will be ignored on Windows.
run.as=
# Configure where NiFi's lib and conf directories live
lib.dir=./lib
conf.dir=./conf
# How long to wait after telling NiFi to shutdown before explicitly killing the Process
graceful.shutdown.seconds=20
# Disable JSR 199 so that we can use JSP's without running a JDK
java.arg.1=-Dorg.apache.jasper.compiler.disablejsr199=true
# JVM memory settings
java.arg.2=-Xms1024m
java.arg.3=-Xmx2048m
# Enable Remote Debugging
#java.arg.debug=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000
java.arg.4=-Djava.net.preferIPv4Stack=true
# allowRestrictedHeaders is required for Cluster/Node communications to work properly
java.arg.5=-Dsun.net.http.allowRestrictedHeaders=true
java.arg.6=-Djava.protocol.handler.pkgs=sun.net.www.protocol
java.arg.7=-Dorg.apache.nifi.bootstrap.config.log.dir=/var/log/nifi
# The G1GC is still considered experimental but has proven to be very advantageous in providing great
# performance without significant "stop-the-world" delays.
java.arg.13=-XX:+UseG1GC
#Set headless mode by default
java.arg.14=-Djava.awt.headless=true
java.arg.15=-Djava.security.auth.login.config=/usr/local/nifi/conf/kafka-jaas.conf
java.arg.16=-Djava.security.auth.login.config=/usr/local/nifi/conf/zookeeper-jaas.conf
# Master key in hexadecimal format for encrypted sensitive configuration values
nifi.bootstrap.sensitive.key=
###
# Notification Services for notifying interested parties when NiFi is stopped, started, dies
###
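One detail in the above that may be worth checking (an assumption, not a confirmed diagnosis): java.arg.15 and java.arg.16 both set -Djava.security.auth.login.config, and the JVM keeps a single value for a repeated system property (typically the last definition), so one of the two JAAS files is effectively ignored. A common workaround is a single JAAS file holding all the login contexts, sketched below (combined-jaas.conf is a hypothetical name; keytab/principal values copied from the files above):

bash-4.2$ cat combined-jaas.conf
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    useTicketCache=true
    renewTicket=true
    serviceName="kafka"
    keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
    principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=true
    keyTab="/usr/local/nifi/keys/nifi_l4513t.sss.se.com.keytab"
    principal="nifi/l4513t.sss.se.com@GLOBAL.SCD.COM";
};

with a single bootstrap.conf entry pointing at it, e.g. java.arg.15=-Djava.security.auth.login.config=/usr/local/nifi/conf/combined-jaas.conf. The Client context is the one ZooKeeper's client library looks up by default, so it is the one the ZooKeeperStateProvider authenticates with.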
... View more
Labels:
- Labels:
-
Apache NiFi
03-23-2017
08:13 AM
Did what - the configuration or ... ?
The original question has a link to another question that has the config details.
... View more
03-20-2017
11:41 AM
NiFi 1.1.1. I am trying to persist a byte[] using the State Manager:

private byte[] lsnUsedDuringLastLoad;
@Override
public void onTrigger(final ProcessContext context,
final ProcessSession session) throws ProcessException {
...
...
...
final StateManager stateManager = context.getStateManager();
try {
StateMap stateMap = stateManager.getState(Scope.CLUSTER);
final Map<String, String> newStateMapProperties = new HashMap<>();
newStateMapProperties.put(ProcessorConstants.LAST_MAX_LSN,
new String(lsnUsedDuringLastLoad));
logger.debug("Persisting stateMap : "
+ newStateMapProperties);
stateManager.replace(stateMap, newStateMapProperties,
Scope.CLUSTER);
} catch (IOException ioException) {
logger.error("Error while persisting the state to NiFi",
ioException);
throw new ProcessException(
"The state(LSN) couldn't be persisted", ioException);
}
...
...
...
}

I don't get any exception or even an error log entry; the processor continues to run. The following load code always returns a null value ("Retrieved the statemap : {}") for the persisted field:

try {
stateMap = stateManager.getState(Scope.CLUSTER);
stateMapProperties = new HashMap<>(stateMap.toMap());
logger.debug("Retrieved the statemap : "+stateMapProperties);
lastMaxLSN = (stateMapProperties
.get(ProcessorConstants.LAST_MAX_LSN) == null || stateMapProperties
.get(ProcessorConstants.LAST_MAX_LSN).isEmpty()) ? null
: stateMapProperties.get(
ProcessorConstants.LAST_MAX_LSN).getBytes();
logger.debug("Attempted to load the previous lsn from NiFi state : "
+ lastMaxLSN);
} catch (IOException ioe) {
logger.error("Couldn't load the state map", ioe);
throw new ProcessException(ioe);
}
I am wondering if the ZK is at fault or have I missed something while using the State Map !
... View more
Labels:
- Labels:
-
Apache NiFi
03-20-2017
10:56 AM
I have circumvented the issue by using a SELECT instead of a CallableStatement 😞
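Presumably something along these lines - a sketch of that workaround, reading the function's return value back as a result-set column instead of an OUT parameter (connection and maxLSN as in the original getMaxLSN snippet; sys.fn_cdc_get_max_lsn() returns binary(10)):

// Read the function's return value through a plain query instead of an OUT parameter
final String query = "SELECT sys.fn_cdc_get_max_lsn() AS max_lsn";
try (final PreparedStatement stmt = connection.prepareStatement(query);
     final ResultSet rs = stmt.executeQuery()) {
    if (rs.next()) {
        maxLSN = rs.getBytes("max_lsn");
    }
}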
... View more
03-17-2017
03:57 PM
NiFi 1.1.1, tested on both Windows 7 and RHEL 7. The background thread can be found here. I have created a DBCPConnectionPool controller service pointing to a SQL Server db; I am able to fetch data from a table and write it to the local disk (ExecuteSQL -> ConvertAvroToJSON -> PutFile). The challenge arises when I use the pool as a property in my custom processor. In the processor's code, I need to invoke a function in the db, but this lands in a SQLException pointing to the JDBC driver. Note that the same driver functions properly in standalone Java code (provided in the background thread to avoid cluttering this post) and I get the return value from the function.

Process or SQL exception in <configure logger template to pick the code location>
2017-03-17 09:25:30,717 ERROR [Timer-Driven Process Thread-6] c.s.d.processors.SQLServerCDCProcessor
org.apache.nifi.processor.exception.ProcessException: Coudln't retrieve the max lsn for the db test
at com.datalake.processors.SQLServerCDCProcessor$SQLServerCDCUtils.getMaxLSN(SQLServerCDCProcessor.java:692) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at com.datalake.processors.SQLServerCDCProcessor.getChangedTableQueries(SQLServerCDCProcessor.java:602) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at com.datalake.processors.SQLServerCDCProcessor.onTrigger(SQLServerCDCProcessor.java:249) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.1.jar:1.1.1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_71]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_71]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_71]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_71]
Caused by: java.sql.SQLFeatureNotSupportedException: registerOutParameter not implemented
at java.sql.CallableStatement.registerOutParameter(CallableStatement.java:2613) ~[na:1.8.0_71]
at com.datalake.processors.SQLServerCDCProcessor$SQLServerCDCUtils.getMaxLSN(SQLServerCDCProcessor.java:677) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
... 14 common frames omitted

I suspect that the controller service is not configured properly - it can execute SELECT queries, but when code invokes a function, it throws an exception. What am I missing?
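One possibility worth ruling out (an assumption based on the stack trace, where the exception is thrown by java.sql.CallableStatement itself rather than by the Microsoft driver): registerOutParameter(int, SQLType) is a JDBC 4.2 default interface method that throws SQLFeatureNotSupportedException unless the driver - or, here, the pool's statement wrapper - overrides it. The older int-based overload is implemented by virtually every driver and wrapper, so a one-line variant to try:

// Pre-JDBC-4.2 overload with a java.sql.Types constant instead of java.sql.JDBCType
cstmt.registerOutParameter(1, java.sql.Types.BINARY);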
... View more
Labels:
- Labels:
-
Apache NiFi
03-15-2017
08:22 AM
Yeah, I am using a DBCPConnectionPool to get the Connection to the SQL Server database. I tested a sample flow using it and could retrieve data from a table and write it to a local file (PutFile processor). Can you check:

1. The link to the previous post mentioned in the original question - it has screenshots. I have created a DBCPConnectionPool controller service to a SQL Server db and am trying to use it as a property in my custom processor.
2. 'Edit-1' in the original question
... View more
03-13-2017
02:12 PM
NiFi 1.1.1. Microsoft seems to have released a JDBC driver recently. I am using the sqljdbc42.jar (downloaded using that link) in standalone code, and have also added the same to the NiFi lib. In a stand-alone Java class, the following code (reading the return value of a SQL Server built-in function) works fine:

public byte[] getMaxLSN(Connection connection, String containerDB) {
String dbMaxLSN = "{? = CALL sys.fn_cdc_get_max_lsn()}";
byte[] maxLSN = null;
try (final CallableStatement cstmt = connection.prepareCall(dbMaxLSN);) {
cstmt.registerOutParameter(1, java.sql.JDBCType.BINARY);
cstmt.execute();
if (cstmt.getBytes(1) == null || cstmt.getBytes(1).length <= 0) {
System.out.println("Coudln't retrieve the max lsn for the db "
+ containerDB);
} else {
maxLSN = cstmt.getBytes(1);
}
} catch (SQLException sqlException) {
System.out.println("sqlException !!!");
sqlException.printStackTrace();
}
return maxLSN;
}

This previous thread has the processor information. I added the same code to my custom processor, but when I start the processor, it fails saying the feature is not supported by the driver (full log below, after Edit-1).

*****Edit-1*****

Some questions about the ways to include JDBC drivers:

1. I want to avoid adding the JDBC jar to the NiFi lib and instead bundle it in the nar file itself. Is this possible, and is it the standard/recommended way? (See the pom sketch at the end of this post.)
2. By looking at the code of standard processors like ExecuteSQL, I couldn't establish how these processors use the JDBC drivers - any pointers?
3. Is storing the absolute path of the JDBC driver in the DBCPConnectionPool controller service a standard way?

2017-03-13 14:09:39,705 ERROR [Timer-Driven Process Thread-8] c.s.d.processors.SQLServerCDCProcessor SQLServerCDCProcessor[id=c7c8f0a8-015a-1000-71e1-09ab42e46c55] Coudln't retrieve the max lsn for the db test
2017-03-13 14:09:39,706 ERROR [Timer-Driven Process Thread-8] c.s.d.processors.SQLServerCDCProcessor
java.sql.SQLFeatureNotSupportedException: registerOutParameter not implemented
at java.sql.CallableStatement.registerOutParameter(CallableStatement.java:2613) ~[na:1.8.0_71]
at com.datalake.processors.SQLServerCDCProcessor$SQLServerCDCUtils.getMaxLSN(SQLServerCDCProcessor.java:677) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at com.datalake.processors.SQLServerCDCProcessor.getChangedTableQueries(SQLServerCDCProcessor.java:602) [nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.1.jar:1.1.1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_71]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_71]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_71]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_71]
2017-03-13 14:09:39,714 ERROR [Timer-Driven Process Thread-8] c.s.d.processors.SQLServerCDCProcessor SQLServerCDCProcessor[id=c7c8f0a8-015a-1000-71e1-09ab42e46c55] Process or SQL exception in <configure logger template to pick the code location>
2017-03-13 14:09:39,715 ERROR [Timer-Driven Process Thread-8] c.s.d.processors.SQLServerCDCProcessor
org.apache.nifi.processor.exception.ProcessException: Coudln't retrieve the max lsn for the db test
at com.datalake.processors.SQLServerCDCProcessor$SQLServerCDCUtils.getMaxLSN(SQLServerCDCProcessor.java:692) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at com.datalake.processors.SQLServerCDCProcessor.getChangedTableQueries(SQLServerCDCProcessor.java:602) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at com.datalake.processors.SQLServerCDCProcessor.onTrigger(SQLServerCDCProcessor.java:249) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1099) [nifi-framework-core-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:136) [nifi-framework-core-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) [nifi-framework-core-1.1.1.jar:1.1.1]
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:132) [nifi-framework-core-1.1.1.jar:1.1.1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_71]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [na:1.8.0_71]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [na:1.8.0_71]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [na:1.8.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_71]
Caused by: java.sql.SQLFeatureNotSupportedException: registerOutParameter not implemented
at java.sql.CallableStatement.registerOutParameter(CallableStatement.java:2613) ~[na:1.8.0_71]
at com.datalake.processors.SQLServerCDCProcessor$SQLServerCDCUtils.getMaxLSN(SQLServerCDCProcessor.java:677) ~[nifi-NiFiCDCPoC-processors-1.0-SNAPSHOT.jar:1.0-SNAPSHOT]
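Regarding question 1 in Edit-1 above, a sketch of how a driver jar is commonly bundled into the nar instead of being dropped into NiFi's lib - assuming the driver is available in a Maven repository (the coordinates and version below are illustrative; sqljdbc42.jar may first need to be installed into a local repository):

<!-- In the processors module's pom.xml: compile/runtime-scoped dependencies
     are packaged into the nar by the nifi-nar-maven-plugin -->
<dependency>
    <groupId>com.microsoft.sqlserver</groupId>
    <artifactId>sqljdbc42</artifactId>
    <version>6.0</version>
</dependency>

The DBCPConnectionPool's driver-location property, by contrast, loads the jar from an absolute path at runtime, which is also a common approach when the driver cannot be redistributed.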
... View more
- Tags:
- NiFi
- nifi-processor
Labels:
- Labels:
-
Apache NiFi