Member since: 09-21-2018
Posts: 18
Kudos Received: 0
Solutions: 0
12-22-2019
10:53 PM
I was trying to stream rsyslog log data to Apache Metron using the ASA parser. The log looks like the sample below:
2019-12-20T07:06:41-05:00 ab TESTING: Fri 20 Dec 2019 07:06:41 AM EST
2019-12-20T07:06:41-05:00 ab rsyslogd: action 'action-13-builtin:omfwd' resumed (module 'builtin:omfwd') [v8.1911.0 try https://www.rsyslog.com/e/2359 ]
2019-12-20T07:08:04-05:00 ab TESTING: Fri 20 Dec 2019 07:08:04 AM EST
2019-12-20T07:08:05-05:00 ab TESTING: Fri 20 Dec 2019 07:08:05 AM EST
2019-12-20T07:08:06-05:00 ab TESTING: Fri 20 Dec 2019 07:08:06 AM EST
2019-12-20T07:08:06-05:00 ab TESTING: Fri 20 Dec 2019 07:08:06 AM EST
2019-12-20T07:08:08-05:00 ab TESTING: Fri 20 Dec 2019 07:08:08 AM EST
2019-12-20T07:08:08-05:00 ab TESTING: Fri 20 Dec 2019 07:08:08 AM EST
2019-12-20T07:08:09-05:00 ab TESTING: Fri 20 Dec 2019 07:08:09 AM EST
2019-12-20T07:08:09-05:00 ab TESTING: Fri 20 Dec 2019 07:08:09 AM EST
2019-12-20T07:09:01-05:00 ab CRON[3174]: pam_unix(cron:session): session opened for user root by (uid=0)
2019-12-20T07:09:01-05:00 ab CRON[3175]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
2019-12-20T07:09:01-05:00 ab CRON[3174]: pam_unix(cron:session): session closed for user root
2019-12-20T07:09:01-05:00 ab systemd[1]: Starting Clean php session files...
2019-12-20T07:09:01-05:00 ab systemd[1]: phpsessionclean.service: Succeeded.
2019-12-20T07:09:01-05:00 ab systemd[1]: Started Clean php session files.
2019-12-20T07:10:04-05:00 ab TESTING: Fri 20 Dec 2019 07:10:04 AM EST
2019-12-20T07:10:05-05:00 ab TESTING: Fri 20 Dec 2019 07:10:05 AM EST
2019-12-20T07:10:05-05:00 ab TESTING: Fri 20 Dec 2019 07:10:05 AM EST
2019-12-20T07:10:06-05:00 ab TESTING: Fri 20 Dec 2019 07:10:06 AM EST
2019-12-20T07:10:07-05:00 ab TESTING: Fri 20 Dec 2019 07:10:07 AM EST
2019-12-20T07:10:07-05:00 ab TESTING: Fri 20 Dec 2019 07:10:07 AM EST
2019-12-20T07:10:08-05:00 ab TESTING: Fri 20 Dec 2019 07:10:08 AM EST
2019-12-20T07:10:08-05:00 ab TESTING: Fri 20 Dec 2019 07:10:08 AM EST
2019-12-20T07:10:09-05:00 ab TESTING: Fri 20 Dec 2019 07:10:09 AM EST
2019-12-20T07:10:09-05:00 ab TESTING: Fri 20 Dec 2019 07:10:09 AM EST
2019-12-20T07:10:10-05:00 ab TESTING: Fri 20 Dec 2019 07:10:10 AM EST
2019-12-20T07:10:10-05:00 ab TESTING: Fri 20 Dec 2019 07:10:10 AM EST
2019-12-20T07:10:10-05:00 ab TESTING: Fri 20 Dec 2019 07:10:10 AM EST
2019-12-20T07:10:11-05:00 ab TESTING: Fri 20 Dec 2019 07:10:11 AM EST
2019-12-20T07:10:11-05:00 ab TESTING: Fri 20 Dec 2019 07:10:11 AM EST
2019-12-20T07:10:11-05:00 ab TESTING: Fri 20 Dec 2019 07:10:11 AM EST
2019-12-20T07:10:12-05:00 ab TESTING: Fri 20 Dec 2019 07:10:12 AM EST
2019-12-20T07:10:12-05:00 ab TESTING: Fri 20 Dec 2019 07:10:12 AM EST
2019-12-20T07:10:12-05:00 ab TESTING: Fri 20 Dec 2019 07:10:12 AM EST
2019-12-20T07:10:13-05:00 ab TESTING: Fri 20 Dec 2019 07:10:13 AM EST
2019-12-20T07:10:13-05:00 ab TESTING: Fri 20 Dec 2019 07:10:13 AM EST
2019-12-20T07:10:14-05:00 ab TESTING: Fri 20 Dec 2019 07:10:14 AM EST
2019-12-20T07:10:14-05:00 ab TESTING: Fri 20 Dec 2019 07:10:14 AM EST
2019-12-20T07:10:14-05:00 ab TESTING: Fri 20 Dec 2019 07:10:14 AM EST
2019-12-20T07:10:15-05:00 ab systemd[1]: Stopping System Logging Service...
2019-12-20T07:10:15-05:00 ab rsyslogd: [origin software="rsyslogd" swVersion="8.1911.0" x-pid="3071" x-info="https://www.rsyslog.com"] exiting on signal 15.
2019-12-20T07:10:15-05:00 ab systemd[1]: rsyslog.service: Succeeded.
2019-12-20T07:10:15-05:00 ab systemd[1]: Stopped System Logging Service.
2019-12-20T07:10:15-05:00 ab systemd[1]: Starting System Logging Service...
2019-12-20T07:10:15-05:00 ab rsyslogd: imuxsock: Acquired UNIX socket '/run/systemd/journal/syslog' (fd 3) from systemd. [v8.1911.0]
2019-12-20T07:10:15-05:00 ab rsyslogd: [origin software="rsyslogd" swVersion="8.1911.0" x-pid="3270" x-info="https://www.rsyslog.com"] start
2019-12-20T07:10:15-05:00 ab systemd[1]: Started System Logging Service.
2019-12-20T07:10:18-05:00 ab TESTING: Fri 20 Dec 2019 07:10:18 AM EST
2019-12-20T07:15:01-05:00 ab CRON[3283]: pam_unix(cron:session): session opened for user root by (uid=0)
2019-12-20T07:15:01-05:00 ab CRON[3284]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
2019-12-20T07:15:01-05:00 ab CRON[3283]: pam_unix(cron:session): session closed for user root
2019-12-20T07:17:01-05:00 ab CRON[3323]: pam_unix(cron:session): session opened for user root by (uid=0)
2019-12-20T07:17:01-05:00 ab CRON[3324]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
2019-12-20T07:17:01-05:00 ab CRON[3323]: pam_unix(cron:session): session closed for user root
2019-12-20T07:25:01-05:00 ab CRON[3333]: pam_unix(cron:session): session opened for user root by (uid=0)
2019-12-20T07:25:01-05:00 ab CRON[3334]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
2019-12-20T07:25:01-05:00 ab CRON[3333]: pam_unix(cron:session): session closed for user root
2019-12-20T07:29:38-05:00 ab snapd[666]: storehelpers.go:436: cannot refresh: snap has no updates available: "barrier", "barrier-kvm", "gtk-common-themes", "notepad-plus-plus", "snapd", "wine-platform-3-stable"
2019-12-20T07:34:26-05:00 ab smartd[665]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 67 to 66
2019-12-20T07:34:26-05:00 ab smartd[665]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 110 to 109
2019-12-20T07:35:01-05:00 ab CRON[3450]: pam_unix(cron:session): session opened for user root by (uid=0)
2019-12-20T07:35:01-05:00 ab CRON[3451]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
2019-12-20T07:35:01-05:00 ab CRON[3450]: pam_unix(cron:session): session closed for user root
2019-12-20T07:39:01-05:00 ab CRON[3460]: pam_unix(cron:session): session opened for user root by (uid=0)
2019-12-20T07:39:01-05:00 ab CRON[3461]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
2019-12-20T07:39:01-05:00 ab CRON[3460]: pam_unix(cron:session): session closed for user root
2019-12-20T07:39:01-05:00 ab systemd[1]: Starting Clean php session files...
2019-12-20T07:39:01-05:00 ab systemd[1]: phpsessionclean.service: Succeeded.
2019-12-20T07:39:01-05:00 ab systemd[1]: Started Clean php session files.
2019-12-20T07:45:01-05:00 ab CRON[3525]: pam_unix(cron:session): session opened for user root by (uid=0)
2019-12-20T07:45:01-05:00 ab CRON[3526]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
2019-12-20T07:45:01-05:00 ab CRON[3525]: pam_unix(cron:session): session closed for user root
2019-12-20T07:55:01-05:00 ab CRON[3549]: pam_unix(cron:session): session opened for user root by (uid=0)
2019-12-20T07:55:01-05:00 ab CRON[3550]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
2019-12-20T07:55:01-05:00 ab CRON[3549]: pam_unix(cron:session): session closed for user root
2019-12-20T08:05:01-05:00 ab CRON[3575]: pam_unix(cron:session): session opened for user root by (uid=0)
2019-12-20T08:05:01-05:00 ab CRON[3576]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
2019-12-20T08:05:01-05:00 ab CRON[3575]: pam_unix(cron:session): session closed for user root
2019-12-20T08:09:01-05:00 ab CRON[3586]: pam_unix(cron:session): session opened for user root by (uid=0)
2019-12-20T08:09:01-05:00 ab CRON[3587]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
2019-12-20T08:09:01-05:00 ab CRON[3586]: pam_unix(cron:session): session closed for user root
2019-12-20T08:09:01-05:00 ab systemd[1]: Starting Clean php session files...
2019-12-20T08:09:01-05:00 ab systemd[1]: phpsessionclean.service: Succeeded.
2019-12-20T08:09:01-05:00 ab systemd[1]: Started Clean php session files.

THIS IS THE ERROR FOUND IN THE STORM UI:

parserBolt java.lang.RuntimeException: [Metron] Message '<the full log batch shown above>' does not match pattern '%{CISCO_TAGGED_SYSLOG}'
at org.apache.metron.parsers.asa.BasicAsaParser.parse(BasicAsaParser.java:184)
at org.apache.metron.parsers.interfaces.MessageParser.parseOptional(MessageParser.java:54)
at org.apache.metron.parsers.interfaces.MessageParser.parseOptionalResult(MessageParser.java:67)
at org.apache.metron.parsers.ParserRunnerImpl.execute(ParserRunnerImpl.java:144)
at org.apache.metron.parsers.bolt.ParserBolt.execute(ParserBolt.java:257)
at org.apache.storm.daemon.executor$fn__10195$tuple_action_fn__10197.invoke(executor.clj:735)
at org.apache.storm.daemon.executor$mk_task_receiver$fn__10114.invoke(executor.clj:466)
at org.apache.storm.disruptor$clojure_handler$reify__4137.onEvent(disruptor.clj:40)
at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:472)
at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:451)
at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73)
at org.apache.storm.daemon.executor$fn__10195$fn__10208$fn__10263.invoke(executor.clj:855)
at org.apache.storm.util$async_loop$fn__1221.invoke(util.clj:484)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: [Metron] Message '<the full log batch shown above>' does not match pattern '%{CISCO_TAGGED_SYSLOG}'
at org.apache.metron.parsers.asa.BasicAsaParser.parse(BasicAsaParser.java:178)
... 14 more
I need your help, as always.
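For context on why this fails: Metron's ASA parser requires every message to match the %{CISCO_TAGGED_SYSLOG} Grok pattern, so generic Linux rsyslog lines (CRON, systemd, the TESTING entries) can never parse. The exception also shows the entire log batch arriving as a single Kafka message, so each syslog line needs to land as its own record. Below is a minimal sketch of a Grok-based sensor config that would fit these lines instead of the ASA parser; the topic name rsyslog, the HDFS pattern path, and the RSYSLOG label are all hypothetical.

# Hypothetical Grok pattern file, uploaded to HDFS at /apps/metron/patterns/rsyslog:
#   RSYSLOG %{TIMESTAMP_ISO8601:timestamp} %{SYSLOGHOST:syslog_host} %{GREEDYDATA:syslog_message}
# Hedged sketch of a sensor config using Metron's generic GrokParser:
cat > "$METRON_HOME/config/zookeeper/parsers/rsyslog.json" <<'EOF'
{
  "parserClassName": "org.apache.metron.parsers.GrokParser",
  "sensorTopic": "rsyslog",
  "parserConfig": {
    "grokPath": "/apps/metron/patterns/rsyslog",
    "patternLabel": "RSYSLOG",
    "timestampField": "timestamp",
    "dateFormat": "yyyy-MM-dd'T'HH:mm:ssXXX"
  }
}
EOF
# Push the new sensor config, then restart the parser topology:
"$METRON_HOME/bin/zk_load_configs.sh" --mode PUSH --zk_quorum master.sip.com:2181 --input_dir "$METRON_HOME/config/zookeeper"

Separately, check how the logs reach Kafka: if a whole file is being produced as one record, split it so that one syslog line equals one Kafka message, since Metron parsers expect one event per message.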
Tags: Metron
07-12-2019
08:20 AM
This is the error.
07-12-2019
08:08 AM
@jsensharma (https://community.hortonworks.com/users/3418/jsensharma.html) I did what you said, but there was no change.
07-11-2019
08:43 AM
@jsensharma (https://community.hortonworks.com/users/3418/jsensharma.html), please help me.
07-10-2019
12:52 PM
2019-07-10 05:48:38,266 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1175-1 -> 2.6.5.1175-1
2019-07-10 05:48:38,291 - Using hadoop conf dir: /usr/hdp/2.6.5.1175-1/hadoop/conf
2019-07-10 05:48:38,648 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1175-1 -> 2.6.5.1175-1
2019-07-10 05:48:38,660 - Using hadoop conf dir: /usr/hdp/2.6.5.1175-1/hadoop/conf
2019-07-10 05:48:38,662 - Group['hdfs'] {}
2019-07-10 05:48:38,665 - Group['hadoop'] {}
2019-07-10 05:48:38,666 - Group['users'] {}
2019-07-10 05:48:38,667 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-07-10 05:48:38,669 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-07-10 05:48:38,671 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2019-07-10 05:48:38,673 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2019-07-10 05:48:38,675 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-07-10 05:48:38,678 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-07-10 05:48:38,691 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-07-10 05:48:38,692 - Group['hdfs'] {}
2019-07-10 05:48:38,693 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2019-07-10 05:48:38,695 - FS Type:
2019-07-10 05:48:38,695 - Directory['/etc/hadoop'] {'mode': 0755}
2019-07-10 05:48:38,729 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-07-10 05:48:38,731 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-07-10 05:48:38,764 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-07-10 05:48:38,779 - Skipping Execute[('setenforce', '0')] due to not_if
2019-07-10 05:48:38,780 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-07-10 05:48:38,785 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-07-10 05:48:38,787 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-07-10 05:48:38,795 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2019-07-10 05:48:38,799 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2019-07-10 05:48:38,809 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-07-10 05:48:38,832 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-07-10 05:48:38,833 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-07-10 05:48:38,835 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-07-10 05:48:38,845 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-07-10 05:48:38,853 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-07-10 05:48:39,469 - Using hadoop conf dir: /usr/hdp/2.6.5.1175-1/hadoop/conf
2019-07-10 05:48:39,474 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1175-1 -> 2.6.5.1175-1
2019-07-10 05:48:39,521 - Using hadoop conf dir: /usr/hdp/2.6.5.1175-1/hadoop/conf
2019-07-10 05:48:39,554 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2019-07-10 05:48:39,564 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2019-07-10 05:48:39,566 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.1175-1/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-07-10 05:48:39,583 - Generating config: /usr/hdp/2.6.5.1175-1/hadoop/conf/hadoop-policy.xml
2019-07-10 05:48:39,583 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-07-10 05:48:39,601 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.1175-1/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-07-10 05:48:39,615 - Generating config: /usr/hdp/2.6.5.1175-1/hadoop/conf/ssl-client.xml
2019-07-10 05:48:39,616 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-07-10 05:48:39,627 - Directory['/usr/hdp/2.6.5.1175-1/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2019-07-10 05:48:39,629 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.1175-1/hadoop/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2019-07-10 05:48:39,644 - Generating config: /usr/hdp/2.6.5.1175-1/hadoop/conf/secure/ssl-client.xml
2019-07-10 05:48:39,644 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-07-10 05:48:39,654 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.1175-1/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-07-10 05:48:39,667 - Generating config: /usr/hdp/2.6.5.1175-1/hadoop/conf/ssl-server.xml
2019-07-10 05:48:39,667 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-07-10 05:48:39,679 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.1175-1/hadoop/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2019-07-10 05:48:39,691 - Generating config: /usr/hdp/2.6.5.1175-1/hadoop/conf/hdfs-site.xml
2019-07-10 05:48:39,691 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-07-10 05:48:39,763 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.5.1175-1/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2019-07-10 05:48:39,774 - Generating config: /usr/hdp/2.6.5.1175-1/hadoop/conf/core-site.xml
2019-07-10 05:48:39,774 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-07-10 05:48:39,807 - File['/usr/hdp/2.6.5.1175-1/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2019-07-10 05:48:39,808 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.5.1175-1 -> 2.6.5.1175-1
2019-07-10 05:48:39,813 - Directory['/var/lib/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0751}
2019-07-10 05:48:39,814 - Directory['/var/lib/ambari-agent/data/datanode'] {'create_parents': True, 'mode': 0755}
2019-07-10 05:48:39,824 - Host contains mounts: ['/sys', '/proc', '/dev', '/sys/kernel/security', '/dev/shm', '/dev/pts', '/run', '/sys/fs/cgroup', '/sys/fs/cgroup/systemd', '/sys/fs/pstore', '/sys/fs/cgroup/hugetlb', '/sys/fs/cgroup/memory', '/sys/fs/cgroup/perf_event', '/sys/fs/cgroup/pids', '/sys/fs/cgroup/devices', '/sys/fs/cgroup/cpuset', '/sys/fs/cgroup/net_cls,net_prio', '/sys/fs/cgroup/cpu,cpuacct', '/sys/fs/cgroup/blkio', '/sys/fs/cgroup/freezer', '/sys/kernel/config', '/', '/proc/sys/fs/binfmt_misc', '/dev/hugepages', '/dev/mqueue', '/sys/kernel/debug', '/boot', '/var/lib/nfs/rpc_pipefs', '/run/user/0'].
2019-07-10 05:48:39,824 - Mount point for directory /hadoop/hdfs/data is /
2019-07-10 05:48:39,825 - Mount point for directory /hadoop/hdfs/data is /
2019-07-10 05:48:39,825 - Forcefully ensuring existence and permissions of the directory: /hadoop/hdfs/data
2019-07-10 05:48:39,826 - Directory['/hadoop/hdfs/data'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'ignore_failures': True, 'mode': 0750, 'owner': 'hdfs'}
2019-07-10 05:48:39,827 - Changing permission for /hadoop/hdfs/data from 755 to 750
2019-07-10 05:48:39,837 - Host contains mounts: ['/sys', '/proc', '/dev', '/sys/kernel/security', '/dev/shm', '/dev/pts', '/run', '/sys/fs/cgroup', '/sys/fs/cgroup/systemd', '/sys/fs/pstore', '/sys/fs/cgroup/hugetlb', '/sys/fs/cgroup/memory', '/sys/fs/cgroup/perf_event', '/sys/fs/cgroup/pids', '/sys/fs/cgroup/devices', '/sys/fs/cgroup/cpuset', '/sys/fs/cgroup/net_cls,net_prio', '/sys/fs/cgroup/cpu,cpuacct', '/sys/fs/cgroup/blkio', '/sys/fs/cgroup/freezer', '/sys/kernel/config', '/', '/proc/sys/fs/binfmt_misc', '/dev/hugepages', '/dev/mqueue', '/sys/kernel/debug', '/boot', '/var/lib/nfs/rpc_pipefs', '/run/user/0'].
2019-07-10 05:48:39,838 - Mount point for directory /hadoop/hdfs/data is /
2019-07-10 05:48:39,838 - File['/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist'] {'content': '\n# This file keeps track of the last known mount-point for each dir.\n# It is safe to delete, since it will get regenerated the next time that the component of the service starts.\n# However, it is not advised to delete this file since Ambari may\n# re-create a dir that used to be mounted on a drive but is now mounted on the root.\n# Comments begin with a hash (#) symbol\n# dir,mount_point\n/hadoop/hdfs/data,/\n', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-07-10 05:48:39,842 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2019-07-10 05:48:39,842 - Changing owner for /var/run/hadoop from 0 to hdfs
2019-07-10 05:48:39,843 - Changing group for /var/run/hadoop from 0 to hadoop
2019-07-10 05:48:39,843 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-07-10 05:48:39,844 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-07-10 05:48:39,845 - File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2019-07-10 05:48:39,878 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid']
2019-07-10 05:48:39,879 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/2.6.5.1175-1/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.5.1175-1/hadoop/conf start datanode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/2.6.5.1175-1/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2019-07-10 05:48:44,152 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out.5 <==
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
ulimit -a for user hdfs core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 97256 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 128000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out.4 <==
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
ulimit -a for user hdfs core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 97256 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 128000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out.3 <==
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
ulimit -a for user hdfs core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 97256 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 128000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out.2 <==
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
ulimit -a for user hdfs core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 97256 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 32768 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out.1 <==
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
ulimit -a for user hdfs core file size (blocks, -c) 10000 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 97256 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 32768 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-worker2.sip.com.out <==
Error: could not find libjava.so
Error: Could not find Java SE Runtime Environment.
ulimit -a for user hdfs core file size (blocks, -c) 10000 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 97256 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 32768 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
Command failed after 1 tries
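The repeated "could not find libjava.so / Could not find Java SE Runtime Environment" in every rotated .out file means the DataNode never got as far as starting Java: the JVM the start script resolves on this host is broken or missing. A few hedged checks, assuming the usual HDP layout (paths are illustrative):

# Which JDK is Hadoop configured to use on this node?
grep JAVA_HOME /etc/hadoop/conf/hadoop-env.sh
# Does that JDK actually exist and run? (libjava.so sits under jre/lib/amd64 in a JDK 8 layout)
ls "$JAVA_HOME/bin/java" "$JAVA_HOME/jre/lib/amd64/libjava.so"
"$JAVA_HOME/bin/java" -version
# If the JDK was moved or removed, point JAVA_HOME at a working install (via Ambari's hadoop-env) and retry the DataNode start.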
07-03-2019
02:04 PM
Running: /usr/jdk64/jdk1.8.0_77/bin/java -client -Ddaemon.name= -Dstorm.options= -Dstorm.home=/usr/hdp/2.5.0.0-1245/storm -Dstorm.log.dir=/var/log/storm -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib:/usr/hdp/current/storm-client/lib -Dstorm.conf.file= -cp /usr/hdp/2.5.0.0-1245/storm/lib/asm-5.0.3.jar:/usr/hdp/2.5.0.0-1245/storm/lib/clojure-1.7.0.jar:/usr/hdp/2.5.0.0-1245/storm/lib/disruptor-3.3.2.jar:/usr/hdp/2.5.0.0-1245/storm/lib/kryo-3.0.3.jar:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-api-2.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-core-2.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-over-slf4j-1.6.6.jar:/usr/hdp/2.5.0.0-1245/storm/lib/log4j-slf4j-impl-2.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/minlog-1.3.0.jar:/usr/hdp/2.5.0.0-1245/storm/lib/objenesis-2.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/reflectasm-1.10.1.jar:/usr/hdp/2.5.0.0-1245/storm/lib/ring-cors-0.1.5.jar:/usr/hdp/2.5.0.0-1245/storm/lib/servlet-api-2.5.jar:/usr/hdp/2.5.0.0-1245/storm/lib/slf4j-api-1.7.7.jar:/usr/hdp/2.5.0.0-1245/storm/lib/storm-core-1.0.1.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/storm/lib/storm-rename-hack-1.0.1.2.5.0.0-1245.jar:/usr/hdp/2.5.0.0-1245/storm/lib/zookeeper.jar:/usr/hdp/2.5.0.0-1245/storm/lib/ambari-metrics-storm-sink.jar:/tmp/b694b2f09d9211e9818d000c297dc2ae.jar:/usr/hdp/current/storm-supervisor/conf:/usr/hdp/2.5.0.0-1245/storm/bin -Dstorm.jar=/tmp/b694b2f09d9211e9818d000c297dc2ae.jar org.apache.metron.parsers.topology.ParserTopologyCLI -k master.sip.com:6667,worker1.sip.com:6667,worker2.sip.com:6667 -z master.sip.com:2181,worker3.sip.com:2181,worker1.sip.com:2181,worker2.sip.com:2181 -s bro -ksp PLAINTEXT
1641 [main] INFO o.a.c.f.i.CuratorFrameworkImpl - Starting
1823 [main-EventThread] INFO o.a.c.f.s.ConnectionStateManager - State change: CONNECTED
3333 [main] WARN o.a.m.c.c.ConfigurationsUtils - Unable to update global config when updating indexing configs: Cannot cast java.util.LinkedHashMap to java.util.List
java.lang.ClassCastException: Cannot cast java.util.LinkedHashMap to java.util.List
at java.lang.Class.cast(Class.java:3369) ~[?:1.8.0_77]
at org.apache.metron.common.configuration.FieldValidator$Config.get(FieldValidator.java:50) ~[b694b2f09d9211e9818d000c297dc2ae.jar:?]
at org.apache.metron.common.configuration.FieldValidator.readValidations(FieldValidator.java:113) ~[b694b2f09d9211e9818d000c297dc2ae.jar:?]
at org.apache.metron.common.configuration.Configurations.updateGlobalConfig(Configurations.java:70) ~[b694b2f09d9211e9818d000c297dc2ae.jar:?]
at org.apache.metron.common.configuration.Configurations.updateGlobalConfig(Configurations.java:64) ~[b694b2f09d9211e9818d000c297dc2ae.jar:?]
at org.apache.metron.common.configuration.Configurations.updateGlobalConfig(Configurations.java:59) ~[b694b2f09d9211e9818d000c297dc2ae.jar:?]
at org.apache.metron.common.configuration.ConfigurationsUtils.updateConfigsFromZookeeper(ConfigurationsUtils.java:185) ~[b694b2f09d9211e9818d000c297dc2ae.jar:?]
at org.apache.metron.common.configuration.ConfigurationsUtils.updateConfigsFromZookeeper(ConfigurationsUtils.java:201) [b694b2f09d9211e9818d000c297dc2ae.jar:?]
at org.apache.metron.common.configuration.ConfigurationsUtils.updateParserConfigsFromZookeeper(ConfigurationsUtils.java:217) [b694b2f09d9211e9818d000c297dc2ae.jar:?]
at org.apache.metron.parsers.topology.ParserTopologyBuilder.getSensorParserConfig(ParserTopologyBuilder.java:378) [b694b2f09d9211e9818d000c297dc2ae.jar:?]
at org.apache.metron.parsers.topology.ParserTopologyBuilder.build(ParserTopologyBuilder.java:120) [b694b2f09d9211e9818d000c297dc2ae.jar:?]
at org.apache.metron.parsers.topology.ParserTopologyCLI.getParserTopology(ParserTopologyCLI.java:571) [b694b2f09d9211e9818d000c297dc2ae.jar:?]
at org.apache.metron.parsers.topology.ParserTopologyCLI.createParserTopology(ParserTopologyCLI.java:540) [b694b2f09d9211e9818d000c297dc2ae.jar:?]
at org.apache.metron.parsers.topology.ParserTopologyCLI.main(ParserTopologyCLI.java:601) [b694b2f09d9211e9818d000c297dc2ae.jar:?]
java.lang.ClassCastException: Cannot cast java.util.LinkedHashMap to java.util.List
at java.lang.Class.cast(Class.java:3369)
at org.apache.metron.common.configuration.FieldValidator$Config.get(FieldValidator.java:50)
at org.apache.metron.common.configuration.FieldValidator.readValidations(FieldValidator.java:113)
at org.apache.metron.common.configuration.Configurations.updateGlobalConfig(Configurations.java:70)
at org.apache.metron.common.configuration.Configurations.updateGlobalConfig(Configurations.java:64)
at org.apache.metron.common.configuration.Configurations.updateGlobalConfig(Configurations.java:59)
at org.apache.metron.common.configuration.ConfigurationsUtils.updateConfigsFromZookeeper(ConfigurationsUtils.java:185)
at org.apache.metron.common.configuration.ConfigurationsUtils.updateConfigsFromZookeeper(ConfigurationsUtils.java:201)
at org.apache.metron.common.configuration.ConfigurationsUtils.updateParserConfigsFromZookeeper(ConfigurationsUtils.java:217)
at org.apache.metron.parsers.topology.ParserTopologyBuilder.getSensorParserConfig(ParserTopologyBuilder.java:378)
at org.apache.metron.parsers.topology.ParserTopologyBuilder.build(ParserTopologyBuilder.java:120)
at org.apache.metron.parsers.topology.ParserTopologyCLI.getParserTopology(ParserTopologyCLI.java:571)
at org.apache.metron.parsers.topology.ParserTopologyCLI.createParserTopology(ParserTopologyCLI.java:540)
at org.apache.metron.parsers.topology.ParserTopologyCLI.main(ParserTopologyCLI.java:601)
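The ClassCastException is thrown from FieldValidator.readValidations, which expects the global config's fieldValidations entry to be a JSON list; a LinkedHashMap means it was written as a single JSON object instead. A hedged sketch of the shape it expects (the field names and the IP validator here are illustrative):

# In $METRON_HOME/config/zookeeper/global.json, "fieldValidations" must be an array:
#   "fieldValidations": [
#     { "input": ["ip_src_addr", "ip_dst_addr"], "validation": "IP" }
#   ]
# not a bare object:
#   "fieldValidations": { "input": ["ip_src_addr"], "validation": "IP" }
# The global config is read back from ZooKeeper, so after fixing the JSON, push it again:
"$METRON_HOME/bin/zk_load_configs.sh" --mode PUSH --zk_quorum master.sip.com:2181 --input_dir "$METRON_HOME/config/zookeeper"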
07-03-2019
10:57 AM
How do I clear the parser's ZooKeeper metadata directory, and where is it located?
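For reference, Metron keeps its runtime configs under the /metron znode in ZooKeeper rather than in a local directory. A hedged sketch of inspecting and clearing them (the quorum host is illustrative):

# See what is currently stored:
"$METRON_HOME/bin/zk_load_configs.sh" --mode DUMP --zk_quorum master.sip.com:2181
# Delete the stored topology configs ('rmr' removes a znode recursively):
zookeeper-client -server master.sip.com:2181 rmr /metron/topology
# Re-push clean configs from disk:
"$METRON_HOME/bin/zk_load_configs.sh" --mode PUSH --zk_quorum master.sip.com:2181 --input_dir "$METRON_HOME/config/zookeeper"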
05-23-2019
01:13 PM
ERROR Controller 1003 epoch 51 encountered error while electing leader for partition [__consumer_offsets,1] due to: Preferred replica 1002 for partition [__consumer_offsets,1] is either not alive or not in the isr. Current leader and ISR: [{"leader":1004,"leader_epoch":93,"isr":[1004]}]. (state.change.logger)
[2019-05-22 23:56:03,878] ERROR Controller 1003 epoch 51 initiated state change for partition [__consumer_offsets,1] from OnlinePartition to OnlinePartition failed (state.change.logger)
kafka.common.StateChangeFailedException: encountered error while electing leader for partition [__consumer_offsets,1] due to: Preferred replica 1002 for partition [__consumer_offsets,1] is either not alive or not in the isr. Current leader and ISR: [{"leader":1004,"leader_epoch":93,"isr":[1004]}].
at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:368)
at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:207)
at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:146)
at kafka.controller.PartitionStateMachine$$anonfun$handleStateChanges$2.apply(PartitionStateMachine.scala:145)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
at kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:145)
at kafka.controller.KafkaController.onPreferredReplicaElection(KafkaController.scala:665)
at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply$mcV$sp(KafkaController.scala:1228)
at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1223)
at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18$$anonfun$apply$5.apply(KafkaController.scala:1223)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1220)
at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4$$anonfun$apply$18.apply(KafkaController.scala:1218)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1218)
at kafka.controller.KafkaController$$anonfun$kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance$4.apply(KafkaController.scala:1197)
at scala.collection.immutable.Map$Map4.foreach(Map.scala:181)
at kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerPartitionRebalance(KafkaController.scala:1197)
at kafka.controller.KafkaController$$anonfun$onControllerFailover$1.apply$mcV$sp(KafkaController.scala:347)
at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:56)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: kafka.common.StateChangeFailedException: Preferred replica 1002 for partition [__consumer_offsets,1] is either not alive or not in the isr. Current leader and ISR: [{"leader":1004,"leader_epoch":93,"isr":[1004]}]
at kafka.controller.PreferredReplicaPartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:159)
at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:345)
... 31 more
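The controller is reporting that broker 1002, the preferred replica for this partition, is not in the ISR (only 1004 is), so the periodic preferred-leader election fails. A hedged sketch of the usual checks, with the ZooKeeper host illustrative:

# Is broker 1002 registered and alive, and what do the partition's replicas/ISR look like?
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --zookeeper master.sip.com:2181 --topic __consumer_offsets
# After broker 1002 is back up and has rejoined the ISR, re-trigger the election:
/usr/hdp/current/kafka-broker/bin/kafka-preferred-replica-election.sh --zookeeper master.sip.com:2181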
03-21-2019
02:08 PM
dfsIndexingBolt --LOCAL_OR_SHUFFLE--> indexingErrorBolt
--------------------------------------
4543 [main] INFO o.a.s.f.Flux - Running remotely...
4544 [main] INFO o.a.s.f.Flux - Deploying topology in an ACTIVE state...
4587 [main] INFO o.a.s.StormSubmitter - Generated ZooKeeper secret payload for MD5-digest: -6007092214841415612:-7169112016178865033
4721 [main] INFO o.a.s.s.a.AuthUtils - Got AutoCreds []
Exception in thread "main" java.lang.RuntimeException: Topology with name `batch_indexing` already exists on cluster
at org.apache.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:234)
at org.apache.storm.StormSubmitter.submitTopology(StormSubmitter.java:310)
at org.apache.storm.flux.Flux.runCli(Flux.java:171)
at org.apache.storm.flux.Flux.main(Flux.java:98)
Command failed after 1 tries
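Storm refuses the submission because a topology named batch_indexing is already running on the cluster, so the old instance has to be killed before redeploying. A minimal sketch:

# Kill the running copy (waits 30 seconds to drain in-flight tuples), then resubmit:
storm kill batch_indexing -w 30
storm list    # confirm batch_indexing is gone before redeploying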
03-20-2019
08:33 AM
Connection failed: [Errno 111] Connection refused to FQDN:56431
I have 4 nodes (1 master and 3 slaves).
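[Errno 111] Connection refused means nothing is listening on that port on the target host, which typically means the monitored component is down or bound to a different port. A few hedged checks (the port number comes from the alert above):

# On the host behind FQDN: is anything bound on the port?
ss -tlnp | grep 56431
# From the Ambari server: is the port reachable at all?
nc -vz FQDN 56431
# If nothing listens, restart the component that should own the port and check its log for bind errors.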
03-18-2019
09:08 PM
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/hcp/1.6.1.0-23/metron/bin/zk_load_configs.sh --zk_quorum master.xxx.com:2181,node1.xxx.com:2181,node2.xxx.com:2181 --mode PUSH --input_dir /usr/hcp/1.6.1.0-23/metron/config/zookeeper' returned 1. Exception in thread "main" org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /metron/topology/global
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1270)
at org.apache.curator.framework.imps.SetDataBuilderImpl$4.call(SetDataBuilderImpl.java:274)
at org.apache.curator.framework.imps.SetDataBuilderImpl$4.call(SetDataBuilderImpl.java:270)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107)
at org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:267)
at org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:253)
at org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:41)
at org.apache.metron.common.configuration.ConfigurationsUtils.writeToZookeeper(ConfigurationsUtils.java:178)
at org.apache.metron.common.configuration.ConfigurationsUtils.writeGlobalConfigToZookeeper(ConfigurationsUtils.java:84)
at org.apache.metron.common.configuration.ConfigurationsUtils.uploadConfigsToZookeeper(ConfigurationsUtils.java:518)
at org.apache.metron.common.configuration.ConfigurationsUtils.uploadConfigsToZookeeper(ConfigurationsUtils.java:431)
at org.apache.metron.common.cli.ConfigurationManager.push(ConfigurationManager.java:227)
at org.apache.metron.common.cli.ConfigurationManager.run(ConfigurationManager.java:263)
at org.apache.metron.common.cli.ConfigurationManager.run(ConfigurationManager.java:244)
at org.apache.metron.common.cli.ConfigurationManager.main(ConfigurationManager.java:360)
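KeeperErrorCode = ConnectionLoss while pushing /metron/topology/global means zk_load_configs.sh could not hold a session with the quorum, so the first thing to verify is that ZooKeeper itself is up and healthy on all three hosts. A hedged sketch using ZooKeeper's four-letter commands:

# Each server should answer "imok":
echo ruok | nc master.xxx.com 2181
echo ruok | nc node1.xxx.com 2181
echo ruok | nc node2.xxx.com 2181
# "srvr" shows the mode (leader/follower); a healthy quorum needs a leader:
echo srvr | nc master.xxx.com 2181
# Once the quorum is healthy, re-run the zk_load_configs.sh PUSH from the error above.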
03-18-2019
12:00 PM
You're the best!
03-15-2019
07:24 PM
cat /var/log/hadoop/hdfs/hadoop-hdfs-secondarynamenode-sipdatanode-1.novalocal.out
ulimit -a for user hdfs core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 128570 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 128000 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 65536 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
03-15-2019
08:08 AM
su -l hdfs -c "/usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode"
-bash: /usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh: No such file or directory
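"No such file or directory" here usually means the /usr/hdp/current symlinks are broken or the Hadoop bits are not installed on this host, rather than a typo in the command. A few hedged checks:

# Does the component symlink point at a real versioned directory?
ls -l /usr/hdp/current/hadoop-hdfs-namenode
hdp-select status hadoop-hdfs-namenode
# Where does the script actually live, if anywhere?
find /usr/hdp -name hadoop-daemon.sh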
03-15-2019
07:21 AM
stdout: /var/lib/ambari-agent/data/output-387.txt
2019-03-15 06:50:05,225 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2019-03-15 06:50:05,242 - Using hadoop conf dir: /usr/hdp/2.6.2.0-205/hadoop/conf
2019-03-15 06:50:05,406 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2019-03-15 06:50:05,411 - Using hadoop conf dir: /usr/hdp/2.6.2.0-205/hadoop/conf
2019-03-15 06:50:05,412 - Group['hdfs'] {}
2019-03-15 06:50:05,413 - Group['hadoop'] {}
2019-03-15 06:50:05,413 - Group['users'] {}
2019-03-15 06:50:05,414 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-03-15 06:50:05,415 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-03-15 06:50:05,415 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2019-03-15 06:50:05,416 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2019-03-15 06:50:05,417 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-03-15 06:50:05,417 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2019-03-15 06:50:05,418 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-03-15 06:50:05,419 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-03-15 06:50:05,427 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-03-15 06:50:05,428 - Group['hdfs'] {}
2019-03-15 06:50:05,428 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2019-03-15 06:50:05,428 - FS Type:
2019-03-15 06:50:05,428 - Directory['/etc/hadoop'] {'mode': 0755}
2019-03-15 06:50:05,446 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-03-15 06:50:05,447 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-03-15 06:50:05,464 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-03-15 06:50:05,478 - Skipping Execute[('setenforce', '0')] due to only_if
2019-03-15 06:50:05,478 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-03-15 06:50:05,482 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-03-15 06:50:05,483 - Changing owner for /var/run/hadoop from 1004 to root
2019-03-15 06:50:05,483 - Changing group for /var/run/hadoop from 1002 to root
2019-03-15 06:50:05,484 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-03-15 06:50:05,489 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2019-03-15 06:50:05,491 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2019-03-15 06:50:05,497 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-03-15 06:50:05,506 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-03-15 06:50:05,507 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-03-15 06:50:05,507 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-03-15 06:50:05,511 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-03-15 06:50:05,518 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-03-15 06:50:05,608 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2019-03-15 06:50:05,634 - call returned (0, '2.6.2.0-205\n2.6.5.1050-37')
2019-03-15 06:50:05,911 - Using hadoop conf dir: /usr/hdp/2.6.2.0-205/hadoop/conf
2019-03-15 06:50:05,912 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2019-03-15 06:50:05,932 - Using hadoop conf dir: /usr/hdp/2.6.2.0-205/hadoop/conf
2019-03-15 06:50:05,947 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2019-03-15 06:50:05,951 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2019-03-15 06:50:05,952 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-03-15 06:50:05,962 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/hadoop-policy.xml
2019-03-15 06:50:05,962 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 06:50:05,971 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-03-15 06:50:05,978 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/ssl-client.xml
2019-03-15 06:50:05,978 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 06:50:05,984 - Directory['/usr/hdp/2.6.2.0-205/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2019-03-15 06:50:05,985 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2019-03-15 06:50:05,992 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/secure/ssl-client.xml
2019-03-15 06:50:05,992 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 06:50:05,998 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'configuration_attributes': {}, 'configurations': ...}
2019-03-15 06:50:06,005 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/ssl-server.xml
2019-03-15 06:50:06,006 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 06:50:06,012 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2019-03-15 06:50:06,021 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/hdfs-site.xml
2019-03-15 06:50:06,021 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2019-03-15 06:50:06,073 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/2.6.2.0-205/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2019-03-15 06:50:06,081 - Generating config: /usr/hdp/2.6.2.0-205/hadoop/conf/core-site.xml
2019-03-15 06:50:06,081 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-15 06:50:06,102 - File['/usr/hdp/2.6.2.0-205/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2019-03-15 06:50:06,103 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=2.6.2.0-205 -> 2.6.2.0-205
2019-03-15 06:50:06,109 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-03-15 06:50:06,109 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for NameNode HA mode.
2019-03-15 06:50:06,112 - Called service start with upgrade_type: None
2019-03-15 06:50:06,112 - Ranger Hdfs plugin is not enabled
2019-03-15 06:50:06,114 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2019-03-15 06:50:06,114 - /hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2019-03-15 06:50:06,115 - Directory['/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2019-03-15 06:50:06,115 - Options for start command are:
2019-03-15 06:50:06,115 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2019-03-15 06:50:06,115 - Changing owner for /var/run/hadoop from 0 to hdfs
2019-03-15 06:50:06,116 - Changing group for /var/run/hadoop from 0 to hadoop
2019-03-15 06:50:06,116 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-03-15 06:50:06,116 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-03-15 06:50:06,117 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-03-15 06:50:06,137 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
2019-03-15 06:50:06,138 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/2.6.2.0-205/hadoop/sbin/hadoop-daemon.sh --config /usr/hdp/2.6.2.0-205/hadoop/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/2.6.2.0-205/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-03-15 06:50:10,317 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-sipnamenode.novalocal.out <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals (-i) 128569
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201903150628 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25488852k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2019-03-15T06:28:52.955+0000: 1.012: [GC (GCLocker Initiated GC) 2019-03-15T06:28:52.955+0000: 1.012: [ParNew: 104960K->9633K(118016K), 0.0159947 secs] 104960K->9633K(1035520K), 0.0161429 secs] [Times: user=0.06 sys=0.00, real=0.02 secs]
Heap
par new generation total 118016K, used 76324K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 63% used [0x00000000c0000000, 0x00000000c4120a48, 0x00000000c6680000)
from space 13056K, 73% used [0x00000000c7340000, 0x00000000c7ca86e0, 0x00000000c8000000)
to space 13056K, 0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 18260K, capacity 18612K, committed 18816K, reserved 1064960K
class space used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
2019-03-15T06:30:05.713+0000: 85.667: [GC (Allocation Failure) 2019-03-15T06:30:05.713+0000: 85.667: [ParNew: 176325K->16223K(184320K), 0.0422067 secs] 176325K->20146K(1028096K), 0.0423734 secs] [Times: user=0.09 sys=0.01, real=0.04 secs]
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-sipnamenode.novalocal.log <==
at org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink.putMetrics(HadoopTimelineMetricsSink.java:353)
at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:186)
at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
at org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:134)
at org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
2019-03-15 06:49:33,316 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 37 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:34,317 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 38 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:35,319 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 39 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:36,320 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 40 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:37,322 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 41 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:38,323 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 42 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:39,324 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 43 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:40,326 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 44 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:41,327 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 45 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:42,328 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 46 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:43,329 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 47 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:44,330 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 48 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:45,332 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 49 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:45,333 WARN datanode.DataNode (BPServiceActor.java:retrieveNamespaceInfo(227)) - Problem connecting to server: sipnamenode.novalocal/10.0.35.134:8020
2019-03-15 06:49:51,335 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:52,336 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:53,338 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:54,339 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:55,340 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:56,342 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:57,343 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:58,344 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:49:59,346 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:00,347 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:01,348 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 10 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:02,350 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 11 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:03,351 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 12 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:04,352 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 13 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:05,354 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 14 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:06,355 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 15 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:07,356 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 16 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:08,358 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 17 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:09,359 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 18 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-15 06:50:10,360 INFO ipc.Client (Client.java:handleConnectionFailure(906)) - Retrying connect to server: sipnamenode.novalocal/10.0.35.134:8020. Already tried 19 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.log <==
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:988)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1019)
... 9 more
2019-03-15 06:50:08,171 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping NameNode metrics system...
2019-03-15 06:50:08,172 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2019-03-15 06:50:08,173 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - NameNode metrics system stopped.
2019-03-15 06:50:08,173 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(606)) - NameNode metrics system shutdown complete.
2019-03-15 06:50:08,173 ERROR namenode.NameNode (NameNode.java:main(1774)) - Failed to start namenode.
java.net.BindException: Port in use: sipnamenode.novalocal:50070
at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1000)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1023)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1080)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:937)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:170)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:933)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:746)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:992)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:976)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1701)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1769)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:988)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1019)
... 9 more
2019-03-15 06:50:08,175 INFO util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2019-03-15 06:50:08,176 INFO timeline.HadoopTimelineMetricsSink (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(278)) - No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
2019-03-15 06:50:08,177 INFO namenode.NameNode (LogAdapter.java:info(47)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sipnamenode.novalocal/10.0.35.134
************************************************************/
==> /var/log/hadoop/hdfs/gc.log-201903150635 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25422016k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2019-03-15T06:35:56.264+0000: 1.064: [GC (Allocation Failure) 2019-03-15T06:35:56.264+0000: 1.064: [ParNew: 104960K->9551K(118016K), 0.0271395 secs] 104960K->9551K(1035520K), 0.0273200 secs] [Times: user=0.09 sys=0.00, real=0.03 secs]
Heap
par new generation total 118016K, used 77297K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 64% used [0x00000000c0000000, 0x00000000c4228530, 0x00000000c6680000)
from space 13056K, 73% used [0x00000000c7340000, 0x00000000c7c93f60, 0x00000000c8000000)
to space 13056K, 0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 18279K, capacity 18612K, committed 18816K, reserved 1064960K
class space used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/gc.log-201903150638 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25420080k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2019-03-15T06:38:51.156+0000: 1.088: [GC (Allocation Failure) 2019-03-15T06:38:51.156+0000: 1.088: [ParNew: 104960K->9549K(118016K), 0.0323991 secs] 104960K->9549K(1035520K), 0.0325641 secs] [Times: user=0.10 sys=0.01, real=0.04 secs]
Heap
par new generation total 118016K, used 77303K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 64% used [0x00000000c0000000, 0x00000000c422a9c0, 0x00000000c6680000)
from space 13056K, 73% used [0x00000000c7340000, 0x00000000c7c93570, 0x00000000c8000000)
to space 13056K, 0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 18261K, capacity 18612K, committed 18816K, reserved 1064960K
class space used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out.3 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals (-i) 128569
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out.2 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals (-i) 128569
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out.1 <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals (-i) 128569
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-sipnamenode.novalocal.out <==
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
g signals (-i) 128569
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201903150650 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32948312k(25308796k free), swap 0k(0k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
2019-03-15T06:50:07.432+0000: 1.074: [GC (Allocation Failure) 2019-03-15T06:50:07.432+0000: 1.074: [ParNew: 104960K->9545K(118016K), 0.0239215 secs] 104960K->9545K(1035520K), 0.0240610 secs] [Times: user=0.07 sys=0.01, real=0.02 secs]
Heap
par new generation total 118016K, used 77366K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
eden space 104960K, 64% used [0x00000000c0000000, 0x00000000c423b640, 0x00000000c6680000)
from space 13056K, 73% used [0x00000000c7340000, 0x00000000c7c92470, 0x00000000c8000000)
to space 13056K, 0% used [0x00000000c6680000, 0x00000000c6680000, 0x00000000c7340000)
concurrent mark-sweep generation total 917504K, used 0K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
Metaspace used 18261K, capacity 18612K, committed 18816K, reserved 1064960K
class space used 2246K, capacity 2360K, committed 2432K, reserved 1048576K
2019-03-15 06:50:10,496 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2019-03-15 06:50:10,526 - call returned (0, '2.6.2.0-205\n2.6.5.1050-37')
2019-03-15 06:50:10,526 - The 'hadoop-hdfs-namenode' component did not advertise a version. This may indicate a problem with the component packaging.
Command failed after 1 tries
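The actual failure buried in that output is the NameNode HTTP server: java.net.BindException: Port in use: sipnamenode.novalocal:50070, caused by "Cannot assign requested address". That cause usually means the hostname resolves to an IP that is not configured on any local interface, rather than another process holding the port. A minimal check, using the host and IP taken from the log (assumes getent, ip, and netstat are available):

getent hosts sipnamenode.novalocal   # should resolve to an address owned by this machine
ip addr | grep -F 10.0.35.134        # confirm that address is bound to a local interface
netstat -tlnp | grep -w 50070        # rule out a stale process already listening on 50070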
11-01-2018
12:41 PM
In the Apache Metron logical architecture there are modules that are processed by Apache Storm. I understand how Apache Storm normalizes and parses the log data it receives from Kafka, but I am not clear on how Apache Storm tags and validates it.
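From what the Metron docs describe, tagging is configured through fieldTransformations in each sensor's parser config, while validation is configured through fieldValidations in the global config; both are evaluated inside the Storm topologies, and either can be backed by Stellar expressions. A minimal global-config sketch, with the field names as assumptions:

{
  "fieldValidations" : [
    {
      "input" : [ "ip_src_addr", "ip_dst_addr" ],
      "validation" : "IP",
      "config" : { "type" : "IPV4" }
    }
  ]
}

As far as I understand, messages that fail a validation are treated as invalid and routed to error handling rather than indexed normally.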