Member since: 01-26-2018
Posts: 34
Kudos Received: 1
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 790 | 07-12-2018 03:34 PM
 | 926 | 02-21-2018 05:44 AM
 | 892 | 02-21-2018 05:07 AM
10-03-2018
08:42 AM
@Robert Levas : Thanks a lot for the solution; the 'unsupported type' issue was resolved after the Kerberos config changes. I have raised another issue about the Storm SPNEGO load balancer here.
08-14-2018
07:51 PM
@Akhil S Naik : Thanks for the reply. Yes, it was indeed a missing table, though I could not figure out how the table went missing, since everything ran perfectly before enabling Kerberos. Simply creating the tables in HBase solved the issue; I had to create the metron_update, profiler, and user_settings tables in HBase.
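For anyone hitting the same thing, a rough sketch of the commands I mean (the column family names here are assumptions, not confirmed defaults -- check the tables and column families your Metron configuration actually expects):
# Hypothetical sketch: recreate the HBase tables Metron expects.
kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase@EXAMPLE.COM
echo "create 'metron_update', 't'
create 'profiler', 'P'
create 'user_settings', 'cf'" | hbase shell -n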
08-13-2018
03:15 PM
I have set up an HCP 1.6.0 cluster using Ambari 2.6.2.2. After enabling Kerberos in my cluster via the Ambari UI, referring to this link, all my services start fine except Metron. I am getting the following errors in the metron-indexing service:
2018-08-13 14:15:53,017 - Setting Indexing ACL configured to True
2018-08-13 14:15:53,017 - File['/usr/hcp/1.6.0.0-7/metron/config/zookeeper/../metron_indexing_acl_configured'] {'owner': 'metron', 'content': 'This file created on: 2018-08-13 14:15:53', 'mode': 0755}
2018-08-13 14:15:53,018 - Writing File['/usr/hcp/1.6.0.0-7/metron/config/zookeeper/../metron_indexing_acl_configured'] because it doesn't exist
2018-08-13 14:15:53,018 - Changing owner for /usr/hcp/1.6.0.0-7/metron/config/zookeeper/../metron_indexing_acl_configured from 0 to metron
2018-08-13 14:15:53,018 - Changing permission for /usr/hcp/1.6.0.0-7/metron/config/zookeeper/../metron_indexing_acl_configured from 644 to 755
2018-08-13 14:15:53,018 - Setting HBase ACLs for indexing
2018-08-13 14:15:53,018 - kinit command: /usr/bin/kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-sdssystembed@MYCOMPANY.COM; as user: hbase
2018-08-13 14:15:53,018 - Execute['/usr/bin/kinit -kt /etc/security/keytabs/hbase.headless.keytab hbase-sdssystembed@MYCOMPANY.COM; '] {'user': 'hbase'}
2018-08-13 14:15:53,060 - Execute['echo "grant 'metron', 'RW', 'metron_update'" | hbase shell -n'] {'logoutput': False, 'path': ['/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin'], 'tries': 3, 'user': 'hbase', 'try_sleep': 5}
2018-08-13 14:15:58,726 - Retrying after 5 seconds. Reason: Execution of 'echo "grant 'metron', 'RW', 'metron_update'" | hbase shell -n' returned 1. ERROR ArgumentError: Can't find a table: metron_update
2018-08-13 14:16:09,206 - Retrying after 5 seconds. Reason: Execution of 'echo "grant 'metron', 'RW', 'metron_update'" | hbase shell -n' returned 1. ERROR ArgumentError: Can't find a table: metron_update
2018-08-13 14:16:19,705 - Skipping stack-select on METRON because it does not exist in the stack-select package structure.
Am I missing any step? I can see the keytab file for the Metron service on my Metron node. I tried the steps below manually, which I found in the Metron documentation, without any luck:
su metron
kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM
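For completeness, the manual check I ran, sketched out (principal and keytab path as above; adjust the realm for your environment):
su - metron
kinit -kt /etc/security/keytabs/metron.headless.keytab metron@EXAMPLE.COM
klist                          # confirm a ticket was actually obtained
echo "list" | hbase shell -n   # check whether metron_update and friends exist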
Labels:
- Apache Metron
07-12-2018
03:34 PM
1 Kudo
@Lija Mohan, this seems to be a Metron Maven build issue on the Windows platform, caused by the npm build of the metron-config and metron-alerts UI modules. Could you try commenting those out under the metron-interface module's pom as a workaround? Hopefully you don't need the UI projects built locally. I will post here if I find a tweak or fix for the UI module builds.
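As a hedged alternative to editing the pom, something like the following might also skip the UI modules (assuming Maven 3.2.1+, which supports excluding modules with '!'; the module paths are my guess at the artifact layout):
# Sketch only: build everything except the two UI modules.
mvn clean install -DskipTests \
  -pl '!metron-interface/metron-config,!metron-interface/metron-alerts'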
07-10-2018
02:02 PM
Metron's latest version (0.5.0) supports deploying custom parsers as plugins, as per this reference. When can we expect an HCP release that supports the latest Metron? This would make custom parser deployment much easier.
Tags: CyberSecurity, Metron
Labels:
- Apache Metron
06-08-2018
03:39 AM
Unfortunately I terminated a slave instance in my HCP cluster which was hosting HiveServer2, the Hive Metastore, and the MySQL DB. In the Ambari UI I am getting a heartbeat-lost status for the services that were on that instance. To fix this I tried adding a new host to bring back my services using Hosts -> Add New Host. I followed the steps below:
1. Created a new EC2 instance (CentOS 7, same as my other instances).
2. Ran yum update and added the EPEL repo.
3. Set up passwordless authentication from the Ambari server to the new host.
4. Filled in the step 1 parameters: private key and host IP for the new instance.
After step 4 the Ambari UI is stuck (screen capture - suck.png) and does not proceed to the next step. I checked both the ambari-agent and ambari-server logs but couldn't find any issues. What could be the reason? How can I resolve or further investigate this?
Ambari agent log:
INFO 2018-06-08 03:30:29,933 Controller.py:304 - Heartbeat (response id = 30) with server is running...
INFO 2018-06-08 03:30:29,933 Controller.py:311 - Building heartbeat message
INFO 2018-06-08 03:30:29,934 Heartbeat.py:90 - Adding host info/state to heartbeat message.
INFO 2018-06-08 03:30:29,989 logger.py:75 - Testing the JVM's JCE policy to see it if supports an unlimited key length.
INFO 2018-06-08 03:30:30,001 Hardware.py:176 - Some mount points were ignored: /, /dev, /dev/shm, /run, /sys/fs/cgroup, /run/user/1000, /run/user/0
INFO 2018-06-08 03:30:30,001 Controller.py:320 - Sending Heartbeat (id = 30)
INFO 2018-06-08 03:30:30,003 Controller.py:333 - Heartbeat response received (id = 31)
INFO 2018-06-08 03:30:30,003 Controller.py:342 - Heartbeat interval is 10 seconds
INFO 2018-06-08 03:30:30,003 Controller.py:380 - Updating configurations from heartbeat
INFO 2018-06-08 03:30:30,003 Controller.py:389 - Adding cancel/execution commands
INFO 2018-06-08 03:30:30,003 Controller.py:406 - Adding recovery commands
INFO 2018-06-08 03:30:30,003 Controller.py:475 - Waiting 9.9 for next heartbeat
INFO 2018-06-08 03:30:39,904 Controller.py:482 - Wait for next heartbeat over
Ambari server log:
08 Jun 2018 03:26:00,590 INFO [ambari-action-scheduler] ActionScheduler:809 - Removing command from queue, host=ip-172-31-18-247.ec2.internal, commandId=1326-0
08 Jun 2018 03:26:00,590 WARN [ambari-action-scheduler] ExecutionCommandWrapper:225 - Unable to lookup the cluster by ID; assuming that there is no cluster and therefore no configs for this execution command: Cluster not found, clusterName=clusterID=-1
08 Jun 2018 03:26:01,593 WARN [ambari-action-scheduler] ActionScheduler:782 - Host: ip-172-31-18-247.ec2.internal, role: check_host, actionId: 1326-0 expired and will be failed
08 Jun 2018 03:26:01,595 INFO [ambari-action-scheduler] ActionScheduler:809 - Removing command from queue, host=ip-172-31-18-247.ec2.internal, commandId=1326-0
08 Jun 2018 03:26:01,595 WARN [ambari-action-scheduler] ExecutionCommandWrapper:225 - Unable to lookup the cluster by ID; assuming that there is no cluster and therefore no configs for this execution command: Cluster not found, clusterName=clusterID=-1
08 Jun 2018 03:26:02,077 INFO [qtp-ambari-agent-44] HeartBeatHandler:292 - HeartBeatHandler.sendCommands: sending ExecutionCommand for host ip-172-31-27-147.ec2.internal, role check_host, roleCommand ACTIONEXECUTE, and command ID 1326-0, task ID 12200
08 Jun 2018 03:26:02,599 WARN [ambari-action-scheduler] ActionScheduler:782 - Host: ip-172-31-18-247.ec2.internal, role: check_host, actionId: 1326-0 expired and will be failed
08 Jun 2018 03:26:02,601 INFO [ambari-action-scheduler] ActionScheduler:809 - Removing command from queue, host=ip-172-31-18-247.ec2.internal, commandId=1326-0
08 Jun 2018 03:26:02,601 WARN [ambari-action-scheduler] ExecutionCommandWrapper:225 - Unable to lookup the cluster by ID; assuming that there is no cluster and therefore no configs for this execution command: Cluster not found, clusterName=clusterID=-1
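One thing worth checking (a sketch, with placeholder credentials and server URL) is whether the new host ever registered with the Ambari server at all:
# List hosts known to Ambari; the new agent should appear by FQDN.
curl -u admin:admin -H "X-Requested-By: ambari" \
  "http://<ambari-server>:8080/api/v1/hosts"
# If it is missing, verify the agent on the new host points at the right
# server in /etc/ambari-agent/conf/ambari-agent.ini and restart the agent.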
05-10-2018
12:05 PM
@jasper: I am having a similar issue. Could you please take a look? https://community.hortonworks.com/questions/190931/metron-indexing-stops-when-records-processed-reach.html
05-09-2018
02:19 PM
I have an Ambari-managed 10-node HDP cluster (2.5.3.0). I am trying to index my netflow log file (1 million records) using the CSV parser. Since my cluster nodes have relatively little RAM (8 GB), I have throttled indexing traffic using spout.maxUncommittedOffsets = 100000 and topology_max_spout_pending = 100 to avoid resource contention in my Storm workers. My Storm configuration:
- Number of Kafka partitions: 5
- Number of workers: 5
- Storm supervisors: 5
- Number of executors: 5
When I try to index the netflow log file using a Metron sensor (CSV parser), it hangs when it reaches around 0.5 million records. If I reduce maxUncommittedOffsets to 10000, the hang occurs near 50k, so it seems to depend on maxUncommittedOffsets. Each time this happens, if I delete the Kafka topics and restart Storm and Metron, things start working again, then hang once they reach the same level. When I monitor the enrichment and indexing input topics using a Kafka consumer, messages stop arriving on the main sensor input topic.
I have disabled both Solr and Elasticsearch indexing using the indexing configuration in the Metron UI. Could this be affecting the offset commit?
What happens in the Storm topology when messages reach spout.maxUncommittedOffsets? Does it check the committed value from ZooKeeper? I have tried many times, and whenever the issue occurs I see the log entry below continuously in my worker log. Could this be the problem?
topic-partition [netflow-1] has unexpected offset [1118]. Current committed Offset [400359]
Indexing config:
{
"hdfs": {
"batchSize": 50,
"enabled": true,
"index": "netflow"
},
"elasticsearch": {
"batchSize": 1,
"enabled": false,
"index": "netflow"
},
"solr": {
"batchSize": 1,
"enabled": false,
"index": "netflow"
}
}
Spout config:
{
"poll.timeout.ms": 100000,
"session.timeout.ms": 39000,
"max.poll.records": 2000,
"spout.pollTimeoutMs": 20000,
"spout.maxUncommittedOffsets": 100000,
"spout.offsetCommitPeriodMs": 30000
}
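To see where consumption stalls, I believe the committed offsets and lag can be inspected with the Kafka tooling; a sketch (the consumer group name is an assumption, and on older Kafka releases the --new-consumer flag may also be needed):
# Describe committed offsets and lag for the parser's consumer group.
/usr/hdp/current/kafka-broker/bin/kafka-consumer-groups.sh \
  --bootstrap-server <broker>:6667 --describe --group <sensor-consumer-group>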
Labels:
- Apache Metron
- Apache Storm
04-24-2018
12:00 PM
I have an Ambari cluster with 10 nodes, as in the standard deployment architecture documentation, with each node having 8 GB of memory. Indexing smaller files works fine, but when I tried large files of around 0.1 million records (CSV file with the CSV parser / JSON with Grok), indexing stops after writing around 90k records. There seems to be some issue with the Kafka indexing topology. How can I properly debug the issue? I could not find any errors in the Storm UI except a few warnings. I looked at the Storm worker log file, where I could see the log below, which I suppose is causing the issue:
2018-04-24 06:13:36.787 o.a.s.k.s.i.OffsetManager Thread-17-kafkaSpout-executor[9 9] [WARN] topic-partition [newindexing-0] has unexpected offset [393]. Current committed Offset [1809252]
2018-04-24 06:13:36.787 o.a.s.k.s.i.OffsetManager Thread-17-kafkaSpout-executor[9 9] [WARN] topic-partition [newindexing-0] has unexpected offset [394]. Current committed Offset [1809252]
2018-04-24 06:13:36.787 o.a.s.k.s.i.OffsetManager Thread-17-kafkaSpout-executor[9 9] [WARN] topic-partition [newindexing-0] has unexpected offset [395]. Current committed Offset [1809252]
2018-04-24 06:13:36.787 o.a.s.k.s.i.OffsetManager Thread-17-kafkaSpout-executor[9 9] [WARN] topic-partition [newindexing-0] has unexpected offset [396]. Current committed Offset [1809252]
2018-04-24 06:13:36.787 o.a.s.k.s.i.OffsetManager Thread-17-kafkaSpout-executor[9 9] [WARN] topic-partition [newindexing-0] has unexpected offset [397]. Current committed Offset [1809252]
2018-04-24 06:13:36.787 o.a.s.k.s.i.OffsetManager Thread-17-kafkaSpout-executor[9 9] [WARN] topic-partition [newindexing-0] has unexpected offset [398]. Current committed Offset [1809252]
2018-04-24 06:13:36.800 o.a.s.d.executor Thread-5-kafkaSpout-executor[8 8] [INFO] Deactivating spout kafkaSpout:(8)
Where can I find more details in the logs? I got the ID of the topology from the Storm UI and looked under the worker artifacts where the Storm topology is running. I also checked the Storm supervisor logs, where I could see the log lines below.
/hdp/current/storm-supervisor/conf:/hadoop/storm/supervisor/stormdist/dnslog-10-1524558325/stormjar.jar:/etc/hbase/conf:/etc/hadoop/conf' 'org.apache.storm.daemon.worker' 'dnslog-10-1524558325' 'bbacde45-e453-4322-bdb9-27d41ef269b0' '6705' 'b66ba6e7-08be-486a-a6cc-27ec96ccd877'
2018-04-24 08:25:29.064 o.a.s.config Thread-3 [INFO] SET worker-user b66ba6e7-08be-486a-a6cc-27ec96ccd877 storm
2018-04-24 08:25:29.065 o.a.s.d.supervisor Thread-3 [INFO] Creating symlinks for worker-id: b66ba6e7-08be-486a-a6cc-27ec96ccd877 storm-id: dnslog-10-1524558325 to its port artifacts directory
2018-04-24 08:25:29.066 o.a.s.d.supervisor Thread-3 [INFO] Creating symlinks for worker-id: b66ba6e7-08be-486a-a6cc-27ec96ccd877 storm-id: dnslog-10-1524558325 for files(1): ("resources")
2018-04-24 08:25:29.067 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:25:29.567 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:25:30.068 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:25:30.568 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:25:31.068 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:25:31.569 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:25:32.069 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:25:32.569 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:25:33.070 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:25:33.570 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:25:34.072 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:25:34.578 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:25:35.081 o.a.s.d.supervisor Thread-3 [INFO] b66ba6e7-08be-486a-a6cc-27ec96ccd877 still hasn't started
2018-04-24 08:29:51.674 o.a.s.c.healthcheck timer [INFO] ()
2018-04-24 08:34:51.675 o.a.s.c.healthcheck timer [INFO] ()
2018-04-24 08:39:51.675 o.a.s.c.healthcheck timer [INFO] ()
For indexing I have 1 worker running with 832 MB assigned, as per the Storm UI. If this were some memory resource issue, I should see it somewhere in the logs, right? Also, I have created the Kafka topics (indexing and input topics) with 4 partitions.
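For more detail without a restart, a sketch of raising log verbosity on the running topology (topology name is a placeholder from the Storm UI; the logger name assumes the storm-kafka-client spout):
# Temporarily set the kafka spout logger to DEBUG for 300 seconds.
storm set_log_level <topology-name> -l org.apache.storm.kafka.spout=DEBUG:300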
Labels:
- Apache Metron
04-20-2018
01:49 PM
I am trying to add one node to my Ambari cluster (created using the Ambari UI) using the blueprint API. I created the blueprint and tried to add the node to the host group using the Ambari blueprint REST API as below:
curl -i -H "X-Requested-By: ambari" -u admin:pswd -X POST -d @new-host.json <serverurl>:8080/api/v1/clusters/hdpCluster/hosts/<host-id>
I get the following error for this curl request:
{
"status" : 400,
"message" : "Topology validation failed: org.apache.ambari.server.topology.InvalidTopologyException: Unable to retrieve cluster topology for cluster. This is most likely a result of trying to scale a cluster via the API which was created using the Ambari UI. At this time only clusters created via the API using a blueprint can be scaled with this API. If the cluster was originally created via the API as described above, please file a Jira for this matter."
}
Is it not possible to add a node to a cluster created using the Ambari UI? Is there any other way to do the same?
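For reference, a sketch of the non-blueprint path, which I understand works even for UI-created clusters (host FQDN and component name are placeholders): register the host first, then add components one at a time:
# Add the (already agent-registered) host to the cluster.
curl -u admin:pswd -H "X-Requested-By: ambari" -X POST \
  "<serverurl>:8080/api/v1/clusters/hdpCluster/hosts/<new-host-fqdn>"
# Then attach components to it, e.g. a DataNode.
curl -u admin:pswd -H "X-Requested-By: ambari" -X POST \
  "<serverurl>:8080/api/v1/clusters/hdpCluster/hosts/<new-host-fqdn>/host_components/DATANODE"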
Labels:
- Apache Ambari
03-20-2018
08:47 AM
@asubramanian : Is HS the Hive Server? If not, which node is the optimal place to put the Hive Server? I have fairly limited resources on every node for testing purposes (8 GB RAM & dual core).
03-01-2018
11:34 AM
@D M Please provide an example log format. You can use the Grok parser with a custom Grok statement.
02-21-2018
05:44 AM
I was able to resolve my issue. Thanks a lot @Jay Kumar SenSharma. I have added the solution here.
02-21-2018
05:07 AM
I figured out the root cause, and fixing it solved my issue. The root cause was the 5th point in this link. It seems that after I reduced the available space on the EBS volume, I had to decrease the 'Reserved space for HDFS' in the Ambari HDFS service advanced configuration; this is the dfs.datanode.du.reserved property, which was higher than the available space. Once I brought it down, everything was back to normal 🙂
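A quick way to sanity-check this (a sketch; the /data mount is from my setup) is to compare the reserved value against what the volume actually has free:
hdfs getconf -confKey dfs.datanode.du.reserved   # bytes reserved per volume
df -B1 /data                                     # bytes available on the mount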
02-20-2018
04:57 PM
I have an Ambari-managed 10-node HDP cluster (2.5.0) deployed on Amazon EC2 instances running CentOS 7. I mounted an EBS volume under the /data mount point and configured that as the namenode and datanode directories. Everything was working fine. For some reason I had to change the EBS volume, so I followed the steps below:
1. Stopped all services from Ambari.
2. Mounted the new EBS volume under the /data mount point.
3. Restarted all Amazon EC2 instances.
4. Started services using Ambari.
After the 4th step my HDFS is not working properly, and hence the HBase service is also failing. I am not getting any errors on either datanode or namenode start, and the status in Ambari shows green. When I run hdfs dfsadmin -report I get the following output:
[hdfs@ip-172-31-29-141 ~]$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 131072 (128 KB)
DFS Remaining: 0 (0 B)
DFS Used: 131072 (128 KB)
DFS Used%: 100.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Live datanodes (4):
Name: 172.31.31.118:50010 (ip-172-31-31-118.ec2.internal)
Hostname: ip-172-31-31-118.ec2.internal
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 32768 (32 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Tue Feb 20 16:26:29 UTC 2018
Name: 172.31.31.114:50010 (ip-172-31-31-114.ec2.internal)
Hostname: ip-172-31-31-114.ec2.internal
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 32768 (32 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Tue Feb 20 16:26:29 UTC 2018
Name: 172.31.18.247:50010 (ip-172-31-18-247.ec2.internal)
Hostname: ip-172-31-18-247.ec2.internal
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 32768 (32 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Tue Feb 20 16:26:29 UTC 2018
Name: 172.31.28.137:50010 (ip-172-31-28-137.ec2.internal)
Hostname: ip-172-31-28-137.ec2.internal
Decommission Status : Normal
Configured Capacity: 0 (0 B)
DFS Used: 32768 (32 KB)
Non DFS Used: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used%: 100.00%
DFS Remaining%: 0.00%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 2
Last contact: Tue Feb 20 16:26:29 UTC 2018
The issue is that my HBase service is not starting. The error I get in the HBase log file is as follows:
2018-02-20 11:56:44,465 WARN [Thread-70] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /apps/hbase/data/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 4 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1649)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3198)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3122)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:843)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:500)
I also get a similar error when I try to put a file into HDFS via the command line. The error I get for the command 'hdfs dfs -put ./x2.txt /' is:
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /x2.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 4 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1649)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3198)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3122)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:843)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:500)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1552)
at org.apache.hadoop.ipc.Client.call(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1396)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:457)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
What could be causing this?
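As a first pass I would sanity-check the mount and directory ownership on a datanode (a sketch; the /data paths are from my setup and the datanode directory is a guess -- check dfs.datanode.data.dir):
mount | grep /data              # is the new EBS volume actually mounted?
df -h /data                     # does it show the expected capacity?
ls -ld /data/hadoop/hdfs/data   # hypothetical datanode dir; must be owned by hdfs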
02-20-2018
11:04 AM
@Jay Kumar SenSharma In addition to that, my HBase service is not able to start due to the error below. The EBS volume I changed holds the datanode directory I configured, and I had data in HBase before doing that. Do I need to do anything else to avoid this?
2018-02-20 11:56:44,465 WARN [Thread-70] hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /apps/hbase/data/.tmp/hbase.version could only be replicated to 0 nodes instead of minReplication (=1). There are 4 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1649)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3198)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3122)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:843)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:500)
02-20-2018
05:00 AM
@Jay Kumar SenSharma
When I stop all services and start them again, I get the error below when the Zeppelin notebook service starts:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0/package/scripts/master.py", line 522, in <module>
Master().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0/package/scripts/master.py", line 254, in start
self.create_zeppelin_dir(params)
File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0/package/scripts/master.py", line 89, in create_zeppelin_dir
replace_existing_files=True,
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 604, in action_create_on_execute
self.action_delayed("create")
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 601, in action_delayed
self.get_hdfs_resource_executor().action_delayed(action_name, self)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 336, in action_delayed
self._create_resource()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 352, in _create_resource
self._create_file(self.main_resource.resource.target, source=self.main_resource.resource.source, mode=self.mode)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 467, in _create_file
self.util.run_command(target, 'CREATE', method='PUT', overwrite=True, assertable_result=False, file_to_put=source, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
return self._run_command(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 235, in _run_command
_, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/current/zeppelin-server/interpreter/spark/dep/zeppelin-spark-dependencies-0.6.0.2.5.3.0-37.jar -H 'Content-Type: application/octet-stream' 'http://ip-172-31-31-102.ec2.internal:50070/webhdfs/v1/apps/zeppelin/zeppelin-spark-dependencies-0.6.0.2.5.3.0-37.jar?op=CREATE&user.name=hdfs&overwrite=True&permission=444' 1>/tmp/tmp0f3h5s 2>/tmp/tmpLIZ7_n' returned 55. curl: (55) Send failure: Connection reset by peer
201
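Since the failure is a connection reset against port 50070, a sketch of testing WebHDFS directly from the Zeppelin host (namenode host taken from the error above):
curl -sS "http://ip-172-31-31-102.ec2.internal:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs"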
02-19-2018
06:27 PM
I have set up an Ambari cluster with 9 nodes. Everything was working perfectly. For some reason I had to move the data directory I configured for HDFS to another disk. I stopped all the services, mounted the new disk properly at the same old mount point, and rebooted all the nodes after mounting the new device. Now when I start all services again, all the services fail. What could be the reason? Do I need to do any additional step here?
Labels:
- Apache Ambari
02-16-2018
12:29 PM
@George Vetticaden : I have tried the above steps in my HCP cluster with HDP 2.5.3.0, along with the Metron Management UI. I don't need to do step 2, right? It is the same as the enrichment configuration done via the Metron UI, right? My enrichment configuration JSON is as follows; will this suffice for step 2? I ran the flat file loader script without the -n option:
/usr/metron/0.1BETA/bin/flatfile_loader.sh -i whois_ref.csv -t enrichment -c t -e extractor_config.json
{
"enrichment": {
"fieldMap": {},
"fieldToTypeMap": {
"url": [
"whois"
]
},
"config": {}
},
"threatIntel": {
"fieldMap": {},
"fieldToTypeMap": {},
"config": {},
"triageConfig": {
"riskLevelRules": [],
"aggregator": "MAX",
"aggregationConfig": {}
}
},
"configuration": {}
}
Unfortunately my enrichment is not working. The message arriving on the indexing Kafka topic is as follows:
{"code":200,"method":"GET","enrichmentsplitterbolt.splitter.end.ts":"1518783891207","enrichmentsplitterbolt.splitter.begin.ts":"1518783891207","is_alert":"true","url":"https:\/\/www.woodlandworldwide.com\/","source.type":"newtest","elapsed":2033,"ip_dst_addr":"182.71.43.17","original_string":"1518783890.244 2033 127.0.0.1 TCP_MISS\/200 49602 GET https:\/\/www.woodlandworldwide.com\/ - HIER_DIRECT\/182.71.43.17 text\/html\n","threatintelsplitterbolt.splitter.end.ts":"1518783891211","threatinteljoinbolt.joiner.ts":"1518783891213","bytes":49602,"enrichmentjoinbolt.joiner.ts":"1518783891209","action":"TCP_MISS","guid":"40ff89bf-71a1-4eec-acfd-d89886c9ce7f","threatintelsplitterbolt.splitter.begin.ts":"1518783891211","ip_src_addr":"127.0.0.1","timestamp":1518783890244}
I have tried adding both https://www.woodlandworldwide.com and just woodlandworldwide.com, as in your example, but no luck. How does Metron query the HBase table? Will it query for a domain similar to the URL?
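To rule out the load step, a sketch of verifying that the flat file loader actually wrote rows into HBase (table name from the -t flag above; LIMIT keeps the scan small):
echo "scan 'enrichment', {LIMIT => 10}" | hbase shell -n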
02-07-2018
05:04 AM
@Otto Fowler Thanks a lot for your support. I finally resolved the issue. The root cause was that in the sensor configuration JSON we have to add "timestampField": "timestamp" explicitly under the 'parserConfig' key. Adding that and then stopping and starting the sensor using the Metron UI solved the issue.
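For reference, a sketch of pushing the corrected parser config to ZooKeeper from the command line instead of the UI (paths follow common Metron defaults and may differ on HCP):
export METRON_HOME=/usr/metron/0.4.0
# Push the local zookeeper configs (including the edited parser json).
$METRON_HOME/bin/zk_load_configs.sh -m PUSH \
  -z <zookeeper>:2181 -i $METRON_HOME/config/zookeeper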
Only then, as you said, does the parser create a proper timestamp as a long. Now I get the error below in the indexing bolt when checked via the Storm UI. Any idea what could be causing this?
java.io.FileNotFoundException:
/apps/metron/indexing/indexed/test/enrichment-hdfsIndexingBolt-3-0-1517978596427.json
(No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:101)
at org.apache.metron.writer.hdfs.SourceHandler.createOutputFile(SourceHandler.java:156)
at
02-07-2018
04:34 AM
@Otto Fowler How can I restart the parser topology in Storm? I could not see an option in the Storm UI. I restarted both the Storm server and the Metron components using Ambari after changing the sensor configuration via the Metron UI, but the same error message appears. Do I need to restart any other component, or am I missing a step?
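In case it helps someone, my understanding of the command-line equivalent, sketched with placeholder endpoints (the Metron UI start/stop does roughly this kill-and-resubmit):
storm kill <sensor-name>
$METRON_HOME/bin/start_parser_topology.sh \
  -k <broker>:6667 -z <zookeeper>:2181 -s <sensor-name>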
02-06-2018
02:17 PM
@Otto Fowler I tried, but I get the same validation failure message. The log coming in worker.log is:
2018-02-06 13:19:50.949 o.a.m.p.GrokParser Thread-13-parserBolt-executor[5 5] [DEBUG] Grok parser did not validate message: {elapsed=2288, code=200, ip_dst_addr=182.71.43.17, original_string=1517923190.328 2288 127.0.0.1 TCP_MISS/200 48615 GET https://www.woodlandworldwide.com/ - HIER_DIRECT/182.71.43.17 text/html, method=GET, bytes=48615, action=TCP_MISS, guid=a279b46f-ad0b-4d65-b6be-6e8a3e94e79f, ip_src_addr=127.0.0.1, url=https://www.woodlandworldwide.com/, timestamp=1517923190.328, source.type=test}
The configuration given in the Metron UI is:
SQUID_DELIMITED %{NUMBER:timestamp}[^0-9]*%{INT:elapsed} %{IP:ip_src_addr} %{WORD:action}/%{NUMBER:code} %{NUMBER:bytes} %{WORD:method} %{NOTSPACE:url}[^0-9]*(%{IP:ip_dst_addr})?
My squid log message is:
1517923190.328 2288 127.0.0.1 TCP_MISS/200 48615 GET https://www.woodlandworldwide.com/ - HIER_DIRECT/182.71.43.17 text/html
02-06-2018
01:41 PM
@Otto Fowler I added the sensor configuration directly via the command line, without using the Metron UI, after seeing this issue. I am talking about step 4 in this link. The parser bolt worked correctly: it pushed the parsed message to the configured Kafka topic 'indexing'. But the indexing bolt did not work; it gave me the error below in the 'indexing' topic:
"error_type":"indexing_error","message":"\/apps\/metron\/indexing\/indexed\/error\/enrichment-hdfsIndexingBolt-3-0-1517911841271.json (No such file or directory)
02-06-2018
10:54 AM
@Otto Fowler Thanks for the reply. As per the code snippet, it invalidates the message if the timestamp is < 0 or not an instance of long, right? My squid log message is:
1517911793.839 2186 127.0.0.1 TCP_MISS/200 48493 GET https://www.woodlandworldwide.com/ - HIER_DIRECT/182.71.43.17 text/html
And my grok parser expression is:
SQUID_DELIMITED %{NUMBER:timestamp}.*%{INT:elapsed} %{IP:ip_src_address} %{WORD:action}/%{NUMBER:code} %{NUMBER:bytes} %{WORD:method} %{NOTSPACE:url}.*%{IP:ip_dst_addr}
It parses the timestamp as 1517911793.839, so the failing check here would be the long check, right? Why is a check like that present? Squid's timestamp format is like this, right? What should I do in such a case? I even tried directly pushing a message without the decimal in the timestamp, but that also failed validation. Another interesting thing: if I add the Storm config directly (not via the UI), validation works fine.
02-05-2018
04:43 PM
Hi,
I am trying to run Metron on my Ambari-managed HCP cluster with HDP 2.5 and Metron 0.4.0. I have configured the Storm topology using the Metron UI. When I populate my input Kafka topic with squid client logs using NiFi, nothing arrives on the indexing/enrichments Kafka topics. I am following this link. The Grok statement is:
SQUID_DELIMITED %{NUMBER:timestamp}.*%{INT:elapsed} %{IP:ip_src_address} %{WORD:action}/%{NUMBER:code} %{NUMBER:bytes} %{WORD:method} %{NOTSPACE:url}.*%{IP:ip_dst_addr}
The sensor parser configuration is as below:
{
"parserClassName": "org.apache.metron.parsers.GrokParser",
"filterClassName": null,
"sensorTopic": "newcheck",
"writerClassName": null,
"errorWriterClassName": null,
"invalidWriterClassName": null,
"readMetadata": false,
"mergeMetadata": false,
"numWorkers": null,
"numAckers": null,
"spoutParallelism": 1,
"spoutNumTasks": 1,
"parserParallelism": 1,
"parserNumTasks": 1,
"errorWriterParallelism": 1,
"errorWriterNumTasks": 1,
"spoutConfig": {},
"securityProtocol": null,
"stormConfig": {},
"parserConfig": {
"grokPath": "/apps/metron/patterns/newcheck",
"patternLabel": "NEWCHECK"
},
"fieldTransformations": [
{
"input": [],
"output": [
"ip_dst_addr_copy"
],
"transformation": "STELLAR",
"config": {
"ip_dst_addr_copy": "DOMAIN_TO_TLD(DOMAIN_REMOVE_SUBDOMAINS(ip_dst_addr))"
}
}
]
}
Enrichment configuration:
{
"enrichment": {
"fieldMap": {},
"fieldToTypeMap": {},
"config": {}
},
"threatIntel": {
"fieldMap": {},
"fieldToTypeMap": {},
"config": {},
"triageConfig": {
"riskLevelRules": [],
"aggregator": "MAX",
"aggregationConfig": {}
}
},
"configuration": {}
}
Indexing configuration:
{
"hdfs": {
"batchSize": 1,
"enabled": true,
"index": "newcheck"
},
"elasticsearch": {
"batchSize": 1,
"enabled": true,
"index": "newcheck"
},
"solr": {
"batchSize": 1,
"enabled": true,
"index": "newcheck"
}
}
When I checked the topology worker logs (using the ID from the Storm UI), I get the messages below with debug-level logging configured:
2018-02-05 15:11:43.991 o.a.s.k.s.KafkaSpoutRetryExponentialBackoff Thread-15-kafkaSpout-executor[4 4] [DEBUG] Topic partitions with entries ready to be retried [[]]
2018-02-05 15:11:43.993 o.a.m.p.GrokParser Thread-13-parserBolt-executor[5 5] [DEBUG] Grok parser parsing message: 1517843503.788 1008 127.0.0.1 TCP_MISS/200 48493 GET https://www.woodlandworldwide.com/ - HIER_DIRECT/182.71.43.17 text/html
2018-02-05 15:11:43.994 o.a.m.p.GrokParser Thread-13-parserBolt-executor[5 5] [DEBUG] Grok parser parsed message: {elapsed=8, code=200, ip_dst_addr=182.71.43.17, original_string=1517843503.788 1008 127.0.0.1 TCP_MISS/200 48493 GET https://www.woodlandworldwide.com/ - HIER_DIRECT/182.71.43.17 text/html, ip_src_address=127.0.0.1, method=GET, bytes=48493, action=TCP_MISS, url=https://www.woodlandworldwide.com/, timestamp=1517843503.788}
2018-02-05 15:11:43.994 o.a.m.p.GrokParser Thread-13-parserBolt-executor[5 5] [DEBUG] Grok parser validating message: {code=200, ip_src_address=127.0.0.1, method=GET, url=https://www.woodlandworldwide.com/, source.type=newcheck, elapsed=8, ip_dst_addr=182.71.43.17, original_string=1517843503.788 1008 127.0.0.1 TCP_MISS/200 48493 GET https://www.woodlandworldwide.com/ - HIER_DIRECT/182.71.43.17 text/html, ip_dst_addr_copy=null, bytes=48493, action=TCP_MISS, guid=f02b21fe-9a5b-4114-921a-82bf64ebb79b, timestamp=1517843503.788}
2018-02-05 15:11:43.994 o.a.m.p.GrokParser Thread-13-parserBolt-executor[5 5] [DEBUG] Grok parser did not validate message: {code=200, ip_src_address=127.0.0.1, method=GET, url=https://www.woodlandworldwide.com/, source.type=newcheck, elapsed=8, ip_dst_addr=182.71.43.17, original_string=1517843503.788 1008 127.0.0.1 TCP_MISS/200 48493 GET https://www.woodlandworldwide.com/ - HIER_DIRECT/182.71.43.17 text/html, ip_dst_addr_copy=null, bytes=48493, action=TCP_MISS, guid=f02b21fe-9a5b-4114-921a-82bf64ebb79b, timestamp=1517843503.788}
Is the 'Grok parser did not validate message' entry in the log above causing this issue? Where else should I check to find out more?
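To confirm whether anything makes it past the parser bolt, a sketch of watching the downstream topics directly (broker address is a placeholder; on older Kafka releases --zookeeper may be needed instead):
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
  --bootstrap-server <broker>:6667 --topic enrichments --from-beginning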
Labels:
- Apache Metron
02-05-2018
06:56 AM
@Jay Kumar SenSharma : Thanks for the response; it solved the issue. I moved the lib jar to /usr/local/bin and gave ownership and permission to the metron user account, and now it works.
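Roughly what I did, sketched out (the jar version is a placeholder):
cp mysql-connector-java-<version>.jar /usr/local/bin/
chown metron:metron /usr/local/bin/mysql-connector-java-<version>.jar
chmod 644 /usr/local/bin/mysql-connector-java-<version>.jar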
01-27-2018
04:03 AM
Oh sorry, this was made by mistake. Will close this one.
01-26-2018
09:13 PM
The Metron REST service stops as soon as it is started. I am trying this on a CentOS 7 Amazon EC2 instance. The error I get on the configured host is:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.apache.tomcat.jdbc.pool.DataSource]: Factory method 'dataSource' threw exception; nested exception is java.lang.IllegalStateException: Cannot load driver class: com.mysql.jdbc.Driver
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:189)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
... 41 more
Caused by: java.lang.IllegalStateException: Cannot load driver class: com.mysql.jdbc.Driver
at org.springframework.util.Assert.state(Assert.java:392)
at org.springframework.boot.autoconfigure.jdbc.DataSourceProperties.determineDriverClassName(DataSourceProperties.java:214)
at org.springframework.boot.autoconfigure.jdbc.DataSourceProperties.initializeDataSourceBuilder(DataSourceProperties.java:174)
at org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration.createDataSource(DataSourceConfiguration.java:42)
at org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration$Tomcat.dataSource(DataSourceConfiguration.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Nati
It seems the JDBC driver class cannot be loaded. I have downloaded the MySQL connector jar on the host, and the jar path is configured in the Metron JDBC client path. screenshot-from-2018-01-26-22-09-07.png
Tags: CyberSecurity, Metron
Labels:
- Apache Metron
01-26-2018
04:52 PM
I am trying to start Metron REST in an Ambari cluster with CentOS 7 Amazon EC2 instance nodes. When I start Metron REST I get the error below in the Metron REST node log file:
at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202)
at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:207)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1128)
at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1056)
at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:835)
at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:741)
... 28 more
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.apache.tomcat.jdbc.pool.DataSource]: Factory method
'dataSource' threw exception; nested exception is java.lang.IllegalStateException: Cannot load driver class: com.mysql.jdbc.Driver
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:189)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
... 41 more
Caused by: java.lang.IllegalStateException: Cannot load driver class: com.mysql.jdbc.Driver
at org.springframework.util.Assert.state(Assert.java:392)
at org.springframework.boot.autoconfigure.jdbc.DataSourceProperties.determineDriverClassName(DataSourceProperties.java:214)
at org.springframework.boot.autoconfigure.jdbc.DataSourceProperties.initializeDataSourceBuilder(DataSourceProperties.java:174)
at org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration.createDataSource(DataSourceConfiguration.java:42)
at org.springframework.boot.autoconfigure.jdbc.DataSourceConfiguration$Tomcat.dataSource(DataSourceConfiguration.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162)
... 42 more
What could be the issue? I have installed and configured the MySQL database properly on the node, the MySQL JDBC jar has been downloaded to the node, and the path is configured in the configuration as below.
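One check worth doing (a sketch; the jar path is a placeholder) is to confirm the configured jar really contains the driver class the error complains about:
unzip -l /path/to/mysql-connector-java.jar | grep -i 'jdbc/Driver'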
Labels:
- Apache Ambari
- Apache Metron