Member since: 09-26-2016
Posts: 16
Kudos Received: 1
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1489 | 05-26-2017 09:17 PM
07-20-2017
05:33 PM
@Sebastien Lehuede: No, I didn't find the root cause. But in my case it was fairly easy to use the CSV parser instead of the Grok parser, with slight modifications to my telemetry source. You can find more details about the CSV parser here: https://metron.apache.org/current-book/metron-platform/metron-parsers/index.html

Hope this helps.
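For reference, a CSV parser config looks roughly like the sketch below. The field names and topic here are placeholders, and the exact options may differ by Metron version, so check the linked docs:

```json
{
  "parserClassName": "org.apache.metron.parsers.csv.CSVParser",
  "sensorTopic": "my_telemetry_source",
  "parserConfig": {
    "columns": {
      "hostname": 0,
      "timestamp": 1
    },
    "timestampField": "timestamp"
  }
}
```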
05-26-2017
11:48 PM
So, I used 'global' instead of 'my_entity_id' with the same settings. After some time, I can see parser_score values (though they don't seem to be computed correctly). So the issue with not getting scores seems to be related to how 'my_entity_id' values are parsed/interpreted, or to my potentially incorrect use of it. I still get the same exception in the Stellar shell, though. As a side note, all my entities (as visible via the Elasticsearch head plugin) have the same value for 'my_entity_id' in my simple test, so there's no semantic difference between using 'global' or 'my_entity_id' as the entity id in the profile. Apart from the problem with the shell, I'm curious about what I'm not doing right when using 'my_entity_id'. In other words, is there anything wrong with defining the profile below?

{
  "profiles": [
    {
      "profile": "my_profile_name",
      "foreach": "my_entity_id",
      "onlyif": "true",
      "init": {
        "s": "OUTLIER_MAD_STATE_MERGE(PROFILE_GET('my_profile_name', 'my_entity_id', PROFILE_FIXED(2, 'MINUTES')))"
      },
      "update": {
        "s": "OUTLIER_MAD_ADD(s, my_double_value)"
      },
      "result": "s"
    }
  ]
}

and/or the following enrichment config:

{
  "enrichment": {
    "fieldMap": {
      "stellar": {
        "config": {
          "parser_score": "OUTLIER_MAD_SCORE(OUTLIER_MAD_STATE_MERGE(PROFILE_GET('my_profile_name', 'my_entity_id', PROFILE_FIXED(5, 'MINUTES'))), my_double_value)",
          "is_alert": "if parser_score > 3.5 then true else is_alert"
        }
      }
    },
    "fieldToTypeMap": { }
  },
  "threatIntel": {
    "fieldMap": { },
    "fieldToTypeMap": { },
    "triageConfig": {
      "riskLevelRules": [
        {
          "rule": "parser_score > 3.5",
          "score": 10
        }
      ],
      "aggregator": "MAX"
    }
  }
}

Thanks in advance for your help!
05-26-2017
10:51 PM
The enrichment configuration discussed in this link seems to have an error:

{
  "index": "mad",
  "batchSize": 1,
  "enrichment": {
    "fieldMap": {
      "stellar": {
        "config": {
          "parser_score": "OUTLIER_MAD_SCORE(OUTLIER_MAD_STATE_MERGE(PROFILE_GET('sketchy_mad', 'global', PROFILE_FIXED(10, 'MINUTES'))), value)",
          "is_alert": "if parser_score > 3.5 then true else is_alert"
        }
      }
    },
    "fieldToTypeMap": { }
  },
  "threatIntel": {
    "fieldMap": { },
    "fieldToTypeMap": { },
    "triageConfig": {
      "riskLevelRules": [
        {
          "rule": "parser_score > 3.5",
          "score": 10
        }
      ],
      "aggregator": "MAX"
    }
  }
}

I think the first two keys ("index" and "batchSize") should appear in the corresponding indexing configuration, not the enrichment configuration. Using the enrichment config as-is results in a parse error when pushing the configuration to ZooKeeper. I just wanted to confirm this is the case. If not, what exactly do those two keys mean?
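If that reading is correct, those settings would instead live in the sensor's indexing configuration, per writer, roughly like this sketch (writer names as used for other sensors in this thread; adjust to your setup):

```json
{
  "elasticsearch": {
    "index": "mad",
    "batchSize": 1,
    "enabled": true
  },
  "hdfs": {
    "index": "mad",
    "batchSize": 1,
    "enabled": true
  }
}
```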
Labels:
- Apache Metron
05-26-2017
09:17 PM
I eventually got around this problem by using the CSV parser instead of Grok. I was using Metron 0.3.0 at the time and am not sure whether the Grok issue has been resolved, but I don't have any problems with the CSV parser in the current version, 0.3.1.
05-26-2017
09:11 PM
Using the Metron 0.3.1 release, I am following the instructions here to create a profiler for my ingested data stream and perform statistical outlier analysis on it. I can confirm that the data is parsed, enriched, and indexed in Elasticsearch correctly. The parsing, enrichment, indexing, and profiler topologies are all running without any errors, and I can see that the profiler is writing into HBase. However, my "parser_score" as stored in Elasticsearch is null, even when an outlier is pushed into the stream, and I'm trying to debug why. I'm using the Stellar shell for that purpose. Once I get the message that all functions are loaded successfully, and after waiting sufficiently (20 minutes) for the data to be populated in HBase, I run the following command to get a profile:

PROFILE_GET('my_profile_name', 'my_entity_id', PROFILE_FIXED(2, 'MINUTES'))

which results in the following exception in the Stellar shell:

[!] Unable to execute: Found interface org.objectweb.asm.MethodVisitor, but class was expected
org.apache.metron.common.dsl.ParseException: Unable to execute: Found interface org.objectweb.asm.MethodVisitor, but class was expected
at org.apache.metron.common.stellar.StellarCompiler.getResult(StellarCompiler.java:409)
at org.apache.metron.common.stellar.BaseStellarProcessor.parse(BaseStellarProcessor.java:127)
at org.apache.metron.common.stellar.shell.StellarExecutor.execute(StellarExecutor.java:275)
at org.apache.metron.common.stellar.shell.StellarShell.executeStellar(StellarShell.java:373)
at org.apache.metron.common.stellar.shell.StellarShell.handleStellar(StellarShell.java:276)
at org.apache.metron.common.stellar.shell.StellarShell.execute(StellarShell.java:412)
at org.jboss.aesh.console.AeshProcess.run(AeshProcess.java:53)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IncompatibleClassChangeError: Found interface org.objectweb.asm.MethodVisitor, but class was expected
at com.esotericsoftware.reflectasm.ConstructorAccess.insertConstructor(ConstructorAccess.java:124)
at com.esotericsoftware.reflectasm.ConstructorAccess.get(ConstructorAccess.java:95)
at org.apache.metron.common.utils.SerDeUtils$DefaultInstantiatorStrategy.newInstantiatorOf(SerDeUtils.java:129)
at com.esotericsoftware.kryo.Kryo.newInstantiator(Kryo.java:1078)
at com.esotericsoftware.kryo.Kryo.newInstance(Kryo.java:1087)
at com.esotericsoftware.kryo.serializers.FieldSerializer.create(FieldSerializer.java:570)
at com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:546)
at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790)
at org.apache.metron.common.utils.SerDeUtils.fromBytes(SerDeUtils.java:249)
at org.apache.metron.profiler.client.HBaseProfilerClient.lambda$get$4(HBaseProfilerClient.java:160)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at org.apache.metron.profiler.client.HBaseProfilerClient.get(HBaseProfilerClient.java:160)
at org.apache.metron.profiler.client.HBaseProfilerClient.fetch(HBaseProfilerClient.java:139)
at org.apache.metron.profiler.client.stellar.GetProfile.apply(GetProfile.java:170)
at org.apache.metron.common.stellar.StellarCompiler.exitTransformationFunc(StellarCompiler.java:267)
at org.apache.metron.common.stellar.generated.StellarParser$TransformationFuncContext.exitRule(StellarParser.java:1689)
at org.antlr.v4.runtime.Parser.triggerExitRuleEvent(Parser.java:422)
at org.antlr.v4.runtime.Parser.exitRule(Parser.java:632)
at org.apache.metron.common.stellar.generated.StellarParser.functions(StellarParser.java:1712)
at org.apache.metron.common.stellar.generated.StellarParser.arithmetic_operands(StellarParser.java:1846)
at org.apache.metron.common.stellar.generated.StellarParser.arithmetic_expr_mul(StellarParser.java:1609)
at org.apache.metron.common.stellar.generated.StellarParser.arithmetic_expr(StellarParser.java:1469)
at org.apache.metron.common.stellar.generated.StellarParser.transformation_expr(StellarParser.java:308)
at org.apache.metron.common.stellar.generated.StellarParser.transformation(StellarParser.java:149)
at org.apache.metron.common.stellar.BaseStellarProcessor.parse(BaseStellarProcessor.java:126)
... 8 more

I have tried double quotes instead of single quotes in the command, but the result is the same. Any thoughts about what might be causing the issue? My profiler config is very similar to what is described here, except that instead of capturing a 'global' statistical state for my 'value', I'm capturing that state per 'my_entity_id'. Also, I'm setting the following in my profiler.properties, without changing the other configuration parameters:

profiler.period.duration=1
profiler.period.duration.units=MINUTES

I'm not sure whether the issue I'm facing in the Stellar shell is related to why I'm not getting parser scores, but it looks like something is not working correctly when calling "PROFILE_GET". I appreciate your help.
Labels:
- Apache Metron
03-29-2017
12:23 AM
@asubramanian I can confirm that escaping the braces does not work. Here's a simplified version of my problem:

Telemetry input (I'm using NiFi with the JoltTransform processor to run this test):
{"hostname":"my.machine","timestamp":"1490746284"}

Grok pattern 1:
{"hostname":"%{HOSTNAME:hostname}","timestamp":"%{NUMBER:timestamp}"}

Grok pattern 2:
\{"hostname":"%{HOSTNAME:hostname}","timestamp":"%{NUMBER:timestamp}"\}

They both result in the following error in Metron's parser bolt:

java.lang.IllegalStateException: Grok parser Error: Grok statement produced a null message. Original message was: {"hostname":"my.machine","timestamp":"1490746284"} and the parsed message was: {}

Note that pattern 1 is a match according to the Grok constructor website: http://grokconstructor.appspot.com/ I'm not sure what's going on. Is this a bug in Metron's Grok parser, or am I doing something wrong? I greatly appreciate your help.
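As a sanity check outside Metron, a hand-expanded stand-in for pattern 2 (literal braces escaped, with simplified character classes standing in for the HOSTNAME and NUMBER grok primitives) does match the sample line when tested as a plain regex, here in Python, which suggests the pattern shape itself is not the problem:

```python
import re

# Hand-expanded stand-in for grok pattern 2: literal braces escaped,
# HOSTNAME/NUMBER replaced by simplified character classes for this check.
pattern = re.compile(
    r'\{"hostname":"(?P<hostname>[\w.-]+)","timestamp":"(?P<timestamp>\d+)"\}'
)

line = '{"hostname":"my.machine","timestamp":"1490746284"}'
match = pattern.match(line)
print(match.group("hostname"))   # my.machine
print(match.group("timestamp"))  # 1490746284
```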
03-28-2017
10:43 PM
@asubramanian, Strange... After restarting everything I'm getting the initial parse error again.
03-28-2017
06:46 PM
@asubramanian, I forgot to add my indexing configuration to the previous post. Here it is:

{
  "elasticsearch": {
    "index": "my_telemetry_source",
    "batchSize": 5,
    "enabled": true
  },
  "hdfs": {
    "index": "my_telemetry_source",
    "batchSize": 5,
    "enabled": true
  }
}
03-28-2017
06:41 PM
Hi @asubramanian, Thanks for your response. I made your suggested change, and I'm not getting the parsing error any more. I can't see the data getting indexed in Elasticsearch, though. This is likely unrelated to the parsing error, but I'm not sure. I'm not getting any data corresponding to my telemetry source, but I am getting the following error for the snort sensor from hdfsIndexingBolt:

org.apache.hadoop.security.AccessControlException: Permission denied: user=storm, access=WRITE, inode="/apps/metron/indexing/indexed/snort/enrichment-null-0-0-1490726017771.json":metron:metron:drwxrwxr-x at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2598) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2533) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2417) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:729) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:405) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307) at sun.reflect.GeneratedConstructorAccessor31.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1628) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1703) at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1638) at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448) at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444) at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:459) at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787) at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:776) at org.apache.metron.writer.hdfs.SourceHandler.createOutputFile(SourceHandler.java:142) at 
org.apache.metron.writer.hdfs.SourceHandler.initialize(SourceHandler.java:98) at org.apache.metron.writer.hdfs.SourceHandler.<init>(SourceHandler.java:65) at org.apache.metron.writer.hdfs.HdfsWriter.getSourceHandler(HdfsWriter.java:102) at org.apache.metron.writer.hdfs.HdfsWriter.write(HdfsWriter.java:77) at org.apache.metron.writer.BulkWriterComponent.write(BulkWriterComponent.java:138) at org.apache.metron.writer.bolt.BulkMessageWriterBolt.execute(BulkMessageWriterBolt.java:115) at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) at clojure.lang.AFn.run(AFn.java:22) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=storm, access=WRITE, inode="/apps/metron/indexing/indexed/snort/enrichment-null-0-0-1490726017771.json":metron:metron:drwxrwxr-x at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213) at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190) at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811) at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2598) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2533) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2417) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:729) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:405) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313) at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307) at org.apache.hadoop.ipc.Client.call(Client.java:1476) at org.apache.hadoop.ipc.Client.call(Client.java:1407) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) at com.sun.proxy.$Proxy46.create(Unknown Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296) at sun.reflect.GeneratedMethodAccessor46.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) at com.sun.proxy.$Proxy48.create(Unknown Source) at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1623) ... 28 more

Do I need to set up permissions for the storm user on HDFS? How do you suggest I test that the data from my telemetry source successfully passes the parsing phase? Thanks again for your help.
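I suspect something along these lines would be needed to unblock the storm user (a sketch only; the path matches the error above, but the right owner, group, and mode depend on your cluster's security model):

```shell
# Run as the HDFS superuser; adjust owner/group/mode to your cluster policy.
# The 'hadoop' group here is an assumption; use whatever group storm belongs to.
sudo -u hdfs hdfs dfs -chmod -R 775 /apps/metron/indexing/indexed
sudo -u hdfs hdfs dfs -chown -R metron:hadoop /apps/metron/indexing/indexed
```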
03-24-2017
10:24 PM
Hi, I'm getting the following error from Storm's ParserBolt in a single-node deployment of Metron 0.3.1:

java.lang.IllegalStateException: Grok parser Error: Grok statement produced a null message. Original message was: {"hostname":"myhost","timestamp":"1010101010","cpu_user":"8.522706","cpu_sys":"29.210946","cpu_nice":"0.000000","cpu_idle":"62.266348","cpu_wait":"0.000000","cpu_irq":"0.000000","cpu_soft_irq":"0.000000","cpu_stolen":"0.000000","load_one":"4.945312","load_five":"4.391113","load_fifteen":"4.385254"} and the parsed message was: {} . Check the pattern at: /apps/metron/patterns/my_telemetry_source on {"hostname":"myhost","timestamp":"1010101010","cpu_user":"8.522706","cpu_sys":"29.210946","cpu_nice":"0.000000","cpu_idle":"62.266348","cpu_wait":"0.000000","cpu_irq":"0.000000","cpu_soft_irq":"0.000000","cpu_stolen":"0.000000","load_one":"4.945312","load_five":"4.391113","load_fifteen":"4.385254"} at org.apache.metron.parsers.GrokParser.parse(GrokParser.java:174) at org.apache.metron.parsers.interfaces.MessageParser.parseOptional(MessageParser.java:45) at org.apache.metron.parsers.bolt.ParserBolt.execute(ParserBolt.java:132) at org.apache.storm.daemon.executor$fn__6573$tuple_action_fn__6575.invoke(executor.clj:734) at org.apache.storm.daemon.executor$mk_task_receiver$fn__6494.invoke(executor.clj:466) at org.apache.storm.disruptor$clojure_handler$reify__6007.onEvent(disruptor.clj:40) at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:451) at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:430) at org.apache.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:73) at org.apache.storm.daemon.executor$fn__6573$fn__6586$fn__6639.invoke(executor.clj:853) at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:484) at clojure.lang.AFn.run(AFn.java:22) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.RuntimeException: Grok statement produced a null message. Original message was: {"hostname":"myhost","timestamp":"1010101010","cpu_user":"8.522706","cpu_sys":"29.210946","cpu_nice":"0.000000","cpu_idle":"62.266348","cpu_wait":"0.000000","cpu_irq":"0.000000","cpu_soft_irq":"0.000000","cpu_stolen":"0.000000","load_one":"4.945312","load_five":"4.391113","load_fifteen":"4.385254"} and the parsed message was: {} . Check the pattern at: /apps/metron/patterns/my_telemetry_source at org.apache.metron.parsers.GrokParser.parse(GrokParser.java:152) ... 12 more

My input lines are of the following form:

{"hostname":"myhost","timestamp":"1010101010","cpu_user":"8.522706","cpu_sys":"29.210946","cpu_nice":"0.000000","cpu_idle":"62.266348","cpu_wait":"0.000000","cpu_irq":"0.000000","cpu_soft_irq":"0.000000","cpu_stolen":"0.000000","load_one":"4.945312","load_five":"4.391113","load_fifteen":"4.385254"}
{"hostname":"myhost","timestamp":"1012101010","cpu_user":"8.522706","cpu_sys":"29.210946","cpu_nice":"0.000000","cpu_idle":"62.266348","cpu_wait":"0.000000","cpu_irq":"0.000000","cpu_soft_irq":"0.000000","cpu_stolen":"0.000000","load_one":"4.945312","load_five":"4.391113","load_fifteen":"4.385254"}

My Grok pattern is:

MY_TELEMETRY_SOURCE_DELIMITED %{GREEDYDATA:UNWANTED}{"hostname":"%{HOSTNAME:hostname}","timestamp":"%{NUMBER:timestamp}","cpu_user":"%{NUMBER:cpu_user}","cpu_sys":"%{NUMBER:cpu_sys}","cpu_nice":"%{NUMBER:cpu_nice}","cpu_idle":"%{NUMBER:cpu_idle}","cpu_wait":"%{NUMBER:cpu_wait}","cpu_irq":"%{NUMBER:cpu_irq}","cpu_soft_irq":"%{NUMBER:cpu_soft_irq}","cpu_stolen":"%{NUMBER:cpu_stolen}","load_one":"%{NUMBER:load_one}","load_five":"%{NUMBER:load_five}","load_fifteen":"%{NUMBER:load_fifteen}"}%{GREEDYDATA:UNWANTED}

which, according to the GrokConstructor (http://grokconstructor.appspot.com), matches correctly. All Metron services are up, and the pattern is staged at /apps/metron/patterns/my_telemetry_source, as evident from the error log. My parser configuration for the parsing topology is located at /usr/metron/0.3.1/config/zookeeper/parsers/my_telemetry_source.json and consists of the following:

{
  "parserClassName": "org.apache.metron.parsers.GrokParser",
  "sensorTopic": "my_telemetry_source",
  "parserConfig": {
    "grokPath": "/apps/metron/patterns/my_telemetry_source",
    "patternLabel": "MY_TELEMETRY_SOURCE_DELIMITED",
    "timestampField": "timestamp"
  }
}

I have tried this setup both with and without applying the patternLabel, MY_TELEMETRY_SOURCE_DELIMITED, getting the same error message in both cases. I appreciate your help.
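For what it's worth, each input line is already well-formed JSON, which a plain JSON parse handles fine (Python here, just as an outside-Metron check; the sample is shortened to a few fields):

```python
import json

# One of the sample input lines from above, shortened to a few fields.
line = ('{"hostname":"myhost","timestamp":"1010101010",'
        '"cpu_user":"8.522706","load_one":"4.945312"}')

record = json.loads(line)
print(record["hostname"])   # myhost
print(record["timestamp"])  # 1010101010
```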
Labels:
- Apache Metron
03-08-2017
11:45 PM
Hi,
I was wondering about the best way to ingest logs in XML format into Metron. Parsing with Grok doesn't seem to be the way to go in this case. The remaining options seem to be to 1) use NiFi to turn the XML into a format that Metron expects, or 2) develop a Java parser for XML (that is the METRON-288 issue, which is not high priority: https://issues.apache.org/jira/browse/METRON-288). Are these two options the only possibilities?
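To make option 1 concrete, here is a rough sketch of the kind of transformation I have in mind (plain Python standing in for a NiFi processor; the <event> element and field names are made up for illustration):

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical XML log record; tag names are illustrative only.
xml_event = ("<event>"
             "<hostname>my.machine</hostname>"
             "<timestamp>1490746284</timestamp>"
             "</event>")

# Flatten the child elements into the flat JSON map Metron expects.
root = ET.fromstring(xml_event)
record = {child.tag: child.text for child in root}
print(json.dumps(record, sort_keys=True))
```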
Labels:
- Apache Metron
09-27-2016
06:44 PM
Thanks for your response. The issue was resolved and Metron is successfully deployed.
09-27-2016
12:19 AM
I should add that I receive the mentioned error during the following task:

TASK [ambari_config : Start the ambari cluster - wait] *************************
09-26-2016
11:26 PM
Hi, I'm trying to install Metron on a single node, following the instructions here: https://github.com/apache/incubator-metron/tree/master/metron-deployment/vagrant/full-dev-platform

I receive the following error, indicating an issue with Ambari:

fatal: [node1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"blueprint_name": null, "blueprint_var": null, "cluster_name": "metron_cluster", "cluster_state": "started", "configurations": null, "host": "node1", "password": "admin", "port": 8080, "username": "admin", "wait_for_complete": true}, "module_name": "ambari_cluster_state"}, "msg": "Ambari client exception occurred: No JSON object could be decoded"}

Note that Ambari is up and running; I can log into it using a browser. Here's the output of platform-info.sh:

Metron 0.2.0BETA
--
* master
--
commit c85c74269327487a5fe607ea85ebc56d3b5650ef
Author: cstella <cestella@gmail.com>
Date: Mon Sep 26 15:14:53 2016 -0400
METRON-452: Add rudimentary configuration management functions to Stellar closes apache/incubator-metron#269
--
metron-deployment/vagrant/full-dev-platform/Vagrantfile | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
--
ansible 2.0.0.2
config file = <MY_FOLDER>/incubator-metron/metron-deployment/vagrant/full-dev-platform/ansible.cfg
configured module search path = ../../extra_modules
--
Vagrant 1.8.5
--
Python 2.7.10
--
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T08:41:47-08:00)
Maven home: /usr/local/Cellar/maven/3.3.9/libexec
Java version: 1.8.0_73, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_73.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.10.5", arch: "x86_64", family: "mac"
--
Darwin <MY_MAC>.local 14.5.0 Darwin Kernel Version 14.5.0: Mon Aug 29 21:14:16 PDT 2016; root:xnu-2782.50.6~1/RELEASE_X86_64 x86_64

It is perhaps unrelated, but worth mentioning, that I have made the following changes in my Vagrantfile: 1) set the number of CPUs to 2 instead of the default 4, and 2) set config.ssh.insert_key = false. I appreciate your help.
Labels:
- Apache Ambari
- Apache Metron