Member since: 01-17-2016
Posts: 42
Kudos Received: 50
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3542 | 04-21-2016 10:41 PM
 | 993 | 04-15-2016 03:22 AM
 | 1400 | 04-13-2016 04:03 PM
 | 4430 | 04-12-2016 01:59 PM
01-17-2017
06:43 PM
7 Kudos
If you have ever tried to spawn multiple Cloudbreak shells, you may have run into an error. That is because the default "cbd util cloudbreak-shell" uses Docker containers. The fastest workaround is to use the jars directly. These jars can be run remotely from your personal machine or on the Cloudbreak machine itself.

Prepping the Cloudbreak machine (only needed if running the jar locally on the AWS image)
- Log into your Cloudbreak instance and go to /etc/yum.repos.d
- Remove the CentOS-Base.repo file (this is a Red Hat machine and that file can cause conflicts)
- Install Java 8 (yum install java-1.8.0*)
- Change directory back to /home/cloudbreak

Downloading the jar
- Set an environment variable equal to your Cloudbreak version (export CB_SHELL_VERSION=1.6.1)
- Download the jar (curl -o cloudbreak-shell.jar https://s3-eu-west-1.amazonaws.com/maven.sequenceiq.com/releases/com/sequenceiq/cloudbreak-shell/$CB_SHELL_VERSION/cloudbreak-shell-$CB_SHELL_VERSION.jar)

Using the jar
- Interactive mode: java -jar ./cloudbreak-shell.jar --cloudbreak.address=https://<your-public-hostname> --sequenceiq.user=admin@example.com --sequenceiq.password=cloudbreak --cert.validation=false
- Using a command file: java -jar ./cloudbreak-shell.jar --cloudbreak.address=https://<your-public-hostname> --sequenceiq.user=admin@example.com --sequenceiq.password=cloudbreak --cert.validation=false --cmdfile=<your-FILE>
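Put together, the steps above can be scripted roughly as follows. This is a sketch rather than part of the original post: the version, hostname, and credentials are the example values used above and should be adjusted for your environment.

```bash
#!/bin/bash
# Rough consolidation of the steps described above (assumes the AWS Cloudbreak image).
# The version, hostname, and credentials below are example values; adjust as needed.
set -e

export CB_SHELL_VERSION=1.6.1

# Prep (only when running the jar on the Cloudbreak machine itself)
sudo rm -f /etc/yum.repos.d/CentOS-Base.repo
sudo yum install -y 'java-1.8.0*'
cd /home/cloudbreak

# Download the shell jar
curl -o cloudbreak-shell.jar \
  "https://s3-eu-west-1.amazonaws.com/maven.sequenceiq.com/releases/com/sequenceiq/cloudbreak-shell/$CB_SHELL_VERSION/cloudbreak-shell-$CB_SHELL_VERSION.jar"

# Start an interactive shell (add --cmdfile=<your-FILE> to run a command file instead)
java -jar ./cloudbreak-shell.jar \
  --cloudbreak.address=https://<your-public-hostname> \
  --sequenceiq.user=admin@example.com \
  --sequenceiq.password=cloudbreak \
  --cert.validation=false
```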
Labels:
11-08-2016
02:27 AM
1 Kudo
The easiest "hack" is to give it a filename ending in .xml it could be update attribute Filename | ${Filename}.xml
10-02-2016
04:20 PM
Is it possible to create autoscaling policies using either the CLI or the REST API? I reviewed the documentation but was unable to find anything.
Labels:
- Hortonworks Cloudbreak
10-02-2016
04:08 PM
Thanks a ton. This really clarifies things for me. One side question, just for my own understanding: the fixed 50 GB volumes are for non-HDFS storage only, right? If I understand you correctly, HDFS does not go on there.
10-01-2016
09:13 PM
All servers seem to be starting with a 50 GiB EBS volume as the root device. Is it possible to change this to just use ephemeral storage on nodes that have substantial ephemeral storage? Below is a picture I took from my 4-node cluster.
Labels:
- Hortonworks Cloudbreak
09-15-2016
11:28 AM
6 Kudos
In this article I will review the steps required to enrich and filter logs. It is assumed that the logs are landing one at a time as a stream into the NiFi cluster. The steps involved:
- Extract attributes - IP and action
- Cold store non-IP logs
- GeoEnrich the IP address
- Cold store local IP addresses
- Route the remaining logs based on threat level
- Store the low-threat logs in HDFS
- Place high-threat logs into an external table

Extract IP Address and Action - ExtractText Processor
This processor will evaluate each log and parse the information into attributes. To create a new attribute, add a property and give it a name (soon to be the attribute name) and a Java-style regex. As the processor runs it will evaluate the regex and create an attribute with the result. If there is no match, the log is sent to the 'unmatched' relationship, which is a simple way of filtering out different logs.

GeoEnrichIP - GeoEnrichIP Processor
This processor takes the ipaddr attribute generated in the previous step and compares it to a geo-database ('mmdb'). I am using the GeoLite City database found here.

Route on Threat - RouteOnAttribute Processor
This processor takes the IsDenied attribute from the previous step and tests whether it is present. It will only exist if the "Extract IP Address" processor found "iptables denied" in the log. The log is then routed to a connection with that property's name. More properties can be added with their own rules following the NiFi expression language. A rough illustration of the regexes behind these attributes is sketched below.
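As a hedged sketch (not taken from the original flow), the kind of Java-style regexes these properties might use can be tried out from the command line with grep -P. The sample log line and patterns below are assumptions about a typical iptables message, not the exact ones used in my flow.

```bash
# Made-up iptables log line; the real format in your environment may differ.
LOG='Jan 12 09:32:01 gateway kernel: iptables denied: IN=eth0 SRC=203.0.113.45 DST=10.0.0.5 PROTO=TCP DPT=22'

# Rough equivalent of an 'ipaddr' property on ExtractText (Java-style regex):
echo "$LOG" | grep -oP 'SRC=\K(\d{1,3}\.){3}\d{1,3}'    # prints 203.0.113.45

# Rough equivalent of an 'IsDenied' property; it only matches denied traffic,
# which is what RouteOnAttribute keys on later in the flow:
echo "$LOG" | grep -oP 'iptables denied'                 # prints "iptables denied"
```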
Note: I plan on adding location filtering but did not want to obscure the demo with too many steps.

Cold and Medium Storage - Processor Groups
These two processor groups are very similar in function. Eventually they could be combined into one shared group using attributes for rules, but for now they are separate.
- Merge Content - Takes each individual line and combines them into a larger aggregated file. This helps avoid the too-many-small-files problem that arises in large clusters.
- Compress Content - Simply saves disk space by compressing the aggregates.
- Set Filename As Timestamp - UpdateAttribute Processor - Takes each aggregate and sets the attribute 'filename' to the current time. This will allow us to sort the aggregates by when they were written for later review.
- PutHDFS Processor - Takes the aggregate and saves it to HDFS.

High Threat - Processor Group
In order to be read by a Hive external table, we need to convert the data to a JSON format and save it to the correct directory.
- Rename Attributes - UpdateAttribute Processor - Renames the fields to match the Hive field format.
- Put Into JSON - AttributesToJSON - Takes the renamed fields and saves them in a JSON string that the Hive SerDe can read natively.
- Set Filename As Timestamp - UpdateAttribute Processor - Once again this sets the filename to the timestamp. This may be better served as system name + timestamp moving forward.
- PutHDFS - Stores the data to the Hive external file location.

Hive Table Query
Using the Ambari Hive view I am now able to query my logs and use SQL-style queries to get results.

CREATE TABLE `securitylogs`(
  `ctime` varchar(255) COMMENT 'from deserializer',
  `country` varchar(255) COMMENT 'from deserializer',
  `city` varchar(255) COMMENT 'from deserializer',
  `ipaddr` varchar(255) COMMENT 'from deserializer',
  `fullbody` varchar(5000) COMMENT 'from deserializer')
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 'hdfs://sandbox.hortonworks.com:8020/user/nifi/High_Threat'
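Once the table exists, the logs can be queried like any other Hive table. As a sketch (the HiveServer2 JDBC URL and user are assumptions based on the sandbox host in the LOCATION clause; they are not from the original article):

```bash
# Example query against the securitylogs table defined above.
# The JDBC URL and user are assumptions for a default sandbox setup.
beeline -u jdbc:hive2://sandbox.hortonworks.com:10000 -n nifi -e "
  SELECT country, city, COUNT(*) AS denied_events
  FROM securitylogs
  GROUP BY country, city
  ORDER BY denied_events DESC
  LIMIT 10;"
```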
Labels:
09-06-2016
08:01 PM
8 Kudos
I was recently tinkering with the Walmart REST API. This is a publicly available interface that can be used for quick price lookups on products. The overall goal of the project is to keep track of the cost of specific shopping carts day to day, but this intermediate step provides an interesting example case. The requirements of this stage:
- Use UPC codes to provide a lookup
- Avoid having to pass the Walmart API key to the internal client making the call
- Extract features such as price in preparation for a JDBC database entry

Core Concepts in NiFi
- NiFi has the ability to serve as a custom RESTful API, which is managed with the "HandleHttpRequest" and "HandleHttpResponse" processors. This can be a GET/POST or any of the other common types.
- NiFi can make calls to an external REST API via the "InvokeHTTP" processor.
- XML data can be extracted with the "EvaluateXPath" processor.

The HandleHttpRequest Processor
This processor receives the incoming REST call and makes a flow file with information pertaining to the headers. As you can see in the image below, it is listening on port 9091 and only responding to the path '/lookup'. Additionally, the flow file it creates has an attribute value for each of the headers it received, particularly "upc_code" (the second screenshot shows the resulting flow file). An example client call is sketched at the end of this post.

The InvokeHTTP Processor
This processor takes the header and makes a call to the Walmart API directly. As you can see, I am using the upc_code attribute received from the request handler. This then sends an XML file in the body of the flow file to the next stage.

The EvaluateXPath Processor
I covered how the XPath processor works in more detail in this article: https://community.hortonworks.com/articles/25720/parsing-xml-logs-with-nifi-part-1-of-3.html. Here I am extracting key attributes for later analysis.

HandleHTTPResponse (Code 200 or Dead Letter)
After successfully extracting the attributes, I send a response code of 200 (success) back to the REST client along with the XML that Walmart provided. In my example, if I do not successfully extract the values, the message goes to a dead-letter queue. This is not ideal, and in a production setting I would send the appropriate HTTP error code.

Closing Thoughts
This process group provides a solid basis for my pricing engine. I still need to write the error handling, but this start provides a feature-rich flow file for the next stage of my project.
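For reference, a client call to this flow would look something like the following. The host name and UPC value are placeholders; the port (9091), the /lookup path, and the upc_code header are the ones configured in the HandleHttpRequest processor above.

```bash
# Hypothetical client call to the NiFi endpoint described above.
# <nifi-host> and the UPC value are placeholders; port 9091, the /lookup path,
# and the upc_code header come from the processor configuration in the post.
curl -v \
  -H "upc_code: 012345678905" \
  "http://<nifi-host>:9091/lookup"
```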
Labels:
08-26-2016
07:58 PM
It's probably reading the same file repeatedly without permission to delete it. On the GetFile processor, configure it to only run every 5 seconds. Then in the flow view, right-click and refresh the page, and you will probably see the outbound queue with a file. If you don't refresh the view you may not see the flow files building up, and then it builds up enough that you run out of memory.
08-10-2016
04:22 PM
A quick thing to check before we dig into any other problems: the processor needs to be in the stopped state before an update is attempted.