Member since: 09-04-2017
Posts: 19
Kudos Received: 1
Solutions: 0
02-01-2019
10:09 AM
I have a GetMongo processor that reads a collection and then passes the data into a Kafka topic. But when I read the data back from Kafka, it looks shuffled. How do I make sure the order of the data is maintained?
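For context: Kafka only guarantees ordering within a single partition, so if the topic has more than one partition (or records are published with varying keys) they can come back interleaved. Below is a minimal Java sketch of the idea, assuming a hypothetical topic mongo_topic and a local broker; in NiFi the corresponding knobs would be the topic's partition count and the key/partitioner settings on the PublishKafka processor.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderedPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // adjust to your broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // keep at most one in-flight request so retries cannot reorder records
        props.put("max.in.flight.requests.per.connection", "1");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String[] documents = {"doc-1", "doc-2", "doc-3"};   // stand-ins for the Mongo documents
            for (String doc : documents) {
                // a constant key routes every record to the same partition,
                // and Kafka only guarantees ordering within a single partition
                producer.send(new ProducerRecord<>("mongo_topic", "same-key", doc));
            }
            producer.flush();
        }
    }
}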
05-25-2018
07:02 AM
@Bryan Bende I tried the multiple-files approach before GetFile, and it worked really fast. I do have four follow-up questions, though: two are observations and two are research questions.

1) Now that I have a flow like "GetFile -> PublishKafka_0_11 -> PutFile" (see the attached picture horton.png), the folder containing the 2000 files (originally one large CSV; the split files no longer have a .csv extension, or any extension at all) is read by the GetFile processor. Right after being read, the files are published to the Kafka topic new_profiler_testing and, on success, sent to PutFile, which writes them all into a folder called output. This generates log files in kafka-logs, which I used to check the topic new_profiler_testing. If I count the number of lines in this log file, 00000000000000062200.log, there are 1231264, and the number of files written into the output folder is 1561. You may have noticed from the picture that congestion builds up again at the PutFile end after the success relationship from PublishKafka_0_11. I want to check for my 2 million records in the Kafka topic. How do I do that? When I open the log file in kafka-logs it seems to contain gibberish as well. Do you think I should simultaneously open a consumer console and pipe it through wc -l, or is there a way to do this in NiFi? (See the consumer sketch right after this post.)

2) I ran this process twice to make sure it was running, and the first time something strange happened. The output folder contained files like xzbcl.json, xzbcm.json, xzbcn.json, xzbco.json, xzbcp.json, xzbcq.json, xzbcr.json, xzbcs.json, xzbcy.json, xzbcz.json, and also xzbcl, xzbcm, xzbcn, xzbco, xzbcp, xzbcq, xzbcr, xzbcs, xzbcy, xzbcz, along with the other normal files. They were in JSON format when I opened them. Here is a snippet: [{"timestamp":1526590200,"tupvalues":[0.1031287352688188,0.19444490419773347,0.06724761719024923,0.008715105948727752,0.273251449860885,0.09916421288937546,0.12308943665971132,0.017852488055395015,0.05039765141148139,0.11335172723104833,0.03305334889471589,0.041821925451222756,0.08485309865154911,0.09606502178530299,0.06843417769071786,0.024991363178388175,0.2800309262376106,0.1926730050165331,0.2785879089696489,0.211383486088693...]}] Why did this happen, and how? Also, the size of the log created that time was 785818 lines, 00000000000000060645.log. Is it possible that the number of records written into a topic varies over time and is susceptible to change? Also, JSON is the format I would ideally want my Kafka topic to be in, but I have not been able to get around that, as mentioned in this post of mine: https://community.hortonworks.com/questions/191753/csv-to-json-conversion-error.html?childToView=191615#answer-191615

3) If NiFi reads ten files from the same folder, how is the data read? Is it read one file after the other, in order, and pushed to Kafka, or is it sent randomly? I want to know because I have a program written in Kafka Streams that needs to group by timestamp values, e.g. today's 10am-11am data from all ten folders averaged for their CPU usage.

4) Is there a way to time my output into the Kafka topic? I would like to know how much time it takes for GetFile to read the files and send them to the Kafka topic completely, until it has 2 million records.
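Regarding the counting question in 1): the segment files under kafka-logs are Kafka's binary storage format, so wc -l on them will not give a reliable record count. One way to verify is to consume the topic end to end and count, for example with a small standalone Java client. This is only a sketch, assuming the kafka-clients library and a broker at localhost:9092 (adjust to your setup); the topic name is the one from the post.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TopicRecordCounter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // adjust to your broker
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("group.id", "record-counter");               // throwaway group
        props.put("enable.auto.commit", "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = new ArrayList<>();
            consumer.partitionsFor("new_profiler_testing")
                    .forEach(p -> partitions.add(new TopicPartition(p.topic(), p.partition())));

            consumer.assign(partitions);
            consumer.seekToBeginning(partitions);
            Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);

            long count = 0;
            boolean done = false;
            while (!done) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                count += records.count();
                // stop once every partition has been read up to its end offset
                done = partitions.stream()
                        .allMatch(tp -> consumer.position(tp) >= endOffsets.get(tp));
            }
            System.out.println("Records in topic: " + count);
        }
    }
}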
05-23-2018
02:29 PM
hortonworks2.png Here is the picture of the node A processors that I have attached. Ideally I want one input topic to receive 20 million records from a local file, sent via a NiFi processor. I think your idea of splitting it into chunks of multiple files should do the job too.
05-23-2018
02:07 PM
I have waited overnight, and it is still stuck in this state. Should I increase the 1 GB back pressure threshold you mentioned to 2 GB and then check?
05-23-2018
04:19 AM
hortonworks-1.png Hey @Bryan Bende, thanks for replying. This is the flow in the attached image; I thought you might be able to tell better once you see it. Before taking the dump, I tried to start the PublishKafka processor but could not do so, as I receive the error "No eligible components are selected. Please select the components to be started and ensure they are no longer running." The start option is also not available when I right-click on the processor. However, I still took the dump as asked and am attaching the dump file here: dump.txt. Please suggest a method to send the data.
05-22-2018
09:10 AM
I have a node "A" with an RPG process that reads from a file of roughly 1.2 GB containing 20 million records, and at node "B" this file is received via an input port and passed on to PublishKafka_0_11. However, as soon as I do this, the data gets sent from A and received at B but sits permanently queued before the PublishKafka processor. To check whether my flow was right, I tried reading a 53.4 kB file instead, and that gets sent to the processor successfully and into the topic named "INPUT_TOPIC". Here are the problems:

1) With the 1.2 GB file, the data does not seem to get sent into the topic.

2) After using the 1.2 GB file, the input port hangs or stops responding, and the PublishKafka_0_11 processor stops responding as well.

3) I used the cat command manually to write into the topic "INPUT_TOPIC" and read it back with the command-line consumer. However, when I check the logs for INPUT_TOPIC, there are two log files, both containing different text in between (almost binary gibberish), and wc -l reads different numbers on both logs, adding up to more than 20 million lines. I have tried removing the topic and starting afresh, but I still get the same type of output.

Can someone help me in this situation? My purpose is to load a Kafka input topic with my 20 million records. No more, no fewer than 20 million.
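On point 3: the on-disk segment files are a binary format, so wc -l on them does not correspond to a record count. A quick way to check whether exactly 20 million records landed in the topic is to compare beginning and end offsets per partition, for example with a small Java consumer. This is only a sketch, assuming a broker at localhost:9092 and that no records have been removed by retention yet.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TopicOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // adjust to your broker
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<TopicPartition> partitions = new ArrayList<>();
            consumer.partitionsFor("INPUT_TOPIC")
                    .forEach(p -> partitions.add(new TopicPartition(p.topic(), p.partition())));

            Map<TopicPartition, Long> begin = consumer.beginningOffsets(partitions);
            Map<TopicPartition, Long> end = consumer.endOffsets(partitions);

            // number of records currently retained = sum of (end - beginning) per partition;
            // if nothing has expired, this equals the number of records published
            long total = partitions.stream()
                    .mapToLong(tp -> end.get(tp) - begin.get(tp))
                    .sum();
            System.out.println("Records in INPUT_TOPIC: " + total);
        }
    }
}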
05-21-2018
04:01 AM
Hi @Matt Burgess, the output is still in one line instead of multiple lines. Even though I have tried using what you mentioned above. I have used Replace text and in place of regex : (\[)(\{\"timestamp\"\:15123[0-9]+),(\"tupvalues\"\:\[([0-9]+\.[a-zA-Z0-9-]+),([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)([0-9]+\.[a-zA-Z0-9-]+)\]\})(\,) For replacement values $2, $3. Followed by Split text in order to split line by line. But the output is still the same. I even tried the solution given by you on post https://community.hortonworks.com/questions/109064/nifi-replace-text-how-t0-replace-string-with.html And tried substituting the expression [\[\]](\{|\}) but this gives me an output which has no square brackets in the beginning and inside the array. I know its been a week almost, but still have not got a hang of it.
... View more
05-17-2018
12:39 PM
@Matt Burgess I was able to do this and it worked perfectly. However, there is just one small request. The data that I finally receive in PutFile is all on one line. I tried to insert newlines after each record ends, but in PublishKafka_0_11 there is a Message Demarcator property where Shift+Ctrl is also not helping my situation. I figured that it is because the whole output is a single JSON array, something like [{"timestamp":1512.., "tupvalues":[1,2,3,4...]}, {"timestamp":1512.., "tupvalues":[1,2,3,4...]}, {"timestamp":1512.., "tupvalues":[1,2,3,4...]}, ...], with the closing square bracket right at the very end. Whereas the required output is somewhat like this:
{"timestamp":"1512312021","tupvalues":[0.8,0.0,18244.0,3176.0,0.0,122.0,11.0,0.0,0.0,100052.0,1783.0,4.0,59.0,1.0,3252224.0,1.8681856E7,2777088.0,999424.0,0.0,524288.0,0.0,487424.0,740352.0,0.0,1.0,0.04,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0]}
{"timestamp":"1512312022","tupvalues":[207.8,0.2,3778460.0,309000.0,0.0,22342.0,27.0,0.0,0.0,1.06732936E8,25623.0,36.0,749.0,110.0,3.19459328E8,3.87224371E9,1.17956608E8,7110656.0,0.0,2.87654298E9,0.0,2.0957184E8,2.46372352E8,0.0,3.0,1.95,1.23,0.0,0.0,3.0,6.0,0.0,0.0,0.0,0.0]}
Any suggestions? Do you think SplitJson or SplitRecord should now be introduced?
05-17-2018
12:31 PM
I would like to know whether the Run Schedule stands for the rate at which the processor publishes or writes into another processor, such as PutFile. I am publishing to a Kafka topic from which a Kafka Streams application reads, and so on. For performance testing, I would like to fix the rate at which the log is written into the topic, e.g. 100 records (log lines) per second. Can anybody suggest how?
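As context, Run Schedule controls how often the processor is scheduled to run, not a records-per-second rate, so a fixed publish rate usually has to be imposed elsewhere. Purely as a comparison point outside NiFi, here is a rough Java sketch of a throttled producer; the broker address and the topic name perf_test_topic are placeholders, not anything from the original flow.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FixedRatePublisher {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // adjust to your broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        int recordsPerSecond = 100;                          // target rate
        long pauseMs = 1000L / recordsPerSecond;

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 1000; i++) {                 // send 1000 sample lines
                String line = "log line " + i;
                producer.send(new ProducerRecord<>("perf_test_topic", line));
                Thread.sleep(pauseMs);                       // crude throttle: roughly 100 records/second
            }
            producer.flush();
        }
    }
}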
05-12-2018
01:07 PM
1 Kudo
I have a CSV file of the format 1512340821, 26,576.09, 39824, 989459.009.. and so on, 35 fields in total. Each of these columns is a long or double in the Avro schema. I have used the ConvertRecord processor in NiFi, which first converts using the Avro schema and then produces the JSON-format data. My goal is to have the JSON output look like the following:

{"timestamp":"1512312024","tupvalues":[112.5,0.0,1872296.0,134760.0,0.0,7134.0,19.0,0.0,0.0,3.8136152E7,13703.0,18.0,111.0,37.0,1.38252288E8,1.91762842E9,5.9564032E7,4055040.0,0.0,1.41528269E9,0.0,8.0539648E7,9.5470592E7,0.0,2.0,0.76,0.44,0.0,0.0,2.0,2.0,0.0,0.0,0.0,0.0]}

My original data does not have headers, but I expect the data to be of the form {key, value}, where the key is the timestamp and the values are the other columns. Here is the Avro schema that I put in the Avro schema registry:

{
  "type": "record",
  "namespace": "testavro.schema",
  "name": "test",
  "fields": [
    { "type": "double", "name": "timestamp" },
    { "name": "tupvalues", "type": { "type": "array", "items": "double" } }
  ]
}

I used this website -- https://json-schema-validator.herokuapp.com/avro.jsp -- to check the schema, and it reports success. But when it is applied in the Avro registry, the data is not picked up. I get an error of the following order: "Cannot create value [26] of type java.lang.String to object array for field tupvalues". Any sort of help is appreciated. I am a newbie to Avro schema writing, and I have a feeling that is where I am wrong.
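For what it's worth, the error message suggests the reader is handing the 35 CSV columns to the two-field schema one by one (so the second column, the string "26", cannot be coerced into the tupvalues array) rather than grouping columns 2-35 into the array. The grouping the schema expects looks roughly like this plain Avro sketch; no NiFi involved, and the sample line is shortened.

import java.util.ArrayList;
import java.util.List;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;

public class CsvLineToAvroRecord {
    private static final String SCHEMA_JSON =
          "{ \"type\": \"record\", \"namespace\": \"testavro.schema\", \"name\": \"test\","
        + "  \"fields\": ["
        + "    { \"type\": \"double\", \"name\": \"timestamp\" },"
        + "    { \"name\": \"tupvalues\", \"type\": { \"type\": \"array\", \"items\": \"double\" } }"
        + "  ] }";

    public static void main(String[] args) {
        Schema schema = new Schema.Parser().parse(SCHEMA_JSON);

        // one CSV line: the first column is the timestamp, the remaining columns become the array
        String line = "1512340821, 26, 576.09, 39824, 989459.009";
        String[] cols = line.split(",");

        GenericRecord record = new GenericData.Record(schema);
        record.put("timestamp", Double.parseDouble(cols[0].trim()));

        List<Double> tupvalues = new ArrayList<>();
        for (int i = 1; i < cols.length; i++) {
            tupvalues.add(Double.parseDouble(cols[i].trim()));
        }
        record.put("tupvalues", tupvalues);

        System.out.println(record);   // prints a JSON-like rendering of the record
    }
}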
04-16-2018
04:47 AM
I tried something too in the meanwhile. Here is the screenshot of the flow. In this flow I simply split the records using a regular expression and then extracted what was needed using the success connections. I will surely try one of the methods you mentioned as well and get back here. @Shu
04-11-2018
05:45 AM
I have a file with 36 numeric data fields, like CPU usage, memory usage, date/time, IP address, etc., which looks like: 10.8.x.y, 151490..., 45.00, 95.00, 8979.09, 3984.90, ... (36 fields). Currently there is a stream (GetFile --> PublishKafka) that is sent to a single Kafka topic via PublishKafka_0_11. I now want to split this stream into two streams of 4 and 30 fields and send them into two different Kafka topics. That is:

to Kafka topic 1: 10.8.x.y, 151490..., 45.00, 95.00

and to Kafka topic 2: 8979.09, 3984.90, ... etc. (30 fields)

How do I do that? SplitText only seems to split by line count.
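Outside of NiFi, the field-wise split itself is just slicing each CSV line into a head and a tail and publishing the two parts separately. A rough Java sketch of the intended routing follows, with hypothetical topic names topic_meta and topic_metrics and a local broker assumed.

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SplitFieldsToTwoTopics {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // adjust to your broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // one sample line with a few of the 36 fields
        String line = "10.8.1.2, 1514901000, 45.00, 95.00, 8979.09, 3984.90";
        String[] fields = line.split("\\s*,\\s*");

        // first 4 fields go to one topic, the remaining fields to the other
        String head = String.join(",", Arrays.copyOfRange(fields, 0, 4));
        String tail = String.join(",", Arrays.copyOfRange(fields, 4, fields.length));

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("topic_meta", head));
            producer.send(new ProducerRecord<>("topic_metrics", tail));
            producer.flush();
        }
    }
}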
01-10-2018
06:46 AM
Hi @slachterman, I am planning to use just NiFi to pass CSV data through data cleansing. The cleansing involves filling in missing timestamps, filling in missing rows of data, correcting corrupt timestamps, etc. Will NiFi alone be enough to fill in the missing data as well?
11-27-2017
04:44 AM
@pshah I solved this problem by mirroring the website on our localhost's web server and then changing the version number to 452* instead of 453 in the repositories, because when I checked the online repositories there is -453* available, and that is why the error was popping up.
11-16-2017
05:17 AM
Here is the value of maven.repo.url : hwx-public^http://repo.hortonworks.com/content/groups/public/,hwx-private^http://nexus-private.hortonworks.com/nexus/content/groups/public/
11-15-2017
03:35 AM
No, it is not behind a proxy; we have bypassed it. And yes, it has internet access.
11-14-2017
09:52 AM
I have just deployed the example mentioned in the Hortonworks sandbox and then tried to deploy the application. However, it fails, saying that it cannot find a storm-pmml:jar:1.1.0.3.0.0.0-452 file and an hbase file. Attached is a screenshot of the error received. After receiving this error I made changes in the pom.xml in /usr/hdf/3.0.0.0-452/storm/external/storm-pmml-examples/ and changed the version manually to 1.1.0, which is the one available online, restarted Streamline from the Ambari server and then tried deploying the application again. However, the error did not change and I still could not deploy the application. As a workaround I then searched all pom.xml files, found one in the /tmp folder, and manually downloaded and inserted the jar file into the jar folders: storm-pmml.1.1.0.jar into /tmp/storm-artifacts/"name-of-my-application"/jars/ and also into /tmp/hsperfdata_streamline/local-repo/org/apache/storm/storm-pmml/1.1.0.3.0.0.0-452/. But still no luck. Any suggestions to get my PMML example running are welcome. Attaching the log:
INFO [08:55:54.800] [main] o.e.j.s.h.ContextHandler - Started i.d.j.MutableServletContextHandler@6fde6f05{/,null,AVAILABLE}
INFO [08:55:54.888] [main] o.e.j.s.AbstractConnector - Started application@3a2e9f5b{HTTP/1.1,[http/1.1]}{0.0.0.0:17777}
INFO [08:55:54.971] [main] o.e.j.s.AbstractConnector - Started admin@7da34b26{HTTP/1.1,[http/1.1]}{0.0.0.0:17778}
INFO [08:55:55.056] [main] o.e.j.s.Server - Started @7540ms
WARN [08:56:27.693] [ForkJoinPool-4-worker-11] c.h.s.s.c.t.c.TopologyComponentFactory - Type JOIN not found in provider map, returning an instance of com.hortonworks.streamline.streams.layout.component.StreamlineProcessor
WARN [08:56:27.786] [ForkJoinPool-4-worker-11] c.h.s.s.c.t.c.TopologyComponentFactory - Type CUSTOM not found in provider map, returning an instance of com.hortonworks.streamline.streams.layout.component.StreamlineProcessor
WARN [08:56:27.888] [ForkJoinPool-4-worker-11] c.h.s.s.c.t.c.TopologyComponentFactory - Type DRUID not found in provider map, returning an instance of com.hortonworks.streamline.streams.layout.component.StreamlineSink
[... the same three WARN messages for the JOIN, CUSTOM and DRUID types repeat roughly 25 more times between 08:56:27.789 and 08:56:28.241 ...]
INFO [08:56:38.055] [ForkJoinPool-4-worker-11] c.h.s.s.a.s.t.StormTopologyActionsImpl - Deploying Application Testing_PMML
INFO [08:56:38.056] [ForkJoinPool-4-worker-11] c.h.s.s.a.s.t.StormTopologyActionsImpl - /usr/hdf/current/storm-client/bin/storm jar /tmp/storm-artifacts/streamline-3-Testing_PMML/artifacts/streamline-runtime-storm-0.5.0.3.0.0.0-452.jar
  --jars /tmp/storm-artifacts/streamline-3-Testing_PMML/jars/sam-custom-processor-0.0.5.jar,/tmp/storm-artifacts/streamline-3-Testing_PMML/jars/sam-custom-processor-0.0.5-jar-with-dependencies.jar,/tmp/storm-artifacts/streamline-3-Testing_PMML/jars/notifiers-63349f36-5dc4-41c8-a6ac-b4eddee0e7eb.jar,/tmp/storm-artifacts/streamline-3-Testing_PMML/jars/streamline-functions-04c7a6c1-b5f7-47e9-b19e-8b7818044900.jar,/tmp/storm-artifacts/streamline-3-Testing_PMML/jars/streamline-functions-7d892125-a6f9-446b-95b2-63c116b77a06.jar,/tmp/storm-artifacts/streamline-3-Testing_PMML/jars/sam-custom-processor-0.0.5a.jar
  --artifacts org.apache.kafka:kafka-clients:0.10.2.1,org.apache.storm:storm-kafka-client:1.1.0.3.0.0.0-452^org.slf4j:slf4j-log4j12^log4j:log4j^org.apache.zookeeper:zookeeper^org.apache.kafka:kafka-clients,org.apache.kafka:kafka-clients:0.10.2.1,org.apache.storm:storm-kafka-client:1.1.0.3.0.0.0-452^org.slf4j:slf4j-log4j12^log4j:log4j^org.apache.zookeeper:zookeeper^org.apache.kafka:kafka-clients,org.apache.storm:storm-pmml:1.1.0.3.0.0.0-452,org.apache.storm:storm-druid:1.1.0.3.0.0.0-452,org.scala-lang:scala-library:2.11.8,org.apache.storm:storm-hbase:1.1.0.3.0.0.0-452^org.slf4j:slf4j-log4j12^org.apache.curator:curator-client^org.apache.curator:curator-framework,org.apache.hadoop:hadoop-hdfs:2.7.3.2.5.0.9-6^org.slf4j:slf4j-log4j12^org.apache.curator:curator-client^org.apache.curator:curator-framework,org.apache.hbase:hbase-server:1.1.2.2.5.0.9-6^org.slf4j:slf4j-log4j12^org.apache.curator:curator-client^org.apache.curator:curator-framework,org.apache.hbase:hbase-client:1.1.2.2.5.0.9-6^org.slf4j:slf4j-log4j12^org.apache.curator:curator-client^org.apache.curator:curator-framework,org.apache.storm:storm-hdfs:1.1.0.3.0.0.0-452^org.slf4j:slf4j-log4j12^org.apache.curator:curator-client^org.apache.curator:curator-framework,org.apache.storm:storm-druid:1.1.0.3.0.0.0-452,org.scala-lang:scala-library:2.11.8,org.apache.storm:storm-druid:1.1.0.3.0.0.0-452,org.scala-lang:scala-library:2.11.8
  --artifactRepositories hwx-public^http://repo.hortonworks.com/content/groups/public/,hwx-private^http://nexus-private.hortonworks.com/nexus/content/groups/public/
  -c nimbus.host=sandbox-hdf.hortonworks.com -c nimbus.port=6627 -c nimbus.thrift.max_buffer_size=1048576 -c storm.thrift.transport=org.apache.storm.security.auth.SimpleTransportPlugin -c storm.principal.tolocal=org.apache.storm.security.auth.DefaultPrincipalToLocal
  org.apache.storm.flux.Flux --remote /tmp/storm-artifacts/streamline-3-Testing_PMML.yaml
ERROR [08:57:01.934] [ForkJoinPool-4-worker-11] c.h.s.s.a.s.t.StormTopologyActionsImpl - Topology deploy command failed - exit code: 1 / output:
/usr/hdf/current/storm-client/bin/storm: line 2: /usr/hdf/3.0.0.0-452/etc/default/hadoop: No such file or directory
Resolving dependencies on demand: artifacts ([... the same artifact list as in the storm jar command above, one quoted entry per artifact ...]) with repositories (['hwx-public^http://repo.hortonworks.com/content/groups/public/', 'hwx-private^http://nexus-private.hortonworks.com/nexus/content/groups/public/'])
DependencyResolver input - artifacts: [... same artifact list as above ...]
DependencyResolver input - repositories: hwx-public^http://repo.hortonworks.com/content/groups/public/,hwx-private^http://nexus-private.hortonworks.com/nexus/content/groups/public/
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" java.lang.RuntimeException: org.eclipse.aether.resolution.DependencyResolutionException: The following artifacts could not be resolved: org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452, org.apache.storm:storm-hbase:jar:1.1.0.3.0.0.0-452: Could not find artifact org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452 in central (http://repo1.maven.org/maven2/)
    at org.apache.storm.submit.command.DependencyResolverMain.main(DependencyResolverMain.java:86)
Caused by: org.eclipse.aether.resolution.DependencyResolutionException: The following artifacts could not be resolved: org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452, org.apache.storm:storm-hbase:jar:1.1.0.3.0.0.0-452: Could not find artifact org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452 in central (http://repo1.maven.org/maven2/)
    at org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:391)
    at org.apache.storm.submit.dependency.DependencyResolver.resolve(DependencyResolver.java:95)
    at org.apache.storm.submit.command.DependencyResolverMain.main(DependencyResolverMain.java:75)
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: The following artifacts could not be resolved: org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452, org.apache.storm:storm-hbase:jar:1.1.0.3.0.0.0-452: Could not find artifact org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452 in central (http://repo1.maven.org/maven2/)
    at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:444)
    at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:246)
    at org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:373)
    ... 2 more
Caused by: org.eclipse.aether.transfer.ArtifactNotFoundException: Could not find artifact org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452 in central (http://repo1.maven.org/maven2/)
    at org.eclipse.aether.connector.basic.ArtifactTransportListener.transferFailed(ArtifactTransportListener.java:39)
    at org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run(BasicRepositoryConnector.java:355)
    at org.eclipse.aether.util.concurrency.RunnableErrorForwarder$1.run(RunnableErrorForwarder.java:67)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Traceback (most recent call last):
  File "/usr/hdf/3.0.0.0-452/storm/bin/storm.py", line 879, in <module>
    main()
  File "/usr/hdf/3.0.0.0-452/storm/bin/storm.py", line 876, in main
    (COMMANDS.get(COMMAND, unknown_command))(*ARGS)
  File "/usr/hdf/3.0.0.0-452/storm/bin/storm.py", line 290, in jar
    artifact_to_file_jars = resolve_dependencies(DEP_ARTIFACTS_OPTS, DEP_ARTIFACTS_REPOSITORIES_OPTS)
  File "/usr/hdf/3.0.0.0-452/storm/bin/storm.py", line 177, in resolve_dependencies
    raise RuntimeError("dependency handler returns non-zero code: code<%s> syserr<%s>" % (p.returncode, errors))
RuntimeError: dependency handler returns non-zero code: code<1> syserr<None>
ERROR [08:57:01.954] [ForkJoinPool-4-worker-11] c.h.s.s.a.t.s.TopologyStates - Error while trying to deploy the topology in the streaming engine
java.lang.Exception: Topology could not be deployed successfully: storm deploy command failed with Exception in thread "main" java.lang.RuntimeException: org.eclipse.aether.resolution.DependencyResolutionException: The following artifacts could not be resolved: org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452, org.apache.storm:storm-hbase:jar:1.1.0.3.0.0.0-452: Could not find artifact org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452 in central (http://repo1.maven.org/maven2/)
    at com.hortonworks.streamline.streams.actions.storm.topology.StormTopologyActionsImpl.deploy(StormTopologyActionsImpl.java:254)
    at com.hortonworks.streamline.streams.actions.topology.state.TopologyStates$5.deploy(TopologyStates.java:120)
    at com.hortonworks.streamline.streams.actions.topology.state.TopologyContext.deploy(TopologyContext.java:87)
    at com.hortonworks.streamline.streams.actions.topology.service.TopologyActionsService.deployTopology(TopologyActionsService.java:116)
    at com.hortonworks.streamline.streams.service.TopologyCatalogResource.lambda$deploy$3(TopologyCatalogResource.java:493)
    at com.hortonworks.streamline.common.util.ParallelStreamUtil.lambda$runAsync$0(ParallelStreamUtil.java:56)
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
    at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
ERROR [08:57:01.964] [ForkJoinPool-4-worker-11] c.h.s.c.u.ParallelStreamUtil - Got exception while running async task
java.lang.RuntimeException: java.lang.Exception: Topology could not be deployed successfully: storm deploy command failed with Exception in thread "main" java.lang.RuntimeException: org.eclipse.aether.resolution.DependencyResolutionException: The following artifacts could not be resolved: org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452, org.apache.storm:storm-hbase:jar:1.1.0.3.0.0.0-452: Could not find artifact org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452 in central (http://repo1.maven.org/maven2/)
    at com.hortonworks.streamline.common.util.ParallelStreamUtil.lambda$runAsync$0(ParallelStreamUtil.java:58)
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
    at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1582)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: java.lang.Exception: Topology could not be deployed successfully: storm deploy command failed with Exception in thread "main" java.lang.RuntimeException: org.eclipse.aether.resolution.DependencyResolutionException: The following artifacts could not be resolved: org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452, org.apache.storm:storm-hbase:jar:1.1.0.3.0.0.0-452: Could not find artifact org.apache.storm:storm-pmml:jar:1.1.0.3.0.0.0-452 in central (http://repo1.maven.org/maven2/)
    at com.hortonworks.streamline.streams.actions.storm.topology.StormTopologyActionsImpl.deploy(StormTopologyActionsImpl.java:254)
    at com.hortonworks.streamline.streams.actions.topology.state.TopologyStates$5.deploy(TopologyStates.java:120)
    at com.hortonworks.streamline.streams.actions.topology.state.TopologyContext.deploy(TopologyContext.java:87)
    at com.hortonworks.streamline.streams.actions.topology.service.TopologyActionsService.deployTopology(TopologyActionsService.java:116)
    at com.hortonworks.streamline.streams.service.TopologyCatalogResource.lambda$deploy$3(TopologyCatalogResource.java:493)
    at com.hortonworks.streamline.common.util.ParallelStreamUtil.lambda$runAsync$0(ParallelStreamUtil.java:56)
    ... 6 common frames omitted
11-13-2017
09:33 AM
We tried to install the Phoenix package mentioned in the GUI URL, i.e. yum install -d 0 -e 0 phoenix_2_6_*, but it would not connect to the mirror. Hence we manually downloaded the package phoenix_2_6_1_0_129-4.7.0.2.6.1.0-129.noarch.rpm, copied it to the sandbox, and installed it. After installation we edited the phoenix_create.sh file to reflect ZooKeeper's host IP address, made the file executable, and executed it. Voila!
11-07-2017
02:28 PM
Also, the way I installed Phoenix was: I downloaded apache-phoenix-4.12.0-HBase-1.1-bin.tar.gz from the Apache site and then changed the path of psql.py in phoenix_create.sh. After enabling the Phoenix slider in the Ambari UI and changing the query timer to 3 seconds, HBase does not start all the affected components or the region server. I could not find anything in the HBase logs or ambari-server.log; instead I found hbase-ams-master-sandbox-hdf.hortonworks.com.log in /var/log/ambari-metrics-collector, which reads as follows:

ERROR [main] persistence.Util: Last transaction was partial.
2017-11-07 07:09:27,580 ERROR [main] master.HMasterCommandLine: Master exiting
java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
    at org.apache.zookeeper.server.persistence.FileHeader.deserialize(FileHeader.java:64)
    at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.inStreamCreated(FileTxnLog.java:576)
    at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:595)
    at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:561)
    at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:643)
    at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:158)
    at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
    at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:272)
    at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:399)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:122)
    at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:253)
    at org.apache.hadoop.hbase.zookeeper.MiniZooKeeperCluster.startup(MiniZooKeeperCluster.java:188)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:207)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2770)
Tue Nov 7 07:15:27 UTC 2017 Starting master on sandbox-hdf.hortonworks.com
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 257635
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
2017-11-07 07:15:28,245 INFO [main] util.VersionInfo: HBase 1.1.2.2.6.1.0-118
2017-11-07 07:15:28,246 INFO [main] util.VersionInfo: Source code repository git://c66-9277b38c-2/grid/0/jenkins/workspace/HDP-parallel-centos6/SOURCES/hbase revision=718c773662346de98a8ce6fd3b5f64e279cb87d4
2017-11-07 07:15:28,246 INFO [main] util.VersionInfo: Compiled by jenkins on Fri May 26 19:29:36 UTC 2017
2017-11-07 07:15:28,246 INFO [main] util.VersionInfo: From source with checksum 5325f6ee9be058d73a605fd20a4351bb
2017-11-07 07:15:28,702 WARN [main] util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
2017-11-07 07:15:28,745 INFO [main] master.HMasterCommandLine: Starting a zookeeper cluster
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:zookeeper.version=3.4.6-118--1, built on 05/26/2017 18:16 GMT
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:host.name=sandbox-hdf.hortonworks.com
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.version=1.8.0_131
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.vendor=Oracle Corporation
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.131-0.b11.el6_9.x86_64/jre
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.class.path=.1.0-118.jar:/usr/lib/ams-hbase//lib/hbase-common-1.1.2.2.6.1.0-118.jar:ar:..........so on..
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.library.path=/usr/lib/ams-hbase/lib/hadoop-native/
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.io.tmpdir=/var/lib/ambari-metrics-collector/hbase-tmp
2017-11-07 07:15:28,767 INFO [main] server.ZooKeeperServer: Server environment:java.compiler=<NA>
2017-11-07 07:15:28,768 INFO [main] server.ZooKeeperServer: Server environment:os.name=Linux
2017-11-07 07:15:28,768 INFO [main] server.ZooKeeperServer: Server environment:os.arch=amd64
2017-11-07 07:15:28,768 INFO [main] server.ZooKeeperServer: Server environment:os.version=4.11.4-1.el7.elrepo.x86_64
2017-11-07 07:15:28,768 INFO [main] server.ZooKeeperServer: Server environment:user.name=ams
2017-11-07 07:15:28,768 INFO [main] server.ZooKeeperServer: Server environment:user.home=/home/ams
2017-11-07 07:15:28,768 INFO [main] server.ZooKeeperServer: Server environment:user.dir=/home/ams
2017-11-07 07:15:28,795 INFO [main] server.ZooKeeperServer: Created server with tickTime 6000 minSessionTimeout 12000 maxSessionTimeout 120000 datadir /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/zookeeper_0/version-2 snapdir /var/lib/ambari-metrics-collector/hbase-tmp/zookeeper/zookeeper_0/version-2
2017-11-07 07:15:28,825 INFO [main] server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:61181
2017-11-07 07:15:30,437 ERROR [main] persistence.Util: Last transaction was partial.
2017-11-07 07:15:30,438 ERROR [main] master.HMasterCommandLine: Master exiting
java.io.EOFException
    [... same java.io.EOFException stack trace as above ...]

And the error that I receive when I run phoenix_create.sh is:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.1.0-129/apache-phoenix-4.12.0-HBase-1.1-bin/phoenix-4.12.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.6.1.0-129/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
17/11/07 07:09:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/07 07:09:23 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
java.sql.SQLException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue Nov 07 07:10:11 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68776: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=sandbox-hdf.hortonworks.com,16020,1509623739162, seqNum=0
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2454)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2360)
    at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2360)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:261)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue Nov 07 07:10:11 UTC 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=68776: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=sandbox-hdf.hortonworks.com,16020,1509623739162, seqNum=0
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:271)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
    at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
    at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
    at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
    at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:797)
    at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
    at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
    at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:403)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2388)
    ... 9 more
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=68776: row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=sandbox-hdf.hortonworks.com,16020,1509623739162, seqNum=0
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:411)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:717)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:897)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:866)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1208)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:328)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32831)
    at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:379)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:201)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:63)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
    ... 4 more

Does anybody have any workaround suggestions? I just need Phoenix functioning; maybe my installation is wrong or something. I assumed the sandbox would be pretty straightforward.