Member since: 02-24-2017
Posts: 23
Kudos Received: 0
Solutions: 0
03-03-2017 04:37 PM
I found the file now, downloaded it and changed the entry. How do I get this file back into my sandbox now, to the right place?
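For reference, the edited file can be copied back over the sandbox's SSH port. A minimal sketch; the root@127.0.0.1 port 2222 endpoint is the login used elsewhere in this thread, and /etc/hadoop/conf as the destination is an assumption based on the path being discussed:
    # copy the locally edited core-site.xml back into the sandbox's Hadoop config directory
    scp -P 2222 core-site.xml root@127.0.0.1:/etc/hadoop/conf/core-site.xml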
03-03-2017 04:17 PM
@Roger Young
Did you find the files in the sandbox? I am searching for them too. I don't know how to reach the path /etc/hadoop/conf or where to enter the address. Best regards, Martin
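For reference, /etc/hadoop/conf is a path inside the sandbox VM, not an address you enter in a browser; it is reached over SSH. A sketch using the sandbox login mentioned elsewhere in this thread:
    # log in to the sandbox, then list the Hadoop client configuration files
    ssh root@127.0.0.1 -p 2222
    ls /etc/hadoop/conf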
03-03-2017 03:59 PM
No, I am running the sandbox on Windows 7. OK, I will keep searching for the XML file. Thank you.
03-03-2017 03:40 PM
Seems not. The result is now: "chmod: changing permissions of /tmp/yarn: permission denied. user=root is not the owner of inode=yarn". Now I understand even less. Why yarn? I found another post with exactly the same problem I have: https://community.hortonworks.com/questions/65057/failed-to-write-to-parent-hdfs-directory.html He changed the core-site.xml file and it worked afterwards. Do you know how I can edit this file in the sandbox? I can't find the way to the path of the file.
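For reference, the "user=root is not the owner" part of that error suggests running the command as the hdfs superuser instead of root. A sketch mirroring the sudo -u hdfs pattern used elsewhere in this thread:
    # retry the permission change as the hdfs superuser rather than root
    sudo -u hdfs hadoop fs -chmod -R 777 /tmp/yarn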
03-03-2017 03:12 PM
@Ravi Mutyala How do I get to this path in my sandbox, and where do I enter the address? Can I reach the files through the Ambari interface?
03-03-2017 03:05 PM
Thanks for your fast answer. Yes, I understand the steps to modify it and where to put the path in the processor. But where can I find this XML file? What is the NiFi node? I am working in the sandbox. Can I reach this file from the Ambari manager?
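For reference, one way to locate the file from a sandbox SSH shell rather than through Ambari. A sketch, assuming a standard HDP layout where client configs live under /etc:
    # search the config tree for the Hadoop core-site.xml being discussed
    find /etc -name core-site.xml 2>/dev/null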
03-03-2017 02:56 PM
@Matt Clarke
I have the same issue. How can I get to this file? I can't find the way to this path and this file. Best regards, Martin
03-03-2017 12:23 PM
Where can I find the NiFi logs, or better, how can I open them? In the NiFi config under "advanced nifi-env" it says:
#The directory for NiFi log files
export NIFI_LOG_DIR="{{nifi_log_dir}}"
But how do I get there and open them?
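For reference, on an Ambari-managed install the nifi_log_dir variable commonly resolves to /var/log/nifi, though that is an assumption here; check the resolved value in the Ambari config. From an SSH shell:
    # follow the main NiFi application log (path is the assumed Ambari default)
    tail -f /var/log/nifi/nifi-app.log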
03-03-2017 12:02 PM
OK, I did the three steps; the result is in the picture. The first two steps seem OK. At the third it said "no such file or directory". I really don't know what to do now.
03-03-2017 10:24 AM
OK, I will check these points now. Since I installed NiFi, I have also had the red symbols and can't restart the services. It is the SNameNode from HDFS, Falcon, Storm, Ambari Infra and Atlas. Check the image. Could this also be a problem related to this?
03-02-2017 05:32 PM
OK, now I know what you mean. I downloaded the template in the tutorial for NiFi. When I check the template now and look at the PutHDFS processor, it says "failure, 104 queued".
03-02-2017 05:27 PM
Yes, Hive is working as expected; the example table is OK. And I also get the real-time data shown in Banana and in Solr, where I can run queries on it. It just doesn't get to HDFS. What is this branch that should write the data to the HDFS folder /tmp/tweets_staging?
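For reference, a quick way to confirm whether the PutHDFS branch is writing anything, using the staging directory named above:
    # list the HDFS folder the flow is supposed to write tweets into
    hadoop fs -ls /tmp/tweets_staging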
03-02-2017 04:48 PM
OK, now I see I didn't save the data through Banana; it only saves the layout of the dashboard, so it can't work. But how can I save the data then? Which step in the tutorial saves the data? Because when I was at this step:
sudo -u hdfs hadoop fs -chown -R maria_dev /tmp/tweets_staging
sudo -u hdfs hadoop fs -chmod -R 777 /tmp/tweets_staging
I got the message that this directory doesn't exist, so I created it. Is this correct?
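For reference, the full sequence with the directory created first would look like this. A sketch; maria_dev and the 777 mode are taken from the tutorial steps quoted above:
    # create the staging directory as the HDFS superuser, then hand it to maria_dev
    sudo -u hdfs hadoop fs -mkdir -p /tmp/tweets_staging
    sudo -u hdfs hadoop fs -chown -R maria_dev /tmp/tweets_staging
    sudo -u hdfs hadoop fs -chmod -R 777 /tmp/tweets_staging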
03-02-2017 04:28 PM
If I try to create the table from the JSON file without the ADD JAR command, this is the result of the SELECT on the table:

org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.serde2.SerDeException: Row is not a valid JSON Object - JSONException: A JSONObject text must end with '}' at 2 [character 3 line 1]
    at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:264)
    at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:250)
    at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:373)
    at org.apache.ambari.view.hive2.actor.ResultSetIterator.getNext(ResultSetIterator.java:119)
    at org.apache.ambari.view.hive2.actor.ResultSetIterator.handleMessage(ResultSetIterator.java:79)
    at org.apache.ambari.view.hive2.actor.HiveActor.onReceive(HiveActor.java:38)
    at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
    at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.hive.service.cli.HiveSQLException: java.io.IOException: org.apache.hadoop.hive.serde2.SerDeException: Row is not a valid JSON Object - JSONException: A JSONObject text must end with '}' at 2 [character 3 line 1]
    at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:411)
    at org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:233)
    at org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:780)
    at org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:478)
    at org.apache.hive.service.cli.thrift.ThriftCLIService.FetchResults(ThriftCLIService.java:692)
    at org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1557)
    at org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults.getResult(TCLIService.java:1542)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: org.apache.hadoop.hive.serde2.SerDeException: Row is not a valid JSON Object - JSONException: A JSONObject text must end with '}' at 2 [character 3 line 1]
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:520)
    at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:427)
    at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:146)
    at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1762)
    at org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:406)
    ... 13 more
Caused by: org.apache.hadoop.hive.serde2.SerDeException: Row is not a valid JSON Object - JSONException: A JSONObject text must end with '}' at 2 [character 3 line 1]
    at org.openx.data.jsonserde.JsonSerDe.onMalformedJson(JsonSerDe.java:412)
    at org.openx.data.jsonserde.JsonSerDe.deserialize(JsonSerDe.java:174)
    at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:501)
    ... 17 more
message: "Failed to fetch next batch for the Resultset", status: 500
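For reference, the SerDe named in the trace is org.openx.data.jsonserde.JsonSerDe, so the ADD JAR statement must point at that library. One way to locate it on the sandbox; a sketch, since the jar's exact name and location vary by install:
    # search for the openx JSON SerDe jar referenced in the stack trace above
    find / -name 'json-serde*.jar' 2>/dev/null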
03-02-2017 04:20 PM
Hey, no, after the normal steps there was no data. Then I saved the data from the Solr Banana dashboard (top right) as a JSON file, uploaded this file to the directory and created the table again. But then it was full of rubbish. However, I did this without the ADD JAR command (it's not in the tutorial at this point). So now I did it again with the ADD JAR command. I didn't get an error when creating the table, but when I SELECT on the table now, this is the output:

java.lang.Exception: Cannot fetch result for job. Job with id: 428 for instance: AUTO_HIVE_INSTANCE has either not started or has expired.
    at org.apache.ambari.view.hive2.actor.message.job.FetchFailed.<init>(FetchFailed.java:28)
    at org.apache.ambari.view.hive2.actor.OperationController.fetchResultActorRef(OperationController.java:200)
    at org.apache.ambari.view.hive2.actor.OperationController.handleMessage(OperationController.java:135)
    at org.apache.ambari.view.hive2.actor.HiveActor.onReceive(HiveActor.java:38)
    at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
    at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
message: "Cannot fetch result for job. Job with id: 428 for instance: AUTO_HIVE_INSTANCE has either not started or has expired.", status: 500
03-02-2017 03:57 PM
Hello together, I am doing the tutorial and did everything as explained. Now I am at the point where I have created all the tables: dictionary, time_zone_map and tweets_text. But my tweets_text is empty. I connected the data flow and I have my data in the Banana dashboard. Which step did I miss? How can I get my Twitter feeds into my table now? Greets, Martin
Labels: Apache Solr
03-02-2017 10:42 AM
Hey @J. D. Bacolod, did you solve your issue? I have the same problem with Solr when I want to start it. But when I hit restart, the Solr symbol turns green and I can enter the UI. There I get the next error:

collection1_shard1_replica1: org.apache.solr.common.SolrException: org.apache.solr.common.SolrException: Index dir 'hdfs://sandbox.hortonworks.com:8020/solr/collection1/core_node2/data/index/' of core 'collection1_shard1_replica1' is already locked. The most likely cause is another Solr server (or another Solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs

tweets_shard1_replica1: org.apache.solr.common.SolrException: org.apache.solr.common.SolrException: Index dir 'hdfs://sandbox.hortonworks.com:8020/solr/tweets/core_node1/data/index/' of core 'tweets_shard1_replica1' is already locked. The most likely cause is another Solr server (or another Solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs

collection1_shard2_replica1: org.apache.solr.common.SolrException: org.apache.solr.common.SolrException: Index dir 'hdfs://sandbox.hortonworks.com:8020/solr/collection1/core_node1/data/index/' of core 'collection1_shard2_replica1' is already locked. The most likely cause is another Solr server (or another Solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs

Anyone have an idea? Best regards, Martin
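For reference, a common remedy for this error after an unclean Solr shutdown is to stop Solr and remove the stale lock files from the index directories named in the messages. A sketch; write.lock as the lock file name is the usual Lucene default and is an assumption here:
    # remove leftover index locks from HDFS (paths taken from the errors above), then start Solr again
    sudo -u hdfs hadoop fs -rm /solr/collection1/core_node2/data/index/write.lock
    sudo -u hdfs hadoop fs -rm /solr/tweets/core_node1/data/index/write.lock
    sudo -u hdfs hadoop fs -rm /solr/collection1/core_node1/data/index/write.lock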
03-01-2017 09:53 PM
I am out of safemode now with: sudo -u hdfs hdfs dfsadmin -safemode leave
03-01-2017 09:50 PM
Hey eorgad, thanks for your answer. I tried to leave safemode now with the command hdfs dfsadmin -safemode leave. But now it says "access denied for user root. superuser privilege is required". I am logged in via ssh root@127.0.0.1 -p 2222.
03-01-2017 08:58 PM
I mean that now, every time I start the sandbox, HDFS, HBase, NiFi and some other services are red. I can start NiFi and HBase manually. In HDFS only the SNameNode is stopped, and I can't start it manually. I will just try to continue the tutorial now. Thanks for the answer.
03-01-2017 05:32 PM
Hello together, I want to do the tutorial for sentiment analysis. But after the first step of installing NiFi, my system doesn't work correctly anymore. I can't restart all the services, and when I try, all services go yellow and out of heartbeat. At the moment the SNameNode from HDFS is stopped, and the HBase Master and NiFi Server as well. Is it possible to start them through the command line, or does anyone have an idea how I can get this system back to work to continue the tutorial? I am already afraid of installing Solr and needing to restart again. Thanks in advance. Best regards, Martin
Labels: Apache NiFi
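For reference, stopped services can be started from the command line through the Ambari REST API. A sketch; the admin/admin credentials and the cluster name "Sandbox" are the usual sandbox defaults, but check yours in Ambari:
    # ask Ambari to move the HDFS service to the STARTED state
    curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
      -d '{"RequestInfo":{"context":"Start HDFS"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
      http://127.0.0.1:8080/api/v1/clusters/Sandbox/services/HDFS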
02-24-2017 01:27 PM
ls: cannot access /hadoop/yarn/local/usercache/admin/appcache/application_1487940575822_0002/container_1487940575822_0002_01_000002/hive.tar.gz/hive/lib/slf4j-api-*.jar: No such file or directory
ls: cannot access /hadoop/yarn/local/usercache/admin/appcache/application_1487940575822_0002/container_1487940575822_0002_01_000002/hive.tar.gz/hive/hcatalog/lib/*hbase-storage-handler-*.jar: No such file or directory
WARNING: Use "yarn jar" to launch YARN applications.
17/02/24 13:00:06 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
17/02/24 13:00:06 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
17/02/24 13:00:06 INFO pig.ExecTypeProvider: Trying ExecType : TEZ_LOCAL
17/02/24 13:00:06 INFO pig.ExecTypeProvider: Trying ExecType : TEZ
17/02/24 13:00:06 INFO pig.ExecTypeProvider: Picked TEZ as the ExecType
2017-02-24 13:00:06,556 [main] INFO org.apache.pig.Main - Apache Pig version 0.16.0.2.5.0.0-1245 (rexported) compiled Aug 26 2016, 02:07:35
2017-02-24 13:00:06,556 [main] INFO org.apache.pig.Main - Logging error messages to: /hadoop/yarn/local/usercache/admin/appcache/application_1487940575822_0002/container_1487940575822_0002_01_000002/pig_1487941206542.log
2017-02-24 13:00:11,600 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/yarn/.pigbootup not found
2017-02-24 13:00:12,838 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://sandbox.hortonworks.com:8020
2017-02-24 13:00:19,111 [main] INFO org.apache.pig.PigServer - Pig Script ID for the session: PIG-script.pig-a1001e0c-c0ad-4d2a-8c0d-6ef4e646f029
2017-02-24 13:00:22,291 [main] INFO org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2017-02-24 13:00:44,897 [main] INFO org.apache.pig.backend.hadoop.PigATSClient - Created ATS Hook
2017-02-24 13:00:56,270 [main] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.metastore.local does not exist
2017-02-24 13:00:57,367 [main] INFO hive.metastore - Trying to connect to metastore with URI thrift://sandbox.hortonworks.com:9083
2017-02-24 13:01:15,064 [main] INFO hive.metastore - Connected to metastore.
2017-02-24 13:01:47,023 [main] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.metastore.local does not exist
2017-02-24 13:01:47,049 [main] INFO hive.metastore - Trying to connect to metastore with URI thrift://sandbox.hortonworks.com:9083
2017-02-24 13:01:47,075 [main] INFO hive.metastore - Connected to metastore.
2017-02-24 13:01:47,698 [main] WARN org.apache.pig.newplan.BaseOperatorPlan - Encountered Warning IMPLICIT_CAST_TO_FLOAT 1 time(s).
2017-02-24 13:01:47,891 [main] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.metastore.local does not exist
2017-02-24 13:01:47,989 [main] INFO hive.metastore - Trying to connect to metastore with URI thrift://sandbox.hortonworks.com:9083
2017-02-24 13:01:47,995 [main] INFO hive.metastore - Connected to metastore.
2017-02-24 13:01:49,349 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 0:
<file script.pig, line 9, column 0> Output Location Validation Failed for: 'riskfactor More info to follow:
Pig 'double' type in column 2(0-based) cannot map to HCat 'BIGINT'type. Target filed must be of HCat type {DOUBLE}
Details at logfile: /hadoop/yarn/local/usercache/admin/appcache/application_1487940575822_0002/container_1487940575822_0002_01_000002/pig_1487941206542.log
2017-02-24 13:01:49,382 [main] INFO org.apache.pig.Main - Pig script completed in 1 minute, 44 seconds and 277 milliseconds (104277 ms)
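For reference, the failure above is HCatStorer's schema check: the Pig script produces a double in column 2 (0-based) while the target Hive table declares that column as BIGINT. One possible fix is to align the Hive column type; a sketch, where "riskfactor" is the table named in the error but the column name totmiles is only an illustrative assumption:
    # widen the mismatched column so HCat accepts Pig's double
    hive -e "ALTER TABLE riskfactor CHANGE totmiles totmiles DOUBLE;"
The alternative is to cast the value to the existing column type inside the Pig script before the STORE statement.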
Labels: Hortonworks Data Platform (HDP)