03-08-2017
07:55 PM
1 Kudo
To install a Mosquitto MQTT server on CentOS 7:

yum -y install unzip

Step 1: Add the CentOS 7 mosquitto repository

cd /etc/yum.repos.d
wget http://download.opensuse.org/repositories/home:/oojah:/mqtt/CentOS_CentOS-7/home:oojah:mqtt.repo
sudo yum update

Step 2: Install mosquitto & mosquitto-clients

sudo yum install -y mosquitto mosquitto-clients

Step 3: Run mosquitto

sudo su
/usr/sbin/mosquitto -d -c /etc/mosquitto/mosquitto.conf > /var/log/mosquitto.log 2>&1

https://community.hortonworks.com/content/kbentry/55839/reading-sensor-data-from-remote-sensors-on-raspber.html
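If the broker's defaults need tweaking, a minimal /etc/mosquitto/mosquitto.conf along these lines makes the port, persistence, and log destination explicit (the values shown are illustrative assumptions; the stock defaults usually work fine):

```
# Listen on the standard MQTT port
port 1883
# Persist retained messages and subscriptions across restarts
persistence true
persistence_location /var/lib/mosquitto/
# Log to a file instead of stdout
log_dest file /var/log/mosquitto.log
```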
03-08-2017
05:58 PM
I restarted Zeppelin and have the unsecure znode configured in there. https://community.hortonworks.com/questions/18228/phoenix-hbase-problem-with-hdp-234-and-java.html
03-08-2017
05:58 PM
I have added the driver to Spark. Also, the regular %phoenix SQL calls work fine from Zeppelin. The call I am using to connect (note the /hbase-unsecure znode) is:

val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> "tweets", "zkUrl" -> "localhost:2181/hbase-unsecure"))

which works elsewhere.

import org.apache.phoenix.spark._
defined class Tweet
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@3b09815b
warning: there were 1 deprecation warning(s); re-run with -deprecation for details
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=35, exceptions:
Wed Mar 08 14:26:36 UTC 2017, RpcRetryingCaller{globalStartTime=1488983196549, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.MasterNotRunningException: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
[... the same MasterNotRunningException repeats for all 35 retry attempts, from 14:26:36 to 14:35:27 UTC ...]
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1063)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1369)
at org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:120)
at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2116)
at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:828)
at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1326)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2279)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2248)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2248)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:233)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:135)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:98)
at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:57)
at org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:45)
at org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil.getSelectColumnMetadataList(PhoenixConfigurationUtil.java:277)
at org.apache.phoenix.spark.PhoenixRDD.toDataFrame(PhoenixRDD.scala:105)
at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:57)
at org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:37)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
at org.apache.spark.sql.SQLContext.load(SQLContext.scala:1153)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:43)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:48)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:50)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:52)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:54)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:56)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:58)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:60)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:62)
at $iwC$iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:64)
at $iwC$iwC$iwC$iwC$iwC$iwC.<init>(<console>:66)
at $iwC$iwC$iwC$iwC$iwC.<init>(<console>:68)
at $iwC$iwC$iwC$iwC.<init>(<console>:70)
at $iwC$iwC$iwC.<init>(<console>:72)
at $iwC$iwC.<init>(<console>:74)
at $iwC.<init>(<console>:76)
at <init>(<console>:78)
at .<init>(<console>:82)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:717)
at org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:928)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:871)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:864)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
Wed Mar 08 14:26:36 UTC 2017, RpcRetryingCaller{globalStartTime=1488983196549, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: org.apache.hadoop.hbase.MasterNotRunningException: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
[... the same retry line repeats for the remaining attempts; output truncated ...]
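Since the exception complains that the znode /hbase is missing while the connection string uses /hbase-unsecure, one quick sanity check is to read zookeeper.znode.parent straight out of hbase-site.xml and compare it with the zkUrl. A minimal sketch (the embedded XML here is a stand-in; on HDP the real file is usually /etc/hbase/conf/hbase-site.xml):

```python
import xml.etree.ElementTree as ET

# Stand-in for the contents of hbase-site.xml (assumption: the real
# file lives at /etc/hbase/conf/hbase-site.xml on an HDP node).
HBASE_SITE = """<configuration>
  <property>
    <name>zookeeper.znode.parent</name>
    <value>/hbase-unsecure</value>
  </property>
</configuration>"""

def znode_parent(xml_text):
    # Walk the <property> entries and return the configured znode parent.
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == "zookeeper.znode.parent":
            return prop.findtext("value")
    return None

print(znode_parent(HBASE_SITE))  # /hbase-unsecure
```

Whatever value this returns is what the zkUrl suffix must match; a client defaulting to /hbase against a cluster configured with /hbase-unsecure produces exactly the MasterNotRunningException above.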
03-08-2017
02:02 PM
7 Kudos
We needed to create a data lake of all the company's data; the first set of data was from SQL Server. So using Apache NiFi 1.1.x I ingested it into Hive / ORC. A few of the smaller, constantly changing tables need to stay in SQL Server, so we need to be able to join tables in Hive with tables in SQL Server. Fortunately, Microsoft provides a very cool extension to SQL Server called PolyBase that lets us build external tables pointing to Hadoop. Once those tables are referenced, they act like regular tables. So now all the company's data, including other data sources loaded into Hadoop Hive ORC tables, can be queried and joined. And it's fast!
Step 1: Apache NiFi Magic

QueryDatabaseTable: one for each table, picking a sequence id primary key the tables have. Could also do a timestamp.
ConvertAvroToORC: point to /etc/hive/conf/hive-site.xml.
PutHDFS: store in a separate HDFS directory; write this down, as we need it for PolyBase.
ReplaceText (GenerateHiveDDL): builds the CREATE TABLE Hive DDL string automagically. You can do this manually.
PutHiveQL: runs the Hive table creation DDL. You can do this manually.

For my example, I had six tables to do, so I just made five copies of that set of processors and changed the table names and HDFS directories. That's all folks.

Step 2: Prepare PolyBase

Change the yarn-site.xml file on the SQL Server machine to point to the HDP 2.5 cluster: E:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn\Polybase\Hadoop\conf\yarn-site.xml

<property>
<name>yarn.application.classpath</name>
<value>$HADOOP_CONF_DIR,/usr/hdp/current/hadoop-client/*,/usr/hdp/current/hadoop-client/lib/*,/usr/hdp/current/hadoop-hdfs-client/*,/usr/hdp/current/hadoop-hdfs-client/lib/*,/usr/hdp/current/hadoop-yarn-client/*,/usr/hdp/current/hadoop-yarn-client/lib/*</value>
</property>

Step 3: Run the DDL Necessary for PolyBase Access to Hadoop

CREATE EXTERNAL DATA SOURCE [HDP2]
WITH (TYPE = HADOOP, LOCATION = 'hdfs://hadoopserver:8020');

CREATE EXTERNAL FILE FORMAT ORC
WITH (FORMAT_TYPE = ORC);

CREATE EXTERNAL TABLE [dbo].[myTableIsExcellent]
( [myid] int NULL,
[yourid] int NULL,
[theirid] varchar(64) NULL,
[somedata] varchar(255) NULL,
[moredata] int NULL)
WITH (LOCATION = '/import/mydirectory/',
DATA_SOURCE = HDP2, FILE_FORMAT = ORC);

In short: create an external data source pointing to your HDFS on HDP; create an external file format like ORC that your tables use (I recommend ORC); then create your external tables pointing to the HDFS directories containing the ORC files.

Step 4: PolyBase Federated Query

SELECT TOP (1000) hs.[id], hs.[name], c.[description]
FROM [database].[dbo].[MyHiveTable] hs,
[LocalSQLServerTableName] c
WHERE c.id = hs.id
ORDER BY c.name DESC
It doesn't get much easier than that. It looks like a regular table, acts like a regular table, and queries like a regular table. Users won't know or care where the data is. They don't have to know you have 100 petabytes of data sitting in a massive Hortonworks Data Platform.

References:
https://msdn.microsoft.com/en-us/library/mt163689.aspx
http://blog.pragmaticworks.com/sql-server-2016-polybase
https://msdn.microsoft.com/en-us/library/dn935026.aspx
https://hernandezpaul.wordpress.com/2016/05/29/polybase-query-service-and-hadoop-welcome-sql-server-2016/
https://realizeddesign.blogspot.com/2015/09/setting-up-polybase-for-yarn-in-sql.html
https://blogs.msdn.microsoft.com/sqlcat/2016/06/21/polybase-setup-errors-and-possible-solutions/
03-07-2017
06:08 PM
2 Kudos
Have you tried Jolt or EvaluateJsonPath? Try your expression on jsonpath.com; it is great for figuring those out. You can also process the JSON twice: once with the EvaluateJsonPath processor, and then in the next step use JsonPath again in each attribute's Expression Language.
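As an illustration of the kind of path you would validate on jsonpath.com, here is a tiny Python walker for simplified dotted paths. It is illustrative only: NiFi's EvaluateJsonPath supports full JsonPath syntax, while this sketch only handles $.a.b[0].c shapes.

```python
import json

def walk(doc, path):
    # Resolve a simplified "$.a.b[0].c" style JsonPath against a parsed
    # JSON document (dicts and lists). Full JsonPath has far more
    # features; this only shows what the dotted/indexed core means.
    cur = doc
    for part in path.lstrip("$.").replace("]", "").split("."):
        for token in part.split("["):
            if token == "":
                continue
            # Numeric tokens index into lists, others into dicts.
            cur = cur[int(token)] if token.isdigit() else cur[token]
    return cur

doc = json.loads('{"user": {"tweets": [{"text": "hello"}]}}')
print(walk(doc, "$.user.tweets[0].text"))  # hello
```

Once the expression resolves the value you expect on a sample document, paste the same expression into the EvaluateJsonPath property.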
03-07-2017
03:51 PM
6 Kudos
Use Case

I want to hide text messages inside images. When the images arrive somewhere else, I want to extract those messages. I found a steganography library (LSB-Steganography, referenced below) that lets you hide text in images, binaries in images, and images in images. I was interested in hiding text messages in images. After seeing https://en.wikipedia.org/wiki/Turn:_Washington's_Spies I thought secret messages were cool. So using the library, I take an image and text and hide the text in there. The library produces a new image (PNG) that has the message in it. I have a second script that extracts the text. The images look the same to my eyes. A future test would be to run a deep learning library or image analysis tool on the images to see if they spot the hidden bits; they should be able to. A future NiFi tool would be one that spots hidden messages in images. It's a fun exercise to use NiFi for this, and it seems plausible that messages encoded in images were passing through Niagara Files back in the NSA days.

Step 1: Hide Text (ExecuteStreamCommand)
Step 2: Fetch File
Step 3: UnHide Text (ExecuteStreamCommand)

The left image is the original image and the right PNG is the output image with the text. The size on disk has increased at a noticeable level.
The python source code is in github and referenced below: hide.sh wget $1 -O img.jpg
python hidetext.py img.jpg "$2" hidetext.py import cv
from LSBSteg import LSBSteg
import sys

imagename = sys.argv[1]
textstring = sys.argv[2]

# Image that will contain the data
carrier = cv.LoadImage(imagename)
steg = LSBSteg(carrier)
steg.hideText(textstring)
steg.saveImage(imagename + ".png") unhide.sh python unhidetext.py $1 unhidetext.py import cv
from LSBSteg import LSBSteg
import sys

imagename = sys.argv[1]
im = cv.LoadImage(imagename)
steg = LSBSteg(im)
print steg.unhideText() For installation, you need to download the LSB-Steganography script and install OpenCV: pip install cv
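The core of the LSB (least significant bit) technique is simple enough to sketch in pure Python, with no OpenCV and no library: each bit of the message replaces the lowest bit of one carrier byte, which is why the image looks unchanged to the eye. The helper names here are mine, not the library's API:

```python
def hide_bits(pixels, message):
    """Replace the LSB of each byte in `pixels` with one bit of `message`."""
    # Flatten the message into bits, most significant bit of each byte first.
    bits = [(byte >> i) & 1 for byte in message.encode("utf-8")
            for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # clear the low bit, set the message bit
    return bytes(out)

def unhide_bits(pixels, length):
    """Recover `length` bytes of message from the LSBs of `pixels`."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for carrier_byte in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (carrier_byte & 1)
        data.append(byte)
    return data.decode("utf-8")

carrier = bytes(range(256)) * 2          # stand-in for image pixel data
stego = hide_bits(carrier, "spy")
print(unhide_bits(stego, 3))             # -> spy
```

A real steganography tool also hides the message length and works on actual pixel planes, but the round trip above is the whole trick.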
Reference: https://en.wikipedia.org/wiki/Steganography https://github.com/tspannhw/spy https://github.com/RobinDavid/LSB-Steganography
... View more
Labels:
03-03-2017
08:37 PM
We could have NiFi rewrite, update and add to the Sonic Pi code so the music is constantly changing.
... View more
03-03-2017
07:17 PM
3 Kudos
Working With S3-Compatible Data Stores (and handling single-source failure) With the major outage of S3 in my region, I decided I needed an alternative file store. I found a great open source server called Minio that I run on a mini PC running CentOS 7. We could also use this solution to connect to other S3-compatible stores such as RiakCS and Google Cloud Storage. I like to remain cloud and location neutral. In Apache NiFi, it's really easy: you can have two sources and two destinations, so instead of just your regular AWS S3 you can have one connection for AWS S3 and one for another store. Or you can use the second as a disaster-recovery data backup. Since my Minio box is local, I can store data locally. It's pretty affordable to attach a few terabytes to a small Linux box to hold some backups. With Apache NiFi, you have queues to buffer a potentially slower ingest/egress. Minio Setup
wget https://dl.minio.io/server/minio/release/linux-amd64/minio
chmod 755 minio
nohup ./minio server files &
Find the version that matches your hardware and OS. When Minio starts, it reports back the endpoint (use this in the NiFi endpoint URL), access key, secret key and region. You enter this information in Apache NiFi and in any S3-compatible tool like the AWS CLI or S3cmd. S3 Tool Install
pip install awscli
aws configure
AWS Access Key ID [****************3P2F]: 45454545zfgfgfgfgfgzgggzggggFFF
AWS Secret Access Key [****************Y3TG]: FFFDFDFDFDF7d8f7d87f8&D*F7d*&F78
Default region name [us-east-1]:
Default output format [None]:
aws configure set default.s3.signature_version s3v4
aws --endpoint-url http://192.168.1.155:9000 s3 ls s3://nifi
2017-03-01 16:17:19 13729 Retry_Count_Loop.xml
2017-03-01 16:19:58 19929 tspann7.jpg
aws --endpoint-url http://192.168.1.155:9000 s3 ls
2017-03-01 11:19:58 nifi
These are just for testing connectivity.
NiFi Setup
Flow 1:
GetTwitter: ingest Twitter data with keywords: AWS Outage, ...
EvaluateJSONPath: parse the main Twitter fields out of the JSON.
CoreNLPProcessor: my custom processor to run Stanford CoreNLP sentiment analysis on the message.
NLPProcessor: my custom processor to run the Apache OpenNLP name and location entity resolver on the message.
AttributesToJSON: convert all the attributes, including the output of the two custom processors, into one unified JSON file.
PutS3Object: store to my S3-compatible datastore. Here you can tee the data from AttributesToJSON to a number of different S3 stores, including Amazon S3.
Flow 2:
ListS3: list all the files from the S3-compatible data store. This is where you can add additional sources to ingest: Amazon S3, Google Cloud Storage, RiakCS, Minio and others.
FetchS3Object: get the actual file from S3.
PutFile: store it locally.
S3.properties file
# Setup endpoint
host_base = 192.168.1.155:9000
host_bucket = 192.168.1.155:9000
bucket_location = us-east-1
use_https = True
# Setup access keys
access_key = DF&D*F&*D&F*&DF&DFDF
secret_key = &d7df7f77DDFdjfiqeworsdfFDr34fd
accessKey = DF&D*F&*D&F*&DF&DFDF
secretKey = &d7df7f77DDFdjfiqeworsdfFDr34fd
# Enable S3 v4 signature APIs
signature_v2 = False
After sending Twitter JSON files to S3.
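The signature_v2 = False setting matters because Minio speaks AWS Signature Version 4. The key-derivation chain that clients like the AWS CLI and s3cmd perform under the hood is short; here is a sketch using only the standard library (the credentials and date are dummies, and this derives the signing key only, not a full request signature):

```python
import hashlib
import hmac

def _sign(key, msg):
    """One HMAC-SHA256 step in the SigV4 key-derivation chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def signing_key(secret_key, date_stamp, region, service="s3"):
    """Derive the AWS SigV4 signing key: date -> region -> service -> aws4_request."""
    k_date = _sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    return _sign(k_service, "aws4_request")

key = signing_key("dummy-secret", "20170301", "us-east-1")
print(len(key))  # -> 32 (a SHA-256 HMAC digest)
```

The derived key is then used to HMAC the string-to-sign for each request, which is why the region configured in the client has to match what the server expects.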
References: https://github.com/minio/minio https://www.minio.io/ https://dzone.com/articles/aftermath-of-the-aws-s3-outagean-interview-with-ni https://aws.amazon.com/message/41926/ https://cloud.google.com/storage/docs/interoperability https://docs.minio.io/docs/aws-cli-with-minio https://aws.amazon.com/cli/ http://s3tools.org/s3cmd https://github.com/minio/minio-java
... View more
Labels:
03-02-2017
05:25 PM
3 Kudos
1. Host a Web Page (index.html) via HTTP GET with 200 OK Status
2. Receive POST from that page via AJAX with browser data
3. Extract Content and Attributes
4. Build a JSON file of HTTP data
5. Store it
To access location from a phone or modern browser, the page must be served over SSL, so I added that for this HTTP request.
Use openssl to create a 2048-bit RSA X.509 certificate, package it as PKCS12, load it into a JKS keystore and truststore, and import the certificate into your browser.
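Because a locally generated certificate like this is usually self-signed, a non-browser test client needs to either trust the certificate or relax verification. A quick way to do the latter from Python for testing (never against a production endpoint):

```python
import ssl

# Build a context that accepts a self-signed certificate.
# check_hostname must be disabled before verify_mode can be set to CERT_NONE.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

print(ctx.verify_mode == ssl.CERT_NONE)  # -> True
```

Pass this context to urllib.request.urlopen(url, context=ctx) when hitting the HTTPS endpoint; the cleaner alternative is ssl.create_default_context(cafile="your-cert.pem") with the exported certificate.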
Your web page can be any web page, just POST back via AJAX or Form Submit.
<html>
<head>
<title>NiFi Browser Data Acquisition</title>
</head>
<body>
<script>
// Usage
window.onload = function() {
navigator.getBattery().then(function(battery) {
console.log(battery.level);
battery.addEventListener('levelchange', function() {
console.log(this.level);
});
});
};
////////////// print these
var latitude = "";
var longitude = "";
var ips = "";
var batteryInfo = "";
var screenInfo = screen.width +","+ screen.height + "," +
screen.availWidth +","+ screen.availHeight + "," +
screen.colorDepth + "," + screen.pixelDepth;
var pluginsInfo = "";
var coresInfo = "";
/////////////
////// Set Plugins
for (var i = 0; i < 12; i++) {
if ( typeof window.navigator.plugins[i] !== 'undefined' ) {
pluginsInfo += window.navigator.plugins[i].name + ', ';
}
}
////// Set Cores
if ( window.navigator.hardwareConcurrency > 0 ) {
coresInfo = window.navigator.hardwareConcurrency + " cores";
}
/////////////
/// send the information to the server
function loadDoc() {
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function() {
if (this.readyState == 4 && this.status == 200) {
document.getElementById("demo").innerHTML = 'Sent.';
}
};
// /send
xhttp.open("POST", "/send", true);
xhttp.setRequestHeader("Content-type", "application/json");
xhttp.send('{"plugins":"' + pluginsInfo +
'", "screen":"' + screenInfo +
'", "cores":"' + coresInfo +
'", "battery":"' + batteryInfo +
'", "ip":"' + ips +
'", "lat":"' + latitude + '", "lng":"' + longitude + '"}')
}
////////////
function geoFindMe() {
var output = document.getElementById("out");
if (!navigator.geolocation){
output.innerHTML = "<p>Geolocation is not supported by your browser</p>";
return;
}
function success(position) {
latitude = position.coords.latitude;
longitude = position.coords.longitude;
output.innerHTML = '<p>Latitude is ' + latitude + '° <br>Longitude is ' + longitude + '°</p>';
var img = new Image();
img.src="https://maps.googleapis.com/maps/api/staticmap?center=" + latitude + "," + longitude + "&zoom=13&size=300x300&sensor=false";
output.appendChild(img);
}
function error() {
output.innerHTML = "Unable to retrieve your location";
}
output.innerHTML = "<p>Locating…</p>";
navigator.geolocation.getCurrentPosition(success, error);
}
//get the IP addresses associated with an account
function getIPs(callback){
var ip_dups = {};
//compatibility for firefox and chrome
var RTCPeerConnection = window.RTCPeerConnection
|| window.mozRTCPeerConnection
|| window.webkitRTCPeerConnection;
var useWebKit = !!window.webkitRTCPeerConnection;
//bypass naive webrtc blocking using an iframe
if(!RTCPeerConnection){
//NOTE: you need to have an iframe in the page right above the script tag
//
//<iframe id="iframe" sandbox="allow-same-origin" style="display: none"></iframe>
//<script>...getIPs called in here...
//
var win = iframe.contentWindow;
RTCPeerConnection = win.RTCPeerConnection
|| win.mozRTCPeerConnection
|| win.webkitRTCPeerConnection;
useWebKit = !!win.webkitRTCPeerConnection;
}
//minimal requirements for data connection
var mediaConstraints = {
optional: [{RtpDataChannels: true}]
};
var servers = {iceServers: [{urls: "stun:stun.services.mozilla.com"}]};
//construct a new RTCPeerConnection
var pc = new RTCPeerConnection(servers, mediaConstraints);
function handleCandidate(candidate){
//match just the IP address
var ip_regex = /([0-9]{1,3}(\.[0-9]{1,3}){3}|[a-f0-9]{1,4}(:[a-f0-9]{1,4}){7})/
var ip_addr = ip_regex.exec(candidate)[1];
//remove duplicates
if(ip_dups[ip_addr] === undefined)
callback(ip_addr);
ip_dups[ip_addr] = true;
}
//listen for candidate events
pc.onicecandidate = function(ice){
//skip non-candidate events
if(ice.candidate)
handleCandidate(ice.candidate.candidate);
};
//create a bogus data channel
pc.createDataChannel("");
//create an offer sdp
pc.createOffer(function(result){
//trigger the stun server request
pc.setLocalDescription(result, function(){}, function(){});
}, function(){});
//wait for a while to let everything done
setTimeout(function(){
//read candidate info from local description
var lines = pc.localDescription.sdp.split('\n');
lines.forEach(function(line){
if(line.indexOf('a=candidate:') === 0)
handleCandidate(line);
});
}, 1000);
}
window.addEventListener("load", function (ev) {
"use strict";
var log = document.getElementById("log");
// https://dvcs.w3.org/hg/dap/raw-file/tip/sensor-api/Overview.html
window.addEventListener("devicetemperature", function (ev) {
log.textContent += "devicetemperature " + ev.value + "\n";
}, false);
window.addEventListener("devicepressure", function (ev) {
log.textContent += "devicepressure " + ev.value + "\n";
}, false);
window.addEventListener("devicelight", function (ev) {
log.textContent += "devicelight " + ev.value + "\n";
// toy trick
log.style.color = "rgb(" + (255 - 2*ev.value) + ",0,0)";
log.style.backgroundColor = "rgb(0,0," + (2*ev.value) + ")";
}, false);
window.addEventListener("deviceproximity", function (ev) {
log.textContent += "deviceproximity " + ev.value + "\n";
// toy trick
if (ev.value < 3) navigator.vibrate([300, 100, 100]);
}, false);
window.addEventListener("devicenoise", function (ev) {
log.textContent += "devicenoise " + ev.value + "\n";
}, false);
window.addEventListener("devicehumidity", function (ev) {
log.textContent += "devicehumidity " + ev.value + "\n";
}, false);
//https://wiki.mozilla.org/Magnetic_Field_Events
window.addEventListener("devicemagneticfield", function (ev) {
log.textContent += "devicemagneticfield " + [ev.x, ev.y, ev.z] + "\n";
}, false);
// https://dvcs.w3.org/hg/dap/raw-file/default/pressure/Overview.html
window.addEventListener("atmpressure", function (ev) {
log.textContent += "atmpressure " + ev.value + "\n";
}, false);
// https://dvcs.w3.org/hg/dap/raw-file/tip/humidity/Overview.html
window.addEventListener("humidity", function (ev) {
log.textContent += "humidity " + ev.value + "\n";
}, false);
// https://dvcs.w3.org/hg/dap/raw-file/tip/temperature/Overview.html
window.addEventListener("temperature", function (ev) {
log.textContent += "temperature " + [ev.f, ev.c, ev.k, ev.value] + "\n";
}, false);
// https://dvcs.w3.org/hg/dap/raw-file/tip/battery/Overview.html
try {
if (typeof navigator.getBattery === "function") {
navigator.getBattery().then(function (battery) {
log.textContent += "battery.level " + battery.level + "\n";
log.textContent += "battery.charging " + battery.charging + "\n";
batteryInfo = "battery.level=" + battery.level + "," +
"battery.charging=" + battery.charging;
log.textContent += "battery.chargingTime " + battery.chargingTime + "\n";
log.textContent += "battery.dischargingTime " + battery.dischargingTime + "\n";
battery.addEventListener("levelchange", function (ev) {
log.textContent += "change battery.level " + battery.level + "\n";
}, false);
}).catch(function (err) {
log.textContent += err.toString() + "\n";
});
} else {
log.textContent += "";
}
} catch (ex) {
log.textContent += ex.toString() + "\n";
}
}, false);
</script>
<p>
<br>
DEMO: Send Data to HDF / Apache NiFi via HandleHTTPRequest
<br>
<p><button onclick="geoFindMe()">Show my location</button></p>
<div id="out"></div>
<div id="demo"></div>
<pre id="log"></pre>
<button type="button" onclick="loadDoc()">Send data to Apache NiFi SSL Server</button>
<iframe id="iframe" sandbox="allow-same-origin" style="display: none"></iframe>
<script>
getIPs(function(ip){ips = ip;});
</script>
</body>
</html>
index.html : A web page to grab user information.
mobile-ingest-v3.xml : Apache NiFi 1.1.x template.
Note: Different browsers, devices, phones, tablets and versions will send different values. Users should get a location request pop-up.
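If you want to exercise the /send endpoint without a browser, the same payload can be assembled with json.dumps, which, unlike the page's string concatenation, escapes any quotes inside the field values. A sketch (the endpoint URL is the one from my network and the values are samples; yours will differ):

```python
import json
import urllib.request

# Sample values shaped like what the page collects
payload = {
    "plugins": "Shockwave Flash, Chrome PDF Viewer",
    "screen": "1440,900,1440,877,24,24",
    "cores": "8 cores",
    "battery": "battery.level=1,battery.charging=true",
    "ip": "192.168.1.151",
    "lat": "40.2681799",
    "lng": "-74.5291745",
}
body = json.dumps(payload).encode("utf-8")

# Build the POST the AJAX call makes; uncomment the send once the flow is running.
req = urllib.request.Request(
    "https://192.168.1.151:9178/send",
    data=body,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req, context=ctx)  # ctx: an SSLContext that trusts your cert

print(json.loads(body)["cores"])  # -> 8 cores
```

HandleHttpRequest will then expose the same http.* attributes shown in the JSON result below.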
JSON Result File
{
"http.request.uri" : "/send",
"http.context.identifier" : "a4f9ae25-5f49-463e-97eb-c8a6bf3be8a7",
"http.remote.host" : "192.xxx.1.xxx",
"http.headers.Host" : "192.xxx.1.xxx:9178",
"http.local.name" : "192.xxx.1.xxx",
"http.headers.DNT" : "1",
"plugins" : "Widevine Content Decryption Module, Shockwave Flash, Chrome PDF Viewer, Native Client, Chrome PDF Viewer, ",
"latitude" : "40.2681799",
"http.headers.Accept" : "*/*",
"battery" : "battery.level=1,battery.charging=true",
"uuid" : "a2f299ae-6ef6-480d-a359-1362d25abe76",
"http.request.url" : "https://192.168.1.151:9178/send",
"http.server.name" : "192.168.1.151",
"http.character.encoding" : "UTF-8",
"path" : "./",
"cores" : "8 cores",
"http.remote.addr" : "192.168.1.151",
"http.headers.User-Agent" : "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36",
"http.method" : "POST",
"http.headers.Connection" : "keep-alive",
"longitude" : "-74.5291745",
"http.server.port" : "9178",
"ip" : "192.168.1.151",
"mime.type" : "application/json",
"http.locale" : "en_US",
"http.headers.Accept-Encoding" : "gzip, deflate, br",
"http.headers.Origin" : "https://192.168.1.151:9178",
"http.servlet.path" : "",
"http.local.addr" : "192.168.1.151",
"filename" : "1082639525534467",
"http.headers.Referer" : "https://192.168.1.151:9178/",
"http.headers.Accept-Language" : "en-US,en;q=0.8",
"http.headers.Content-Length" : "253",
"http.headers.Content-Type" : "application/json",
"RouteOnAttribute.Route" : "isjsonpost"
}
References:
https://github.com/tspannhw/webdataingest
http://webkay.robinlinus.com/
https://github.com/RobinLinus/autofill-phishing
https://github.com/RobinLinus/ubercookie
https://github.com/RobinLinus/socialmedia-leak
https://www.w3schools.com/jsref/prop_screen_availheight.asp
https://community.hortonworks.com/articles/27033/https-endpoint-in-nifi-flow.html
http://www.batchiq.com/nifi-configuring-ssl-auth.html
https://community.hortonworks.com/articles/886/securing-nifi-step-by-step.html
http://mobilehtml5.org/
https://gist.github.com/bellbind/c60d7008e86c34a76aa1
https://github.com/coremob/camera
http://www.girliemac.com/presentation-slides/html5-mobile-approach/deviceAPIs.html?full#23
https://github.com/girliemac/sushi-compass/blob/master/js/app.js
https://github.com/noipfraud/IPLock
http://www.tomanthony.co.uk/blog/detect-visitor-social-networks/
https://appsec-labs.com/html5/#toggle-id-5
https://mobiforge.com/design-development/sense-and-sensor-bility-access-mobile-device-sensors-with-javascript
https://www.html5rocks.com/en/tutorials/device/orientation/
http://qnimate.com/html5-proximity-api/
... View more
Labels:
03-01-2017
02:01 PM
My above example worked well; I just didn't implement paging. Some people have paging in other examples. It was just using the REST API. Which particular Facebook APIs/information are you looking for?
... View more