Member since: 04-24-2017
Posts: 106
Kudos Received: 13
Solutions: 7
My Accepted Solutions
Views | Posted
---|---
1420 | 11-25-2019 12:49 AM
2506 | 11-14-2018 10:45 AM
2258 | 10-15-2018 03:44 PM
2126 | 09-25-2018 01:54 PM
1948 | 08-03-2018 09:47 AM
11-28-2017
03:42 PM
1 Kudo
I created a NiFi workflow that reads data from a database table into Avro and later converts it into JSON format. At the moment, my JSON object looks like the following:

{
"metric" : "asdpacaduPwirkL1",
"value" : "-495.07000732421875",
"timestamp" : "2017-05-16 12:57:39.0",
"tags" : {
"a" : "b",
"c" : "d"
}
}
This is quite good, but as a last step I have to convert the "value" attribute into a number (it is a string at the moment), so my final JSON should look like this:

{
"metric" : "asdpacaduPwirkL1",
"value" : -495.07000732421875, // this should be changed into a number
"timestamp" : "2017-05-16 12:57:39.0",
"tags" : {
"a" : "b",
"c" : "d"
}
}

Which NiFi processor can I use for this, and how can I convert the data type from string to number? Thank you for your help!
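For reference, one approach that would fit here is the JoltTransformJSON processor: Jolt's modify-overwrite-beta operation can coerce a string field to a numeric type with =toDouble. A minimal sketch of such a spec (assuming the processor's default "Chain" spec type; not taken from the original thread):

[
  {
    "operation": "modify-overwrite-beta",
    "spec": {
      "value": "=toDouble"
    }
  }
]

With this spec the "value" field is rewritten in place as a JSON number while all other fields pass through unchanged; =toLong or =toInteger would work the same way for integral values.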
Labels:
- Apache NiFi
11-16-2017
08:14 AM
Thank you for the fast answer! I added the property to my Kerberos settings in Ambari, but the problem still exists. Also, my /var/log/hadoop-hdfs directory on the NameNode host exists, but it is empty!
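A quick way to check where the NameNode actually writes its logs, as a sketch assuming a standard HDP layout (not part of the original reply):

# Inspect the -Dhadoop.log.dir JVM argument the NameNode was started with
ps aux | grep -i '[n]amenode' | tr ' ' '\n' | grep 'hadoop.log.dir'

# Or check the log directory configured for HDFS in hadoop-env
grep -r 'HADOOP_LOG_DIR' /etc/hadoop/conf/hadoop-env.sh

If these point somewhere other than /var/log/hadoop-hdfs, the empty directory would be expected.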
11-15-2017
09:56 AM
I've set up an HDP 2.6.3 Hadoop cluster with Ambari 2.5.2 (it was HDP 2.5.2 and Ambari 2.4.2 earlier, with the same behavior). When I start all services via the Ambari UI, the process gets stuck at starting the NameNode service. The output always says:

2017-11-15 09:23:25,594 - Waiting for this NameNode to leave Safemode due to the following conditions: HA: False, isActive: True, upgradeType: None
2017-11-15 09:23:25,594 - Waiting up to 19 minutes for the NameNode to leave Safemode...
2017-11-15 09:23:25,595 - Execute['/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://my-host.com:8020 -safemode get | grep 'Safe mode is OFF''] {'logoutput': True, 'tries': 115, 'user': 'hdfs', 'try_sleep': 10}
2017-11-15 09:23:27,811 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://my-host.com:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
2017-11-15 09:23:40,148 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://my-host.com:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
2017-11-15 09:23:52,525 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://my-host.com:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
2017-11-15 09:24:04,853 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://my-host.com:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
2017-11-15 09:24:17,238 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://my-host.com:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
2017-11-15 09:24:29,566 - Retrying after 10 seconds. Reason: Execution of '/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://my-host.com:8020 -safemode get | grep 'Safe mode is OFF'' returned 1.
...

I think this issue has occurred ever since I set up the cluster about six months ago. I always try to leave safemode manually using the command:

hdfs dfsadmin -safemode leave

But this command often doesn't help; mostly it shows that safemode is still ON. So I have to force the safemode exit with:

hdfs dfsadmin -safemode forceExit

Afterwards the NameNode start resumes and all the other services start fine as well. When I forget to run the forceExit command, the NameNode service start times out and the subsequent service starts fail too. Can someone explain why Ambari / the NameNode can't leave safemode automatically? What could be the reason here? Here is a screenshot of my HDFS overview in Ambari after a successful start of the services (after forceExit): Any help would be appreciated, thank you!
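For background (not from the original post): the NameNode leaves safemode on its own once enough DataNodes have reported enough blocks, so comparing the reported state against the NameNode log usually shows what is missing. A rough diagnostic sketch, with the log path assumed from a typical HDP install:

# Current safemode state as the NameNode sees it
hdfs dfsadmin -safemode get

# Live DataNodes and capacity; too few live nodes keeps safemode on
hdfs dfsadmin -report | head -n 20

# The NameNode log usually states the exact condition, e.g. that the
# reported blocks need additional blocks to reach the safemode threshold
grep -i 'safe mode' /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | tail -n 5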
11-06-2017
12:49 PM
Hi @Bryan Bende, this tutorial is really helpful, thank you! For me everything is working except the "SPNEGO" part:

a) When I open "https://myhost.de:9445/nifi" (without a kinit beforehand), my Kerberos client asks for authentication (which looks good). When I enter the principal and password, it continues with step b.

b) When I have already done a "kinit" before opening my browser and entering "https://myhost.de:9445/nifi", I still get the username / password prompt shown in the "Kerberos Login" section.

What am I missing here? I configured the following settings in my Firefox browser:

network.auth.use-sspi = false [Windows only]
network.negotiate-auth.delegation-uris = https://myhost.de:9445
network.negotiate-auth.trusted-uris = https://myhost.de:9445

I tested it on CentOS 7, Ubuntu, and Windows; I always get the login screen instead of it being skipped after the "kinit". Can you help?
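One way to narrow this down (not from the original thread, just a sketch using standard Kerberos and curl tooling) is to verify the ticket and test SPNEGO outside the browser:

# Confirm the TGT from kinit is actually in the ticket cache
klist

# Test SPNEGO negotiation directly; "-u :" tells curl to take the identity
# from the Kerberos ticket cache instead of prompting for credentials
curl -k --negotiate -u : -v https://myhost.de:9445/nifi

If curl authenticates but Firefox still prompts, the problem is on the browser side (for example, the trusted-uris value not matching the host exactly).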
07-07-2017
08:34 AM
OK, easy solution here: I had taken the hive-jdbc-<version>.jar file as the dependency, but I have to take hive-jdbc-<version>-standalone.jar. Changing /usr/hdp/current/hive-client/lib/hive-jdbc-1.2.1000.2.6.1.0-129.jar into /usr/hdp/2.6.1.0-129/hive2/jdbc/hive-jdbc-2.1.0.2.6.1.0-129-standalone.jar did it for me! You can find the hive-jdbc standalone jar with:

find / -name "hive-jdbc*standalone.jar"
07-06-2017
06:41 AM
1 Kudo
Removing "hdp-1.novalocal" from the hosts list and using the hostname script to set the public / private hostname did it for me! Thank you so much, I think you saved my whole week!
07-05-2017
04:38 PM
Thank you @Jay SenSharma for the very fast answer. I just found out that the wrong hostnames come from the ambari-agents! I did a curl ... DELETE .../hosts/hdp-1.novalocal and everything looked good afterwards (with the agents stopped). When I restarted the agents, the FQDNs were back in the GET request again! Can I still use your solution for that, or is it a different problem? Another question: is there a way to set the hostname that the agents report? Or which information do they use (/etc/hostname, hostname, or hostname -f)? As I'm not a Linux expert, I'm thankful for any help.
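For reference, a sketch of how the agent hostname can be pinned (by default the agent reports Python's socket.getfqdn(); the script path below is an illustrative choice, and the hostname_script property assumes a standard Ambari agent install):

# Create a script that prints the hostname the agent should report
cat > /var/lib/ambari-agent/hostname.sh <<'EOF'
#!/bin/sh
echo hdp-1
EOF
chmod +x /var/lib/ambari-agent/hostname.sh

# In /etc/ambari-agent/conf/ambari-agent.ini, under the [agent] section, add:
#   hostname_script=/var/lib/ambari-agent/hostname.sh
# and then restart the agent so it re-registers with the new name:
ambari-agent restart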
07-05-2017
04:09 PM
1 Kudo
I just restarted my cluster (HDP 2.6) nodes for the first time. After I ran the

ambari-server start
ambari-agent start

commands, the Ambari UI no longer received a heartbeat from any host! When I call

curl -i -H "X-Requested-By: ambari" -u admin:mypassword -X GET http://localhost:8080/api/v1/hosts

I get the list of my hosts, but it contains two entries for each node (one with the plain hostname and the other with the FQDN):

HTTP/1.1 200 OK
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
Cache-Control: no-store
Pragma: no-cache
Set-Cookie: AMBARISESSIONID=7pm102h29dbu1670zcr416aq9;Path=/;HttpOnly
Expires: Thu, 01 Jan 1970 00:00:00 GMT
User: admin
Content-Type: text/plain
Vary: Accept-Encoding, User-Agent
Content-Length: 1017
Server: Jetty(8.1.19.v20160209)
{
"href" : "http://localhost:8080/api/v1/hosts",
"items" : [
{
"href" : "http://localhost:8080/api/v1/hosts/hdp-1",
"Hosts" : {
"cluster_name" : "TestCluster",
"host_name" : "hdp-1"
}
},
{
"href" : "http://localhost:8080/api/v1/hosts/hdp-1.novalocal",
"Hosts" : {
"host_name" : "hdp-1.novalocal"
}
},
{
"href" : "http://localhost:8080/api/v1/hosts/hdp-2",
"Hosts" : {
"cluster_name" : "TestCluster",
"host_name" : "hdp-2"
}
},
{
"href" : "http://localhost:8080/api/v1/hosts/hdp-2.novalocal",
"Hosts" : {
"host_name" : "hdp-2.novalocal"
}
},
{
"href" : "http://localhost:8080/api/v1/hosts/hdp-3",
"Hosts" : {
"cluster_name" : "TestCluster",
"host_name" : "hdp-3"
}
},
{
"href" : "http://localhost:8080/api/v1/hosts/hdp-3.novalocal",
"Hosts" : {
"host_name" : "hdp-3.novalocal"
}
}
]
}

This seems to confuse the ambari-server / ambari-agent, so that I can't receive a heartbeat anymore! How can I solve this issue? My cluster is not usable anymore, as the services miss the heartbeat! Thank you!

Update: I just saw that I set the hostname to hdp-x before the ambari-server install, e.g.:

sudo hostname hdp-1

When I restart the node(s), it has its "old" hostname again: hdp-1.novalocal. I just tried running "sudo hostname hdp-1" again, but it didn't help. Is it because the ambari-server and ambari-agents start automatically after boot? Stopping and restarting the services after this "hostname hdp-1" command didn't help either!
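For context (not part of the original post): the hostname command only changes the name until the next reboot, which matches the behavior described above. A persistent change on a systemd-based system such as CentOS 7 would look roughly like this:

# Set the hostname so that it survives reboots
sudo hostnamectl set-hostname hdp-1

# Verify the short and fully qualified names
hostname
hostname -f

# Older, non-systemd approach: write the name into /etc/hostname directly
echo 'hdp-1' | sudo tee /etc/hostname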
Labels:
- Apache Ambari
07-05-2017
11:44 AM
I installed the Hortonworks Data Platform 2.6 (still un-kerberized at the moment), which includes Zeppelin 0.7 and Spark 1.6.3. Zeppelin seems to be prepared for Spark 2.x and Scala 2.11, so I had to remove the Spark interpreter and re-install a new one for my Spark version 1.6.3 and Scala 2.10. For that I followed this guide: https://zeppelin.apache.org/docs/0.6.1/manual/interpreterinstallation.html#install-spark-interpreter-built-with-scala-210 After restarting the Zeppelin service via Ambari, I was able to run this paragraph (which didn't work before):

%spark
sc.version
===========================
res0: String = 1.6.3

Now I'm trying to use the %jdbc interpreter (part of the SAP Vora installation), just to run some SELECT statements:

%jdbc
select * from sys.tables using com.sap.spark.engines
======================================================================================================
java.lang.NoSuchMethodError: org.apache.hive.service.cli.thrift.TExecuteStatementReq.setQueryTimeout(J)V
at org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:297)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:241)
at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:581)
at org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:692)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:97)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:490)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Here is a screenshot of my JDBC interpreter settings:

Other SELECTs (without the com.sap.spark.engines part) don't work for me either, and I get the same exception! Could it be a problem caused by incompatibilities between the Hadoop tools? Or what else could be the reason for the error message above? Is a dependency missing (and if so, which one)? Thank you for any advice!
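For reference (not from the original question): a NoSuchMethodError on TExecuteStatementReq.setQueryTimeout usually means an older hive-jdbc / hive-service jar is on the interpreter's classpath. A rough way to check, using the class and method names from the stack trace above and a jar path that appears elsewhere in this thread:

# List every hive-jdbc jar on the machine
find / -name 'hive-jdbc*.jar' 2>/dev/null

# Check whether a given jar's TExecuteStatementReq has setQueryTimeout;
# no output means the method (or the whole class) is missing from that jar
javap -cp /usr/hdp/current/hive-client/lib/hive-jdbc-1.2.1000.2.6.1.0-129.jar \
    org.apache.hive.service.cli.thrift.TExecuteStatementReq | grep setQueryTimeout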
Labels:
- Apache Spark
- Apache Zeppelin
04-25-2017
07:16 AM
So, the solution to change Spark's hive-site.xml settings persistently (so that they survive Spark service restarts) is to find them in the Hive configs in Ambari and change them there. After restarting Hive and afterwards the Spark component via Ambari, the Spark config file /usr/hdp/current/spark-client/conf/hive-site.xml is also updated with the values set in the Ambari Hive configs. Thank you @Doroszlai, Attila! See also the comment above: https://community.hortonworks.com/answers/98279/view.html