Member since: 06-13-2017
Posts: 45
Kudos Received: 2
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1993 | 01-24-2019 06:22 AM
 | 3189 | 08-06-2018 12:05 PM
07-16-2018 11:11 AM
This seems related to https://issues.apache.org/jira/browse/PHOENIX-3333; however, according to https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_release-notes/content/patch_phoenix.html, that issue is fixed in HDP 2.6.2.
07-16-2018 08:44 AM
Hi there,
On my HDP 2.6.2 cluster, I am using Spark 2.1.1 and Phoenix 4.7. When I start spark-shell as below,
spark-shell --conf "spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/phoenix-client/lib/hbase-client.jar:/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar:/usr/hdp/current/phoenix-client/lib/hbase-common.jar:/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar:/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar" --conf "spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/phoenix-client/lib/hbase-client.jar:/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar:/usr/hdp/current/phoenix-client/lib/hbase-common.jar:/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar:/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar"
it can successfully save data to table2 with this code:
// Imports required in the shell for saveToPhoenix and HBaseConfiguration
import org.apache.phoenix.spark._
import org.apache.hadoop.hbase.HBaseConfiguration
// Read TABLE1 through the Phoenix data source, then write it to table2
val phoenixOptionMap = Map("table" -> "TABLE1", "zkUrl" -> "zk1:2181/hbase-secure")
val df2 = spark.sqlContext.read.format("org.apache.phoenix.spark").options(phoenixOptionMap).load()
val configuration = HBaseConfiguration.create()
configuration.set("zookeeper.znode.parent", "/hbase-secure")
df2.saveToPhoenix("table2", configuration, Option("zk1:2181/hbase-secure"))
Then I created a new Scala program:
package com.test

import org.apache.spark.sql.SparkSession
import org.apache.phoenix.spark._
import org.apache.hadoop.hbase.HBaseConfiguration

object SmokeTest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .appName("PhoenixSmokeTest")
      .getOrCreate()

    // Read TABLE1 through the Phoenix data source
    val phoenixOptionMap = Map("table" -> "TABLE1", "zkUrl" -> "zk1:2181/hbase-secure")
    val df2 = spark.sqlContext.read.format("org.apache.phoenix.spark").options(phoenixOptionMap).load()

    // Point the HBase client at the secure znode and write to table2
    val configuration = HBaseConfiguration.create()
    configuration.set("zookeeper.znode.parent", "/hbase-secure")
    configuration.addResource("/etc/hbase/conf/hbase-site.xml")
    df2.saveToPhoenix("table2", configuration, Option("zk1:2181/hbase-secure"))
  }
}
and ran it with the spark-submit script below:
spark-submit \
--class com.test.SmokeTest \
--master yarn \
--deploy-mode client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 4 \
--num-executors 2 \
--conf "spark.executor.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/phoenix-client/lib/hbase-client.jar:/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar:/usr/hdp/current/phoenix-client/lib/hbase-common.jar:/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar:/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar" \
--conf "spark.driver.extraClassPath=/usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar:/usr/hdp/current/phoenix-client/phoenix-client.jar:/usr/hdp/current/phoenix-client/lib/hbase-client.jar:/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar:/usr/hdp/current/phoenix-client/lib/hbase-common.jar:/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar:/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar" \
--jars /usr/hdp/current/phoenix-client/phoenix-4.7.0.2.6.2.0-205-spark2.jar,/usr/hdp/current/phoenix-client/phoenix-client.jar,/usr/hdp/current/phoenix-client/lib/hbase-client.jar,/usr/hdp/current/phoenix-client/lib/phoenix-spark2-4.7.0.2.6.2.0-205.jar,/usr/hdp/current/phoenix-client/lib/hbase-common.jar,/usr/hdp/current/phoenix-client/lib/hbase-protocol.jar,/usr/hdp/current/phoenix-client/lib/phoenix-core-4.7.0.2.6.2.0-205.jar \
--verbose \
/tmp/test-1.0-SNAPSHOT.jar
It failed with the message below:
18/07/16 16:30:16 INFO ClientCnxn: Session establishment complete on server zk1/10.2.29.102:2181, sessionid = 0x364270588b5472f, negotiated timeout = 60000
18/07/16 16:30:17 INFO Metrics: Initializing metrics system: phoenix
18/07/16 16:30:17 INFO MetricsConfig: loaded properties from hadoop-metrics2.properties
18/07/16 16:30:17 INFO MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
18/07/16 16:30:17 INFO MetricsSystemImpl: phoenix metrics system started
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix$default$4()Lscala/Option;
at com.trendyglobal.bigdata.inventory.SmokeTest$.main(SmokeTest.scala:28)
at com.trendyglobal.bigdata.inventory.SmokeTest.main(SmokeTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:751)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/07/16 16:30:20 INFO SparkContext: Invoking stop() from shutdown hook
18/07/16 16:30:20 INFO ServerConnector: Stopped Spark@38f66b77{HTTP/1.1}{0.0.0.0:4040}
Would anyone have any advice? Thanks, Forest
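A NoSuchMethodError on saveToPhoenix$default$4 usually indicates the application jar was compiled against a different phoenix-spark version than the one on the cluster classpath. A minimal build.sbt sketch that pins the compile-time dependencies and marks them provided so the cluster jars win at runtime; the version strings are illustrative assumptions, not taken from this post:

// Hypothetical build.sbt sketch: compile against the same Phoenix/Spark
// versions the cluster actually ships and mark them "provided" so the
// cluster jars are used at runtime. Versions below are illustrative.
libraryDependencies ++= Seq(
  "org.apache.spark"   %% "spark-sql"     % "2.1.1"           % "provided",
  "org.apache.phoenix" %  "phoenix-spark" % "4.7.0-HBase-1.1" % "provided",
  "org.apache.phoenix" %  "phoenix-core"  % "4.7.0-HBase-1.1" % "provided"
)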
Labels:
- Apache Phoenix
- Apache Spark
11-06-2017 02:37 AM
1 Kudo
Thanks for the info.
11-03-2017 11:58 AM
Hi, I'd like to control the workload on the NiFi server. One way is to set the Back Pressure Data Size or object count threshold on each connection. Is there a simpler way to limit the number of running jobs globally in NiFi? Thanks.
Labels:
- Apache NiFi
11-03-2017 11:43 AM
1 Kudo
@Bryan Bende Do you mean deleting all of the directories below to empty the queue? Will the job flow be deleted too?
- content_repository
- flowfile_repository
- provenance_repository
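For reference, a minimal sketch of that cleanup, assuming a standalone NiFi with default repository locations; the flow definition itself lives in conf/flow.xml.gz rather than in these repositories, so the flow should survive:

# Stop NiFi before touching the repositories (paths assume the NiFi home dir).
./bin/nifi.sh stop
rm -rf ./content_repository ./flowfile_repository ./provenance_repository
./bin/nifi.sh start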
10-12-2017 03:48 AM
I tried the approach @Geoffrey Shelton Okot advised, but no luck. The kdc.conf is kdcconf.txt, and the krb5.conf was changed to krb5conf-after-install-client.txt after the "Install Kerberos Client" step. The nodes are VMs on the same physical server, and the command "kadmin -p admin/admin@ABC.COM" succeeds on all nodes. Any hints? I can't find any output log for the "Test Kerberos Client" step. Actually, can I skip it?
10-11-2017 12:05 PM
Hi all, I have installed an HDP 2.6.2 cluster on Ubuntu 16.04 servers. While enabling Kerberos, it hangs on the "Test Kerberos Client" step as the attached picture shows. I followed the guideline https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-security/content/optional_install_a_new_mit_kdc.html, but strangely, when running "krb5_newrealm" it only asked me to enter the master key password and did NOT ask me to input the default realm. I then edited krb5.conf to add the realm manually (krb5conf.txt), and the command "kadmin -p admin/admin@ABC.COM" tests successfully. Has anyone encountered this, and are there any hints? Thanks
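Since the attached krb5conf.txt is not rendered here, a typical manually added realm stanza looks roughly like this (the KDC hostname below is a placeholder, not taken from the post):

# Sketch of the realm entries added to /etc/krb5.conf; hostnames are placeholders.
[libdefaults]
    default_realm = ABC.COM

[realms]
    ABC.COM = {
        kdc = kdc-host.example.com
        admin_server = kdc-host.example.com
    }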
09-07-2017 06:01 AM
Thanks for the advice. The issue was resolved after changing the realm name from dev.com to DEV.COM; Kerberos realm names are case-sensitive and conventionally uppercase.
09-06-2017 12:57 PM
I'd like to enable Kerberos for HDP 2.6.1 on Ubuntu 16.04, following the guidelines below:
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-security/content/optional_install_a_new_mit_kdc.html
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-security/content/enabling_kerberos_security_in_ambari.html
After installing the Kerberos client on all hosts, it failed in the "Test Kerberos Client" step with:
add_principal: Insufficient access to lock database while creating "dakelake-090617@dev.com"
I have tried disabling SELinux (ref http://manpages.ubuntu.com/manpages/xenial/man8/kerberos_selinux.8.html) but no luck. I also tried logging in to the KDC with kadmin -p admin/admin@dev.com and then running "addprinc test3@dev.com", and it prompted the same error. Has anyone encountered this and found any solution or hints? Thanks a ton. Forest
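As a troubleshooting sketch (it assumes shell access on the KDC host), kadmin.local talks to the KDC database directly and bypasses the kadm5.acl checks, which helps separate an ACL problem from a database problem:

# Run on the KDC host itself; kadmin.local bypasses kadmind and its ACLs.
sudo kadmin.local -q "addprinc -randkey test3@dev.com"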
Labels:
- Apache Ambari
- Kerberos
- Security
06-13-2017 03:57 PM
Thanks for sharing, @Matt Clarke. So even if we set the backpressure/control-rate thresholds, we still need to avoid a single large file that could crash NiFi (e.g. an OOM), right?