Member since: 07-27-2015
92 Posts
4 Kudos Received
1 Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 518 | 12-15-2019 07:05 PM |
12-01-2020
02:59 AM
I get this error when I try to import from NiFi 1.9.0 into NiFi 1.11.4. Do you have any suggestions? Thank you
05-30-2020
07:08 PM
1 Kudo
Hello @Bender, Yes, I got the link [2] from your reply. Thank you very much! Paul
05-11-2020
12:30 AM
@iamfromsky Did you get any resolution for this? I am facing the same scenario, but have had no luck solving it so far. Jobs run from a certain tool are unable to connect to HMS and fail with the error below:
ERROR org.apache.thrift.transport.TSaslTransport: [pool-5-thread-207]: SASL negotiation failure
javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password
Caused by: org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or does not exist: HIVE_DELEGATION_TOKEN
12-17-2019
07:30 PM
Hi @alim, Is there any way to use CaptureChangeMySQL and EnforceOrder in a cluster environment for better performance?
12-15-2019
07:05 PM
Oh, it was a network connection issue. It went away when I added oracle.jdbc.ReadTimeout=120000 and oracle.net.CONNECT_TIMEOUT=10000 to the DBCPConnectionPool configuration.
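For reference, here is a minimal sketch of applying those two driver properties to a DBCPConnectionPool through the NiFi REST API; DBCPConnectionPool forwards dynamic properties to the JDBC driver. The host, port, and service id below are assumptions, and the service must be disabled before its properties can be changed.

import requests

BASE = "http://nifi-host:8080/nifi-api"              # assumption: unsecured NiFi
SERVICE_ID = "01234567-89ab-cdef-0123-456789abcdef"  # hypothetical service id

# Fetch the current service entity; it carries the revision needed for updates
entity = requests.get(BASE + "/controller-services/" + SERVICE_ID).json()

# Add the Oracle driver timeouts as dynamic properties (values in milliseconds)
entity["component"]["properties"]["oracle.jdbc.ReadTimeout"] = "120000"
entity["component"]["properties"]["oracle.net.CONNECT_TIMEOUT"] = "10000"

# Write the component back together with its revision
requests.put(BASE + "/controller-services/" + SERVICE_ID,
             json={"revision": entity["revision"],
                   "component": entity["component"]})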
11-11-2019
10:57 PM
@Team Could you advise best practices for updating and inserting records into any type of RDBMS with a NiFi processor? Thanks, Paul
11-05-2019
06:05 PM
@MattWho Following your points, I got it working. Thank you a lot. Paul
11-01-2019
05:23 PM
@Matt Thanks, I solved this issue by following your pointers. Paul
11-01-2019
12:13 AM
@Matt Thank you, I'm doing what you pointed me to do.
01-30-2019
09:03 PM
Thank you Bimalc. The placement rule 'Use the pool specified at run time' did the trick. I also included 'only if the pool exists' because I do not want end users to have control over setting their own queue; that way a Hive job always lands in the user's primary-group queue whenever the requested queue does not exist. 🙂
1. Use the pool specified at run time, only if the pool exists.
2. Use the pool root.[primary group] and create the pool if it does not exist. This rule is always satisfied. Subsequent rules are not used.
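As a toy illustration (my own sketch, not Cloudera code), the combined effect of these two rules can be modeled like this:

def place_pool(requested, existing_pools, primary_group):
    # Rule 1: use the pool specified at run time, only if it already exists
    if requested and requested in existing_pools:
        return requested
    # Rule 2: fall back to root.[primary group], creating it if needed;
    # this rule is always satisfied, so subsequent rules are never used
    return "root." + primary_group

# An end user requesting an absent queue lands in their primary-group queue
print(place_pool("root.adhoc", {"root.etl"}, "hadoop"))  # -> root.hadoop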
05-28-2018
01:46 AM
Hello, The ten dfs.namenode.name.dir directories do not all have the same size, e.g.:
$ sudo du -sb /data1/dfs/nn/
1369355141 /data1/dfs/nn/
$ sudo du -sb /data2/dfs/nn/
1369351045 /data2/dfs/nn/
$ sudo du -sb /data3/dfs/nn/
1369359237 /data3/dfs/nn/
$ sudo du -sb /data4/dfs/nn/
1369359237 /data4/dfs/nn/
$ sudo du -sb /data5/dfs/nn/
1369367429 /data5/dfs/nn/
$ sudo du -sb /data6/dfs/nn/
1369342853 /data6/dfs/nn/
$ sudo du -sb /data7/dfs/nn/
1369355141 /data7/dfs/nn/
$ sudo du -sb /data8/dfs/nn/
1369359237 /data8/dfs/nn/
$ sudo du -sb /data9/dfs/nn/
1369351045 /data9/dfs/nn/
$ sudo du -sb /data10/dfs/nn/
1369342853 /data10/dfs/nn/
Some of the folders have different sizes. Is this normal behavior? It seems to affect the stability of our cluster: the standby NameNode crashed at some point. Also, could we reduce the ten folders to three for storing the HDFS metadata? And when I restart the NameNode, which of the ten folders will it choose to rebuild the HDFS file image from? Thank you
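To reproduce the du -sb comparison, a quick Python sketch (assuming the same ten directories as above; the walk counts file bytes only, so totals are approximate):

import os

NAME_DIRS = ["/data%d/dfs/nn" % i for i in range(1, 11)]

def dir_bytes(path):
    # Recursive byte count, roughly what `du -sb` reports
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

sizes = {d: dir_bytes(d) for d in NAME_DIRS}
for d in NAME_DIRS:
    print(sizes[d], d)
print("all directories equal:", len(set(sizes.values())) == 1)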
05-24-2018
04:26 AM
Hello: We have a CM 5.11.3 cluster that was deployed from tarballs, and we would like to upgrade it to 5.14.3 using rpm packages. My concerns are:
1. What is the difference in behavior between
sudo yum upgrade cloudera-manager-server cloudera-manager-daemons cloudera-manager-agent
and
sudo yum install cloudera-manager-server cloudera-manager-daemons cloudera-manager-agent?
2. How can we perform this upgrade? What are the key points?
BR Thanks
02-07-2018
11:06 PM
Hi, Any update on my last question? I cannot get the correct numFound after I run:
HADOOP_OPTS="-Djava.security.auth.login.config=jaas.conf" \
hadoop --config /etc/hadoop/conf jar /opt/cloudera/parcels/CDH/lib/hbase-solr/tools/hbase-indexer-mr-1.5-cdh5.11.2-job.jar \
  --conf /etc/hbase/conf/hbase-site.xml -Dmapreduce.job.queuename=root.hadoop.plarch \
  --hbase-indexer-zk oddev03:2181,oddev04:2181,oddev05:2181 \
  --hbase-indexer-name onedata_order_orderIndexer --go-live
Is this a known issue? If so, how can I work around it? If not, how should I correct the above command line? Thanks for your reply. BR Paul
09-28-2017
03:31 AM
Hi, I seem to have exactly the same issue as described in the previous post. Can you let me know if you have an answer? Thanks
08-22-2017
06:47 AM
Hi, We are working with a kerberized CDH 5.7.3 & CM 5.8 cluster. I created a Hive table on HBase with the command below:
create external table arch_mr_jobs (
  job_id STRING,
  dt STRING,
  a STRING,
  b STRING,
  .......
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,d:dt,d:a,d:b,......")
TBLPROPERTIES ("hbase.table.name" = "arch:mr_jobs");
In the HBase table arch:mr_jobs there is just one row that has the column d:a; no other rows have a d:a column. So the strange behavior appears:
select a, count(1) from arch_mr_jobs group by a;
returns:
FAILED | 1
You can see there is just one row; I expected the result to be:
FAILED | 1
NULL | 50
Why are the NULL values ignored? This is a wrong result. Likewise,
select a from arch_mr_jobs;
returns:
FAILED |
while I expected:
FAILED
NULL
NULL
.
.
So I believe I missed some config. I googled but found nothing. Could you give me any pointer? Thank you in advance Paul
08-07-2017
07:06 AM
@gnovak Thanks for your answer. I was able to get it from the REST API.
07-05-2017
07:30 AM
Hi @Paul Yang I am not aware of any specific reason to not fix a bug. Unfortunately the original effort to fix the issue has been left inactive until now. I've picked it up again and proposed a fix against the latest Apache NiFi codebase. Hopefully it can be merged soon. https://github.com/apache/nifi/pull/1976 Thanks again for reporting this issue!
05-08-2017
06:32 AM
@Jay SenSharma This error occurs intermittently, but when it is triggered it happens frequently. I cannot find any error logs on the MySQL server (master or slave).
05-04-2017
08:02 AM
@Matt Clarke Thanks.
02-13-2017
01:22 AM
@Matt Burgess Any update?
01-05-2017
07:28 AM
Hi, We are working with a Kerberos cluster; the version is HDF-2.0.0.0-centos6. Sometimes we get the error below on all three of my NiFi nodes:
2017-01-05 14:34:52,337 ERROR [Thread-18] o.a.r.admin.client.RangerAdminRESTClient Error getting policies. secureMode=true, user=nifi/xxxxxx@XXX (auth:KERBEROS), response={"httpStatusCode":404,"statusCode":0}, serviceName=xxx_nifi
2017-01-05 14:34:52,337 ERROR [Thread-18] o.a.ranger.plugin.util.PolicyRefresher PolicyRefresher(serviceName=xxx_nifi): failed to refresh policies. Will continue to use last known version of policies (27)
java.lang.Exception: HTTP 404
at org.apache.ranger.admin.client.RangerAdminRESTClient.getServicePoliciesIfUpdated(RangerAdminRESTClient.java:126) ~[ranger-plugins-common-0.6.0.2.0.0.0-579.jar:0.6.0.2.0.0.0-579]
at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicyfromPolicyAdmin(PolicyRefresher.java:217) [ranger-plugins-common-0.6.0.2.0.0.0-579.jar:0.6.0.2.0.0.0-579]
at org.apache.ranger.plugin.util.PolicyRefresher.loadPolicy(PolicyRefresher.java:185) [ranger-plugins-common-0.6.0.2.0.0.0-579.jar:0.6.0.2.0.0.0-579]
at org.apache.ranger.plugin.util.PolicyRefresher.run(PolicyRefresher.java:158) [ranger-plugins-common-0.6.0.2.0.0.0-579.jar:0.6.0.2.0.0.0-579]
2017-01-05 14:34:53,110 INFO [Clustering Tasks Thread-2] o.a.n.c.c.ClusterProtocolHeartbeater Heartbeat created at 2017-01-05 14:34:53,016 and sent to xxxx.xxx.xx:9088 at 2017-01-05 14:34:53,110; send took 94 millis
I would like to know what causes this and how to resolve it. Thanks
11-30-2016
05:47 PM
I would leverage the REST API in NiFi to do that: https://nifi.apache.org/docs/nifi-docs/rest-api/ @Paul Yang
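For example, a minimal Python sketch of calling that API (the host, port, and endpoint are assumptions; adjust them to your instance):

import requests

# assumption: an unsecured NiFi instance reachable at nifi-host:8080
resp = requests.get("http://nifi-host:8080/nifi-api/flow/status")
resp.raise_for_status()
print(resp.json())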
11-22-2016
07:00 AM
Thanks for your response. In my case I could not get the client secret key from the first step; there is no CN<something_you_typed>_OU=ApacheNiFi.p12 file, so I just did the second step:
keytool -importkeystore -srckeystore <keystore.jks> -destkeystore keystore.p12 -deststoretype PKCS12
openssl pkcs12 -in keystore.p12 -out nifi-01.pem -nodes
Then I pass nifi-01.pem to:
conn = httplib.HTTPSConnection('nifi-test01.beta1.fn', 9091, key_file=None, cert_file="nifi-01.pem")
and it works. BTW, I really don't need to supply a username and password, and I can still access the REST GET APIs. Of course, I did not try the POST or DELETE APIs; is this the correct behavior? Thanks again.
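For comparison, a sketch of the same certificate-based call with the requests library; the endpoint is an assumed example, and requests accepts a single PEM file containing both the client cert and key:

import requests

resp = requests.get(
    "https://nifi-test01.beta1.fn:9091/nifi-api/flow/status",  # assumed endpoint
    cert="nifi-01.pem",  # combined cert + key produced by the openssl step above
    verify=False,        # skip server-cert verification for this quick test
)
print(resp.status_code)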
11-06-2016
07:09 AM
@mclark Thanks for your detailed answers. Paul
10-05-2016
05:59 PM
Please pass all required HBase config properties for secure connectivity, not just the ZK properties you are currently passing. These can be found in the full listing of /etc/hbase/conf/hbase-site.xml on a secure HBase gateway host.
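For instance, a small sketch (the file path is the one mentioned above) that dumps every property from that file so nothing is missed when copying them over:

import xml.etree.ElementTree as ET

tree = ET.parse("/etc/hbase/conf/hbase-site.xml")
for prop in tree.getroot().iter("property"):
    print(prop.findtext("name"), "=", prop.findtext("value"))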
08-23-2016
02:37 AM
Hi Harsh, I got it. Thank you for your excellent work. BR Paul
08-22-2016
11:24 PM
1 Kudo
Hi Harsh, the issue went away when I packaged the JRE's sunjce_provider.jar into the lib folder. Thanks BR Paul
08-11-2016
01:02 AM
Hi, We run a hive2 action with Oozie on a CDH 5.7.1 Kerberos cluster, and there is a strange behavior. Our environment: 5 nodes, with a 6G HiveServer2 and an 8G Hive Metastore, both HA. When we run the hive2 action, it sometimes fails. Below are the logs from a failed run:
Error: Could not open client transport with JDBC Uri: jdbc:hive2://arch-od-tracker04.beta1.fn:10000/: Peer indicated failure: DIGEST-MD5: IO error acquiring password (state=08S01,code=0)
No current connection
Connected to: Apache Hive (version 1.1.0-cdh5.7.1)
Driver: Hive JDBC (version 1.1.0-cdh5.7.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Closing: 0: jdbc:hive2://arch-od-tracker04.beta1.fn:10000/
Aug 11, 2016 1:30:27 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
Intercepting System.exit(2)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.Hive2Main], exit code [2]
0: jdbc:hive2://arch-od-tracker04.beta1.fn:10 (closed)> set mapred.job.queue.name=plarch;
******** The line above is the key. What happened? Why could Beeline not read the following statements? ********
<<< Invocation of Beeline command completed <<<
Hadoop Job IDs executed by Beeline:
Intercepting System.exit(2)
<<< Invocation of Main class completed <<<
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.Hive2Main], exit code [2]
The following are the logs from a successful run:
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Connecting to jdbc:hive2://arch-od-tracker04.beta1.fn:10000/
Connected to: Apache Hive (version 1.1.0-cdh5.7.1)
Driver: Hive JDBC (version 1.1.0-cdh5.7.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
No rows affected (0.089 seconds)
INFO : Compiling command(queryId=hive_20160811153737_7232c1fa-122e-4b47-b58a-30b06c69b8cd): use arch_onedata
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20160811153737_7232c1fa-122e-4b47-b58a-30b06c69b8cd); Time taken: 0.114 seconds
INFO : Executing command(queryId=hive_20160811153737_7232c1fa-122e-4b47-b58a-30b06c69b8cd): use arch_onedata
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20160811153737_7232c1fa-122e-4b47-b58a-30b06c69b8cd); Time taken: 0.016 seconds
INFO : OK
No rows affected (0.142 seconds)
INFO : Compiling command(queryId=hive_20160811153737_d0ae1042-d934-4a69-bb88-b79895cdd8f8): ALTER TABLE odl_mem_modify_phone_fdt ADD PARTITION(ds='2016-08-10')
INFO : Semantic Analysis Completed
INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=hive_20160811153737_d0ae1042-d934-4a69-bb88-b79895cdd8f8); Time taken: 0.111 seconds
INFO : Executing command(queryId=hive_20160811153737_d0ae1042-d934-4a69-bb88-b79895cdd8f8): ALTER TABLE odl_mem_modify_phone_fdt ADD PARTITION(ds='2016-08-10')
INFO : Starting task [Stage-0:DDL] in serial mode
INFO : Completed executing command(queryId=hive_20160811153737_d0ae1042-d934-4a69-bb88-b79895cdd8f8); Time taken: 0.062 seconds
INFO : OK
No rows affected (0.203 seconds)
Closing: 0: jdbc:hive2://arch-od-tracker04.beta1.fn:10000/
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
=================================================================
>>> Invoking Beeline command line now >>>
0: jdbc:hive2://arch-od-tracker04.beta1.fn:10> set mapred.job.queue.name=plarch;
0: jdbc:hive2://arch-od-tracker04.beta1.fn:10>
0: jdbc:hive2://arch-od-tracker04.beta1.fn:10>
0: jdbc:hive2://arch-od-tracker04.beta1.fn:10>
0: jdbc:hive2://arch-od-tracker04.beta1.fn:10> use ${database_name};
0: jdbc:hive2://arch-od-tracker04.beta1.fn:10>
0: jdbc:hive2://arch-od-tracker04.beta1.fn:10> ALTER TABLE ${table_name} ADD PARTITION(ds='${dateStr}');
0: jdbc:hive2://arch-od-tracker04.beta1.fn:10>
0: jdbc:hive2://arch-od-tracker04.beta1.fn:10>
0: jdbc:hive2://arch-od-tracker04.beta1.fn:10>
**** This is where it differs from the failed logs: here Beeline read all of the statements ****
<<< Invocation of Beeline command completed <<<
Hadoop Job IDs executed by Beeline:
<<< Invocation of Main class completed <<<
Oozie Launcher, capturing output data:
=======================
What causes the issue? How can we resolve it? Can anyone give me an idea? Thanks in advance! BR Paul
08-05-2016
12:52 AM
Hi, this is very urgent. Could you give me some suggestions? Thank you in advance. BR Paul