Member since: 07-27-2015
Posts: 92
Kudos Received: 4
Solutions: 1
My Accepted Solutions
Title | Views | Posted |
---|---|---|
449 | 12-15-2019 07:05 PM |
05-30-2020
07:08 PM
1 Kudo
Hello @Bender, yes, I got the link [2] from your reply. Thank you very much! Paul
05-26-2020
07:58 PM
Hello, I remember that I could download the newest version of CDF about one year ago. Now I find that I can no longer download the newest CDF for free; the download page says it is for customers only. So my question is: how can we become a customer? What would the policy authorize for a customer, so that I could download the newest version of CDF again?
12-17-2019
07:30 PM
Hi @alim, is there any way to run CaptureChangeMySQL and EnforceOrder in a cluster environment for better performance?
12-15-2019
07:05 PM
Oh, it was a network connection issue. It went away when I added oracle.jdbc.ReadTimeout=120000 and oracle.net.CONNECT_TIMEOUT=10000 to the config of DBCPConnectionPool.
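For anyone hitting the same hang: a plain-JDBC sketch of what those two settings do, under the assumption that the connection pool hands them through to the Oracle driver as connection properties (this sketch only builds the properties; it does not open a connection).

```java
import java.util.Properties;

public class OracleTimeoutProps {
    // Sketch: the same key/value pairs that were added to the
    // DBCPConnectionPool config, expressed as JDBC driver properties.
    static Properties oracleTimeouts() {
        Properties props = new Properties();
        // Fail the TCP connect after 10 s instead of hanging indefinitely
        props.setProperty("oracle.net.CONNECT_TIMEOUT", "10000");
        // Abort any socket read that stalls for more than 120 s
        props.setProperty("oracle.jdbc.ReadTimeout", "120000");
        return props;
    }

    public static void main(String[] args) {
        // With the Oracle driver on the classpath these would be passed to
        // DriverManager.getConnection(url, oracleTimeouts()).
        System.out.println(oracleTimeouts());
    }
}
```

Both values are in milliseconds; without them a broken network path can leave the processor waiting forever instead of failing fast.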
11-12-2019
05:20 AM
I'm sorry; let me correct the above description. The behavior is that the ExecuteSQL processor sometimes executes very slowly and seems hung, but sometimes executes very fast. I believe it is not a database issue. So my questions are: 1. What is the impact? Is it a DBCP connection pool configuration issue, or wrong behavior of the DBCP connection pool lookup? 2. I'm afraid the impact will be magnified when we increase the database connection pool config to 500+. Is that the right thing to worry about? Could you help me answer these questions? Thanks, Paul
11-11-2019
10:57 PM
@Team Could you advise best practices for updating & inserting records into any type of RDBMS with NiFi processors? Thanks, Paul
11-09-2019
03:06 AM
Hello,
I am looking for a general method (template) for update & insert of records into a database (MySQL, Oracle, Vertica) using NiFi processors.
I googled methods like the steps below:
1. Build the SQL from attributes, then PutSQL, for MySQL (INSERT ... ON DUPLICATE KEY UPDATE).
2. Check whether the record exists, then build an UPDATE statement (if it exists) or an INSERT statement (if not), then PutSQL, for Oracle and Vertica.
Is there a general method (template) that works for any relational database?
Thanks,
Paul
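The two step variants above can each be collapsed into a single dialect-specific statement, which avoids the separate exists-check flow entirely. A minimal sketch of such a statement builder; the helper, table, and column names are illustrative, not part of any NiFi API:

```java
public class UpsertSqlBuilder {
    // Hypothetical helper: builds a one-statement upsert for a single
    // key column and value column, ready for PutSQL with two ? parameters.
    static String upsertSql(String dialect, String table, String keyCol, String valCol) {
        switch (dialect) {
            case "mysql":
                // MySQL: relies on a PRIMARY KEY or UNIQUE index on keyCol
                return "INSERT INTO " + table + " (" + keyCol + ", " + valCol + ") VALUES (?, ?) "
                     + "ON DUPLICATE KEY UPDATE " + valCol + " = VALUES(" + valCol + ")";
            case "oracle":
                // Oracle: MERGE against a one-row inline source
                return "MERGE INTO " + table + " t USING (SELECT ? AS k, ? AS v FROM dual) s "
                     + "ON (t." + keyCol + " = s.k) "
                     + "WHEN MATCHED THEN UPDATE SET t." + valCol + " = s.v "
                     + "WHEN NOT MATCHED THEN INSERT (" + keyCol + ", " + valCol + ") VALUES (s.k, s.v)";
            default:
                throw new IllegalArgumentException("No single-statement upsert known for: " + dialect);
        }
    }

    public static void main(String[] args) {
        System.out.println(upsertSql("mysql", "users", "id", "name"));
        System.out.println(upsertSql("oracle", "users", "id", "name"));
    }
}
```

Vertica also supports MERGE with a syntax close to Oracle's, so the same pattern should extend there; there is no statement form that works for every relational database, which is why a dialect switch like this is needed.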
11-09-2019
02:27 AM
Hello, I am working with a 3-node CFM cluster; the CFM version is 1.0.1. The first picture below shows the steps; the next pictures show the config of ExecuteSQL, the lookup service, and the config of DBCPConnectionPool. The behavior is that the ExecuteSQL processor sometimes executes very slowly and seems hung, but sometimes executes very fast. I believe it is a database issue. So my questions are: 1. What is the impact? Is it a DBCP connection pool configuration issue, or wrong behavior of the DBCP connection pool lookup? 2. I'm afraid the impact will be magnified when the database connection pool is increased to 500+. Could you help me answer these questions? Thanks, Paul
11-05-2019
06:05 PM
@MattWho Following your points, I got it working. Thank you a lot. Paul
11-04-2019
06:25 PM
@MattWho Thanks for your detailed answers. I have almost got it working. Unfortunately, I cannot get the sync policy between NiFi and NiFi Registry working with my LDAP account. I must configure my node identity as a user, like CN=arch-fndtf04.beta1.fn, OU=NIFI, and grant it the proxy access policy; then I can import the bucket and commit the version. If I remove the user CN=arch-fndtf04.beta1.fn, OU=NIFI, or remove its proxy access policy in NiFi Registry, the NiFi GUI shows a "?" in the top left corner of the process group. Could you help me avoid this issue? Paul
11-03-2019
02:43 AM
Update: below is the log:
2019-11-03 20:06:10,531 DEBUG [NiFi Registry Web Server-19] o.a.n.r.w.s.a.IdentityFilter Attempting to extract user credentials using X509IdentityProvider
2019-11-03 20:06:10,531 DEBUG [NiFi Registry Web Server-19] o.a.n.r.w.s.a.IdentityFilter Adding credentials claim to SecurityContext to be authenticated. Credentials extracted by X509IdentityProvider: AuthenticationRequest{username='CN=arch-fndtf04.beta1.fn, OU=NIFI', credentials=[PROTECTED], details=null}
2019-11-03 20:06:10,531 DEBUG [NiFi Registry Web Server-19] o.a.n.r.w.s.a.IdentityFilter Credentials already extracted for [org.apache.nifi.registry.web.security.authentication.AuthenticationRequestToken$1@39a29a41], skipping credentials extraction filter using JwtIdentityProvider
2019-11-03 20:06:10,532 DEBUG [NiFi Registry Web Server-19] o.a.n.r.w.s.a.ResourceAuthorizationFilter Request filter authorization check is not required for this HTTP Method on this resource. Allowing request to proceed. An additional authorization check might be performed downstream of this filter.
2019-11-03 20:06:10,688 INFO [NiFi Registry Web Server-12] o.a.n.r.w.m.IllegalStateExceptionMapper java.lang.IllegalStateException: Kerberos service ticket login not supported by this NiFi Registry. Returning Conflict response.
2019-11-03 20:06:10,691 DEBUG [NiFi Registry Web Server-12] o.a.n.r.w.m.IllegalStateExceptionMapper
java.lang.IllegalStateException: Kerberos service ticket login not supported by this NiFi Registry
at org.apache.nifi.registry.web.api.AccessResource.createAccessTokenUsingKerberosTicket(AccessResource.java:285) ~[classes/:na]
......
2019-11-03 20:06:10,721 DEBUG [NiFi Registry Web Server-19] o.a.n.r.w.s.a.IdentityFilter Attempting to extract user credentials using X509IdentityProvider
2019-11-03 20:06:10,722 DEBUG [NiFi Registry Web Server-19] o.a.n.r.w.s.a.IdentityFilter Adding credentials claim to SecurityContext to be authenticated. Credentials extracted by X509IdentityProvider: AuthenticationRequest{username='CN=arch-fndtf04.beta1.fn, OU=NIFI', credentials=[PROTECTED], details=null}
2019-11-03 20:06:10,722 DEBUG [NiFi Registry Web Server-19] o.a.n.r.w.s.a.IdentityFilter Credentials already extracted for [org.apache.nifi.registry.web.security.authentication.AuthenticationRequestToken$1@2929ad59], skipping credentials extraction filter using JwtIdentityProvider
2019-11-03 20:06:10,723 DEBUG [NiFi Registry Web Server-19] o.a.n.r.w.s.a.ResourceAuthorizationFilter Request filter authorization check is not required for this HTTP Method on this resource. Allowing request to proceed. An additional authorization check might be performed downstream of this filter.
2019-11-03 20:06:10,784 DEBUG [NiFi Registry Web Server-19] o.a.n.r.w.s.a.IdentityFilter Attempting to extract user credentials using X509IdentityProvider
2019-11-03 20:06:10,784 DEBUG [NiFi Registry Web Server-17] o.a.n.r.w.s.a.IdentityFilter Attempting to extract user credentials using X509IdentityProvider
2019-11-03 20:06:10,784 DEBUG [NiFi Registry Web Server-19] o.a.n.r.w.s.a.IdentityFilter Adding credentials claim to SecurityContext to be authenticated. Credentials extracted by X509IdentityProvider: AuthenticationRequest{username='CN=arch-fndtf04.beta1.fn, OU=NIFI', credentials=[PROTECTED], details=null}
2019-11-03 20:06:10,784 DEBUG [NiFi Registry Web Server-17] o.a.n.r.w.s.a.IdentityFilter Adding credentials claim to SecurityContext to be authenticated. Credentials extracted by X509IdentityProvider: AuthenticationRequest{username='CN=arch-fndtf04.beta1.fn, OU=NIFI', credentials=[PROTECTED], details=null}
2019-11-03 20:06:10,784 DEBUG [NiFi Registry Web Server-19] o.a.n.r.w.s.a.IdentityFilter Credentials already extracted for [org.apache.nifi.registry.web.security.authentication.AuthenticationRequestToken$1@7bf82c3a], skipping credentials extraction filter using JwtIdentityProvider
2019-11-03 20:06:10,784 DEBUG [NiFi Registry Web Server-17] o.a.n.r.w.s.a.IdentityFilter Credentials already extracted for [org.apache.nifi.registry.web.security.authentication.AuthenticationRequestToken$1@69275fb3], skipping credentials extraction filter using JwtIdentityProvider
2019-11-03 20:06:10,785 DEBUG [NiFi Registry Web Server-19] o.a.n.r.w.s.a.ResourceAuthorizationFilter Request filter authorization check is not required for this HTTP Method on this resource. Allowing request to proceed. An additional authorization check might be performed downstream of this filter.
2019-11-03 20:06:10,785 DEBUG [NiFi Registry Web Server-17] o.a.n.r.w.s.a.ResourceAuthorizationFilter Request filter authorization check is not required for this HTTP Method on this resource. Allowing request to proceed. An additional authorization check might be performed downstream of this filter.

Below are my configurations:

nifi-registry.properties:
nifi.registry.db.directory=
nifi.registry.db.driver.class=org.h2.Driver
nifi.registry.db.driver.directory=
nifi.registry.db.maxConnections=5
nifi.registry.db.password=UqZCvEAQeGvUUIGH||82ibCgtpV4JUhkFCnxQkW7kXxkmkHrc
nifi.registry.db.password.protected=aes/gcm/256
nifi.registry.db.sql.debug=false
nifi.registry.db.url=jdbc:h2:/var/lib/nifiregistry/database/nifi-registry-primary;AUTOCOMMIT=OFF;DB_CLOSE_ON_EXIT=FALSE;LOCK_MODE=3;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
nifi.registry.db.url.append=
nifi.registry.db.username=nifireg
nifi.registry.initial.admin.identity=*******
nifi.registry.kerberos.krb5.file=/etc/krb5.conf
nifi.registry.kerberos.service.keytab.location=/var/run/cloudera-scm-agent/process/238-nifiregistry-NIFI_REGISTRY_SERVER/nifiregistry.keytab
nifi.registry.kerberos.spnego.authentication.expiration=12 hours
nifi.registry.kerberos.spnego.keytab.location=/var/run/cloudera-scm-agent/process/238-nifiregistry-NIFI_REGISTRY_SERVER/nifiregistry.keytab
nifi.registry.providers.configuration.file=/var/run/cloudera-scm-agent/process/238-nifiregistry-NIFI_REGISTRY_SERVER/providers.xml
nifi.registry.security.authorizer=managed-authorizer
nifi.registry.security.authorizers.configuration.file=/var/run/cloudera-scm-agent/process/238-nifiregistry-NIFI_REGISTRY_SERVER/authorizers.xml
nifi.registry.security.identity.provider=ldap-provider
nifi.registry.security.identity.providers.configuration.file=/var/run/cloudera-scm-agent/process/238-nifiregistry-NIFI_REGISTRY_SERVER/identity-providers.xml
nifi.registry.security.keyPasswd=cpDNEjgeOtHgUKBg||/TtGPhbQyltKWVvH9Cj7rj3ZVYZO
nifi.registry.security.keyPasswd.protected=aes/gcm/256
nifi.registry.security.keystore=/var/lib/nifiregistry/cert/keystore.jks
nifi.registry.security.keystorePasswd=QgccvlFai9XXLFUB||Pgu0W6X+BYYSPCiu1drPcqtWIru7
nifi.registry.security.keystorePasswd.protected=aes/gcm/256
nifi.registry.security.keystoreType=jks
nifi.registry.security.needClientAuth=true
nifi.registry.security.truststore=/var/lib/nifiregistry/cert/truststore.jks
nifi.registry.security.truststorePasswd=TKpFfRmNkxQD5xqg||IY8IZookjPjKpGiKiTplZpvmkMRB
nifi.registry.security.truststorePasswd.protected=aes/gcm/256
nifi.registry.security.truststoreType=jks
nifi.registry.sensitive.props.additional.keys=nifi.registry.db.password
nifi.registry.web.http.host=
nifi.registry.web.http.port=
nifi.registry.web.https.host=arch-fndtf03.beta1.fn
nifi.registry.web.https.port=18433
nifi.registry.web.jetty.threads=200
nifi.registry.web.jetty.working.directory=/var/lib/nifiregistry/work/jetty
nifi.registry.web.war.directory=/opt/cloudera/parcels/CFM-1.0.1.0/REGISTRY/lib

identity-providers.xml:
<identityProviders>
<provider>
<identifier>kerberos-identity-provider</identifier>
<class>org.apache.nifi.registry.web.security.authentication.kerberos.KerberosIdentityProvider</class>
<property name="Authentication Expiration">12 hours</property>
<property name="Default Realm"></property>
<property name="Enable Debug">false</property>
</provider>
<provider>
<identifier>ldap-provider</identifier>
<class>org.apache.nifi.registry.security.ldap.LdapIdentityProvider</class>
<property name="User Search Base">***</property>
<property name="Connect Timeout">10 secs</property>
<property encryption="aes/gcm/256" name="Manager Password">**</property>
<property name="Authentication Strategy">SIMPLE</property>
<property name="Manager DN">**</property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Identity Strategy">USE_USERNAME</property>
<property name="User Search Filter">cn={0}</property>
<property name="Authentication Expiration">12 hours</property>
<property name="Read Timeout"></property>
<property name="Url">**</property>
</provider>
</identityProviders>

authorizations.xml:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<authorizations>
<policies>
<policy identifier="627410be-1717-35b4-a06f-e9362b89e0b7" resource="/tenants" action="R">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="15e4e0bd-cb28-34fd-8587-f8d15162cba5" resource="/tenants" action="W">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="2dbc92a2-b091-3616-8e88-5078b9103b04" resource="/tenants" action="D">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="ff96062a-fa99-36dc-9942-0f6442ae7212" resource="/policies" action="R">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="ad99ea98-3af6-3561-ae27-5bf09e1d969d" resource="/policies" action="W">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="01b87cb5-c0b6-342d-b108-d8bc03ab5cde" resource="/policies" action="D">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="9d182b11-ebe3-3a7a-8731-98ce6d6e44fd" resource="/buckets" action="R">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="dfbf3c51-fdec-3328-b169-3b54eb033147" resource="/buckets" action="W">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="05b96464-9ec8-312a-8459-67812a8b48c1" resource="/buckets" action="D">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="2fd3fcf5-b10f-33fa-8d8e-b262fa34815e" resource="/actuator" action="R">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="2f470357-e82c-38ee-8062-ab6388d6ec75" resource="/actuator" action="W">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="3ee4703f-94ca-33c2-8060-17f5d313f560" resource="/actuator" action="D">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="0eaa47b9-e409-304e-8682-30d1b0d86d05" resource="/swagger" action="R">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="cf4d8390-5ac7-3ff0-82ce-a274b5f88b21" resource="/swagger" action="W">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="ac587f43-6e1c-3890-81fd-83b4df2e678e" resource="/swagger" action="D">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
<policy identifier="287edf48-da72-359b-8f61-da5d4c45a270" resource="/proxy" action="W">
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53"/>
</policy>
</policies>
</authorizations>

users.xml:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<tenants>
<groups/>
<users>
<user identifier="d9e3d4d3-e7d2-3c6e-9a70-2602c3265b53" identity="****"/>
</users>
</tenants>

Could you please point out what I missed? Thanks, Paul
11-01-2019
07:44 PM
1 Kudo
Hello, I am working with Cloudera Flow Manager version 1.0.1. I cannot get the login page when I enable SSL & LDAP in NiFi Registry; instead of that page, there is a node identity. I have checked the config, which seems correct, and tried several more things, such as removing /var/lib/nifiregister/* or /var/run/cloudera-scm-agent/process/***-nifiregistry-NIFI_REGISTRY_SERVER/*. The picture below shows that the login user is the node identity. The related LDAP info is below, but there is no userDn config option in Cloudera Manager, and the settings are the same between NiFi and NiFi Registry.
2019-11-02 10:24:59,167 INFO org.springframework.ldap.core.support.AbstractContextSource: Property 'userDn' not set - anonymous context will be used for read-write operations
The behavior is very strange. Could anyone help me find what I missed? Thanks, Paul
11-01-2019
05:23 PM
@Matt Thanks, I solved this issue by following your pointers. Paul
11-01-2019
12:26 AM
Hello, my new CDF cluster includes 3 NiFi nodes. I enabled SSL in NiFi, but there is no SSL on LDAP. I don't know the reason for this error:
2019-11-01 13:42:28,489 WARN org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator: Failed to replicate request GET /nifi-api/flow/current-user to xx-cmf02.beta1.nn/10.202.252.92:8080 due to java.net.ConnectException: Failed to connect to xx-cmf02.beta1.nn/10.202.252.92:8080
at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:242)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:160)
......
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
The strange behavior is that one node tries to reach another node on port 8080, but I did not configure that. So, what did I miss? Thank you in advance for your advice. Paul
11-01-2019
12:13 AM
@Matt Thank you, I'm doing what you pointed me to.
10-31-2019
02:23 AM
Hello!
Urgent problem: I'm working on Cloudera Flow Management 1.0.1 to evaluate the feasibility of moving from HDF to CDF. The problem I'm meeting is how to configure Multi-Tenant Authorization with Kerberos/LDAP in Cloudera Manager. Could you please help me with the following question? For HDF, there is Apache Ranger, which can configure and implement Multi-Tenant Authorization through the Ranger Admin GUI. How can I do Multi-Tenant Authorization like HDF with CFM?
Thanks,
Paul
05-28-2018
01:46 AM
Hello, the ten dfs.namenode.name.dir directories do not have the same size, e.g.:
$ sudo du -sb /data1/dfs/nn/
1369355141 /data1/dfs/nn/
$ sudo du -sb /data2/dfs/nn/
1369351045 /data2/dfs/nn/
$ sudo du -sb /data3/dfs/nn/
1369359237 /data3/dfs/nn/
$ sudo du -sb /data4/dfs/nn/
1369359237 /data4/dfs/nn/
$ sudo du -sb /data5/dfs/nn/
1369367429 /data5/dfs/nn/
$ sudo du -sb /data6/dfs/nn/
1369342853 /data6/dfs/nn/
$ sudo du -sb /data7/dfs/nn/
1369355141 /data7/dfs/nn/
$ sudo du -sb /data8/dfs/nn/
1369359237 /data8/dfs/nn/
$ sudo du -sb /data9/dfs/nn/
1369351045 /data9/dfs/nn/
$ sudo du -sb /data10/dfs/nn/
1369342853 /data10/dfs/nn/
I find that some of the folders have different sizes. Is this normal behavior? This behavior seems to impact our cluster's stability; the standby NameNode crashed at some point. Also, should we change from ten folders to three folders for storing the HDFS metadata? When I restart the NameNode, which of the ten folders will it choose to rebuild the HDFS file image from? Thank you
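As a side note on the measurements above: a quick Java sketch of the du -sb comparison, summing regular-file bytes under each directory. Note that du -sb also counts the directory inodes themselves, so its totals will usually be a little larger than this sum; the point is only to compare the name dirs against each other.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class DirSize {
    // Sum of regular-file sizes under a directory, recursively.
    // Each dfs.namenode.name.dir holds an independent copy of the
    // fsimage and edit logs, so small transient differences between
    // dirs while edits roll are not by themselves alarming.
    static long sizeOf(Path dir) throws IOException {
        try (Stream<Path> files = Files.walk(dir)) {
            return files.filter(Files::isRegularFile)
                        .mapToLong(p -> p.toFile().length())
                        .sum();
        }
    }

    public static void main(String[] args) throws IOException {
        for (String d : args) {
            System.out.println(sizeOf(Path.of(d)) + "\t" + d);
        }
    }
}
```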
05-24-2018
04:26 AM
Hello: We have a CM 5.11.3 cluster that was deployed from tarballs, and we would like to upgrade it to 5.14.3 using RPM packages. My concerns are: 1. What is the difference in behavior between "sudo yum upgrade cloudera-manager-server cloudera-manager-daemons cloudera-manager-agent" and "sudo yum install cloudera-manager-server cloudera-manager-daemons cloudera-manager-agent"? 2. How can we accomplish the upgrade, and what are the key points? BR, Thanks
02-07-2018
11:06 PM
Hi, any update on the last question? I cannot get the correct numFound after I run:
HADOOP_OPTS="-Djava.security.auth.login.config=jaas.conf" \
hadoop --config /etc/hadoop/conf jar /opt/cloudera/parcels/CDH/lib/hbase-solr/tools/hbase-indexer-mr-1.5-cdh5.11.2-job.jar \
--conf /etc/hbase/conf/hbase-site.xml \
-Dmapreduce.job.queuename=root.hadoop.plarch \
--hbase-indexer-zk oddev03:2181,oddev04:2181,oddev05:2181 \
--hbase-indexer-name onedata_order_orderIndexer \
--go-live
Is this a known issue? If so, how can I work around it? If not, how should I correct the above command line? Thanks for your reply. BR, Paul
12-21-2017
06:44 PM
Hi, I have met the same issue, or maybe a different one. I do real-time indexing for HBase with the Cloudera Search Lily indexer. There is a collection with 3 shards and 6 replicas. Fortunately, the numFound of indexed documents is correct when I query after the real-time indexer runs. Then I index with hbase-indexer-mr-1.5-*-job.jar, and the issue appears: the numFound of queries becomes very strange. The numFound is 80, but it should be 40, since the HBase table has just 40 rows. Below are the results of the queries I ran, as you mentioned:
http://oddev05.dev1.fn:8983/solr/test_lily_solr_shard1_replica2/select?q=*:*&distrib=false <result name="response" numFound="28" start="0">
http://oddev03.dev1.fn:8983/solr/test_lily_solr_shard1_replica1/select?q=*:*&distrib=false <result name="response" numFound="28" start="0">
http://oddev03.dev1.fn:8983/solr/test_lily_solr_shard2_replica2/select?q=*:*&distrib=false <result name="response" numFound="28" start="0">
http://oddev04.dev1.fn:8983/solr/test_lily_solr_shard2_replica1/select?q=*:*&distrib=false <result name="response" numFound="28" start="0">
http://oddev04.dev1.fn:8983/solr/test_lily_solr_shard3_replica2/select?q=*:*&distrib=false <result name="response" numFound="24" start="0">
http://oddev05.dev1.fn:8983/solr/test_lily_solr_shard3_replica1/select?q=*:*&distrib=false <result name="response" numFound="24" start="0">
The total numFound across the 3 shards is 80.
http://oddev04.dev1.fn:8983/solr/test_lily_solr/select?q=*:*&start=0&rows=100 <result name="response" numFound="40" start="0" maxScore="1.0"> (correct)
http://oddev04.dev1.fn:8983/solr/test_lily_solr/select?q=*:*&start=0&rows=10 <result name="response" numFound="80" start="0" maxScore="1.0">
http://oddev04.dev1.fn:8983/solr/test_lily_solr/select?q=*:* <result name="response" numFound="80" start="0" maxScore="1.0">
From googling I got: "Preventing the problem is easy -- always index documents onto the correct shard." I think that may be right, but how do I index documents onto the correct (and same) shard with both Lily and MapReduce? I would like to know the key points. Thank you
08-22-2017
06:47 AM
Hi, we are working with Kerberized CDH 5.7.3 & CM 5.8. I create a Hive table on HBase with the command below (column lists elided):
create external table arch_mr_jobs (
  job_id STRING,
  dt STRING,
  a STRING,
  b STRING,
  .......
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping"=":key,d:dt,d:a,d:b,......")
TBLPROPERTIES ("hbase.table.name"="arch:mr_jobs");
In the HBase table arch:mr_jobs, just one row has the column d:a; the other rows have no d:a column. So the strange behavior comes:
select a, count(1) from arch_mr_jobs group by a; got:
FAILED | 1
You can see there is just one row. I expected the result to be:
FAILED | 1
NULL | 50
Why are the NULL values ignored? This is a wrong result.
select a from arch_mr_jobs; got:
FAILED |
I expected the result to be:
FAILED
NULL
NULL
.
.
So I believe I missed some config. I googled but got nothing. Could you give me any pointer? Thanks in advance, Paul
08-07-2017
07:06 AM
@gnovak Thanks for your answer. I can get it from the REST API.
08-04-2017
09:55 AM
Hi: As the title says, we would like to get the job/map/reduce counter metrics of MapReduce jobs, via an API or some other way, from an external Hadoop cluster. Could you give us any advice? Thanks
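One option that may fit here is the MapReduce JobHistory Server REST API, which exposes per-job counters as JSON. A minimal plain-JDK sketch, assuming an unsecured cluster and the default HTTP port 19888; the hostname and job id below are hypothetical, and a Kerberized cluster would need SPNEGO authentication instead.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class MrCounters {
    // Builds the JobHistory Server REST URL for a job's counters.
    static String countersUrl(String historyHost, String jobId) {
        return "http://" + historyHost + ":19888/ws/v1/history/mapreduce/jobs/" + jobId + "/counters";
    }

    // Fetches the counters JSON; no auth, so this only works on an
    // unsecured cluster reachable from this machine.
    static String fetchJson(String urlStr) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlStr).openConnection();
        conn.setRequestProperty("Accept", "application/json");
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes());
        }
    }

    public static void main(String[] args) {
        // Hypothetical host and job id, for illustration only
        System.out.println(countersUrl("history01.example.com", "job_1500000000000_0001"));
    }
}
```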
06-30-2017
10:01 AM
Hi, as the title says, I cannot get the HDF NiFi source code from http://repo.hortonworks.com/content/repositories/releases/org/apache/nifi/nifi-standard-processors/. How can I get it?
<dependency>
<groupId>org.apache.nifi</groupId>
<artifactId>nifi-standard-processors</artifactId>
<version>1.0.0.2.0.0.0-579</version>
</dependency>
06-30-2017
08:52 AM
Hi, what is the reason the NiFi team has not fixed this bug, even in the latest NiFi version? For ConvertJSONToSQL, is this a critical bug, or is there some other reason?
06-13-2017
12:41 PM
Thanks @kkawamura. Currently, I have changed the decimal(5,4) to decimal(7,4) to work around the bug. It is quite a big bug.
06-12-2017
12:04 PM
Hi, I am working with NiFi. There is a scenario that exports a MySQL table to another MySQL table with the same structure. The source table has a field 'a' of type decimal(5,4), and the target table has a field 'b' of type decimal(5,4) too. The issue is triggered like this: a value of 0.1494 from the source ends up cut down to 0.1490 in the target. The root cause seems to be the code below, from ConvertJSONToSQL: getColumnSize() returns the precision (5), but the string length of the value is 6, so the substring call changes the value to 0.149.
// Excerpt from ConvertJSONToSQL
final Integer colSize = desc.getColumnSize();
final JsonNode fieldNode = rootNode.get(fieldName);
if (!fieldNode.isNull()) {
    String fieldValue = fieldNode.asText();
    // colSize is the DECIMAL precision, not a string length,
    // so "0.1494" (length 6) is truncated to "0.149"
    if (colSize != null && fieldValue.length() > colSize) {
        fieldValue = fieldValue.substring(0, colSize);
    }
    attributes.put("sql.args." + fieldCount + ".value", fieldValue);
}
Is this a bug? How can it be resolved?
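A possible shape of a fix, sketched here under the assumption that the column descriptor also exposes the JDBC type: only truncate character columns, since for numeric columns the "column size" is the precision, not a string length. The helper name and signature are illustrative, not NiFi's actual code.

```java
import java.sql.Types;

public class TruncateGuard {
    // Truncate only CHAR-family columns; DECIMAL (and other numeric)
    // values pass through untouched, because their colSize is precision.
    static String coerce(String fieldValue, Integer colSize, int sqlType) {
        boolean isCharType = sqlType == Types.CHAR
                || sqlType == Types.VARCHAR
                || sqlType == Types.LONGVARCHAR;
        if (isCharType && colSize != null && fieldValue.length() > colSize) {
            return fieldValue.substring(0, colSize);
        }
        return fieldValue;
    }

    public static void main(String[] args) {
        System.out.println(coerce("0.1494", 5, Types.DECIMAL));  // unchanged: 0.1494
        System.out.println(coerce("abcdefg", 5, Types.VARCHAR)); // truncated: abcde
    }
}
```

With a guard like this, the decimal(5,4) value 0.1494 from the post above would be left intact instead of being cut to 0.149.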
05-08-2017
06:32 AM
@Jay SenSharma This error occurs intermittently, but it is frequent when triggered. I cannot find any error logs on the MySQL (master or slave) server.
05-08-2017
03:40 AM
Any update? Is this a NiFi bug?