Member since: 06-26-2015
Posts: 511
Kudos Received: 137
Solutions: 114

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1391 | 09-20-2022 03:33 PM |
| | 4032 | 09-19-2022 04:47 PM |
| | 2334 | 09-11-2022 05:01 PM |
| | 2444 | 09-06-2022 02:23 PM |
| | 3859 | 09-06-2022 04:30 AM |
02-06-2022
07:56 PM
Without more information, it looks to me like the content of your metadata file is not correct. It appears to be a SAML Assertion rather than a SAML Metadata document.
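A quick way to tell the two apart: a SAML Metadata document has a root element of EntityDescriptor (or EntitiesDescriptor) in the urn:oasis:names:tc:SAML:2.0:metadata namespace, while an Assertion has a root element of Assertion in the urn:oasis:names:tc:SAML:2.0:assertion namespace. A minimal metadata sketch, with placeholder entityID and endpoint URLs, looks roughly like this:

```xml
<!-- Minimal IdP metadata sketch; entityID and Location values are placeholders -->
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
                     entityID="https://idp.example.com/metadata">
  <md:IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <md:SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://idp.example.com/sso"/>
  </md:IDPSSODescriptor>
</md:EntityDescriptor>
```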
02-06-2022
07:45 PM
Hi, Minh. How, and from where, did you generate the metadata.xml file? André
02-06-2022
04:18 PM
You can try the following Jolt spec in a JoltTransformRecord processor to change the name of one or more columns:

Jolt Transformation DSL: Chain

Jolt Specification:

```json
[
  {
    "operation": "shift",
    "spec": {
      "*": "&",
      "surname": "lastname"
    }
  },
  {
    "operation": "remove",
    "spec": {
      "surname": ""
    }
  }
]
```
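For illustration, with a hypothetical input record containing firstname and surname fields, the spec above renames surname to lastname and passes every other field through unchanged:

```
Input:  { "firstname": "John", "surname": "Doe" }
Output: { "firstname": "John", "lastname": "Doe" }
```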
02-02-2022
06:35 PM
That's an odd thing to do 🙂 If you need to use another port for some reason, it would be better to change the ports on all hosts consistently, using the "TCP Port" or the "TLS/SSL Port" properties in Cloudera Manager, depending on whether you are connecting without or with TLS, respectively. It is possible to configure ports on a host-by-host basis, but it is harder to maintain and the client configuration becomes a little more cumbersome.

To change the port for a particular host, go to Kafka > Instances > click on the broker you want to change the port for > Configuration > Continue Editing Role Instance. Then enter the following in the "Kafka Broker Advanced Configuration Snippet (Safety Valve) for kafka.properties" property:

```
port=9096
listeners=PLAINTEXT://:9096
```

The PLAINTEXT value will depend on your cluster config:

- PLAINTEXT: No Kerberos, No TLS
- SSL: No Kerberos, Using TLS
- SASL_PLAINTEXT: Using Kerberos, No TLS
- SASL_SSL: Using Kerberos, Using TLS

After that, restart the Broker instance that was reconfigured.
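For example, on a broker in a cluster that uses both Kerberos and TLS (an assumed setup, shown only to illustrate the mapping above), the snippet would instead be:

```
# Assumed: Kerberos + TLS cluster, moving this broker to port 9096
port=9096
listeners=SASL_SSL://:9096
```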
02-01-2022
08:45 PM
Some more examples here: https://github.com/asdaraujo/cdp-examples
08-16-2019
07:33 AM
1 Kudo
Hi @hpasumarthi, It seems you missed one installation step. Besides installing the parcel, you also have to download the NiFi CSD, copy it to /opt/cloudera/csd, and restart the cloudera-scm-server service, as described here: https://docs.hortonworks.com/HDPDocuments/CFM/CFM-1.0.0/installation/content/get-csd.html After you do this, the NiFi service will appear in the list. Regards, André
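For reference, the steps described above look roughly like this on the Cloudera Manager host (a sketch only; the exact CSD jar name depends on the CFM version you downloaded):

```
# Copy the downloaded NiFi CSD jar into the CSD directory (jar name is version-dependent)
cp NIFI-*.jar /opt/cloudera/csd/
chown cloudera-scm:cloudera-scm /opt/cloudera/csd/NIFI-*.jar
chmod 644 /opt/cloudera/csd/NIFI-*.jar

# Restart Cloudera Manager so it picks up the new CSD
systemctl restart cloudera-scm-server
```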
08-15-2019
09:22 AM
1 Kudo
It works for me on CM 6.3. Which version are you using?
08-14-2019
10:00 PM
Are you using a Director template to create the deployment or is this being launched from the UI? If you are using a template, would it be possible to share it?
08-14-2019
04:42 PM
Could you please check for errors in the Cloudera Manager server log?
08-14-2019
03:31 PM
2 Kudos
> REFRESH the table only when I add new data through Hive or HDFS commands? That is, when I am doing insert into ... through impala-shell there is no need for refreshing?

Correct.

> INVALIDATE METADATA of the table only when I change the structure of the table (add columns, drop partitions) through Hive?

Correct. Or when creating new tables through Hive.

> DROPping partitions of a table through impala-shell (i.e. alter table .. drop partition .. purge). Do I have to do REFRESH or INVALIDATE METADATA?

No.

> DROPping partitions of a table through impala-shell. How can I compute the new stats of the partitioned table? Compute incremental stats, OR drop incremental stats before dropping the partition?

The next time you run incremental stats for a new partition, Impala will update things correctly (e.g. the global row count).
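To summarize the rules above as impala-shell statements (the table name sales is hypothetical):

```sql
-- Data files added outside Impala (Hive inserts, HDFS file copies): refresh the table
REFRESH sales;

-- Table structure changed through Hive (new columns, new tables, dropped partitions): invalidate metadata
INVALIDATE METADATA sales;

-- After partitions change, recompute stats incrementally so global counts stay correct
COMPUTE INCREMENTAL STATS sales;
```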