Posts: 2209
Kudos Received: 231
Solutions: 82
About
My expertise is not in Hadoop but rather in online communities, support, and social media. Interests include: photography, travel, movies, and watching sports.
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 588 | 05-07-2025 11:41 AM |
| | 1175 | 02-27-2025 12:49 PM |
| | 3053 | 06-29-2023 05:42 AM |
| | 2576 | 05-22-2023 07:03 AM |
| | 1890 | 05-22-2023 05:42 AM |
06-18-2021
09:31 AM
Hi, Thanks for the response. It turns out my issue is slightly different. I have been able to unblock myself by creating a new PEM key and cert file using openssl. Thanks for your help; there's no need to keep the issue open on my account. Best regards, Darren
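For readers finding this later, a minimal sketch of generating a new key and self-signed certificate with openssl (file names, subject, and validity below are illustrative placeholders, not from the original thread):

```bash
# Generate a new 2048-bit RSA private key (PEM format)
openssl genrsa -out server.key 2048

# Create a self-signed certificate from that key, valid for one year
openssl req -new -x509 -key server.key -out server.crt \
  -days 365 -subj "/CN=myhost.example.com"
```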
06-14-2021
06:20 AM
@Rupesh_Raghani NiFi was not designed to provide a completely blank canvas to each user. There are important design reasons for this.

NiFi runs within a single JVM. All dataflows created on the canvas run as the NiFi service user, not as the user who is logged in. This means that all users' dataflows share and compete for the same system resources, so another user's poorly designed dataflow(s) can impact the operation of your own dataflow(s). It is therefore important for one user to be able to identify where backlogs may be forming, even if that is occurring in another user's dataflow(s).

With a secured NiFi, authorization policies control what a successfully authenticated user can see and do on the NiFi canvas. While components added to the canvas will always be visible to all users, what is displayed on a component is limited to stats only for unauthorized users (no component names, component types, component configurations, etc.). So an unauthorized user would be unable to see how that component is being used and for what. The unauthorized user would also not have access to modify the component, access FlowFiles that traversed those components (unless that data passed through an authorized component somewhere else in the dataflow(s)), etc.

Besides resource usage, another reason users need to see these placeholders for all components is so that users do not build dataflows on top of one another. It is common for multiple teams to be authorized to work within the same NiFi, and it is also common to have some users who are members of more than one team. For those users, it would be very difficult to use the UI if each team's flows were built on top of one another.

The most common setup involves an admin user creating a Process Group (PG) for each team at the root canvas level (the top level, which is what you see when you first log in to a new NiFi). Each team is then authorized only for their assigned PG. So when a team1 user logs in, their PG is fully rendered, while non-authorized PGs are present but non-configurable and display no details. team1 is unable to add components to the canvas at this level and must enter their authorized PG before they can start building dataflows. When you enter your sub-PG, you have a blank canvas to work with.

Hope this helps with your query.

Matt
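For illustration only, a rough sketch of what such per-team read/write policies can look like in NiFi's authorizations.xml (all identifiers and the PG UUID are placeholders; in practice these policies are managed through the NiFi UI, not edited by hand):

```xml
<authorizations>
  <policies>
    <!-- team1 may view (R) its assigned Process Group -->
    <policy identifier="policy-uuid-r" resource="/process-groups/team1-pg-uuid" action="R">
      <group identifier="team1-group-uuid"/>
    </policy>
    <!-- team1 may modify (W) its assigned Process Group -->
    <policy identifier="policy-uuid-w" resource="/process-groups/team1-pg-uuid" action="W">
      <group identifier="team1-group-uuid"/>
    </policy>
  </policies>
</authorizations>
```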
06-11-2021
05:38 AM
@dscarlat Have you looked at this post? https://community.cloudera.com/t5/Support-Questions/How-can-I-convert-PST-time-to-UTC-time-in-Hive/td-p/213018
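For quick reference, Hive's built-in to_utc_timestamp() handles this kind of conversion; a small sketch (table and column names are hypothetical):

```sql
-- Convert a Pacific-time timestamp to UTC;
-- an IANA zone name avoids PST/PDT ambiguity
SELECT to_utc_timestamp(event_ts, 'America/Los_Angeles') AS event_ts_utc
FROM events;
```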
06-07-2021
12:32 PM
@nnandula did you resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
06-03-2021
05:33 AM
@aie Has your issue been resolved? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
06-03-2021
05:31 AM
@kkhambadkone1 Has your issue been resolved? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
06-03-2021
05:30 AM
@naush_madaka07 Has your issue been resolved? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
06-03-2021
05:30 AM
@Pamarthich Has your issue been resolved? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
05-26-2021
04:57 AM
Hi, Below are the configurations for connecting Apache Ranger with LDAP/LDAPS. A useful tool for identifying some of these settings in your AD is AD Explorer (Windows Sysinternals | Microsoft Docs). This configuration syncs LDAP users and links them with their LDAP groups every 12 hours, so that you can later grant permissions from Apache Ranger based on LDAP groups as well. For connecting over LDAPS, make sure the proper certificates are added on the server that hosts Ranger's UserSync service.

| Configuration Name | Configuration Value | Comment |
|---|---|---|
| ranger.usersync.source.impl.class | org.apache.ranger.ldapusersync.process.LdapUserGroupBuilder | |
| ranger.usersync.sleeptimeinmillisbetweensynccycle | 12 hours | the property takes milliseconds; 12 hours = 43200000 |
| ranger.usersync.ldap.url | ldaps://myldapserver.example.com | ldaps or ldap, depending on your LDAP security |
| ranger.usersync.ldap.binddn | myuser@example.com | |
| ranger.usersync.ldap.ldapbindpassword | mypassword | |
| ranger.usersync.ldap.searchBase | OU=hadoop,DC=example,DC=com | browse your AD and check which OU you want Ranger to sync |
| ranger.usersync.ldap.user.searchbase | OU=hadoop2,DC=example,DC=com;OU=hadoop,DC=example,DC=com | browse your AD and check which OU you want Ranger to sync; you can also list two OUs separated by ; |
| ranger.usersync.ldap.user.objectclass | user | double-check this matches your AD |
| ranger.usersync.ldap.user.searchfilter | (memberOf=CN=HADOOP_ACCESS,DC=example,DC=com) | if you want to sync only specific users rather than your entire AD |
| ranger.usersync.ldap.user.nameattribute | sAMAccountName | double-check this matches your AD |
| ranger.usersync.ldap.user.groupnameattribute | memberOf | double-check this matches your AD |
| ranger.usersync.user.searchenabled | true | |
| ranger.usersync.group.searchbase | OU=hadoop,DC=example,DC=com | browse your AD and check which OU you want Ranger to sync |
| ranger.usersync.group.objectclass | group | double-check this matches your AD |
| ranger.usersync.group.searchfilter | (cn=hadoop_*) | if you want to sync specific groups rather than all AD groups |
| ranger.usersync.group.nameattribute | cn | double-check this matches your AD |
| ranger.usersync.group.memberattributename | member | double-check this matches your AD |
| ranger.usersync.group.search.first.enabled | true | |
| ranger.usersync.truststore.file | /path/to/truststore-file | |
| ranger.usersync.truststore.password | TRUST_STORE_PASSWORD | |

A helpful link on constructing complex LDAP search queries: Search Filter Syntax - Win32 apps | Microsoft Docs. Best Regards,
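As an illustration of the LDAPS certificate note above, the AD CA certificate can be imported into the UserSync truststore with Java's keytool (alias, paths, and password below are placeholders):

```bash
# Import the LDAPS CA certificate into the truststore referenced by
# ranger.usersync.truststore.file
keytool -importcert -alias ldaps-ca \
  -file /path/to/ldaps-ca.pem \
  -keystore /path/to/truststore-file \
  -storepass TRUST_STORE_PASSWORD
```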
05-24-2021
06:46 PM
Hi, Below are some pointers, in case they have not been tried (though I believe you are using them): 1. Set the column mapping to TIMESTAMP (--map-column-hive <cols_name>=TIMESTAMP). 2. Keep in mind the underlying column should be BIGINT. The main issue is the format: Parquet represents time in msec, whereas Impala interprets a BIGINT as sec, so to get the correct value you need to handle it at query level (divide by 1000). Regards, Jay
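To illustrate that last point, a hedged sketch of the query-level fix in Impala (table and column names are made up):

```sql
-- ts_ms holds the Parquet millisecond value as BIGINT;
-- Impala's CAST of a number to TIMESTAMP treats it as seconds
-- since epoch, so divide by 1000 first
SELECT CAST(ts_ms DIV 1000 AS TIMESTAMP) AS event_time
FROM my_parquet_table;
```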