Member since: 08-19-2019
Posts: 58
Kudos Received: 0
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1230 | 08-30-2019 04:51 AM
06-14-2021
08:13 AM
A bit late to the party, but I hope the following helps. Calling the main methods of UnixUserGroupBuilder, PolicyMgrUserGroupBuilder, or LdapUserGroupBuilder will not work, because those main methods only initialize the classes. To start the actual sync, the updateSink method has to be called. During startup this is handled by the class org.apache.ranger.usergroupsync.UserGroupSync, so calling its main method triggers the sync using the configuration set in your cluster. A complete example of triggering usersync manually, for HDP:
java -Dlogdir=/var/log/ranger/usersync -cp "/usr/hdp/current/ranger-usersync/dist/unixusersync-1.2.0.3.1.5.135-2.jar:/usr/hdp/current/ranger-usersync/lib/*:/etc/ranger/usersync/conf" org.apache.ranger.usergroupsync.UserGroupSync
and for CDP:
java -Dlogdir=/var/log/ranger/usersync -cp "/opt/cloudera/parcels/CDH/lib/ranger-usersync/dist/unixusersync-2.1.7.1.7.0-460.jar:/opt/cloudera/parcels/CDH/lib/ranger-usersync/lib/*:/etc/ranger/usersync/conf" org.apache.ranger.usergroupsync.UserGroupSync
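A minimal shell sketch of the HDP invocation above, with the paths pulled into variables so they are easier to adapt; the jar version and directory layout are taken from the example and will differ per release:

```bash
#!/usr/bin/env bash
# Sketch: trigger Ranger usersync manually on an HDP node.
# Jar version and paths come from the example above; adjust to your installation.
set -euo pipefail

USERSYNC_HOME=/usr/hdp/current/ranger-usersync
CONF_DIR=/etc/ranger/usersync/conf
LOG_DIR=/var/log/ranger/usersync

# UserGroupSync initializes the configured source/sink builders and calls
# updateSink, which performs the actual user/group sync.
java -Dlogdir="${LOG_DIR}" \
     -cp "${USERSYNC_HOME}/dist/unixusersync-1.2.0.3.1.5.135-2.jar:${USERSYNC_HOME}/lib/*:${CONF_DIR}" \
     org.apache.ranger.usergroupsync.UserGroupSync
```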
02-12-2020
05:02 AM
1 Kudo
@Peruvian81 I would delete only enough to get NiFi restarted. Then I would go into the flow and look at what caused it to fill up. This assumes, of course, that the drive had enough space to begin with. Next, I recommend following the NiFi documented steps for disk configuration and, based on your flow, expanding the content repository if necessary and possible. One last thing to consider: your flow may simply need to terminate large flow files once they reach the end of the line. If they are held in a queue and no longer needed, they are taking up valuable space.
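If it helps, here is a minimal sketch (the nifi.properties location is an assumption; point it at your install) for checking how full the content repository is and reviewing the settings the disk configuration docs cover:

```bash
#!/usr/bin/env bash
# Sketch: check content repository usage and related settings.
# NIFI_CONF is an assumption; point it at the directory holding nifi.properties.
NIFI_CONF=/usr/hdf/current/nifi/conf

CONTENT_REPO=$(grep '^nifi.content.repository.directory.default' \
               "${NIFI_CONF}/nifi.properties" | cut -d= -f2)

# How much space the content repository uses, and how full its volume is.
du -sh "${CONTENT_REPO}"
df -h "${CONTENT_REPO}"

# Settings worth reviewing (directories, archiving, max usage percentage).
grep -E '^nifi\.content\.repository\.' "${NIFI_CONF}/nifi.properties"
```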
01-28-2020
11:52 PM
It depends on what you want to change. If you just want to add additional disks to all nodes, follow this: the best way is to create partitions like /grid/0/hadoop/hdfs/data through /grid/10/hadoop/hdfs/data and mount them on the newly formatted disks (the mount options below are the recommended parameters for HDFS data mounts, but you can change them):
/dev/sda1 /grid/0 ext4 inode_readahead_blks=128,commit=30,data=writeback,noatime,nodiratime,nodev,nobarrier 0 0
/dev/sdb1 /grid/1 ext4 inode_readahead_blks=128,commit=30,data=writeback,noatime,nodiratime,nodev,nobarrier 0 0
/dev/sdc1 /grid/2 ext4 inode_readahead_blks=128,commit=30,data=writeback,noatime,nodiratime,nodev,nobarrier 0 0
After that, just add all the partition paths to the HDFS configuration, like:
/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data,/grid/2/hadoop/hdfs/data
But do not delete the existing path from the configuration, or you will lose the data in the blocks stored under /hadoop/hdfs/data. The exact paths do not really matter; just keep them separate, and do not forget to rebalance the data between disks (see the sketch below).
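A hedged sketch of those steps for a single DataNode, assuming two new disks with partitions /dev/sdb1 and /dev/sdc1 (the device names and the diskbalancer step are assumptions; the mount options are the ones listed above):

```bash
#!/usr/bin/env bash
# Sketch for one DataNode: prepare two new disks using the /grid/N layout above.
# /dev/sdb1 and /dev/sdc1 are assumptions; adjust to your actual devices.
set -euo pipefail

OPTS="inode_readahead_blks=128,commit=30,data=writeback,noatime,nodiratime,nodev,nobarrier"

mkfs.ext4 /dev/sdb1
mkfs.ext4 /dev/sdc1
mkdir -p /grid/1 /grid/2

echo "/dev/sdb1 /grid/1 ext4 ${OPTS} 0 0" >> /etc/fstab
echo "/dev/sdc1 /grid/2 ext4 ${OPTS} 0 0" >> /etc/fstab
mount -a

# HDFS data directories must exist and be owned by the hdfs user.
mkdir -p /grid/1/hadoop/hdfs/data /grid/2/hadoop/hdfs/data
chown -R hdfs:hadoop /grid/1/hadoop /grid/2/hadoop

# On Hadoop 3 / HDP 3, the intra-node disk balancer can spread existing blocks
# across the old and new disks:
#   hdfs diskbalancer -plan <datanode-host>
#   hdfs diskbalancer -execute <plan-file>
```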
12-03-2019
06:29 AM
Hi @Peruvian81, there is no such option in the Ambari UI. You can instead check the NameNode UI --> Datanodes tab and see whether the block counts are increasing.
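If you prefer a command-line check to the UI, a rough sketch (the NameNode host and port are assumptions, and the JMX bean names may vary slightly by version):

```bash
#!/usr/bin/env bash
# Sketch: check block counts without Ambari.
# NameNode host/port are assumptions (50070 on HDP 2.x, 9870 on Hadoop 3.x).
NN=http://namenode.example.com:50070

# Cluster-wide totals from the NameNode JMX endpoint.
curl -s "${NN}/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState" \
  | grep -E '"(BlocksTotal|NumLiveDataNodes)"'

# Per-datanode storage summary.
hdfs dfsadmin -report
```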
10-31-2019
09:47 AM
Thank you very much. It helped me find the error: in the end, my provider had a different time set in the DC than on the host. After synchronizing them, it works. Greetings
10-22-2019
08:16 AM
1 Kudo
@Peruvian81 You need to update Ranger. You can follow the steps below to reset the password in Postgres:
1. Log in to Postgres
2. postgres=# \connect ranger
3. ranger=# update x_portal_user set password = 'ceb4f32325eda6142bd65215f4c0f371' where login_id = 'admin';
The above resets the password to 'admin'.
4. Log in to the Ranger UI using that password
5. Go to User Profile and change the password
6. Open Ambari UI > Ranger > Configs
7. Update 'admin_password' in Advanced ranger-env with the newly set password
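A small command-line sketch of steps 1-3, assuming Ranger uses the local Postgres instance and the database is named ranger, as above:

```bash
#!/usr/bin/env bash
# Sketch of steps 1-3: reset the Ranger admin password in Postgres.
# Assumes a local Postgres instance and a database named "ranger".
set -euo pipefail

# The MD5 hash below corresponds to the password 'admin' (per the steps above).
sudo -u postgres psql -d ranger -c \
  "update x_portal_user set password = 'ceb4f32325eda6142bd65215f4c0f371' where login_id = 'admin';"

# Then continue with steps 4-7: log in as admin/admin, change the password in
# User Profile, and update 'admin_password' in Advanced ranger-env via Ambari.
```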
10-17-2019
04:47 AM
Hey, optimizing your Kafka cluster depends on your cluster usage and use case. Based on your main concern, such as throughput, CPU utilization, or memory/disk usage, you need to tune different parameters, and some changes may have an impact on other aspects. For example, if acknowledgments are set to "all", every broker that replicates the partition must acknowledge that the data was written before the next message is confirmed and sent. This ensures data consistency but increases CPU utilization and network latency. Refer to the article "Benchmarking Apache Kafka: 2 Million Writes Per Second (On Three Cheap Machines)" [1], written by Jay Kreps (co-founder and CEO at Confluent). [1] https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines Please let me know if this helps. Regards, Ankit.
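To see the acknowledgment trade-off concretely, here is a rough sketch using the console producer; the broker, topic, and input file are placeholders, and flag names can vary slightly between Kafka versions:

```bash
#!/usr/bin/env bash
# Sketch: compare producer acknowledgment settings.
# BROKER, TOPIC, and sample-data.txt are placeholders; adjust to your cluster.
BROKER=broker1.example.com:6667
TOPIC=test-topic

# acks=all: every replica of the partition must confirm the write (safest, slowest).
kafka-console-producer.sh --broker-list "${BROKER}" --topic "${TOPIC}" \
  --producer-property acks=all < sample-data.txt

# acks=1: only the partition leader confirms (higher throughput, less durable).
kafka-console-producer.sh --broker-list "${BROKER}" --topic "${TOPIC}" \
  --producer-property acks=1 < sample-data.txt
```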
10-08-2019
10:00 AM
@Peruvian81 You can try the flow below, which is just for testing purposes: basically, I have a TailFile processor passing data through SplitText; these messages are then sent to PublishKafka_1_0 (use this processor for this test). Finally, I created a consumer to read data from the same topic configured in PublishKafka_1_0 and store it in the file system with PutFile. In PutFile I have configured Maximum File Count to 10, to avoid excessive space usage in the file system.
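To confirm that the PublishKafka_1_0 side of that test flow is actually writing to the topic, independent of the NiFi consumer and PutFile, a rough sketch (broker and topic names are placeholders):

```bash
#!/usr/bin/env bash
# Sketch: verify that PublishKafka_1_0 is writing to the topic.
# BROKER and TOPIC are placeholders; use the values configured in the processor.
BROKER=broker1.example.com:6667
TOPIC=nifi-test-topic

# Read a handful of records and exit; if nothing shows up, recheck the
# PublishKafka_1_0 broker, topic, and delivery guarantee settings.
kafka-console-consumer.sh --bootstrap-server "${BROKER}" --topic "${TOPIC}" \
  --from-beginning --max-messages 10
```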
10-03-2019
12:24 PM
Hi @Peruvian81 Kafka can be secured in multiple ways:

Protocol | SSL | Kerberos
---|---|---
PLAINTEXT | No | No
SSL | Yes | No
SASL_PLAINTEXT | No | Yes
SASL_SSL | Yes | Yes

If you are already using Kerberos, you can check the document below:
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/authentication-with-kerberos/content/kerberos_kafka_configuring_kafka_for_kerberos_using_ambari.html
For your clients, you can use the command lines below, depending on the Kafka version. Consumer example:
bin/kafka-console-consumer.sh --bootstrap-server <kafkaHost>:<kafkaPort> --topic <topicName> --security-protocol SASL_PLAINTEXT
For newer versions, consumer example:
bin/kafka-console-consumer.sh --topic <topicName> --bootstrap-server <brokerHost>:<brokerPort> --consumer-property security.protocol=SASL_PLAINTEXT
* Make sure to get a valid Kerberos ticket before running these commands (kinit -kt keytab principal)
** Ensure the Kerberos principal has permissions to publish/consume data to/from the selected topic
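For SASL_PLAINTEXT the client also needs a JAAS configuration pointing it at Kerberos. A rough sketch, where the keytab path, principal, and file locations are assumptions:

```bash
#!/usr/bin/env bash
# Sketch: run the SASL_PLAINTEXT console consumer with a Kerberos (JAAS) config.
# Keytab path, principal, and file locations are assumptions; adjust to your environment.
TOPIC="<topicName>"                   # placeholders from the answer above
BROKER="<brokerHost>:<brokerPort>"

cat > /tmp/kafka_client_jaas.conf <<'EOF'
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka_client.keytab"
  principal="kafkauser@EXAMPLE.COM"
  serviceName="kafka";
};
EOF

# Get a valid ticket and point the client at the JAAS file, then run the
# newer-style consumer command from the answer above.
kinit -kt /etc/security/keytabs/kafka_client.keytab kafkauser@EXAMPLE.COM
export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/kafka_client_jaas.conf"

bin/kafka-console-consumer.sh --topic "${TOPIC}" --bootstrap-server "${BROKER}" \
  --consumer-property security.protocol=SASL_PLAINTEXT
```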
09-26-2019
07:44 AM
@Peruvian81 You can try the command below for the consumer:
./kafka-console-consumer.sh --bootstrap-server w01.s03.hortonweb.com:6667 --topic PruebaNYC --consumer-property security.protocol=SASL_PLAINTEXT --from-beginning
If that solves your issue, kindly mark this thread as solved. Thanks.
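If you want a quick end-to-end check on that topic, a rough sketch that publishes one test record first (a valid Kerberos ticket and client JAAS config are assumed, and producer flag names can differ slightly between Kafka versions):

```bash
#!/usr/bin/env bash
# Sketch: quick publish/consume check for the PruebaNYC topic over SASL_PLAINTEXT.
# Assumes a valid Kerberos ticket and a configured client JAAS file.
BROKER=w01.s03.hortonweb.com:6667
TOPIC=PruebaNYC

# Publish one test record (flag names may vary by Kafka version).
echo "test-$(date +%s)" | ./kafka-console-producer.sh --broker-list "${BROKER}" \
  --topic "${TOPIC}" --producer-property security.protocol=SASL_PLAINTEXT

# Consume everything from the beginning, as in the command above.
./kafka-console-consumer.sh --bootstrap-server "${BROKER}" --topic "${TOPIC}" \
  --consumer-property security.protocol=SASL_PLAINTEXT --from-beginning
```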