Member since: 10-03-2016
Posts: 42
Kudos Received: 16
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1746 | 03-10-2017 10:52 PM
 | 2537 | 01-03-2017 04:22 PM
 | 1684 | 10-29-2016 03:08 PM
 | 1115 | 10-08-2016 05:49 PM
12-27-2016
11:05 PM
Hi, I have a problem with Ranger syncing users from AD. My Ranger is v0.6.0.2.0 in HDF-2.0.2.0. Most users are synced from AD, and I can see them in the Ranger Admin UI under Users/Groups. Only two users, in the group nifi-admins, are missing. But I can see that they are fetched in usersync.log:

27 Dec 2016 22:45:50 INFO UserGroupSync [UnixUserSyncThread] - Begin: initial load of user/group from source==>sink
27 Dec 2016 22:45:50 INFO LdapUserGroupBuilder [UnixUserSyncThread] - LDAPUserGroupBuilder updateSink started
27 Dec 2016 22:45:50 INFO LdapUserGroupBuilder [UnixUserSyncThread] - Performing Group search first
27 Dec 2016 22:45:50 INFO LdapUserGroupBuilder [UnixUserSyncThread] - Adding nifi-admins to user CN=NiFi Admin1,OU=CorpUsers,DC=field,DC=hortonworks,DC=com
27 Dec 2016 22:45:50 INFO LdapUserGroupBuilder [UnixUserSyncThread] - Adding nifi-admins to user CN=NiFi Admin,OU=CorpUsers,DC=field,DC=hortonworks,DC=com
27 Dec 2016 22:45:50 INFO LdapUserGroupBuilder [UnixUserSyncThread] - No. of members in the group nifi-admins = 2
And I can verify them on the Ranger Usersync node with sssd:

$ id nifiadmin
uid=1960401378(nifiadmin) gid=1960400513(domain_users) groups=1960400513(domain_users),1960401370(nifi-admins)
$ id nifiadmin1
uid=1960402757(nifiadmin1) gid=1960400513(domain_users) groups=1960400513(domain_users),1960401370(nifi-admins)
Not sure how to solve it. Regards, Wendell
Labels: Apache Ranger
12-23-2016
05:15 PM
Hi @Avijeet Dash You need to query the template list first via /flow/templates, then parse the JSON and get the id; a sketch is shown below. Regards, Wendell
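A minimal sketch of that lookup, assuming NiFi 1.x with its REST API reachable at nifi-host:8080 and a template named MyTemplate (both are placeholders), using jq to pull the id out of the JSON:

$ curl -s 'http://nifi-host:8080/nifi-api/flow/templates' \
    | jq -r '.templates[] | select(.template.name == "MyTemplate") | .id'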
12-20-2016
10:55 PM
Hi @rnettleton and @Attila Kanto and @smagyari Your solution is fine for manually provisioning the cluster with a blueprint. But in the current version of Cloudbreak there's no way to add a template; you can only either inherit the "default_password" configured in the Cloudbreak UI or add "users.admin" in your blueprint, which is less secure. Regards, Wendell
12-10-2016
12:07 PM
Hi @Alejandro Fernandez Yes, all 2.4.x. I tried 2.4.0.1, 2.4.1 and 2.4.2; all the same. Cheers, Wendell
12-02-2016
11:31 PM
Hi, I installed the latest Ambari 2.4.2, but I can't find the latest HDP 2.5.3. It still uses HDP 2.5.0. Does anyone know how to add HDP 2.5.3 to Ambari 2.4.2? Regards, Wendell
12-01-2016
03:29 AM
2 Kudos
Background

A customer attached bigger disks to expand the DataNode storage. This solution can also be used if one disk physically fails.

Step by Step

1. Decommission the HDFS DataNode component on the host. It takes hours to finish, depending on your existing data size.
2. Once the DataNode is decommissioned, turn on maintenance mode for the host.
3. Stop all components on the host.
4. Change the Linux /etc/fstab to mount the new disks on the existing mount points. If possible, use the UUID rather than the disk device name; the UUID is much more stable, especially in a cloud environment (see the fstab sketch at the end of this article).
5. Manually create the YARN log and local folders in the mount points. Because we don't reprovision the host, YARN won't create these directories from your configuration, but will try to reuse them:

# for disk in /hadoop/disk-sd{d..j}/Hadoop
> do
> mkdir -p ${disk}/yarn/log && chown yarn:hadoop ${disk}/yarn/log
> mkdir -p ${disk}/yarn/local && chown yarn:hadoop ${disk}/yarn/local
> done

6. After changing the Linux disk mount configuration, start all components on the host.
7. Recommission the DataNode.
8. Turn off maintenance mode.
9. Check the HDFS blocks:

$ hdfs fsck / | egrep -v '^\.+$' | grep -v eplica
FSCK started by hdfs (auth:KERBEROS_SSL) from /192.168.141.39 for path / at Tue Nov 29 10:42:34 UTC 2016
Status: HEALTHY
 Total size: 769817156313 B (Total open files size: 75189484 B)
 Total dirs: 4934
 Total files: 23693
 Total symlinks: 0 (Files currently being written: 30)
 Total blocks (validated): 27536 (avg. block size 27956753 B) (Total open file blocks (not validated): 24)
 Corrupt blocks: 0
 Number of data-nodes: 7
 Number of racks: 1
FSCK ended at Tue Nov 29 10:42:34 UTC 2016 in 433 milliseconds

The filesystem under path '/' is HEALTHY
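As mentioned in step 4, here is a minimal sketch of mounting by UUID instead of by device name; the device, UUID, mount point and filesystem type below are placeholders for illustration only:

$ sudo blkid /dev/sdd
/dev/sdd: UUID="3e6be9de-8139-11e5-9d16-7ab5f2d2a9d0" TYPE="ext4"

# /etc/fstab entry referencing the UUID instead of /dev/sdd
UUID=3e6be9de-8139-11e5-9d16-7ab5f2d2a9d0  /hadoop/disk-sdd  ext4  defaults,noatime  0 2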
11-30-2016
12:15 PM
Hi @Rahul Pathak Can you please add the bug ticket and patch link? Regards, Wendell
11-22-2016
07:54 PM
1 Kudo
Background

When we used a NiFi flow to load Adobe ClickStream tsv files into Hive, we found that around 3% of the rows were in the wrong format or missing.

Source Data Quality

$ awk -F "\t" '{print NF}' 01-weblive_20161014-150000.tsv | sort | uniq -c | sort
1 154
1 159
1 162
1 164
1 167
1 198
1 201
1 467
2 446
2 449
2 569
6 13
10 3
13 146
13 185
15 151
16 54
18 433
21 432
22 238
23 102
26 2
34 138
179 1
319412 670
After cleaning the tsv:

$ awk -F "\t" 'NF == 670' 01-weblive_20161014-150000.tsv >> cleaned.tsv
$ awk -F "\t" '{print NF}' cleaned.tsv | sort | uniq -c | sort
319412 670
We still lose a few percent of the rows.

Root Cause and Solution

We are using ConvertCSVToAvro and ConvertAvroToORC. The clickstream tsv files contain the " character, and the ConvertCSVToAvro processor uses " as the default value of its "CSV quote Character" configuration property. As a result, many tab-separated fields end up in the same record. We get good output by changing this property to another character that does not appear anywhere in the input files; we used ¥. So when you use a CSV-related processor, double check that the contents don't contain the quote character.
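A quick way to check up front whether the quote character appears anywhere in a source file (using the sample file name from above):

$ grep -c '"' 01-weblive_20161014-150000.tsv

If the count is non-zero, pick a replacement quote character for ConvertCSVToAvro that the same check shows is absent from the data.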
11-11-2016
08:25 PM
2 Kudos
Hi, when using an Ambari Blueprint to auto-install HDP 2.5.0 including SmartSense, the ActivityAnalysis admin password is not configured by the "default_password" in the blueprint. The component fails to start, and the password has to be set manually. The Ambari version is 2.4.1.0. This bug should be fixed. Regards, Wendell
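For reference, a minimal sketch of where "default_password" sits in the cluster creation template POSTed to the Ambari REST API; the admin credentials, cluster name, blueprint name, host group and FQDN are all placeholders:

$ curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
    http://ambari-host:8080/api/v1/clusters/mycluster \
    -d '{
        "blueprint": "my-blueprint",
        "default_password": "ChangeMe123",
        "host_groups": [
          { "name": "host_group_1", "hosts": [ { "fqdn": "node1.example.com" } ] }
        ]
      }'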
Labels: Apache Ambari, Hortonworks SmartSense
11-10-2016
05:04 PM
1 Kudo
Error

Ranger Tagsync writes lots of KafkaExceptions to its log file, which causes a disk space alert in Ambari. It also uses up all of the client ports.

/var/log/ranger/tagsync/tagsync.log

10 Nov 2016 11:46:43 ERROR TagSynchronizer [main] - 262 tag-source:atlas initialization failed with
javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. not available to garner authentication information from the user
kafka.common.KafkaException: fetching topic metadata for topics [Set(ATLAS_ENTITIES)] from broker [ArrayBuffer(BrokerEndPoint(1001,host.domain,6667))] failed
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:73)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:96)
    at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:67)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
Caused by: java.nio.channels.ClosedChannelException
    at kafka.network.BlockingChannel.send(BlockingChannel.scala:122)
    at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:82)
    at kafka.producer.SyncProducer.kafka$producer$SyncProducer$doSend(SyncProducer.scala:81)
    at kafka.producer.SyncProducer.send(SyncProducer.scala:126)
    at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
    ... 3 more
Background

Atlas was installed after HDP 2.5.0 was kerberized. Ambari 2.4.1 doesn't create the Kerberos principal for Ranger Tagsync, nor distribute the keytab to the node. You can find the hint in the Tagsync log:

/var/log/ranger/tagsync/tagsync.log

10 Nov 2016 11:46:41 WARN SecureClientLogin [main] - 119 /etc/security/keytabs/rangertagsync.service.keytab doesn't exist.
10 Nov 2016 11:46:41 WARN SecureClientLogin [main] - 130 Can't find principal : rangertagsync/host.domain@REALM
Fix

Manually create the rangertagsync principal and keytab:

kadmin.local: add_principal -randkey rangertagsync/host.domain@REALM
kadmin.local: xst -k rangertagsync.service.keytab rangertagsync/host.domain@REALM

Deploy the keytab to the node:

$ sudo cp rangertagsync.service.keytab /etc/security/keytabs/
$ sudo chown ranger:hadoop /etc/security/keytabs/rangertagsync.service.keytab
$ sudo chmod 440 /etc/security/keytabs/rangertagsync.service.keytab

After this there are no more errors in the Ranger Tagsync log.
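To double check, you can verify that the deployed keytab contains the expected principal (host.domain and REALM are the placeholders used above):

$ sudo klist -kt /etc/security/keytabs/rangertagsync.service.keytab

The output should list rangertagsync/host.domain@REALM.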