Member since: 12-02-2015
Posts: 18
Kudos Received: 1
Solutions: 0
05-16-2018
12:10 PM
Hi @sapin amin, Your response above states that you can use SAM and Schema Registry when you run HDF on HDP. Have you tried this? I ask because the HDF documentation states: "You cannot install SAM and Schema Registry for HDF 3.1 on an HDP 2.6.4 cluster, and you cannot upgrade these services from a previous HDP cluster" https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.1/bk_planning-your-deployment/content/ch_deployment-scenarios.html Do you know if this is the case? Regards, Geouffrey
04-13-2018
01:57 PM
Hi @sangameshwar swami, Which version are you upgrading from? Did you follow the recommendations from the previous answers regarding the upgrade sequence of the services? I performed an upgrade from 3.0.1.1 to the latest 3.1.1 a week ago and had no major issues. Regards, Geouffrey
03-15-2018
05:13 PM
Any idea if there are plans to include the Kafka REST Proxy in HDF in the near future? I noticed the JIRA was closed as it didn't pass the community vote, but it has been added to the Confluent Platform, and I was wondering if the same will happen for HDF.
01-03-2018
09:35 AM
@Raffaele S, That is a bummer, I feel your pain. I had a similar scenario a few months ago trying to upgrade to HDF 3 and ended up reinstalling and configuring an entire cluster. Hope it goes better with the next upgrade.
12-20-2017
02:24 PM
Hey @Raffaele S, Unfortunately not. In the end I reverted to my HDF 3.0.1 backups and performed an upgrade to 3.0.1.1 instead, which uses Ambari 2.5.1. This version is fine for now as I was trying to resolve issues with Schema Registry, but it is a bit of a pain as I would've preferred upgrading to the latest version while I was going through the upgrade pain anyway. Hope you have better luck. Regards, Geouffrey
12-08-2017
09:14 AM
Hi, I have this same error, but it occurred during the upgrade of Ambari from 2.5.1.0-159 to 2.6.0.0-267. The Ambari upgrade itself went through fine and the server and agents started, but during the post-upgrade steps those services (Infra, Log Search, etc.) all fail with this error. I've confirmed the versions of my server and all agents are the same, but is there perhaps something else I might've missed? Regards, Geouffrey
12-05-2017
02:06 PM
Hi, I'm using HDF 3.0.1 and have configured Schema Registry with an Avro schema to use with Kafka. Reading through the docs it makes sense how I can publish and consume using NiFi and a Java client, but I have a bit of a problem. Some of the systems we use are .NET, and I can't find a KafkaAvroDeserializer or Serializer for .NET. The Confluent Schema Registry seems to provide a .NET SDK, but I believe I can't use that Schema Registry or its serializers with Hortonworks. Any recommendations on how I can get our .NET clients to publish to a Kafka topic while taking advantage of the HWX Schema Registry in HDF? Kind Regards, Geouffrey
08-20-2017
11:12 AM
Seems like it. Luckily this is not production yet. I wanted to upgrade everything before we went live with the environments over the next month, which is when everything went wrong. I guess I will lose all my configurations (LDAP, Ranger, SSL, etc.) when I start with a fresh Postgres setup?
08-20-2017
08:18 AM
Good idea, but unfortunately I ended up with the exact same error message on the new server.
08-19-2017
06:53 AM
Yes, I did restore my DB before I started ambari-server. No luck with the recommendation; the error remains the same.
08-18-2017
08:12 AM
Hi, I tried upgrading my HDF from 2.1.4 to 3.0.1.1. The installation failed, so I thought I would roll back to the working state before trying again. But now I am unable to roll back to a working state on my environment and the Ambari service will not start. I performed these steps to roll back: https://community.hortonworks.com/content/supportkb/48757/how-to-perform-a-roll-back-on-ambari-server-after.html The error I'm seeing is similar to what is described here: https://issues.apache.org/jira/browse/AMBARI-18467 Is it possible to roll back a major version and, if it is, is there a way to get past this issue without reinstalling my entire stack? Regards, Geouffrey ambari-server.tar.gz
Tags:
- ambari-server
- hdf
07-05-2017
02:42 PM
Hi Matt, Thanks for your detailed response. Please see my reply to Bryan above for the fix to my original problem. For the Identity Mappings I went with your suggestion number 2 and that worked perfectly. Option 1 would not really work, as the AD structure might not always be the same for all users, so I thought option 2 would work better. Regards, Geouffrey
07-05-2017
02:25 PM
Thanks for your response, Bryan. I managed to find the culprit earlier this morning. The problem was the & in my Manager DN. After replacing this with &amp; NiFi started and LDAP worked as well.
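For anyone else hitting this, here is a quick Python sketch of why the unescaped ampersand breaks the template parsing (the DN value below is a trimmed-down example, not my real one):

import xml.dom.minidom

# An unescaped '&' inside an XML text node is not well-formed XML.
bad  = '<property name="Manager DN">OU=Administrator & Service Accounts</property>'
good = '<property name="Manager DN">OU=Administrator &amp; Service Accounts</property>'

try:
    xml.dom.minidom.parseString(bad)
except Exception as e:
    print("fails:", e)             # ExpatError: not well-formed (invalid token)

xml.dom.minidom.parseString(good)  # parses fine once & is written as &amp;
print("escaped version parses OK")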
07-05-2017
12:22 PM
1 Kudo
Hi, I'm running HDF 2.1.4 and have the following components working:
- 2-node NiFi cluster (SSL enabled using the NiFi CA)
- Ranger (integrated with NiFi; the Initial Admin Identity works from the browser)
I am now trying to configure AD on my setup. I managed to get Ranger Usersync to work and I can see my AD users in Ranger, but when I make the changes for NiFi the service won't start. I made the following changes in Ambari under Advanced nifi-properties:
nifi.security.user.login.identity.provider = ldap-provider
nifi.security.identity.mapping.pattern.dn = ^CN=(.*?), OU=(.*?)$
nifi.security.identity.mapping.value.dn = $1@$2
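To illustrate what I expect the mapping to do, here is a rough Python sketch using a made-up DN (my real DNs obviously differ; NiFi's $1/$2 correspond to \1/\2 in Python):

import re

# Made-up DN purely for illustration; real entries have more OU/DC components.
dn = "CN=jdoe, OU=NOC"

# The pattern and value configured above, translated to Python regex syntax.
pattern = r"^CN=(.*?), OU=(.*?)$"
mapped = re.sub(pattern, r"\1@\2", dn)

print(mapped)  # -> jdoe@NOC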
And this is what my Template for login-identity-providers.xml looks like:
<loginIdentityProviders>
<provider>
<identifier>ldap-provider</identifier>
<class>org.apache.nifi.ldap.LdapProvider</class>
<property name="Authentication Strategy">SIMPLE</property>
<property name="Manager DN">CN=xxx,OU=Administrator & Service Accounts,OU=Administration,OU=NOC,DC=xxx,DC=xxx</property>
<property name="Manager Password">xxx_password</property>
<property name="TLS - Keystore">/usr/hdf/current/nifi/conf/keystore.jks</property>
<property name="TLS - Keystore Password">keystore_password</property>
<property name="TLS - Keystore Type">jks</property>
<property name="TLS - Truststore">/usr/hdf/current/nifi/conf/truststore.jks</property>
<property name="TLS - Truststore Password">truststore_password</property>
<property name="TLS - Truststore Type">jks</property>
<property name="TLS - Client Auth"></property>
<property name="TLS - Protocol">TLS</property>
<property name="TLS - Shutdown Gracefully"></property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Connect Timeout">10 secs</property>
<property name="Read Timeout">10 secs</property>
<property name="Url">ldap://ad_controller:389</property>
<property name="User Search Base">OU=Administration,OU=NOC,DC=xxx,DC=xxx</property>
<property name="User Search Filter">sAMAccountName={0}</property>
<property name="Authentication Expiration">12 hours</property>
</provider>
</loginIdentityProviders>
After saving these changes and restarting NiFi I get the following errors:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 360, in <module>
Master().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 720, in restart
self.start(env, upgrade_type=upgrade_type)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 164, in start
self.configure(env, is_starting = True)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 145, in configure
params.nifi_flow_config_dir, params.nifi_sensitive_props_key, is_starting)
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi.py", line 286, in encrypt_sensitive_properties
if nifi_toolkit_util.contains_providers(nifi_config_dir+'/login-identity-providers.xml'):
File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi_toolkit_util.py", line 257, in contains_providers
dom = xml.dom.minidom.parseString(content)
File "/usr/lib64/python2.7/xml/dom/minidom.py", line 1931, in parseString
return expatbuilder.parseString(string)
File "/usr/lib64/python2.7/xml/dom/expatbuilder.py", line 940, in parseString
return builder.parseString(string)
File "/usr/lib64/python2.7/xml/dom/expatbuilder.py", line 223, in parseString
parser.Parse(string, True)
xml.parsers.expat.ExpatError: not well-formed (invalid token): line 7, column 74

The login provider configuration looks OK to me, but I've never configured NiFi with AD so I could well be missing something. Any ideas? Regards, Geouffrey
05-15-2017
12:28 PM
Hi, Thanks for publishing the article. I'm very new to NiFi, but I managed to get PutHDFS writing to ADL using HDF 2.1 and was wondering if there is a similar way to get CreateHDFSFolder to work in the same manner. From the docs I can see the Additional Classpath Resources property is not available for CreateHDFSFolder. Any recommendations? Kind Regards, Geouffrey
12-10-2015
12:47 AM
Sorry for the delay, and thank you for all your assistance. I can confirm that the memory increase did solve the issue. Thanks, Geouff
12-02-2015
09:59 AM
Hi, Yes, I am using the QuickStart VM: 2 CPUs, 8 GB of RAM, Express edition. Yes, I'm using Hue, and I can only see the following message with a spinning wheel next to it: "There are currently no logs to visualize." That is the only message.
12-02-2015
08:30 AM
Hi, I'm new here and trying to complete the Cloudera Live guide. I'm trying to follow the steps required to load the data from access.log.2, but the task doesn't complete. It's not giving me any errors and I'm not sure where to look for log files. The code I'm using looks like this:

CREATE EXTERNAL TABLE intermediate_access_logs (
    ip STRING, date STRING, method STRING, url STRING, http_version STRING,
    code1 STRING, code2 STRING, dash STRING, user_agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
    'input.regex' = '([^ ]*) - - \\[([^\\]]*)\\] "([^\ ]*) ([^\ ]*) ([^\ ]*)" (\\d*) (\\d*) "([^"]*)" "([^"]*)"',
    'output.format.string' = "%1$$s %2$$s %3$$s %4$$s %5$$s %6$$s %7$$s %8$$s %9$$s")
LOCATION '/user/hive/warehouse/original_access_logs';

CREATE EXTERNAL TABLE tokenized_access_logs (
    ip STRING, date STRING, method STRING, url STRING, http_version STRING,
    code1 STRING, code2 STRING, dash STRING, user_agent STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/hive/warehouse/tokenized_access_logs';

ADD JAR /usr/lib/hive/lib/hive-contrib.jar;

INSERT OVERWRITE TABLE tokenized_access_logs SELECT * FROM intermediate_access_logs;

I'm sure I'm missing something silly here. Kind Regards, Gee
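P.S. I also sanity-checked the SerDe regex against a made-up combined-format log line with a quick Python sketch (Hive's double backslashes collapsed to single ones for Python), and it seems to capture all nine fields, so I don't think the pattern itself is the problem:

import re

# input.regex from the CREATE TABLE above, with Hive's '\\' escapes reduced for Python.
LOG_RE = re.compile(r'([^ ]*) - - \[([^\]]*)\] "([^ ]*) ([^ ]*) ([^ ]*)" (\d*) (\d*) "([^"]*)" "([^"]*)"')

# Made-up sample line, only to sanity-check the pattern.
line = '127.0.0.1 - - [10/Oct/2015:13:55:36 -0700] "GET /index.html HTTP/1.1" 200 2326 "-" "Mozilla/5.0"'

m = LOG_RE.match(line)
print(m.groups())
# ('127.0.0.1', '10/Oct/2015:13:55:36 -0700', 'GET', '/index.html',
#  'HTTP/1.1', '200', '2326', '-', 'Mozilla/5.0')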