Member since: 05-30-2018
Posts: 1322
Kudos Received: 715
Solutions: 148
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4131 | 08-20-2018 08:26 PM |
| | 2004 | 08-15-2018 01:59 PM |
| | 2431 | 08-13-2018 02:20 PM |
| | 4232 | 07-23-2018 04:37 PM |
| | 5115 | 07-19-2018 12:52 PM |
06-20-2017
04:45 PM
I am getting the error about specifying a connection manager even though I have added the JAR file at /usr/hdp/current/sqoop_client/lib. Can anyone help? I used the same command as given above. Error:
WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
17/06/20 11:28:48 INFO manager.SqlManager: Using default fetchSize of 1000
17/06/20 11:28:48 INFO tool.CodeGenTool: Beginning code generation
17/06/20 11:28:48 ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.RuntimeException: Could not load db driver class: com.microsoft.jdbc.sqlserver.SQLServerDriver
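A hedged sketch of what the warning is asking for: pass --connection-manager explicitly alongside --driver. The hostname, database, table, and credentials below are placeholders, and note that the modern Microsoft driver class is com.microsoft.sqlserver.jdbc.SQLServerDriver, which differs from the legacy class name in the error above.

```shell
# Sketch only — placeholder connection details; verify the driver JAR in
# /usr/hdp/current/sqoop-client/lib actually contains the class named here.
sqoop import \
  --connect 'jdbc:sqlserver://dbhost:1433;databaseName=mydb' \
  --driver com.microsoft.sqlserver.jdbc.SQLServerDriver \
  --connection-manager org.apache.sqoop.manager.GenericJdbcManager \
  --table my_table \
  --username sqoop_user -P
```

This is a CLI fragment for a live cluster, not a runnable sample; the point is only the pairing of --driver with an explicit --connection-manager.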
02-15-2017
03:26 PM
@Angelo Alexander Please refer to the following doc. You can also download the MySQL driver JAR from the MySQL website and place it in /usr/hdp/current/sqoop-client/lib: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_data-movement-and-integration/content/apache_sqoop_connectors.html
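Concretely, the driver placement can be sketched as follows (the JAR filename is a placeholder; substitute the connector version you downloaded):

```shell
# Placeholder filename — use the connector JAR you downloaded from MySQL.
cp mysql-connector-java-5.1.40-bin.jar /usr/hdp/current/sqoop-client/lib/
```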
02-11-2017
02:49 AM
1 Kudo
OK, I found what I was doing wrong. The PostHTTP processor has a Compression Level property. If you set this value to > 0, it will compress the content as gzip, so there is no reason to compress prior to using PostHTTP if you are using gzip.
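To illustrate the point above: a Compression Level > 0 means the processor gzips the payload itself, so compressing first just wraps gzip inside gzip. A quick round-trip sketch (the payload string is made up):

```shell
# gzip -6 stands in for PostHTTP's Compression Level of 6; gunzip stands in
# for the receiving side. Pre-compressing would make this a double gzip.
payload='example flowfile content'
roundtrip=$(printf '%s' "$payload" | gzip -6 | gunzip)
```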
02-10-2017
06:59 PM
2 Kudos
The Compression Level property doesn't look like it supports expression language. The docs should say which properties do.
02-01-2017
05:15 PM
1 Kudo
@Sunile Manjee Please have a look at the following HCC post https://community.hortonworks.com/questions/77731/is-there-a-way-to-allow-both-sso-and-ldap-authenti.html
01-30-2017
04:50 PM
The Ambari product docs define the database requirements at http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/database_requirements.html. Ambari installs PostgreSQL as the default (since Ambari v2). http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/setup_options.html describes how to specify the database type and location. Thanks for posting additional, specific information for AWS EC2.
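As a sketch, pointing Ambari at an existing MySQL instance looks roughly like this. The --jdbc-db and --jdbc-driver options are real ambari-server flags, but the JAR path is an assumption and the database choice itself is made during interactive setup:

```shell
# Register the JDBC driver with Ambari (driver path is an assumption):
ambari-server setup --jdbc-db=mysql \
  --jdbc-driver=/usr/share/java/mysql-connector-java.jar
# Then re-run setup and choose the advanced database configuration
# option to select MySQL instead of the embedded PostgreSQL:
ambari-server setup
```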
09-09-2017
02:46 AM
I fixed this issue on my AWS instance by opening port 443 in the inbound and outbound security group rules.
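With the AWS CLI, the inbound half of that change can be sketched as follows (the security group ID and CIDR are placeholders; opening 443 to 0.0.0.0/0 is for testing only and should be narrowed in production):

```shell
# Placeholder group ID and CIDR — substitute your own values.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
```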
01-27-2017
07:44 PM
Thank you all for the responses. Great stuff. I was able to parse the NiFi user log as suggested and found that my cert had the wrong user. I am getting a proxy error now; I will open another post. Thank you again.
01-25-2017
05:03 PM
@Michael Rivera If this has answered your question, please close it out by accepting the answer. Thank you.
01-24-2017
03:29 PM
1 Kudo
@Adda Fuentes Since you started your new node with a configured authorizers.xml file pointing at a legacy authorized-users.xml file, the users.xml and authorizations.xml files in NiFi 1.1 were generated from that rather than inherited from your already running cluster. Clear out that setting in your new node's authorizers.xml file, remove the users.xml and authorizations.xml files, remove the flow.xml.gz file, and restart the new node. It should then successfully obtain these files from your existing cluster.

You will need to do one additional step once the new node has joined your cluster. Since the original cluster's authorizations and users will not include this new node yet, you will need to access the cluster's UI from one of the original cluster nodes using an admin account, add the new node's DN as a user, and then grant that new node the same access policies your existing nodes have. At a minimum, make sure the new node is granted the "Proxy user requests" access policy. If you do not do this, the following issues could occur:

1. You will not be able to access the cluster's UI via the newly added node (you will get an untrusted proxy message).
2. You will still be able to access the UI via the other nodes as long as NiFi does not switch the cluster coordinator to your newly added node. You cannot restrict NiFi from picking any node in your cluster to serve this role.

Matt
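The file-cleanup steps above can be sketched as shell commands. The paths assume a default-style install under NIFI_HOME (a guess — adjust to your environment); authorizers.xml is edited by hand to remove the legacy reference, not deleted:

```shell
# Remove the locally generated state so the new node inherits users.xml,
# authorizations.xml, and flow.xml.gz from the cluster on restart.
NIFI_HOME="${NIFI_HOME:-/opt/nifi}"   # install path is an assumption
rm -f "${NIFI_HOME}/conf/users.xml" \
      "${NIFI_HOME}/conf/authorizations.xml" \
      "${NIFI_HOME}/conf/flow.xml.gz"
# Also clear the legacy authorized-users.xml setting in
# ${NIFI_HOME}/conf/authorizers.xml by hand, then restart the node:
#   ${NIFI_HOME}/bin/nifi.sh restart
```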