Member since: 07-17-2017
Posts: 43
Kudos Received: 6
Solutions: 8
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1380 | 03-24-2019 05:54 PM |
 | 1701 | 03-16-2019 04:51 PM |
 | 1701 | 03-16-2019 04:15 AM |
 | 573 | 08-04-2018 12:44 PM |
 | 990 | 07-23-2018 01:35 PM |
05-07-2020
08:43 AM
Here is a very good explanation of Data Analytics Studio (DAS), which replaces Hue and the Hive View in HDP 3.0: https://hadoopcdp.com/data-analytics-studio-das-replace-of-hue-hive-views-in-cdp/
05-07-2020
06:35 AM
Hi, I did restart NiFi, but the problem still persists. The next release of NiFi will add a new DBCP connection pool, the Hadoop DBCP Connection Pool: https://issues.apache.org/jira/browse/NIFI-7257. It will, hopefully, solve the issue. For my part, I implemented a specific connector which modifies the classpath: https://github.com/dams666/nifi-dbcp-connectionpool

In short:

public static final PropertyDescriptor DB_DRIVER_LOCATION = new PropertyDescriptor.Builder()
    .name("database-driver-locations")
    .displayName("Database Driver Location(s)")
    .description("Comma-separated list of files/folders and/or URLs containing the driver JAR and its dependencies (if any). For example '/var/tmp/mariadb-java-client-1.1.7.jar'")
    .defaultValue(null)
    .required(false)
    .addValidator(StandardValidators.createListValidator(true, true, StandardValidators.createURLorFileValidator()))
    .expressionLanguageSupported(ExpressionLanguageScope.VARIABLE_REGISTRY)
    .dynamicallyModifiesClasspath(true)
    .build();

...

dataSource = new BasicDataSource();
dataSource.setDriverClassName(drv);
dataSource.setDriverClassLoader(this.getClass().getClassLoader());

But I still get the same error message:

PutSQL[id=94d192a9-fd1d-3c59-99be-d848f8902968] Failed to process session due to java.sql.SQLException: Cannot create PoolableConnectionFactory (ERROR 103 (08004): Unable to establish connection.): org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot create PoolableConnectionFactory (ERROR 103 (08004): Unable to establish connection.)

My setup:
Database Connection URL: jdbc:phoenix:zk4-xxx.ax.internal.cloudapp.net,zk5-xxx.ax.internal.cloudapp.net,zk6-xxx.ax.internal.cloudapp.net:2181:/hbase-unsecure
Database Driver Class Name: org.apache.phoenix.jdbc.PhoenixDriver

Damien
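Outside NiFi, the same classpath idea can be exercised with plain DBCP; below is a minimal sketch, assuming a local Phoenix client JAR path and a ZooKeeper quorum that are placeholders, not my real setup:

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.sql.Connection;
import org.apache.commons.dbcp2.BasicDataSource;

public class PhoenixPoolSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder path: wherever the Phoenix thick-client JAR actually lives.
        URL driverJar = new File("/var/tmp/phoenix-client.jar").toURI().toURL();
        // Load the driver from a dedicated class loader, mirroring what
        // dynamicallyModifiesClasspath(true) does inside the NiFi controller service.
        URLClassLoader driverLoader =
                new URLClassLoader(new URL[] { driverJar }, PhoenixPoolSketch.class.getClassLoader());

        BasicDataSource dataSource = new BasicDataSource();
        dataSource.setDriverClassName("org.apache.phoenix.jdbc.PhoenixDriver");
        dataSource.setDriverClassLoader(driverLoader);
        // Placeholder quorum; same URL shape as the connection URL above.
        dataSource.setUrl("jdbc:phoenix:zk-host1,zk-host2,zk-host3:2181:/hbase-unsecure");

        try (Connection conn = dataSource.getConnection()) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}

If a standalone test like this connects but the processor still fails, that points at the NiFi-side classpath rather than the Phoenix URL.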
02-10-2020
12:32 AM
Hello Rory, I would like to know if you could solve this problem. I have a similar issue: I launch a count simultaneously in several threads using Hive and the Cloudera JDBC driver, and I get "[Cloudera][JDBC](10360) Column name not found: column.". If I launch every thread one by one, the process works fine.

Caused by: java.sql.SQLException: [Cloudera][JDBC](10360) Column name not found: column.
    at com.cloudera.hiveserver2.exceptions.ExceptionConverter.toSQLException(Unknown Source)
    at com.cloudera.hiveserver2.jdbc.common.SForwardResultSet.findColumn(Unknown Source)
    at com.cloudera.hiveserver2.jdbc.common.SForwardResultSet.getObject(Unknown Source)
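For reference, here is a minimal sketch of the pattern I mean, assuming each thread opens its own Connection (the JDBC URL, credentials, and table names are placeholders); as far as I know, JDBC connections and result sets are not guaranteed to be thread-safe, so sharing one across threads may be related:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelCountSketch {
    // Placeholder HiveServer2 URL.
    private static final String URL = "jdbc:hive2://hiveserver:10000/default";

    public static void main(String[] args) throws Exception {
        String[] tables = { "table_a", "table_b", "table_c" }; // placeholder table names
        ExecutorService pool = Executors.newFixedThreadPool(tables.length);
        for (String table : tables) {
            pool.submit(() -> {
                // Each thread gets its own Connection, Statement, and ResultSet.
                try (Connection conn = DriverManager.getConnection(URL, "user", "");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT COUNT(*) AS cnt FROM " + table)) {
                    if (rs.next()) {
                        // Reading by position (1) avoids the findColumn lookup that
                        // fails in the stack trace above.
                        System.out.println(table + ": " + rs.getLong(1));
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
        }
        pool.shutdown();
    }
}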
03-29-2019
03:43 PM
So it turns out the patch for this hasn't been merged into HDF 3.4.0 yet, and despite Ambari enabling SSO for SAM, it's broken. See https://github.com/hortonworks/streamline/issues/1330
03-24-2019
05:54 PM
Turns out DAS Lite was trying to dump and failing because it had been shut down for too long.
03-16-2019
04:51 PM
And finally, typing out the answer for the fourth time since I keep getting logged out: Ambari is setting rm_security_opts in yarn-env.sh to include yarn_jaas.conf. This is incorrect and breaks the yarn app commands. Commenting out that section and restarting YARN makes everything work correctly.
03-17-2019
04:18 AM
I submitted KNOX-1828 for this issue and have created a pull request for a patch that appears to work.
07-23-2018
01:47 PM
I'm pretty sure the parameter "orc.compress" doesn't apply to tables "stored as textfile". In the error message above, it's clear Hive detected Snappy and then for some reason ran out of memory. How big is the Snappy file, and how much memory is allocated on your cluster for YARN?
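For what it's worth, a quick sketch of where "orc.compress" actually takes effect, submitted over JDBC (the HiveServer2 URL and table names below are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class OrcCompressSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder HiveServer2 URL.
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://hiveserver:10000/default", "user", "");
             Statement stmt = conn.createStatement()) {
            // "orc.compress" is honored here because the table is STORED AS ORC.
            stmt.execute("CREATE TABLE demo_orc (id INT, payload STRING) "
                    + "STORED AS ORC TBLPROPERTIES ('orc.compress'='SNAPPY')");
            // On a textfile table the property has no effect; Hive works out how to
            // decompress text files from the file codec, not from this property.
            stmt.execute("CREATE TABLE demo_text (id INT, payload STRING) STORED AS TEXTFILE");
        }
    }
}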
11-17-2018
03:28 AM
@Shawn Weeks "I have a ticket open for this but no one at Hortonworks has ever seen it before except for this post." I do not see anything relevant from you in the JIRA system: https://issues.apache.org/jira/browse/AMBARI-14714?jql=project%20%3D%20AMBARI%20AND%20status%20%3D%20Open%20AND%20text%20~%20%22ipa%22. Where is the open ticket?
10-03-2017
07:01 PM
@Shawn Weeks There is no list maintained for those properties. The values are dynamically determined from the configs. Every service provides certain configs, and its "params_linux.py" reads those parameter values. These scripts can be found inside the following directory:

/var/lib/ambari-server/resources/common-services/$SERVICE_NAME/x.x.x.x/package/scripts

Example:

# find /var/lib/ambari-server/resources/common-services -name "params_linux.py" | xargs grep -i "hive_server_host"
/var/lib/ambari-server/resources/common-services/KNOX/0.5.0.2.2/package/scripts/params_linux.py:hive_server_hosts = default("/clusterHostInfo/hive_server_host", None)
/var/lib/ambari-server/resources/common-services/KNOX/0.5.0.2.2/package/scripts/params_linux.py:if type(hive_server_hosts) is list:
/var/lib/ambari-server/resources/common-services/KNOX/0.5.0.2.2/package/scripts/params_linux.py:  hive_server_host = hive_server_hosts[0]
/var/lib/ambari-server/resources/common-services/KNOX/0.5.0.2.2/package/scripts/params_linux.py:  hive_server_host = hive_server_hosts
/var/lib/ambari-server/resources/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py:hive_server_host = default("/clusterHostInfo/hive_server_host", [])
/var/lib/ambari-server/resources/common-services/HDFS/2.1.0.2.0/package/scripts/params_linux.py:has_hive_server_host = not len(hive_server_host) == 0
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py:hive_server_hosts = default("/clusterHostInfo/hive_server_host", [])
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py:hive_server_host = hive_server_hosts[0] if len(hive_server_hosts) > 0 else None
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py:hive_url = format("jdbc:hive2://{hive_server_host}:{hive_server_port}")
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py:elif status_params.role == "HIVE_SERVER" and hive_server_hosts is not None and hostname in hive_server_host:
/var/lib/ambari-server/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py:if len(hive_server_hosts) == 0 and len(hive_server_interactive_hosts) > 0:

Example:

hive_http_port = default('/configurations/hive-site/hive.server2.thrift.http.port', "10001")
hive_http_path = default('/configurations/hive-site/hive.server2.thrift.http.path', "cliservice")
hive_server_hosts = default("/clusterHostInfo/hive_server_host", None)
10-05-2017
07:10 PM
Depending upon what you're using for a KDC and how the developers are authenticating locally, you may be able to use a combination of cross-realm trust and auth-to-local mapping to map authenticated users from the developers' domain to local users with permissions. For example, if you are using Active Directory as the KDC, and cross-domain trusts exist, you need to specify the correct KDC to use for SPNEGO authentication to your specific hosts (as in this example: https://blog.godatadriven.com/cross-realm-trust-kerberos.html), but you can then get an authenticated user passed through to HDFS's auth-to-local mapping. From there, you can use auth-to-local rules to map the incoming user to the correct local user.
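If it helps, auth-to-local rules can be sanity-checked off-cluster with Hadoop's KerberosName class; a small sketch, where the realm and rule are made-up examples to adapt to your environment:

import org.apache.hadoop.security.authentication.util.KerberosName;

public class AuthToLocalSketch {
    public static void main(String[] args) throws Exception {
        // Example rule (made up): map any principal from the trusted developer
        // realm DEV.EXAMPLE.COM to its local short name, then fall back to DEFAULT.
        KerberosName.setRules(
                "RULE:[1:$1@$0](.*@DEV\\.EXAMPLE\\.COM)s/@.*//\n" +
                "DEFAULT");
        // Should print "alice" if the rule matches as intended.
        System.out.println(new KerberosName("alice@DEV.EXAMPLE.COM").getShortName());
    }
}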
08-01-2019
06:40 AM
Hi Sarah, can you please publish the document with the updated Service users list? Thanks, Raj
08-12-2017
01:00 PM
1 Kudo
It turns out this was caused by updating OpenJDK while NiFi was running. I didn't notice that one of the admins had run updates earlier, and a restart of NiFi made the issue go away.
10-17-2017
02:19 PM
@Shawn Weeks I have found the solution. The issue was with the principal that handles permission validation. Thanks for your help.