Member since: 06-16-2021
Posts: 18
Kudos Received: 0
Solutions: 0
02-28-2023
08:53 PM
Hello @ighack, thanks for using the Cloudera Community. In such cases, review the stdout and stderr files within the directory shared in the first line of the screenshot (ending with "8464-hbase-MASTER"). These two files offer additional detail on the JVM startup. Note that the role log is useful if the role was terminated by an issue specific to HBase, while stdout and stderr are useful when OS/JVM concerns cause role startup issues. Regards, Smarak
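As a sketch of where to look (assuming the Cloudera Manager agent's usual process-directory layout; the "8464-hbase-MASTER" directory name is taken from the screenshot mentioned above):

```shell
# Cloudera Manager agent process directories usually live under
# /var/run/cloudera-scm-agent/process; the "8464-hbase-MASTER" name
# comes from the screenshot referenced in the reply.
PROC_DIR=/var/run/cloudera-scm-agent/process/8464-hbase-MASTER

# The role's stdout and stderr are written under logs/ in that directory.
echo "$PROC_DIR/logs/stdout.log"
echo "$PROC_DIR/logs/stderr.log"

# Show the tail of each file if the directory exists on this host.
if [ -d "$PROC_DIR" ]; then
    tail -n 100 "$PROC_DIR/logs/stdout.log" "$PROC_DIR/logs/stderr.log"
fi
```

On a host that runs the HBase Master role, the `tail` at the end prints the last lines of both files; elsewhere the script only prints the expected paths.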
11-08-2021
05:29 AM
@ighack Did you resolve your issue? If so, please mark the appropriate reply as the solution; that will make it easier for others to find the answer in the future.
09-10-2021
07:07 AM
Hi @ighack, if you mean the current RegionServer heap is 50 MB or 80 MB, that is usually not enough; 16 GB to 31 GB is a good range for most cases. If you genuinely don't have enough resources on the RegionServer nodes, at least keep the heap at the 4 GB default, and if you still see many long GC pauses, increase it.

Refer to the link below to install Phoenix and validate the installation:
https://docs.cloudera.com/documentation/enterprise/latest/topics/phoenix_installation.html#concept_ofv_k4n_c3b

If you installed it following those steps, locate the JDBC jar on any CDH node:
find / -name "phoenix-*client.jar"
and follow this guide:
https://docs.cloudera.com/runtime/7.2.10/phoenix-access-data/topics/phoenix-orchestrating-sql.html

Check that your JDBC URL syntax looks like:
jdbc:phoenix:zookeeper_quorum:zookeeper_port:zookeeper_hbase_path

Regards, Will
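The URL pieces above can be assembled as in this minimal sketch; the quorum host "zk1.example.com" and the "/hbase" path are hypothetical placeholders, not values from any cluster mentioned here:

```java
// Minimal sketch: build a Phoenix JDBC URL of the form
// jdbc:phoenix:zookeeper_quorum:zookeeper_port:zookeeper_hbase_path.
// "zk1.example.com" and "/hbase" are hypothetical placeholder values.
public class PhoenixUrl {
    static String phoenixUrl(String quorum, int port, String hbasePath) {
        return "jdbc:phoenix:" + quorum + ":" + port + ":" + hbasePath;
    }

    public static void main(String[] args) {
        // prints jdbc:phoenix:zk1.example.com:2181:/hbase
        System.out.println(phoenixUrl("zk1.example.com", 2181, "/hbase"));
    }
}
```

The resulting string is what you would pass to DriverManager.getConnection(...) with the phoenix-*client.jar found above on the classpath.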
07-21-2021
08:36 PM
My OpenLDAP allows anonymous access. After removing hadoop.security.group.mapping.ldap.bind.user and hadoop.security.group.mapping.ldap.bind.password, I no longer get the WARN.
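For reference, these are the two core-site.xml properties being removed; the bind DN and password values below are hypothetical examples, shown only to illustrate what an explicit (non-anonymous) bind configuration looks like:

```xml
<!-- Removing these two properties lets Hadoop's LDAP group mapping
     bind anonymously. The DN and password values are hypothetical. -->
<property>
  <name>hadoop.security.group.mapping.ldap.bind.user</name>
  <value>cn=admin,dc=example,dc=com</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.bind.password</name>
  <value>secret</value>
</property>
```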
06-27-2021
11:08 PM
// Imports inferred from the calls used below; JSONObject is assumed to be
// fastjson's, based on the static JSONObject.toJSON usage.
import java.security.PrivilegedAction;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.List;
import com.alibaba.fastjson.JSONObject;
import org.apache.hadoop.security.UserGroupInformation;

// Load the Impala JDBC driver and log in from the Kerberos keytab.
Class.forName("com.cloudera.impala.jdbc41.Driver");
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
UserGroupInformation loginUser = UserGroupInformation.getLoginUser();

// Run the JDBC work as the Kerberos login user.
loginUser.doAs((PrivilegedAction<Void>) () -> {
    try (Connection connection = DriverManager.getConnection(
            "jdbc:impala://tidb4ser:21051;AuthMech=1;KrbRealm=JOIN.COM;KrbHostFQDN=tidb4ser;KrbServiceName=impala;DelegationUID=read_hive");
         Statement statement = connection.createStatement()) {
        statement.execute("use hivetest");
        ResultSet resultSet = statement.executeQuery(
            "SELECT id, name, year FROM hivetest.chinese_par t WHERE t.city='重庆'");
        List<?> dataList = resultSetToList(resultSet);
        System.out.println(JSONObject.toJSON(dataList));
    } catch (Exception e) {
        e.printStackTrace();
    }
    return null;
});

Using the Impala JDBC41 driver this works, but I want to use Hive JDBC instead.
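For the Hive JDBC route, a minimal sketch of building the Kerberized HiveServer2 URL; note that port 10000 is only the HiveServer2 default and the hive/tidb4ser@JOIN.COM service principal is an assumption based on the realm in the post, not a value confirmed for this cluster:

```java
// Minimal sketch: build a Kerberized HiveServer2 JDBC URL of the form
// jdbc:hive2://host:port/db;principal=<HiveServer2 service principal>.
// Port 10000 and the principal value are assumptions for illustration.
public class HiveJdbcUrl {
    static String hiveUrl(String host, int port, String db, String principal) {
        return "jdbc:hive2://" + host + ":" + port + "/" + db
                + ";principal=" + principal;
    }

    public static void main(String[] args) {
        System.out.println(hiveUrl("tidb4ser", 10000, "hivetest",
                "hive/tidb4ser@JOIN.COM"));
    }
}
```

With such a URL, you would load org.apache.hive.jdbc.HiveDriver instead of the Impala driver and open the connection inside the same loginUser.doAs block as above.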
06-20-2021
11:21 PM
@ighack You can try the workaround below. Do either of the following:
1. Use yarn-client mode for the SparkAction, or
2. Keep yarn-cluster mode, but place the submitting user's keytab in a secure HDFS location and rewrite the workflow as shown:
<file>hdfs://xxx/yyy.keytab#zzz.keytab</file>
<spark-opts>--keytab zzz.keytab --principal zzz@YOUR_REALM</spark-opts>
06-17-2021
01:56 AM
After copying the config with cp /etc/hadoop/conf/hdfs-site.xml /opt/cloudera/parcels/CDH/etc/oozie/conf.dist/hadoop-conf/, I still get the same error.