Member since: 10-11-2022
Posts: 121
Kudos Received: 20
Solutions: 10

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 691 | 11-07-2024 10:00 PM |
|  | 1208 | 05-23-2024 11:44 PM |
|  | 1044 | 05-19-2024 11:32 PM |
|  | 5363 | 05-18-2024 11:26 PM |
|  | 2062 | 05-18-2024 12:02 AM |
05-12-2024
01:37 AM
@ChineduLB Impala doesn't directly support nested SELECT statements within the WHEN clause of a CASE expression. However, you can achieve similar logic by using subqueries within the WHEN clause to evaluate conditions based on data retrieved from other tables:

SELECT CASE
         WHEN (SELECT COUNT(*) FROM table1) > 0 THEN (SELECT * FROM table1)
         WHEN (SELECT COUNT(*) FROM table2) > 0 AND (SELECT COUNT(*) FROM table3) > 0 THEN (SELECT * FROM table3)
         ELSE NULL
       END AS result_table;

This query checks whether table1 has any rows. If yes, it selects all columns from table1. Otherwise, it checks whether both table2 and table3 have rows; if both have data, it selects all columns from table3. If none of the conditions are met, it returns NULL.
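A CASE expression yields a single scalar value per row, so it cannot literally return every column of a table, and Impala has historically limited subqueries to the FROM and WHERE clauses. A minimal sketch of a pattern that stays within those limits, computing the counts up front and returning a label you can use to drive a follow-up query (table names are the illustrative ones from above):

```sql
-- Sketch only: precompute the row counts, then let CASE pick a label.
SELECT CASE
         WHEN c1.cnt > 0                THEN 'table1'
         WHEN c2.cnt > 0 AND c3.cnt > 0 THEN 'table3'
         ELSE NULL
       END AS source_table
FROM       (SELECT COUNT(*) AS cnt FROM table1) c1
CROSS JOIN (SELECT COUNT(*) AS cnt FROM table2) c2
CROSS JOIN (SELECT COUNT(*) AS cnt FROM table3) c3;
```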
05-12-2024
01:32 AM
1 Kudo
@Marks_08
1. Verify whether any firewalls are blocking incoming connections on ports 10000 (HiveServer2 Thrift) and 10002 (HiveServer2 web UI). You can use tools like netstat -atup or lsof -i :10000 to check whether any process is listening on these ports (see the sketch below). If a firewall is restricting access, configure it to allow connections on these ports from the machine where you're running Beeline.
2. Double-check the HiveServer2 configuration files (hive-site.xml and hive-env.sh) in Cloudera Manager. Ensure that the hive.server2.thrift.port property is set to 10000 in hive-site.xml, verify that the HIVESERVER2_THRIFT_BIND_HOST environment variable (if set) in hive-env.sh allows connections from your Beeline machine, and make sure the HiveServer2 service has the necessary permissions to bind to these ports.
3. Connect with the HiveServer2 service principal specified explicitly, e.g. beeline -u "jdbc:hive2://<HOST>:10000/;principal=hive/<HOST>@<REALM>".
4. Try restarting the Hive and HiveServer2 services in Cloudera Manager. This can sometimes resolve conflicts or configuration issues.
5. Check the HiveServer2 log files (usually /var/log/hive-server2/hive-server2.log) for any error messages that might indicate why it's not listening on the expected ports.
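A quick sketch of steps 1 and 3 from the command line; the host, realm, and database are placeholders to replace with your own values:

```bash
# Check whether anything is listening on the HiveServer2 ports (step 1).
sudo lsof -i :10000
sudo lsof -i :10002

# Connect with Beeline, naming the HiveServer2 service principal explicitly (step 3).
beeline -u "jdbc:hive2://hs2-host.example.com:10000/default;principal=hive/hs2-host.example.com@EXAMPLE.COM"
```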
05-10-2024
07:31 AM
1 Kudo
@snm1523 Check whether this doc helps: https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/upgrade-cdp/topics/ug_cdh_upgrade_cdp2cdp_post.html
05-08-2024
09:43 PM
1 Kudo
@Lorenzo The error message "identity[myaduser], groups[] does not have permission to access the requested resource" indicates that while Kerberos authentication is successful, your user myaduser lacks the necessary permissions to access the specific NiFi flow you're targeting in API call N.2. Note also that groups[] is empty, meaning NiFi is not resolving any group membership for myaduser; if your policies are group-based, check the user group provider configuration as well.
1. Verify user permissions in NiFi: in the NiFi UI, navigate to the specific flow or process group you're trying to modify and open its access policies. Ensure "myaduser" has the appropriate read/write permissions, or add the user to a group that has them.
2. Check Ranger policies (if applicable): if you're using Apache Ranger for authorization in your Cloudera cluster, Ranger policies may be restricting access to the NiFi flow. Review the Ranger policies for NiFi resources and check whether any of them deny access to the flow or process group for "myaduser" or its groups.
3. Kerberos service principal configuration: double-check the Kerberos service principal configured for NiFi, and ensure the principal used for authentication has the necessary permissions in Ranger or in NiFi's own authorization policies.
4. Test with a more privileged user: try performing API call N.2 with a user that has known administrative privileges in NiFi. If the call succeeds, that confirms the issue lies with "myaduser" permissions. A quick way to inspect how NiFi sees your identity is sketched below.
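As a sanity check, you can ask NiFi how it resolves your identity and groups before digging into individual policies. A hedged sketch; the host, port, and user are illustrative, and it assumes curl is built with SPNEGO/GSS support:

```bash
# Obtain a Kerberos ticket as the user in question, then query NiFi's view of the caller.
kinit myaduser
curl --negotiate -u : -k "https://nifi-host.example.com:8443/nifi-api/flow/current-user"
# If the response shows an empty group list, group resolution (not authentication) is the gap.
```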
05-01-2024
10:28 PM
1 Kudo
@VenkataAvinash The error you're encountering (java.lang.RuntimeException: org.apache.storm.thrift.TApplicationException: Internal error processing submitTopologyWithOpts) indicates that there's an issue with submitting the Storm topology, but it doesn't directly point to the specific cause. Based on your configuration and the error message, it looks like a problem with the Kerberos authentication setup or configuration for the Storm Nimbus service.
- Review the Kerberos configuration: double-check the Kerberos configuration for Storm Nimbus and ensure that it matches the settings in your storm.yaml file. Verify that the Kerberos principal (hdfs/hari-cluster-test1-master0.avinash.ceje-5ray.a5.cloudera.site@AVINASH.CEJE-5RAY.A5.CLOUDERA.SITE) and keytab file (/root/hdfs.keytab) are correctly specified.
- Check keytab permissions: ensure that the keytab file /root/hdfs.keytab has the correct permissions and is readable by the Storm Nimbus service.
- Verify service principals: confirm that the principal above is correctly configured for the Storm Nimbus service and has the necessary permissions to access HDFS.
- Check the Nimbus logs: look in nimbus.log for any additional error messages or stack traces that might provide more insight into the issue.
- Check library compatibility: confirm that the versions of the Storm, HDFS, and Kerberos libraries on your cluster are compatible with each other. Refer to the documentation for each component for known compatibility issues.
- Try submitting a simpler topology without the HDFS bolt initially to see whether the basic Kerberos configuration works. This can help isolate the issue further.
- Use klist to verify that your user has successfully obtained a Kerberos ticket before submitting the topology (see the sketch below).
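To verify the ticket piece in isolation, a small sketch using the principal and keytab from your configuration:

```bash
# Obtain a ticket from the keytab referenced in storm.yaml, then confirm it is valid
# before running `storm jar` to submit the topology.
kinit -kt /root/hdfs.keytab \
  hdfs/hari-cluster-test1-master0.avinash.ceje-5ray.a5.cloudera.site@AVINASH.CEJE-5RAY.A5.CLOUDERA.SITE
klist
```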
05-01-2024
10:23 PM
1 Kudo
@wallacei Error context: sqlline-thin.py is configured to use Protobuf serialization for communication with PQS (the Phoenix Query Server). Protobuf relies on pre-defined class names to parse responses from the server, and the error message suggests that sqlline-thin.py is unable to find the class name for a specific response message from PQS.
- Check the PQS configuration: ensure PQS is configured to use Protobuf serialization as well. This might involve checking configuration files or options during PQS startup (see the snippet below).
- Verify library versions: make sure the versions of sqlline-thin.py and the Phoenix libraries (including PQS) are compatible; inconsistent versions can lead to class-name mismatches. Check the sqlline-thin.py documentation for version compatibility information.
- Consider sqlline.py (regular JDBC): since your sqlline.py script works with regular JDBC, the basic Phoenix connection is functional. You can use sqlline.py while troubleshooting the Protobuf issue with sqlline-thin.py.
- Alternative tools: if sqlline-thin.py continues to cause problems, explore alternative clients such as the Phoenix JDBC thin client or a GUI client like SQuirreL SQL.
- Double-check the connection URL in sqlline-thin.py and ensure it points to the correct PQS endpoint (http://localhost:8765 by default).
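If you need to pin the wire format on the server side, Phoenix Query Server exposes a serialization property in hbase-site.xml; this is a hedged sketch, so verify the property name against the Query Server documentation for your Phoenix version:

```xml
<!-- Assumed property name; confirm for your Phoenix release. PROTOBUF is the usual default,
     and client and server must agree on the format. -->
<property>
  <name>phoenix.queryserver.serialization</name>
  <value>PROTOBUF</value>
</property>
```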
05-01-2024
10:18 PM
1 Kudo
@VTHive Assuming you have a table named your_table with a column named condition, you can extract the variable names using SQL:

SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(condition, '=', 1), ' ', -1) AS variable_name
FROM your_table
WHERE condition LIKE '%=%'
UNION
SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(condition, ' in ', 1), ' ', -1) AS variable_name
FROM your_table
WHERE condition LIKE '% in %'
UNION
SELECT TRIM(SUBSTRING_INDEX(SUBSTRING_INDEX(condition, ' ne ', 1), ' ', -1)) AS variable_name
FROM your_table
WHERE condition LIKE '% ne %';

The query extracts the variable names from the conditions stored in the condition column. It handles conditions that use the =, in, and ne operators, with each branch splitting on its own operator and filtering to the rows that contain it. Adjust the table and column names to fit your actual schema.
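If the variable name is always the leftmost token of the condition string, a single pass is enough. A minimal sketch with made-up sample rows, assuming a Hive version that provides substring_index:

```sql
-- Hypothetical sample data; replace the CTE with your real table.
WITH your_table AS (
  SELECT 'country = US' AS condition
  UNION ALL SELECT 'region in (EU, APAC)' AS condition
  UNION ALL SELECT 'status ne closed' AS condition
)
SELECT DISTINCT TRIM(SUBSTRING_INDEX(condition, ' ', 1)) AS variable_name
FROM your_table;
-- Expected output: country, region, status
```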
05-01-2024
06:41 AM
@manishg Define environment variables in your Kubernetes Deployment or Pod configuration YAML file. Assign values to these environment variables to represent the properties from your bootstrap.conf file (e.g., JAVA_ARG_2, JAVA_ARG_3). These environment variables will be accessible within your containerized application. Use these environment variables to set properties in your application's bootstrap process. Adjust the names and values of the environment variables according to your specific requirements and configurations.
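A minimal sketch of what that looks like in a Deployment manifest; the image, variable names, and values are illustrative, and whether the container startup script actually consumes JAVA_ARG_* depends on your NiFi image or bootstrap templating:

```yaml
# Illustrative Deployment fragment: bootstrap.conf-style JVM arguments exposed as env vars.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nifi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nifi
  template:
    metadata:
      labels:
        app: nifi
    spec:
      containers:
        - name: nifi
          image: apache/nifi:latest
          env:
            - name: JAVA_ARG_2   # counterpart of java.arg.2 in bootstrap.conf
              value: "-Xms512m"
            - name: JAVA_ARG_3   # counterpart of java.arg.3 in bootstrap.conf
              value: "-Xmx512m"
```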
04-30-2024
02:11 AM
1 Kudo
@TreantProtector Use a dedicated temporary directory:
- Configure a temporary directory: set an environment variable in your Kubernetes deployment YAML specifying an alternative temporary directory with appropriate permissions for the NiFi user. You can leverage an emptyDir volume mounted specifically for temporary files.
- Update the NiFi configuration: modify the NiFi configuration (potentially within the bootstrap.conf file) to use the environment variable for the temporary directory. Consult the NiFi documentation for specific instructions on how to configure an alternate temporary directory.
A minimal sketch follows.
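The relevant pod-spec fragment, assuming you then point the JVM at the mount (for example via an extra bootstrap.conf entry such as java.arg.tmpdir=-Djava.io.tmpdir=/opt/nifi/tmp; the argument key and path are illustrative):

```yaml
# Illustrative fragment: an emptyDir volume gives the NiFi user a writable scratch area.
containers:
  - name: nifi
    image: apache/nifi:latest
    volumeMounts:
      - name: nifi-tmp
        mountPath: /opt/nifi/tmp   # illustrative temp-directory path
volumes:
  - name: nifi-tmp
    emptyDir: {}
```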
04-30-2024
02:04 AM
1 Kudo
@manishg java.arg.2 and java.arg.3 in bootstrap.conf are the traditional way of setting JVM arguments within the NiFi configuration file itself:
- java.arg.2=-Xms512m sets the initial heap size of the NiFi JVM to 512 megabytes.
- java.arg.3=-Xmx512m sets the maximum heap size of the NiFi JVM to 512 megabytes.
The NIFI_JVM_HEAP_INIT and NIFI_JVM_HEAP_MAX environment variables let you configure the JVM heap size externally, which is particularly useful when deploying NiFi in containerized environments. If both java.arg.2 / java.arg.3 and NIFI_JVM_HEAP_INIT / NIFI_JVM_HEAP_MAX are defined, the environment variables take precedence, so their values are the ones used to configure the JVM heap size.
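For completeness, the environment-variable form as it would appear in a container spec; the values are illustrative and mirror the 512 MB figures above:

```yaml
# Illustrative container env fragment: heap settings supplied externally,
# overriding java.arg.2 / java.arg.3 if both are present.
env:
  - name: NIFI_JVM_HEAP_INIT
    value: "512m"
  - name: NIFI_JVM_HEAP_MAX
    value: "512m"
```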