Member since: 03-06-2020
Posts: 406
Kudos Received: 56
Solutions: 36
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 910 | 11-21-2024 10:40 PM
 | 879 | 11-21-2024 10:12 PM
 | 2681 | 07-23-2024 10:52 PM
 | 2005 | 05-16-2024 12:27 AM
 | 6699 | 05-01-2024 04:50 AM
11-21-2024
10:29 PM
Hi @xiaohai

What is the error you are seeing? Can you use this delimiter?

impala-shell -B --output_delimiter='|' -q 'SELECT * FROM your_table'

Regards, Chethan YM
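If you capture the pipe-delimited `-B` output to a file, it can be read back with an ordinary CSV parser. A minimal sketch; the sample rows and column values below are made up, not real impala-shell output:

```python
import csv
import io

# A sample of what `impala-shell -B --output_delimiter='|'` might emit
# (the row values here are hypothetical).
raw = "1|alice|2024-01-01\n2|bob|2024-01-02\n"

# csv.reader handles the single-character delimiter directly.
rows = list(csv.reader(io.StringIO(raw), delimiter="|"))
print(rows)  # [['1', 'alice', '2024-01-01'], ['2', 'bob', '2024-01-02']]
```

Note that `-B` (delimited mode) strips the pretty-printed table borders, which is what makes the output machine-parsable in the first place.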
11-21-2024
10:21 PM
Hi @ken_zz

I think it is the known bug below; the fix version is NONE.

https://issues.apache.org/jira/browse/HIVE-19689

Regards, Chethan YM
11-21-2024
10:12 PM
1 Kudo
Hi @pravin_speaks

Can you export the below before running the sqoop command and see if it helps?

export HADOOP_CLIENT_OPTS="-Dsqoop.oracle.escaping.disabled=false -Djava.security.egd=file:///dev/../dev/urandom"

Regards, Chethan YM
11-15-2024
01:55 AM
1 Kudo
@luffy07

The given error message is generic when you use the JDBC driver to connect to Impala, and it does not indicate the specific cause. Verify that your Impala JDBC connection string is correct and that the host and port are reachable. Check that the Impala server you are trying to connect to is up and running fine, and paste the memory errors here so we can understand what you are seeing in the logs.

Also, you can append the below to the JDBC connection string and reproduce the issue; it will generate driver DEBUG logs and may give some more details about the issue:

LogLevel=6;LogPath=/tmp/jdbclog

And try to use the latest Impala JDBC driver that is available:

https://www.cloudera.com/downloads/connectors/impala/jdbc/2-6-34.html

Regards, Chethan YM
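For reference, this is roughly what the full connection string looks like once the logging options are appended. The host name and port here are hypothetical placeholders, not values from the original question:

```python
# Hypothetical Impala host/port; adjust to your cluster.
base_url = "jdbc:impala://impala-host.example.com:21050"

# LogLevel=6 is the driver's most verbose logging level;
# LogPath is where the driver writes its log files.
log_opts = "LogLevel=6;LogPath=/tmp/jdbclog"

url = f"{base_url};{log_opts}"
print(url)  # jdbc:impala://impala-host.example.com:21050;LogLevel=6;LogPath=/tmp/jdbclog
```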
08-06-2024
10:06 PM
Hi @Supernova

Can you try this?

SELECT a.currency AS currency,
       SUM(coalesce(a.ColA, 0) + coalesce(a.ColB, 0) + coalesce(a.ColC, 0) +
           coalesce(b.Col1, 0) + coalesce(b.Col2, 0) + coalesce(b.Col3, 0)) AS sales_check
FROM db.sales a
INNER JOIN db.OTHER_sales b
        ON a.currency = b.currency
WHERE a.DateField = '2024-06-30'
  AND b.DateField = '2024-06-30'
GROUP BY a.currency;

Regards, Chethan YM
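As a rough illustration of what the COALESCE(..., 0) calls are doing, here is the same NULL-as-zero summation in Python; the row values below are made up for the example:

```python
def coalesce0(x):
    """SQL COALESCE(x, 0): treat NULL (None) as 0."""
    return 0 if x is None else x

# One joined row for a single currency (hypothetical values).
row = {"ColA": 10, "ColB": None, "ColC": 5, "Col1": None, "Col2": 2, "Col3": 0}

# Without coalesce, the None values would make the sum fail;
# with it, they simply contribute 0.
sales_check = sum(coalesce0(v) for v in row.values())
print(sales_check)  # 17
```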
08-06-2024
10:04 PM
Hi @Marks_08

To insert into a managed table from an external table created with HBase structure in CDP, you need to ensure that Hive can properly connect to HBase. This typically involves making sure that the Hive service is configured to access HBase correctly. One common issue is that the necessary configuration files, such as hbase-site.xml, are not accessible to Hive, leading to connection issues. Here is what you can do to address this:

1. Copy hbase-site.xml to the Hive configuration directory. This file contains the configuration Hive needs to connect to HBase:

sudo cp /etc/hbase/conf/hbase-site.xml /etc/hive/conf/

2. Verify the HBase configuration. Ensure that hbase-site.xml contains the correct configuration and points to the correct HBase nodes. The key settings to check are hbase.zookeeper.quorum and hbase.zookeeper.property.clientPort; they should correctly point to the Zookeeper quorum and client port used by your HBase cluster.

3. Restart the Hive service. After copying the hbase-site.xml file, you might need to restart the Hive service to ensure it picks up the new configuration.

4. Check Hive and HBase connectivity. Make sure the Hive service can properly communicate with HBase by running a simple query that accesses HBase data through Hive.

Regards, Chethan YM
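To sanity-check which quorum Hive will actually see after the copy, the two settings can be read straight out of hbase-site.xml. A small sketch; the hostnames in the XML fragment are placeholders:

```python
import xml.etree.ElementTree as ET

# A minimal hbase-site.xml fragment (hostnames are hypothetical).
xml_text = """<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>"""

# Build a {name: value} map from the <property> elements.
conf = {p.findtext("name"): p.findtext("value")
        for p in ET.fromstring(xml_text).iter("property")}

print(conf["hbase.zookeeper.quorum"])               # zk1.example.com,zk2.example.com,zk3.example.com
print(conf["hbase.zookeeper.property.clientPort"])  # 2181
```

In practice you would parse /etc/hive/conf/hbase-site.xml with `ET.parse(path)` instead of the inline string.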
08-06-2024
10:00 PM
Hi @Supernova,

Can you try the below?

SELECT Currency,
       CASE
         WHEN COALESCE(SUM(ColA), 0) + COALESCE(SUM(ColB), 0) = COALESCE(SUM(ColC), 0) THEN 0
         ELSE COALESCE(SUM(ColA), 0) + COALESCE(SUM(ColB), 0) - COALESCE(SUM(ColC), 0)
       END AS sales_check
FROM Sales
GROUP BY Currency;

Regards, Chethan YM
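The CASE logic reduces to a simple rule: if ColA + ColB balances against ColC the check is 0, otherwise report the difference. A simplified Python sketch of that rule, using single per-currency totals instead of SQL aggregates (the inputs are made up):

```python
def coalesce0(x):
    """SQL COALESCE(x, 0): treat NULL (None) as 0."""
    return 0 if x is None else x

def sales_check(col_a, col_b, col_c):
    """0 when A + B balances against C, otherwise the difference."""
    total = coalesce0(col_a) + coalesce0(col_b)
    return 0 if total == coalesce0(col_c) else total - coalesce0(col_c)

print(sales_check(100, 50, 150))   # 0   (balanced)
print(sales_check(100, None, 90))  # 10  (NULL treated as 0, mismatch of 10)
```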
07-23-2024
10:56 PM
@rizalt

Yes, the Key Version Numbers (KVNO) of different principals can indeed be different. Each principal in Kerberos has its own KVNO, an identifier that increments each time that principal's key is changed.

Reference: https://web.mit.edu/kerberos/www/krb5-latest/doc/user/user_commands/kvno.html#:~:text=specified%20Kerberos%20principals

Regards, Chethan YM
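As a toy model of the idea (the principal names below are made up): each principal carries its own KVNO, and only the principal whose key changes gets its counter bumped, so the numbers naturally drift apart:

```python
# Per-principal key version numbers (hypothetical principals).
kvno = {"hdfs/host1@EXAMPLE.COM": 1, "hbase/host1@EXAMPLE.COM": 1}

def change_key(principal):
    """Rotating a key increments only that principal's KVNO."""
    kvno[principal] += 1

change_key("hdfs/host1@EXAMPLE.COM")
change_key("hdfs/host1@EXAMPLE.COM")

print(kvno)  # {'hdfs/host1@EXAMPLE.COM': 3, 'hbase/host1@EXAMPLE.COM': 1}
```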
07-23-2024
10:52 PM
1 Kudo
Hi @therealsrikanth

You can follow this if you do not need CM or Ambari.

Step 1: Install Hadoop

1. Download Hadoop: download the latest stable release from the Apache Hadoop website, then extract it:

tar -xzf hadoop-3.3.4.tar.gz
sudo mv hadoop-3.3.4 /usr/local/hadoop

2. Configure Hadoop environment variables: add the following lines to your .bashrc or .profile file:

export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

3. Edit the core configuration files in $HADOOP_CONF_DIR.

core-site.xml:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master-node:9000</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/hdfs/datanode</value>
  </property>
</configuration>

mapred-site.xml:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

yarn-site.xml:

<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master-node:8032</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

4. Format the NameNode:

hdfs namenode -format

5. Start Hadoop services:

start-dfs.sh
start-yarn.sh

Step 2: Install Zookeeper

1. Download and extract Zookeeper (https://downloads.apache.org/zookeeper):

tar -xzf apache-zookeeper-3.8.1-bin.tar.gz
sudo mv apache-zookeeper-3.8.1-bin /usr/local/zookeeper

2. Configure Zookeeper: create a configuration file at /usr/local/zookeeper/conf/zoo.cfg:

tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=master-node1:2888:3888
server.2=master-node2:2888:3888
server.3=slave-node1:2888:3888

3. Start Zookeeper:

/usr/local/zookeeper/bin/zkServer.sh start

Step 3: Install HBase

1. Download HBase: download the latest stable release from the Apache HBase website, then extract it:

tar -xzf hbase-2.4.16-bin.tar.gz
sudo mv hbase-2.4.16 /usr/local/hbase

2. Configure HBase in hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master-node:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master-node1,master-node2,slave-node1</value>
  </property>
</configuration>

3. Start HBase:

/usr/local/hbase/bin/start-hbase.sh

Step 4: Verify the installation. Check the services using the web interfaces:

NameNode: http://master-node:9870
ResourceManager: http://master-node:8088
HBase: http://master-node:16010

Additional resources: Apache Hadoop Documentation, Apache Zookeeper Documentation, Apache HBase Documentation.

Regards, Chethan YM
06-11-2024
10:29 PM
1 Kudo
Hi @rizalt

The error is because you have not provided the keytab path; the command should look like this:

klist -k example.keytab

To create the keytab you can follow these steps:

$ ktutil
ktutil: addent -password -p myusername@FEDORAPROJECT.ORG -k 42 -f
Password for myusername@FEDORAPROJECT.ORG:
ktutil: wkt /tmp/kt/fedora.keytab
ktutil: q

Then:

kinit -kt /tmp/kt/fedora.keytab myusername@FEDORAPROJECT.ORG

Note: Replace the username and REALM as per your cluster configuration.

Regards, Chethan YM
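If you later want to inspect the KVNOs stored in a keytab programmatically, the `klist -k` listing can be parsed. A sketch under the assumption that the output follows the usual MIT-style layout shown below (the exact format varies between Kerberos implementations):

```python
# Hypothetical `klist -k` output for the keytab created above.
listing = """Keytab name: FILE:/tmp/kt/fedora.keytab
KVNO Principal
---- ----------------------------
  42 myusername@FEDORAPROJECT.ORG"""

entries = []
for line in listing.splitlines():
    parts = line.split()
    # Data rows are "<kvno> <principal>"; skip the header lines.
    if len(parts) == 2 and parts[0].isdigit():
        entries.append((int(parts[0]), parts[1]))

print(entries)  # [(42, 'myusername@FEDORAPROJECT.ORG')]
```

In a real script you would feed it the output of `subprocess.run(["klist", "-k", path], capture_output=True, text=True).stdout`.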