Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 857 | 06-04-2025 11:36 PM |
| | 1435 | 03-23-2025 05:23 AM |
| | 718 | 03-17-2025 10:18 AM |
| | 2582 | 03-05-2025 01:34 PM |
| | 1695 | 03-03-2025 01:09 PM |
03-18-2018
06:06 PM
@Michael Bronson Visually, I can't tell what stage of the upgrade you'd reached. What is the version of the current HDP cluster? I could try to reproduce it if I have a similar version. Have you tried restarting all the ambari-agents? Maybe an agent lost its connection while polling, because that is an odd message if your components are running and not in maintenance mode 🙂
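If you want to try the agent restart, a minimal sketch, assuming a standard ambari-agent install, run on each cluster host:

# Restart the agent and confirm it registers a heartbeat again
ambari-agent restart
ambari-agent status
# Check the agent log for registration or heartbeat errors
tail -n 50 /var/log/ambari-agent/ambari-agent.log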
03-18-2018
06:01 PM
@Steve Hofstra Have a look at the notes in Learning the Ropes of the HDP Sandbox. That should help.
03-18-2018
05:42 PM
@Michael Bronson Can you share an Ambari UI screenshot of the cluster to be upgraded? Those are basic prerequisites.
03-18-2018
05:14 PM
@Artur Bukowski Great, we are almost there. 🙂 You can set HADOOP_CLASSPATH on the system that runs the Oozie server, so sending it with every request is not required. Otherwise, we can set it in the XML. In the file oozie-site.xml set:

<property>
  <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
  <value>*=/home/user/oozie/etc/hadoop</value>
</property>

where /home/user/oozie/etc/hadoop is the absolute path where the Hadoop configuration files are located. Please let me know whether it worked.
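If you go the environment-variable route instead, a minimal sketch, assuming the job is submitted from a shell with hcat on the PATH (the Oozie URL and the job.properties name are just examples):

# Export the HCatalog classpath once per session, then submit the job
export HADOOP_CLASSPATH=$(hcat -classpath)
oozie job -oozie http://localhost:11000/oozie -config job.properties -run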
03-18-2018
04:37 PM
@Sandeep Ranjan Could you elaborate? The HDP sandboxes are here. You have a choice between HDP for VirtualBox (Oracle), VMware, or Docker on Linux/Mac/Windows. There is no installation as such: on Windows, for example, you first download Oracle VirtualBox, then download the HDP sandbox image that matches your virtualization platform. You then just import the image into it, but make sure you have at least 8 GB of RAM allocated to the sandbox. Here is a link under Hadoop Administration that will help you with the various issues already encountered by many users. Hope that helps.
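If you prefer the command line for the import step, a sketch of the equivalent of File > Import Appliance in the VirtualBox GUI (the .ova filename is an example; use the one you downloaded):

# Import the sandbox appliance and allocate 8 GB of RAM to it
VBoxManage import HDP_2.6.4_virtualbox.ova --vsys 0 --memory 8192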
03-18-2018
04:17 PM
@Artur Bukowski Great. Yes, we see your new table entry as a record in the TBLS table of the metastore database.

Caused by: ERROR 42X05: Table/View 'DBS' does not exist.

The issue should be that your HCat classpath is missing from the Hadoop classpath. To resolve this, run the job with the following:

HADOOP_CLASSPATH=$(hcat -classpath)
export HADOOP_CLASSPATH

And let me know.
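For example, with a Pig job that uses HCatalog (the script name is hypothetical; the same export works before any pig or hadoop jar invocation):

# Put the HCatalog jars and hive-site.xml on the job classpath, then run
export HADOOP_CLASSPATH=$(hcat -classpath)
pig -useHCatalog my_script.pig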
03-18-2018
02:50 PM
@Artur Bukowski Just to validate, can you log on to the metastore database in MySQL? In my example the metastore DB is called hive.

$ mysql -u hive -p{hive_password}
mysql> use hive;
Database changed
mysql> show tables;

You should see a table TBLS. Run a select:

mysql> select * from TBLS;

Do you get any output? If MySQL is being used as the metastore DB, you should have some output. If not, try this to validate on the metastore server:

# su - hive
$ hive
hive> create table artur(id int, name string);
hive> describe artur;
hive> insert into artur values (6,"Errors");

Now go back to the previous step in MySQL and run a select * against the TBLS table. There should be some output; if not, then your Hive is using the Derby database.
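To run the same TBLS check non-interactively, a one-liner sketch, assuming the metastore DB is named hive as above:

# List the tables Hive has registered in the metastore
mysql -u hive -p -e "SELECT TBL_NAME, TBL_TYPE FROM hive.TBLS;"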
03-18-2018
12:54 PM
@Saurabh Can you describe the table i0177a_cus_hdr_hst_staging? Can you share the DDL used to create this external table? Under your current (possibly flawed) design, you must delete the referencing row from the i0177a_cus_hdr_hst_staging table before you can delete the parent row it references.
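A quick way to pull that DDL from MySQL, as a sketch (the database name is a placeholder):

# Print the full CREATE TABLE statement, including FOREIGN KEY clauses
mysql -u root -p -e "SHOW CREATE TABLE your_db.i0177a_cus_hdr_hst_staging\G"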
03-18-2018
12:12 PM
@Saurabh You will need to disable the foreign key checks before you can delete from the tables, as the constraint otherwise blocks the delete: "Cannot delete or update a parent row: a foreign key constraint fails".

SET FOREIGN_KEY_CHECKS=0; -- to disable them
SET FOREIGN_KEY_CHECKS=1; -- to re-enable them

Hope that helps.
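Putting it together, a minimal sketch (the database name and the WHERE clause are placeholders; FOREIGN_KEY_CHECKS is a per-session setting, so re-enable it when done):

# Disable FK checks for this session only, delete, then re-enable
mysql -u root -p your_db -e "SET FOREIGN_KEY_CHECKS=0; DELETE FROM i0177a_cus_hdr_hst_staging WHERE id = 42; SET FOREIGN_KEY_CHECKS=1;"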