Member since 07-21-2021 · 9 Posts · 0 Kudos Received · 0 Solutions
12-08-2021 08:32 AM
I have a Hive server in my cluster running a MySQL instance. The logfile /var/run/mysqld/mysqld.log is getting really large and I need to figure out the proper way to prune it. The server it's running on is CentOS 7. Can I just 'cat /dev/null > mysqld.log' to clear it out, or are there other considerations? Can I get away with simply stopping the MySQL instance briefly to delete or 'zero out' the file, or do I need to stop all the services on that Hive master node in Ambari? Please advise if you can. Thanks, Mike
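For what it's worth, a copy-then-truncate sketch of the approach (paths and the backup name are placeholders): because mysqld holds the log open by file descriptor, zeroing the file in place is safe without stopping MySQL or any other Hive services, whereas deleting the file would leave mysqld writing to a removed inode until its next restart.

```shell
# Demo on a scratch file; in production point LOG at /var/run/mysqld/mysqld.log.
LOG=$(mktemp)
printf 'old log lines\n' > "$LOG"

cp "$LOG" "$LOG.bak"   # optional: keep a copy before clearing
truncate -s 0 "$LOG"   # zero it in place; mysqld's open file handle stays valid
```

Longer term, a logrotate rule with `copytruncate` (or a `postrotate` script running `mysqladmin flush-logs`) automates the same thing on a schedule.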
12-07-2021 09:36 AM
I have an Ambari cluster that has 12 members, and I want to add another one. All O/S are CentOS 7.9, Ambari 2.7.5, HDP 3.1.5. The host that I want to add to the cluster can't get to the internet, so I added /etc/yum.repos.d/ambari-hdp-1.repo to it. Contents of that file have entries for HDP, HDP-GPL, and HDP-UTILS on the Ambari master. The directories exist on the Ambari master, and I can browse to them. "yum repolist" on that host lists the repos as expected. When I invoke the Add New Host wizard, it successfully finds the host and says the SSH key is valid. When I get to the final page of the wizard (right before deploy), it lists the repositories it's going to pull files from. The repo for HDP-GPL shows the HortonWorks (internet) repo instead of the local repo I put in the host's repo file listed above. At this point, I cancel out of the wizard. What step(s) do I need to take to get that repo changed? Please advise when you can. Thanks in advance, Mike
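One thing worth confirming first: as far as I know, the wizard's final page shows the base URLs registered with Ambari itself (Admin > Stack and Versions > Versions > Manage Versions), not the contents of the new host's repo file, so editing the HDP-GPL URL there is likely the missing step. A small sketch for double-checking which baseurl the repo file actually carries — the file contents below are a stand-in for the real /etc/yum.repos.d/ambari-hdp-1.repo, and the mirror hostname is a placeholder:

```shell
# Stand-in for /etc/yum.repos.d/ambari-hdp-1.repo on the new host.
REPO=$(mktemp)
cat > "$REPO" <<'EOF'
[HDP-GPL-3.1-repo-1]
name=HDP-GPL-3.1-repo-1
baseurl=http://ambari-master.example.com/hdp/HDP-GPL/centos7/3.1.5.0
enabled=1
EOF

# Pull out the HDP-GPL baseurl -- it should name the local mirror,
# not public-repo-1.hortonworks.com.
GPL_URL=$(sed -n '/^\[HDP-GPL/,/^\[/s/^baseurl=//p' "$REPO")
echo "$GPL_URL"
```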
10-26-2021 01:30 PM
@balajip Is there a preferred way to add that parameter (and the other one in the issue you linked)? Should I add it via the Ambari GUI -> YARN configs, or edit the proper file (yarn-site.xml?) directly? If I edit files, which nodes do I edit them on? Thanks, Mike
10-26-2021 01:24 PM
My cluster is Ambari 2.7.5.0, HDP 3.1.5, all nodes RHEL 7.9. When I look at the Cloudera Support Matrix and drill down to Oracle JDK 8, it specifies a minimum level of 1.8.0_77. 1.8.0_311 is out now, and I'd like to update my cluster to use that. Is there a maximum level of Oracle JDK 8 I should use? Are there Ambari/HDP issues with the latest JDK? Thanks for your consideration, Mike
10-25-2021 11:59 AM
I have 5 Hadoop/HDFS nodes running RHEL 7.9, Ambari 2.7.5.0, HDP 3.1.5. /hadoop/yarn/local/usercache/hive/filecache/ on all of the Hadoop nodes is growing very large and I want to decrease its size. Is there a YARN setting I can change to lower the amount of filecache specified above? Do I need to manually delete directories that are X days old to clean this up? Please advise. Thanks! Mike
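If it helps, the knob that usually governs this is the NodeManager's localizer cache target size. A sketch of the yarn-site.xml entries (the 5120 value is only an example; in an Ambari-managed cluster these belong under YARN > Configs, e.g. Custom yarn-site, rather than hand-edited files):

```xml
<!-- yarn-site.xml: NodeManager local cache limits (example values) -->
<property>
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <value>5120</value>   <!-- per-node cache target; default is 10240 (10 GB) -->
</property>
<property>
  <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
  <value>600000</value> <!-- cleanup check interval; default 600000 (10 min) -->
</property>
```

Note that the deletion service only evicts localized resources no running container still references, so manual deletion of old directories shouldn't normally be needed once the target size is lowered and the NodeManagers are restarted.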
10-14-2021 03:00 PM
To whom it may concern, I have a cluster with Ambari 2.7.5.0 on it and HDP 3.1.5. All the nodes in it are RHEL 7.9. I saw here: https://community.cloudera.com/t5/Support-Questions/java-update-Ambari/m-p/184506#M146626 ...someone posted instructions for updating Java on an older version of Ambari. It boils down to putting the same JDK in the same path on all of your nodes, then doing a couple of ambari-server commands. Am I supposed to stop all of the individual applications (HDFS, YARN, ZooKeeper, etc.) before doing this, or does the 'ambari-server restart' on the Ambari server handle all that? Is there an updated procedure for Ambari 2.7.5.0? Please respond when you can. Thanks!
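For reference, the linked procedure boils down to something like the sketch below (the JDK path is a placeholder; check `ambari-server setup --help` on your version for the exact flags). Note that `ambari-server restart` restarts only Ambari itself, not the stack services:

```shell
# Assumption: the new JDK is already unpacked at this same path on every node.
NEW_JDK=/usr/java/jdk1.8.0_311   # placeholder; adjust to your install path

# Guarded so the sketch is a no-op on machines without Ambari installed.
if command -v ambari-server >/dev/null 2>&1; then
    ambari-server setup -j "$NEW_JDK"   # register the custom JDK with Ambari
    ambari-server restart               # restart the Ambari server itself
fi
# The stack services (HDFS, YARN, ZooKeeper, etc.) still need a restart from
# the Ambari UI afterward so each daemon relaunches on the new JVM.
```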
10-12-2021 07:24 AM
Thanks for the prompt response! One follow-up question: in Ambari, when I go to Add Property in Custom hive-site, there's a box for Property Type, and the choices are: PASSWORD, USER, GROUP, TEXT, ADDITIONAL_USER_PROPERTY, NOT_MANAGED_HDFS_PATH, and VALUE_FROM_PROPERTY_FILE. Which Property Type do I use for that?
10-07-2021 11:08 AM
I'm getting the error "message:java.lang.UnsupportedOperationException: Storage schema reading not supported". I've found this article and similar ones online: https://knowledge.informatica.com/s/article/579038?language=en_US It says: "To resolve this issue, add the following property to the hive_site.xml file in the cluster configuration and restart the Data Integration Service: metastore.storage.schema.reader.impl=org.apache.hadoop.hive.metastore.SerDeStorageSchemaReader" I'm running Hive 3.1.5. I have hive-site.xml files on all of the Hive clients in the cluster as well as on the server, but I don't see an option to add that value to the Hive configs in the Ambari GUI. Also, there's no Data Integration Service for Hive in the Ambari GUI. So, to fix this issue, do I add that line to all of the client hive-site.xml files and then stop/start certain Hive servers in the Ambari GUI? Or is this all command line, outside of that GUI? Please advise when you can. TIA
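For the record, the "Data Integration Service" in that article is Informatica's own component, not anything in Hive or Ambari. In an Ambari-managed cluster you wouldn't hand-edit the client hive-site.xml files either: the property goes in under Hive > Configs > Advanced > Custom hive-site via Add Property, and Ambari then pushes it to every client and prompts to restart the affected Hive components. What the wizard generates is equivalent to this hive-site.xml fragment:

```xml
<!-- equivalent Custom hive-site entry -->
<property>
  <name>metastore.storage.schema.reader.impl</name>
  <value>org.apache.hadoop.hive.metastore.SerDeStorageSchemaReader</value>
</property>
```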