Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 983 | 06-04-2025 11:36 PM |
|  | 1563 | 03-23-2025 05:23 AM |
|  | 777 | 03-17-2025 10:18 AM |
|  | 2804 | 03-05-2025 01:34 PM |
|  | 1850 | 03-03-2025 01:09 PM |
04-15-2019
03:26 PM
@Andy Sutan In the attached hiveserver2.log I see you have an issue with the port:

<<Caused by: java.net.BindException: Address already in use (Bind failed)>>

The offending port is causing the failure of HS2 to register with ZooKeeper! The default HS2 port is 10000; did you manually change to the current ports?

<<Caused by: org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:2181.>>

Check for the offending application:

```
# netstat -nap | grep 10000
tcp    0    0 0.0.0.0:10000    0.0.0.0:*    LISTEN    1266/java
```

Kill the offender:

```
# kill -9 1266
```

10002 is the HiveServer2 web UI port, and it should be freed up when HiveServer2 shuts down. The netstat output shows that some client is connected to your HiveServer2 UI port. You could try to figure out what client that may be and what it is doing, since it is a bit unusual for a connection to the HiveServer2 UI to last very long. Finding out what client is running on that port may be a good thing.

After killing the process ID that is using the port, restart your HS2; it should start successfully.

HTH
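To automate the port check described above, here is a minimal hedged sketch. It assumes bash with `/dev/tcp` support; 10000, 10002, and 2181 are the ports from the stack traces, and `lsof -i :PORT` is an alternative where netstat is unavailable.

```shell
# Sketch: test whether a TCP port is already bound on localhost before starting HS2.
# Uses bash's built-in /dev/tcp redirection; returns 0 (true) if the port is busy.
port_in_use() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for p in 10000 10002 2181; do
  if port_in_use "$p"; then
    echo "port $p is busy -- find the owner with: netstat -nap | grep $p"
  else
    echo "port $p is free"
  fi
done
```

If a port reports busy, the `netstat -nap` output gives you the PID/name to investigate before resorting to `kill -9`.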
04-15-2019
04:47 AM
@Sandeep R It seems to be an SSL issue; can you validate your LDAP? Port 636 is LDAPS and 389 is plain LDAP. To enable LDAPS, you must install a certificate that meets the following requirements:

- The LDAPS certificate is located in the Local Computer's Personal certificate store (programmatically known as the computer's MY certificate store).
- A private key that matches the certificate is present in the Local Computer's store and is correctly associated with the certificate. The private key must not have strong private key protection enabled.
- The Enhanced Key Usage extension includes the Server Authentication (1.3.6.1.5.5.7.3.1) object identifier (also known as OID).
- The Active Directory fully qualified domain name of the domain controller (for example, DC01.DOMAIN.COM) must appear in one of the following places: the Common Name (CN) in the Subject field, or a DNS entry in the Subject Alternative Name extension.
- The certificate was issued by a CA that the domain controller and the LDAPS clients trust. Trust is established by configuring the clients and the server to trust the root CA to which the issuing CA chains.
- You must use the Schannel cryptographic service provider (CSP) to generate the key.

Hope that helps
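As a hedged illustration of checking two of those requirements (the SAN entry and the Server Authentication EKU), here is a sketch using openssl against a throwaway self-signed certificate. `dc01.domain.com` is a placeholder FQDN; against a real DC you would run the same `openssl x509 -text` inspection on the certificate fetched with `openssl s_client -connect dc01.domain.com:636`.

```shell
# Sketch: generate a throwaway self-signed cert carrying the fields LDAPS needs,
# then inspect it the way you would inspect the real DC certificate.
# Requires openssl 1.1.1+ for the -addext flags.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/ldaps.key -out /tmp/ldaps.crt \
  -subj "/CN=dc01.domain.com" \
  -addext "subjectAltName=DNS:dc01.domain.com" \
  -addext "extendedKeyUsage=serverAuth" 2>/dev/null

# Dump the cert and confirm the SAN and the Server Authentication EKU are present
cert_text=$(openssl x509 -in /tmp/ldaps.crt -text -noout)
echo "$cert_text" | grep -q "DNS:dc01.domain.com"           && echo "SAN ok"
echo "$cert_text" | grep -q "TLS Web Server Authentication" && echo "EKU ok"
```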
04-13-2019
05:35 PM
@Andy Sutan Did you see my response? Please go through it, respond, and also attach the hiveserver2.log.
04-13-2019
05:17 PM
@Ricardo Ramos I have documented a walkthrough that successfully reproduced your issue; please go through it and let me know. How to start a HDP 3.0 Sandbox_Part2.pdf How to start a HDP 3.0 Sandbox_Part1.pdf
04-13-2019
07:39 AM
@Nikhil Belure You will need to adjust the AMS heap size. The two memstore limits work together:

hbase.regionserver.global.memstore.lowerLimit = 0.3
When memstores are forced to flush to make room in memory, flushing keeps going until usage drops below this mark (defaults to 35% of heap). A value close to hbase.regionserver.global.memstore.upperLimit causes the minimum possible flushing to occur when updates are blocked.

hbase.regionserver.global.memstore.upperLimit = 0.35
Maximum size of all memstores in the region server before new updates are blocked and flushes are forced (defaults to 40% of the heap).

So what is the current size of your Metrics Collector Heap Size? With the above setup, for a cluster size of <20 nodes, set Metrics Collector Heap Size = 1024 in Advanced ams-env; that should work. Please use this as a reference to tune your AMS: https://cwiki.apache.org/confluence/display/AMBARI/Configurations+-+Tuning

Hope that helps
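To make the two limits concrete, here is a small hedged sketch translating those fractions into absolute sizes for a given heap; the 1024 MB figure is the heap size suggested above for a <20-node cluster.

```shell
# Sketch: convert the memstore limit fractions into absolute MB for a given heap.
heap_mb=1024
lower=$(awk -v h="$heap_mb" 'BEGIN { printf "%d", h * 0.30 }')  # flush target
upper=$(awk -v h="$heap_mb" 'BEGIN { printf "%d", h * 0.35 }')  # updates blocked
echo "lowerLimit (keep flushing down to): ${lower} MB"
echo "upperLimit (block new updates at):  ${upper} MB"
```

With a 1024 MB heap this leaves roughly 50 MB between the flush target and the hard block, which is the buffer that absorbs write bursts before updates stall.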
04-12-2019
09:00 PM
@Andy Sutan I am assuming you are on CentOS/RHEL. To resolve your issue, let's walk through the steps below. Unfortunately, you didn't attach the hiveserver2.log found in /var/log/hive/hiveserver2.log.

Here are the steps I want you to follow:

1. Revert your hive.server2.webui.port back to 10002 from 10202.

2. Try to connect to your Hive database (in my example the hive password is hive). Make sure you have previously run the commands below; if not, do it now:

```
# yum install -y mysql-connector-java
# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
```

Please validate the hive database is available and accessible for user hive:

```
# mysql -uhive -phive
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 37
Server version: 5.5.60-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| ambari             |
| druid              |
| hive               |
| mysql              |
| oozie              |
| performance_schema |
| ranger             |
| rangerkms          |
| superset           |
+--------------------+
10 rows in set (0.09 sec)

MariaDB [(none)]> use hive;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [hive]>
```

So all looks okay. If you don't see the hive database, you can create it from the CLI (see below) or use the Ambari wizard; make sure you test the connection successfully before you proceed.

```
###########################################################
# Create the hive user (hive/hive) and db as the root user
# assuming the root password here is {welcome1}
###########################################################
mysql -u root -pwelcome1

create database hive;
create user 'hive'@'localhost' identified by 'hive';
grant all privileges on hive.* to 'hive'@'localhost';
grant all privileges on hive.* to 'hive'@'%';
grant all privileges on hive.* to 'hive'@'FQDN' identified by 'hive';
grant all privileges on hive.* to 'hive'@'localhost' with grant option;
grant all privileges on hive.* to 'hive'@'FQDN' with grant option;
grant all privileges on hive.* to 'hive'@'%' with grant option;
flush privileges;
quit;
```

3. There seems to be a problem with HiveServer2 creating a znode in ZooKeeper: [caught exception: ZooKeeper node /hiveserver2 is not ready yet]

```
# /usr/hdp/3.x.x.x/zookeeper/bin/zkCli.sh
Welcome to ZooKeeper!
.. --sample output----
......
[zk: localhost:2181(CONNECTED) 0] ls /hiveserver2
[serverUri=FQDN:10000;version=3.1.0.3.1.0.0-78;sequence=0000000046]
[zk: localhost:2181(CONNECTED) 1]
```

My entry above shows my HiveServer2 registered with ZooKeeper, but I am sure you don't have an entry in your ZooKeeper.

4. To access your HDP host, choose updating the hostname to a Public DNS/IP.

After the above, restart your cluster, and should you encounter issues please send a detailed error stack.

Hope that helps
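As a side note, the znode name itself encodes where HS2 registered. This hedged sketch pulls the host:port out of a sample entry; the string below mirrors the sample output above, with a placeholder FQDN.

```shell
# Sketch: extract the serverUri (host:port) HiveServer2 registered under /hiveserver2.
znode="serverUri=fqdn.example.com:10000;version=3.1.0.3.1.0.0-78;sequence=0000000046"
server=${znode#serverUri=}   # strip the leading "serverUri=" key
server=${server%%;*}         # drop everything after the first ';'
echo "HiveServer2 registered at: $server"
```

This is the same host:port a ZooKeeper-aware JDBC client would be handed when it resolves the service discovery namespace.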
04-12-2019
04:36 PM
@Anurag Mishra Having used both Ranger and Sentry to build security over clusters, I can tell you Sentry was the weak link in Cloudera's offering.

Apache Ranger
It is a framework to enable, monitor, and manage data security across the Hadoop platform. It provides centralized security administration, access control, and detailed auditing for user access within Hadoop, Hive, HBase, and other Apache components. This framework has the vision to provide comprehensive security across the Apache Hadoop ecosystem. Because of Apache YARN, the Hadoop platform can now support a true data lake architecture, and data security within Hadoop needs to evolve to support multiple use cases for data access while providing a framework for the central administration of security policies and monitoring of user access. I can't enumerate all the advantages of Ranger over Sentry, but here are a few:

- The latest version has plugins for most of the components in the Hadoop ecosystem (Hive, HDFS, YARN, Kafka, etc.).
- You can extend the functionality by writing your own UDFs, like geolocation-based policies.
- It has time-based rules.
- Data masking (PII, HIPAA compliance for GDPR).

Ref: https://hortonworks.com/apache/ranger/

Sentry
Personally, I find it rudimentary, just like Oracle's Role-Based Access Control security, where you create a role, grant that role some privileges, and give the role to a user. This is quite cumbersome and a security management nightmare.
Ref: https://www.cloudera.com/documentation/enterprise/5-6-x/topics/sg_sentry_overview.html#concept_bp4_tjw_jr__section_qrt_c54_js

You will need to read extensively about the two solutions. One of the reasons there was a merger was the solid security Hortonworks provided, combined with the governance of Atlas, which Cloudera was lacking.
04-12-2019
03:42 PM
@Vasanth Reddy Spark SQL can read data from other databases using JDBC. JDBC connection properties such as user and password are normally provided as connection properties for logging into the data source. In the example below I am using MySQL, so you will need to have the MySQL JDBC driver in place (for Postgres you would use the Postgres driver instead).

Sample program:

```
import org.apache.spark.sql.SQLContext

val sqlcontext = new org.apache.spark.sql.SQLContext(sc)
val dataframe_mysql = sqlcontext.read.format("jdbc")
  .option("url", "jdbc:mysql://mbarara.com:3306/test")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("dbtable", "emp")
  .option("user", "root")
  .option("password", "welcome1")
  .load()

dataframe_mysql.show()
```

Hope that helps
04-03-2019
05:46 PM
@BHASKARA VENNA It's usually advisable to immediately create a local user with sudoer privileges; this could have been your savior. Depending on your OS, check these 2 links; you should be able to reset the root password.
Ubuntu: https://www.maketecheasier.com/reset-root-password-linux/
CentOS: https://opensource.com/article/18/4/reset-lost-root-password
03-28-2019
05:50 PM
@Nathaniel Vala That shows a permission issue on this file: /usr/hdp/current/ranger-admin/ews/ranger-admin-services.sh. Can you check that the file is readable and executable by user ranger and only readable by group and world (r-xr--r--)?
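A hedged sketch of setting and verifying that mode, done on a throwaway copy rather than the real script; the /tmp path is illustrative, and on the real file you would also confirm with `ls -l` that the owner is ranger.

```shell
# Sketch: apply mode r-xr--r-- (octal 544) and read it back.
f=/tmp/ranger-admin-services.sh
touch "$f"
chmod 544 "$f"               # owner: read+execute; group/world: read only
mode=$(ls -ld "$f" | cut -c1-10)
echo "$mode"                 # expect: -r-xr--r--
```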