Member since
10-02-2017
112
Posts
71
Kudos Received
11
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3098 | 08-09-2018 07:19 PM
 | 3900 | 03-16-2018 09:21 AM
 | 4039 | 03-07-2018 10:43 AM
 | 1156 | 02-19-2018 11:42 AM
 | 4031 | 02-02-2018 03:58 PM
08-09-2018
07:19 PM
1. COUNT results in a full table scan, hence the query is slow.
2. A WHERE on the primary key (row key) is fast, as it performs a lookup rather than a scan.
3. A WHERE on any column other than the primary key results in a full HBase table scan.
4. Run ANALYZE TABLE once to speed up COUNT queries; it will not help a WHERE on a non-key column.
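A minimal sketch of the two access patterns above in the HBase shell. The table and row key names are illustrative assumptions, and the block is guarded so it degrades to a skip message where no hbase client is installed:

```shell
#!/bin/sh
# Contrast a row-key lookup (fast, single read) with a scan (slow, touches
# every region). The 'customer' table and 'row-42' key are hypothetical.
if command -v hbase >/dev/null 2>&1; then
  echo "get 'customer', 'row-42'" | hbase shell        # point lookup on the key
  echo "scan 'customer', {COLUMNS => ['cf:name']}" | hbase shell  # full table scan
  status="ran"
else
  status="skipped: hbase not on PATH"
fi
echo "$status"
```

The same asymmetry is what makes the WHERE-on-row-key query fast and everything else a scan.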
08-09-2018
06:57 PM
1 Kudo
Sqoop is used for importing data from multiple sources into HDFS. One of the most common use cases is Hive imports:

sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username root -P --split-by id --columns id,name --table customer --target-dir /user/cloudera/ingest/raw/customers --fields-terminated-by "," --hive-import --create-hive-table --hive-table sqoop_workspace.customers

If you want to specify a particular queue for the sqoop job, -Dmapred.job.queuename=queueName must be added immediately after the import keyword:

sqoop import -Dmapred.job.queuename=queueName --connect jdbc:mysql://localhost:3306/sqoop --username root -P --split-by id --columns id,name --table customer --target-dir /user/cloudera/ingest/raw/customers --fields-terminated-by "," --hive-import --create-hive-table --hive-table sqoop_workspace.customers
This launches the sqoop job in the specified queue, but the hive job will still be launched in the default queue. To launch the hive job in a specific queue, make a copy of tez-site.xml and set the queue name to the queue in which you want the hive job executed:

<property>
  <name>tez.queue.name</name>
  <value>customQueueName</value>
</property>

export HIVE_CONF_DIR=<path of dir where custom tez-site.xml is placed>

Then run the sqoop job in the shell where the export statement was executed. Remember: -Dmapred.job.queuename=queueName (immediately after import) sets the sqoop queue, and the custom tez-site.xml sets the hive queue.
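The argument-ordering rule can be sketched as a small helper that always places the generic -D option directly after the import keyword. The queue name and connect string are illustrative:

```shell
#!/bin/sh
# Build a sqoop import command line with the queue flag in the only position
# where generic options are accepted: immediately after "import".
build_sqoop_cmd() {
  queue="$1"; shift
  echo "sqoop import -Dmapred.job.queuename=$queue $*"
}
build_sqoop_cmd etl --connect jdbc:mysql://localhost:3306/sqoop --table customer
# → sqoop import -Dmapred.job.queuename=etl --connect jdbc:mysql://localhost:3306/sqoop --table customer
```

Placing the -D option anywhere after the tool-specific arguments would make sqoop reject it.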
08-03-2018
06:31 PM
Option 2 is not the most elegant approach, but it is foolproof: it is never possible for the HDFS directory to be empty while Hive still has data.
08-03-2018
06:09 PM
1. select * from database.table limit 1 never performs a full table scan. You can verify this on the Resource Manager: no new job is spawned. 2. Find the HDFS path backing the table and run hdfs dfs -du -s -h on that path to get the size of the directory.
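A sketch of step 2, assuming hive and hdfs clients on the PATH. The database/table names are illustrative, and the block skips cleanly where the tools are absent:

```shell
#!/bin/sh
# Resolve a Hive table's backing directory, then size it without scanning data.
if command -v hive >/dev/null 2>&1 && command -v hdfs >/dev/null 2>&1; then
  loc=$(hive -e "describe formatted mydb.mytable" | awk '/Location/ {print $2; exit}')
  hdfs dfs -du -s -h "$loc"
  status="ran"
else
  status="skipped: hive/hdfs not on PATH"
fi
echo "$status"
```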
07-31-2018
07:40 PM
1 Kudo
There is a general perception that one needs to know the code base thoroughly to understand and debug an HDP service. Appreciate the fact that HDP services, including the execution engines (MR, TEZ, SPARK), are all JVM-based processes. The JVM, along with the operating system, provides various knobs to see what a process is doing and how it is performing at run time. The steps below can be applied to understand and debug any JVM process in general.

Let's take the example of how HiveServer2 works, assuming one is not closely acquainted with the code base. We assume one knows what the service does, but not how it works internally.

1. Process resource usage: gives a complete overview of CPU and memory usage by the process, providing quick insight into its health.

2. Figuring out which services the process interacts with: add JMX parameters to the service, allowing you to visualize what is happening within the JVM at run time using jconsole or jvisualvm. This makes the JVM broadcast its metrics on port 7001, to which jconsole or jvisualvm can connect. No security is enabled here; one can add SSL certificates for authentication.

What can we infer about the threads we see in jconsole?
- The number of threads peaked at 461; currently only 170 are active.
- The abandoned-connection cleaner is cleaning lost connections.
- kafka-kerberos-refresh-thread: as HS2 supports Kerberos, the TGT needs to be refreshed after the renewal period, meaning we don't have to refresh the Kerberos ticket manually.
- HS2 is interacting with Kafka and Atlas, as can be seen from the threads above.
- CuratorFramework is the class used for talking to ZooKeeper, which means HS2 interacts with ZooKeeper.
- HS2 is interacting with SOLR and Hadoop services (RMI).
- HS2 sends audits to Ranger, which means HS2 interacts with Ranger.
- HS2 has HiveServer2-Handler threads, used for reading from the thrift socket (these are the threads that respond to connections).

This view provides an overall picture of what is happening inside the JVM.

GC: ParNew GC runs on the young generation, with 10 minutes spent on minor GC in total; ConcurrentMarkSweep is used for the tenured generation, with 1 minute spent. The VM summary (total uptime of 3 days 3 hours) shows when the VM was started; since then 10 minutes have been spent on minor and 1 minute on major GC, giving a good overview of how well the process is tuned for GC and whether the heap space is sufficient.

Hence the above information shows HS2's interaction with: 1. Ranger 2. Solr 3. Kafka 4. Hadoop components (NameNode, DataNode) 5. Kerberos 6. As it runs Tez, it should also interact with the Resource Manager and Node Manager 7. Heap and GC performance.

Let's take a deeper dive into how the JVM performs over time. Use jvisualvm for real-time info on threads and sampling of CPU and RAM. All of the above can be derived from the command line too:

1. Find the pid of the process: ps -eaf | grep hiveserver2 (the process name) fetches the pid.

2. Find memory usage and GC in real time with jstat -gc <pid>: Total heap = young generation (S0C + S1C + EC) + tenured generation (OC). Currently used heap = young generation (S0U + S1U + EU) + tenured generation (OU). YGC = 15769 says how many young GCs have run so far; FGC = 83 says how many full GCs have run. If you see these counts increasing too frequently, it is time to optimize.
Use jmap to find the instances inside each class along with their count and memory footprint. To know the state of running threads, use jstack <pid>. In top -p <pid>, RES is the memory consumed in RAM, which should roughly equal the heap consumed. Find the ports the process is listening on and which clients are connected to it.

To find errors the service reports to clients, scan the network packets: tcpdump -D lists the interfaces on the machine; tcpdump -i <network interface> port 10000 -A (e.g. tcpdump -i eth0 port 10000 -A) scans port 10000, on which HS2 listens, so you can inspect the packets exchanged and find which client is hitting exceptions. Any GSS exception, or any other exception reported by the server to a client, is visible in the packets.

To know the CPU and memory consumed by the process: top -p <pid>. To know the disk I/O reads/writes done by the process: iotop -p <pid>.

Some OS commands to know how the host running the process is doing:
1. iostat -c -x 3 displays CPU and disk I/O
2. mpstat -A ALL gets utilization of individual CPUs
3. iostat -d -x 5 shows disk utilization
4. ifstat -t -i <interface> shows network utilization

Take-aways:
1. Any exception you see in the process logs for clients can be tracked in the network packets, so you don't need to enable debug logs (use tcpdump).
2. A process consumes exceptionally high memory under heavy GC, especially frequent full and minor GCs.
3. In Hadoop, every service client has a retry mechanism (no client fails after one retry), so search for retries in the log and optimize for them.
4. jconsole and jvisualvm reveal all the important info on threads, memory, CPU utilization and GC.
5. Keep a check on the CPU, network, disk and memory utilization of the process to get a holistic overview.
6. In case of failure, take a heap dump and analyse it with jhat for deeper debugging.
jhat generally consumes about 6x the memory of the dump size (a 20 GB heap dump will need 120 GB of heap for jhat to run: jhat -J-mx120g hive_dump.hprof). 7. Always refer to the code to correlate process behavior with memory footprint. 8. jmap output helps you understand which data structures consume most of the heap, so you can point to where in the code those data structures are used; jhat helps you get the tree of the data structure.
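The heap arithmetic from the jstat step can be sketched with sample values. The numbers below are made up for illustration; jstat -gc reports these columns in KB:

```shell
#!/bin/sh
# Used heap = used survivor spaces + used eden + used old gen, per jstat -gc.
S0U=0; S1U=2048; EU=524288; OU=1048576   # sample column values in KB
USED_KB=$((S0U + S1U + EU + OU))
echo "used heap: $((USED_KB / 1024)) MB"   # → used heap: 1538 MB
```

The same sum with the capacity columns (S0C, S1C, EC, OC) gives the total committed heap.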
07-30-2018
05:15 PM
6 Kudos
Heap sizing and garbage collection tuning play a central role in deciding how healthily and efficiently the cluster resources are utilized. One of the biggest challenges is to fine-tune the heap so that you neither underutilize nor overutilize the resources. The following heap sizing was made after an in-depth analysis of the health of the individual services. Remember this is a baseline; you can add more heap depending on the kind of workload you execute.

Key take-aways:
1. All the services in HDP are JVM-based, and all need appropriate heap sizing and GC tuning.
2. HDFS and YARN are the base of all the services, so make sure NN, DN, RM and NM are given sufficient heap.
3. Rule of thumb: heap sizes up to 20-30 GB don't require any special tuning; anything above that needs fine-tuning.
4. 99% of the time your cluster is not working efficiently because the services are suffering GC and appear RED in the UI.
5. Most services have short-lived connections and interactions, so always give the young generation enough space, on the order of 5-10 GB (depending on load and concurrency).
6. HiveServer2, Spark Thrift Server and LLAP need special attention, as these services interact with every component in the cluster; slowness of any component will impact their connection-establishment time.
7. Look for "retries/retry" in your logs to know which services are slow, and fine-tune them.

Services covered: HDFS, YARN, MapReduce2, HBase, Oozie, ZooKeeper, Ambari Infra, Ambari Metrics (HBase), Atlas, Spark Thrift Server, Hive.

https://community.hortonworks.com/articles/209789/making-hiveserver2-instance-handle-more-than-500-c.html

Ambari and Ambari Views should run with a heap of at least 15 GB to be fast.
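A hypothetical hive-env.sh fragment in the spirit of take-aways 3 and 5. The sizes and flags are illustrative assumptions, not values from the article:

```shell
# Illustrative only: 24 GB heap with a large young gen, ParNew + CMS,
# matching the "give the young generation enough space" guidance above.
export HADOOP_OPTS="$HADOOP_OPTS -Xms24g -Xmx24g -Xmn8g \
  -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"
```

Setting -Xms equal to -Xmx avoids heap-resize pauses; the CMS occupancy flags make old-gen collections start predictably instead of on heuristics.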
07-30-2018
04:27 PM
8 Kudos
HiveServer2 is a thrift server: a thin service layer to interact with the HDP cluster in a seamless fashion. It supports both JDBC and ODBC drivers to provide a SQL layer to query the data. An incoming SQL query is converted to either a TEZ or MR job, and the results are fetched and sent back to the client. No heavy lifting is done inside HS2 itself; it just acts as the place that hosts the TEZ/MR driver, scans metadata info and applies Ranger policies for authorization. Configurations: TEZ parameters
Tez Application Master Waiting Period (in seconds) -- Specifies the amount of time in seconds that the Tez Application Master (AM) waits for a DAG to be submitted before shutting down. For example, to set the waiting period to 15 minutes (15 minutes x 60 seconds per minute = 900 seconds): tez.session.am.dag.submit.timeout.secs=900
Enable Tez Container Reuse -- This configuration parameter determines whether Tez will reuse the same container to run multiple queries. Enabling this parameter improves performance by avoiding the memory overhead of reallocating container resources for every query. tez.am.container.reuse.enabled=true
Tez Container Holding Period (in milliseconds) -- Specifies the amount of time in milliseconds that a Tez session will retain its containers. For example, to set the holding period to 15 minutes (15 minutes x 60 seconds per minute x 1000 milliseconds per second = 900000 milliseconds): tez.am.container.session.delay-allocation-millis=900000. A holding period of a few seconds is preferable when multiple sessions share a queue; however, a short holding period negatively impacts query latency.

HiveServer2 parameters: heap configuration, GC tuning, database-scan disabling and session-initialization parameters.

Tuning OS parameters (on the nodes running HS2, the metastore and ZooKeeper):
net.core.somaxconn=16384
net.core.netdev_max_backlog=16384
net.ipv4.tcp_fin_timeout=10

Disconnecting idle connections to lower the memory footprint (values can be set in minutes and seconds). Defaults:
hive.server2.session.check.interval=6h
hive.server2.idle.operation.timeout=5d
hive.server2.idle.session.check.operation=true
hive.server2.idle.session.timeout=7d

Proactively closing connections:
hive.server2.session.check.interval=60000 (1 min)
hive.server2.idle.session.timeout=300000 (5 mins)
hive.server2.idle.session.check.operation=true

Connection pool (change it depending on how concurrent the connections are). General: hive.server2.thrift.max.worker.threads=1000 (if these are exhausted, no incoming request will be served).

Things to watch out for:
1. Making more than 60 connections to HS2 from a single machine will result in failures, as ZooKeeper will rate-limit it. https://community.hortonworks.com/articles/51191/understanding-apache-zookeeper-connection-rate-lim.html
2. Don't forget to disable the DB scan for every connection.
3. Watch out for memory-leak bugs (HIVE-20192, HIVE-19860); make sure the version of HS2 you are using is patched.
4. Watch out for the connection limit on your backing RDBMS.
https://community.hortonworks.com/articles/208966/optimized-mysql-db-config-for-ambari-hive-to-handl.html 5. Depending on your usage you need to fine-tune heap and GC, so keep an eye on full GC and minor GC frequency. 6. Usage of ADD JAR leads to a classloader memory leak in some versions of HS2, so keep an eye on it. 7. Remember that in Hadoop every client retries, so look for retry logs in HS2 and optimize the service to handle connections seamlessly. 8. HS2 has no upper threshold on the number of connections it can accept; it is limited by the heap and by how well the other services respond at scale. 9. Keep an eye on CPU consumption on the machine hosting HS2 to make sure the process has enough CPU to handle high concurrency.
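The kernel settings above can be staged as a sysctl drop-in. This sketch writes to a staging path so it is safe to run unprivileged; copy the file to /etc/sysctl.d/ and run `sysctl --system` as root to apply it:

```shell
#!/bin/sh
# Stage the TCP backlog/timeout tuning quoted above into a sysctl fragment.
cat > /tmp/90-hs2-tuning.conf <<'EOF'
net.core.somaxconn=16384
net.core.netdev_max_backlog=16384
net.ipv4.tcp_fin_timeout=10
EOF
cat /tmp/90-hs2-tuning.conf
```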
07-27-2018
07:12 PM
The article provides an optimized MySQL config (usage of an enterprise RDBMS is encouraged) that can be used in case MySQL is the backing DB for Ambari, Hive and Ranger. The configuration is good enough to handle 1000 concurrent users on HS2 and Ambari. The config file my.cnf is generally located at /etc/my.cnf.

my.cnf
*****************************************************************************
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

[mysqld]
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
max_allowed_packet = 32M
max_connections = 2000
open_files_limit = 10000
tmp_table_size = 64M
max_heap_table_size = 64M
query_cache_type = ON
query_cache_size = 128M
table_open_cache_instances = 16
back_log = 1000
default_storage_engine = InnoDB
innodb-buffer-pool-size = 10G
innodb_log_file_size = 1024M
innodb_log_buffer_size = 16M
innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT
innodb_buffer_pool_instances = 4
innodb_thread_concurrency = 10
innodb_checksum_algorithm = crc32
# MyISAM, in case there is any
key-buffer-size = 64M
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Recommended in standard MySQL setup
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
*****************************************************************************

Key take-aways:
1. innodb-buffer-pool-size = 10G is one of the most important parameters; it says how much data can be cached in RAM.
2. mysqldump -u root --all-databases -p > dump.sql dumps all the databases to a file (which can be used for recovery) and, in the process, fetches all the data from disk and caches it in RAM, making subsequent queries lightning fast.
3. max_connections = 2000 allows that many concurrent connections, as the database is connected to by all HS2 instances. This can become a limiting factor and may result in connection timeouts for clients connecting to HS2.
4. Keep in mind that the number of connections MySQL can handle directly impacts the number of users the HS2 server can handle concurrently.
5. For details of all MySQL config parameters, kindly follow this excellent file: http://www.speedemy.com/files/mysql/my.cnf
6. https://community.hortonworks.com/articles/80635/optimize-ambari-performance-for-large-clusters.html
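The 70%-of-RAM starting point quoted in the config comments can be sketched as follows; the 16 GB host size is an assumption for illustration:

```shell
#!/bin/sh
# Compute a starting innodb_buffer_pool_size for a dedicated 16 GB DB host,
# per the "start at 70% of total RAM for dedicated server" comment.
RAM_MB=16384
POOL_MB=$((RAM_MB * 70 / 100))
echo "innodb_buffer_pool_size = ${POOL_MB}M"   # → innodb_buffer_pool_size = 11468M
```

Round the result to a multiple of innodb_buffer_pool_instances in practice.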
07-27-2018
06:54 PM
1 Kudo
Ambari schedules a connection check to HS2 every 2 minutes. The beeline client is executed to connect to the HS2 JDBC string with a timeout on the client side; if beeline is not able to connect within the timeout period, the beeline process is killed. This shows your HS2 is not configured properly, as it is taking more than 60 seconds to connect, which is quite unusual. HS2 should connect within 4-30 seconds at most for it to be usable by external tools like Power BI, Alation and Ambari Hive Views. Please follow these blogs in detail to know more about how to debug the issue:
https://community.hortonworks.com/content/kbentry/202479/hiveserver2-connection-establishment-phase-in-deta.html
https://community.hortonworks.com/content/kbentry/202485/hiveserver2-configurations-deep-dive.html
Some steps to find where the bottleneck is:
1. Enable debug mode for beeline.
2. Execute beeline with the HS2 JDBC string; it will provide a detailed view of the time taken to connect to AD, ZooKeeper, HS2 and MySQL.
3. Tune HS2 parameters (Start Tez Session at Initialization = true, disable the database scan for each connection).
4. Bump up the heap of HS2.
This blog has all that you need: https://community.hortonworks.com/content/kbentry/209789/making-hiveserver2-instance-handle-more-than-500-c.html
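Steps 1-2 can be sketched as below. The ZooKeeper quorum in the JDBC URL is illustrative, and the block skips cleanly where no beeline client is installed:

```shell
#!/bin/sh
# Time a verbose beeline connection to see where the handshake spends time.
if command -v beeline >/dev/null 2>&1; then
  time beeline --verbose=true \
    -u "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" \
    -e "select 1;"
  status="ran"
else
  status="skipped: beeline not on PATH"
fi
echo "$status"
```

The verbose output separates ZooKeeper discovery, Kerberos/AD and thrift-session setup, which is exactly what you need to attribute a 60-second connect.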
07-15-2018
04:58 PM
7 Kudos
HiveServer2 is a thrift server: a thin service layer to interact with the HDP cluster in a seamless fashion. It supports both JDBC and ODBC drivers to provide a SQL layer to query the data. An incoming SQL query is converted to either a TEZ or MR job, and the results are fetched and sent back to the client. No heavy lifting is done inside HS2 itself; it just acts as the place that hosts the TEZ/MR driver, scans metadata info and applies Ranger policies for authorization.

HiveServer2 maintains two types of pools:
1. Connection pool: any incoming connection is handled by a HiveServer2-Handler thread and kept in this pool, which is unlimited in nature but restricted by the number of threads available to service incoming requests. An incoming request will not be accepted if there is no free HiveServer2-Handler thread to service it. The total number of threads that can be spawned within HS2 is controlled by the parameter hive.server2.thrift.max.worker.threads.
2. Session pool (per queue): this is the number of concurrent sessions that can be active. Whenever a connection in the connection pool executes a SQL query, an empty slot in the session queue is found and the SQL statement is executed. A queue is maintained for each queue defined in "Default query queue". In this example only the default queue is defined, with sessions per queue = 3. More than 3 concurrent queries (not connections) will have to wait until one of the slots in the session pool is free. There can be N connections alive in HS2, but at any point in time it can only execute 3 queries; the rest of the queries have to wait.

HiveServer2 connection overview:
1. HS2 listens on port 10000 and accepts connections from clients on this port.
2. HS2 has connections open to Kafka to write metric and lineage information, which is consumed by the Ambari Metrics Service and Atlas.
3. HS2 also connects to DataNodes directly from time to time, to service requests like "select * from Table limit N".
4.
HS2 also connects to the NodeManagers where the TEZ AMs are spawned to service the SQL query. 5. In the process it talks to the NameNode and Resource Manager, and a switch during HA will impact the connection. 6. Overall, HiveServer2 talks to ATLAS, SOLR, KAFKA, NameNode, DataNode, ResourceManager, NodeManager and MYSQL.

For HiveServer2 to be more interactive and faster, the following parameters are set:
1. Start Tez Session at Initialization = true: at bootstrap (start, restart) HS2 allocates a TEZ AM for every session. (Default Query Queue = default with Sessions per queue = 3 means 3 TEZ AMs will be spawned. If there were 2 queues, default and datascience, with Sessions per queue = 3, then 6 TEZ AMs would be spawned.)
2. Every TEZ AM can also be forced to spawn a fixed number of containers (pre-warm) at HS2 bootstrap. Hold containers to reduce latency = true with Number of containers held = 3 ensures every TEZ AM holds 2 containers to execute a SQL query.
3. With Default Query Queue = default, Sessions per queue = 3, Hold containers to reduce latency = true and Number of containers held = 3, HS2 spawns 3 TEZ AMs, and for every TEZ AM 2 containers are spawned; hence overall 9 containers (3 TEZ AMs + 6 containers) can be seen running in the Resource Manager UI.
4. For TEZ to be more efficient and save bootstrap time, the following parameters need to be configured in the TEZ configs. They prevent the AM from dying immediately after job execution: it waits between the min and max timeouts to service another query, else it kills itself, freeing YARN resources.
tez.am.container.reuse.enabled=true
tez.am.container.idle.release-timeout-min.millis=10000
tez.am.container.idle.release-timeout-max.millis=20000

Every connection has a memory footprint in HS2, because: 1. Every connection scans the metastore to learn about tables. 2. Every connection reserves memory to hold metrics and the TEZ driver. 3.
Every connection reserves memory to hold the output of TEZ execution. Hence, if a lot of connections are alive, HS2 might become slower due to lack of heap space, so the heap needs to be tuned according to the number of connections. 40 connections need somewhere around 16 GB of HS2 heap; anything more than this needs fine-tuning of GC and horizontal scaling. Client connections can be disconnected based on idle state via the following parameters:
hive.server2.session.check.interval=6h
hive.server2.idle.operation.timeout=5d
hive.server2.idle.session.check.operation=true
hive.server2.idle.session.timeout=7d

HiveServer2 has an embedded metastore which interacts with the RDBMS to store the table schema info. The database is available to other services through the HiveMetastore service; the standalone HiveMetastore is not used by HiveServer2 directly.

Problems and solutions:
1. Clients time out frequently: HS2 can never deny a connection, so the timeout is a parameter set on the client, not on HS2. Most tools default to 30 seconds; increase it to 90-150 seconds depending on your cluster usage pattern.
2. HS2 becomes unresponsive: this can happen if HS2 is undergoing frequent full GC or heap exhaustion. It's time to bump up the heap and scale horizontally.
3. HS2 has a large number of connections: HS2 accepts connections on port 10000 and also connects to the RDBMS, DataNodes (for direct queries) and NodeManagers (to reach the AMs), so always analyse whether the connections are incoming or outgoing.
4. As HS2 is an interactive service, it is always good to give it a good-sized young generation.
5. In case SAS interacts with HS2, a 20 GB heap where 6-8 GB is given to the young gen is good practice. Use ParNew GC for the young gen and ConcurrentMarkSweep GC for the tenured gen.

To know in detail about the connection-establishment phase of HiveServer2: https://community.hortonworks.com/articles/202479/hiveserver2-connection-establishment-phase-in-deta.html
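The container arithmetic in the pre-warm example above can be sketched as follows, using the queue count and per-queue settings from the example:

```shell
#!/bin/sh
# 1 queue x 3 sessions = 3 Tez AMs; "containers held = 3" means each AM plus
# 2 pre-warmed workers, so 9 YARN containers appear in the RM UI.
QUEUES=1; SESSIONS_PER_QUEUE=3; HELD_PER_AM=3
AMS=$((QUEUES * SESSIONS_PER_QUEUE))
TOTAL=$((AMS * HELD_PER_AM))
echo "$AMS Tez AMs + $((TOTAL - AMS)) pre-warmed containers = $TOTAL YARN containers"
```

Doubling the queues to 2 would double both AMs and held containers, which is why pre-warm settings must be budgeted against queue capacity.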