Member since: 05-23-2017
Posts: 40
Kudos Received: 5
Solutions: 4
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 1462 | 11-08-2019 10:23 PM |
| 936 | 10-16-2019 10:37 AM |
| 1208 | 10-15-2019 11:51 PM |
| 436 | 08-27-2019 03:57 AM |
11-16-2019
09:07 PM
@mokkan - I am not sure what your HDP version is or what connection string you are using to connect to HiveServer2 Interactive, but in HDP 2.6.x I get the output below while connecting to HS2I. From this I can see that it is connecting to LLAP.
beeline> !connect jdbc:hive2://abc.example.com:2181,abc1.example.com:2181,abc2.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2
Connecting to jdbc:hive2://abc.example.com:2181,abc1.example.com:2181,abc2.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2
Enter username for jdbc:hive2://abc.example.com:2181,abc1.example.com:2181,abc2.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2:
Enter password for jdbc:hive2://abc.example.com:2181,abc1.example.com:2181,abc2.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2:
Connected to: Apache Hive (version 2.1.0.2.6.4.0-91)
Driver: Hive JDBC (version 1.2.1000.2.6.4.0-91)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://abc.example.com:2181,abc1.exa>
From HDP 3.x, below is the response.
# beeline -u "jdbc:hive2://c1108-node2.squadron-labs.com:2181,c1108-node3.squadron-labs.com:2181,c1108-node4.squadron-labs.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive"
Connecting to jdbc:hive2://c1108-node2.squadron-labs.com:2181,c1108-node3.squadron-labs.com:2181,c1108-node4.squadron-labs.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive
19/11/17 05:06:12 [main]: INFO jdbc.HiveConnection: Connected to c1108-node4.squadron-labs.com:10500
Connected to: Apache Hive (version 3.1.0.3.0.1.0-187)
Driver: Hive JDBC (version 3.1.0.3.0.1.0-187)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 3.1.0.3.0.1.0-187 by Apache Hive
0: jdbc:hive2://c1108-node2.squadron-labs.com>
It might be possible that you are using the wrong connection string. Please verify it or share the complete output from your screen.
11-16-2019
08:42 PM
@Shriniwas - You can use "hostname -i" to check the IP address of the host.
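For example (the output address below is illustrative; yours will differ):
# hostname -i
10.0.0.5
If the host has multiple interfaces, "hostname -I" lists all assigned addresses.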
11-14-2019
01:41 AM
@feblik - The oldest version I can see on the Hortonworks site is 2.5.0. https://www.cloudera.com/downloads/hortonworks-sandbox/hdp.html
11-13-2019
09:16 PM
1 Kudo
@asmarz - It seems you are running the Ambari agent as the root user and getting a permission-denied error while starting it. Can you confirm that the root user has proper permissions on the directory? I suggest you check the path like below:
# ls -ld /var/
# ls -ld /var/lib/
# ls -ld /var/lib/ambari-agent/
# ls -ld /var/lib/ambari-agent/bin/
# ls -ld /var/lib/ambari-agent/bin/ambari-agent
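As a rough guide, healthy output at each level should show root ownership and the execute bit, something like this (the mode, link count, and date below are illustrative, not from your system):
# ls -ld /var/lib/ambari-agent/
drwxr-xr-x 10 root root 4096 Nov 10 12:00 /var/lib/ambari-agent/
If any directory in the chain is owned by another user or missing permissions for root, fix it with chown/chmod and start the agent again.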
11-13-2019
09:07 PM
2 Kudos
@Seaport - Please find the answers to all your questions below.
Question #1: What level of security does this cluster have - no security or basic security?
If Kerberos or Ranger is not enabled, then the cluster is non-secure.
Question #2: Because there is no centralized security management, how does HDP manage the HDFS permissions? The user/group accounts used in HDFS permissions are actually local users and authenticated by the local Linux OS, right?
Yes, your understanding about the users/groups is correct. HDFS files and directories are created with permissions based on the umask set in the HDFS configuration, and access is controlled by the permissions and ACLs set on those files and directories.
Question #3: I used the command below to change the owner of an HDFS folder to the Ambari user admin. I understand that hadoop is a group name, but it is not an Ambari group. So how does this work - a combination of an Ambari user account and a Linux group?
==== hdfs dfs -chown admin:hadoop /user/admin ====
I didn't completely get your last question. All the service users are part of the hadoop group.
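To see how HDFS resolves a user's groups, you can compare the local OS view with the NameNode's view (using the admin user from your example):
# id admin
# hdfs groups admin
"id" shows the local Linux groups, while "hdfs groups" shows the groups as resolved by the NameNode's group-mapping provider; by default both come from the same local OS accounts.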
11-13-2019
07:46 AM
1 Kudo
@hsbc - The case numbers have changed for all Hortonworks cases, so you need to look them up in the cases view of the portal.
11-12-2019
08:54 PM
@kbmgp - If you are getting proper output in Hive or Beeline, then there is no issue on the Hive side. I suggest you check with SQuirreL; I feel they can help you in a better way.
11-12-2019
06:52 AM
@Priyan - You can use the command below to get the details:
yarn applicationattempt -list <application_id>
You can refer to the document below to build the command for your use case. https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YarnCommands.html#applicationattempt
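For example (the application ID below is illustrative):
# yarn applicationattempt -list application_1573000000000_0001
The output should list each attempt's ID, state, and AM container, which is useful when an ApplicationMaster has been retried.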
11-11-2019
09:45 PM
@albert_ - Can you check with the command below whether the correct nodes are assigned to the node label?
yarn node -status <Node_ID>
Also, in the config group, can you check whether you have set an appropriate value for the YARN memory that LLAP is requesting?
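For example (the node ID below is illustrative; take the real value from "yarn node -list"):
# yarn node -status c1108-node4.squadron-labs.com:45454
The output should include a Node-Labels field you can use to verify the label assignment.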
11-08-2019
10:47 PM
@Eric_B - Updating a table (even two different rows) from two different processes at the same time is currently not possible for ACID tables. The ACID concurrency-management mechanism works at the partition level for partitioned tables and at the table level for non-partitioned tables (which I believe is your case). Basically, what the system wants to prevent is two parallel transactions updating the same row; unfortunately, it can't keep track of this at the individual row level, so it does it at the partition or table level respectively.
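If you want to observe this behavior, you can inspect the locks while an update is in flight (the table name below is illustrative):
0: jdbc:hive2://...> show locks my_acid_table;
For a non-partitioned ACID table you should see the write lock held at the table level rather than on individual rows.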
11-08-2019
10:42 PM
@stryjz - You can download any version of HDP from the link below. https://docs.cloudera.com/HDPDocuments/index.html
11-08-2019
10:38 PM
@Tomas79 -
I am not sure whether what you are asking is possible, but you can control the size using the properties below.
You can set these parameters to restrict the size of the local directories under the NodeManager and trigger the DeletionService when the limit is reached:
yarn.nodemanager.delete.thread-count
yarn.nodemanager.localizer.cache.target-size-mb
yarn.nodemanager.localizer.cache.cleanup.interval-ms
Details under: https://blog.cloudera.com/resource-localization-in-yarn-deep-dive/
The yarn.nodemanager.localizer.cache.target-size-mb property defines the maximum disk space to be used for localized resources. Once the total disk usage of the cache exceeds this value, the DeletionService tries to remove files which are not used by any running containers.
The yarn.nodemanager.localizer.cache.cleanup.interval-ms property defines the interval at which unused resources are deleted if the total cache size exceeds the configured maximum. Unused resources are those which are not referenced by any running container.
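A minimal sketch of how these could be set (the values below are illustrative, not recommendations):
yarn.nodemanager.localizer.cache.target-size-mb=10240
yarn.nodemanager.localizer.cache.cleanup.interval-ms=600000
yarn.nodemanager.delete.thread-count=4
With these values the localizer cache would be capped at roughly 10 GB and checked every 10 minutes, with 4 threads available to the DeletionService.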
11-08-2019
10:28 PM
@dano-young - Did you try to start LLAP from Ambari? Can you please share the error message you are getting while restarting it from Ambari?
11-08-2019
10:23 PM
@sampathkumar_ma - In HDP, Hive supports only MapReduce and Tez as execution engines. Running Hive on Spark is not supported in HDP at this moment.
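You can check or set the engine from Beeline:
0: jdbc:hive2://...> set hive.execution.engine;
0: jdbc:hive2://...> set hive.execution.engine=tez;
The first command prints the current value; setting it to spark is not supported on HDP.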
10-22-2019
07:30 PM
@HadoopHelp - It seems the "load data inpath" command is the same in both cases. Please check whether you shared the same one by mistake. Also, let me know the error message you are getting while loading data into the table.
10-22-2019
07:11 PM
@stryjz - In the "Not working LLAP" case, queue utilization is almost 100%, which is why you are not able to run any queries. You can either increase the queue size or tune your LLAP settings. You can check the article below for tuning LLAP. https://community.cloudera.com/t5/Community-Articles/LLAP-sizing-and-setup/ta-p/247425
10-17-2019
01:31 AM
@rohan_kulkarni - The Tez UI relies on the Application Timeline Server, whose role is to act as a backing store for the application data generated during the lifetime of a YARN application. You can refer to the article below for more information: https://tez.apache.org/tez-ui.html
10-17-2019
01:24 AM
@rohan_kulkarni - If you are using HDP 2.6.5 or an older version, you can check this from the Tez View. The Tez UI has two tabs, "Hive Queries" and "All DAGs": "Hive Queries" shows each query's start and end time, and "All DAGs" shows all the information about the DAGs. Can you please check and confirm whether this works for you? The YARN UI shows the application start and end time. Otherwise, you need to grep the HiveServer2 logs for keywords to get the query details.
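A quick sketch of grepping the HiveServer2 log (the log path may differ in your environment):
# grep -i "Executing command" /var/log/hive/hiveserver2.log
Matching lines should include the query ID and query text, and the surrounding log lines carry the timestamps you are after.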
10-16-2019
10:37 AM
1 Kudo
@RP3003 - It's not possible to upgrade an individual component. You have to use the version shipped with that particular HDP release, or else upgrade the cluster to the required version. If your question is answered, please mark the reply as the accepted solution. If you find a reply useful, say thanks by clicking on the thumbs-up button.
10-16-2019
07:21 AM
@RP3003 - Atlas 0.8.0 ships with HDP 2.6.5. If you are looking for Atlas 2.0.0, then you have to install or upgrade to HDP 3.1.4, which is the latest version of HDP. https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.6.5/bk_release-notes/content/comp_versions.html https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/release-notes/content/comp_versions.html
10-16-2019
07:16 AM
@vikas88 - Can you please click on that particular application and share a screenshot with us? I suspect the application is not getting enough resources to launch the AM.
10-16-2019
07:13 AM
@rohan_kulkarni - I am not sure exactly what you are looking for. Are you looking for Tez View logs or YARN application logs? You also mentioned wanting to "pull out information like query text, start_time, user, etc." You can pull that information from the HiveServer2 logs as well: check which HS2 instance the query is running on and then check its logs accordingly.
10-16-2019
07:08 AM
@stryjz - LLAP stands for Live Long and Process, so the LLAP daemons keep running whether or not you are running a query. LLAP consists of long-lived daemons, which replace direct interactions with the HDFS DataNode, and a tightly integrated DAG-based framework. The LLAP YARN application therefore holds on to resources according to the configured LLAP daemon size. You can refer to the articles below for more information: https://cwiki.apache.org/confluence/display/Hive/LLAP https://community.cloudera.com/t5/Community-Articles/Hive-LLAP-deep-dive/ta-p/248893
10-15-2019
11:51 PM
@shyamshaw - The Spark execution engine is still not supported with Hive in HDP 3.x. https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.0/hive-overview/content/hive-apache-hive-3-architectural-overview.html
09-17-2019
01:40 AM
@TechAven - Can you run the command below on the Ambari server host and share the output?
# ambari-server status
Also, check the error messages in the Ambari server log (/var/log/ambari-server/ambari-server.log) from the time of the startup. If possible, share the Ambari server logs.
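To pull recent errors quickly, a sketch like this can help (adjust the path if your log directory differs):
# grep -iE "ERROR|Exception" /var/log/ambari-server/ambari-server.log | tail -20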
09-17-2019
01:37 AM
@msheean253 - For disk failures, it depends on the "DataNode failed disk tolerance" property; you can search for it in the HDFS configs in Ambari and set it according to how many failed disks you want to tolerate. As for DataNode failures: if the replication factor is set to 3, you can have at most 2 DataNodes down without any impact on the data.
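You can confirm the current DataNode health from the command line (run as the hdfs user):
# hdfs dfsadmin -report
The report lists live and dead DataNodes, which helps you see how many failures the cluster is currently tolerating.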
09-10-2019
02:17 AM
@RickWang - Can you please confirm where you saw the OutOfMemory error, and share a snippet of the log file? Also share the output of "ps -ef | grep -i hiveserver2" from the HiveServer2 host.
09-10-2019
02:07 AM
@venk - You can refer to the articles below for information about cross-realm trust: https://community.cloudera.com/t5/Community-Articles/How-does-a-cross-realm-trust-work/ta-p/245705 https://www.cloudera.com/documentation/enterprise/5-14-x/topics/cm_sg_kdc_def_domain_s2.html
09-03-2019
11:19 PM
@LeeFan - Can you please click on any one of the applications and share a screenshot with me? I suspect the ApplicationMaster is not getting enough memory to run the applications. Also, let me know the value of the "tez.am.resource.memory.mb" parameter in the Tez configs. If it is set to a high value, change it to a lower value and try again.
08-27-2019
04:29 AM
@harry_li - Can you check and confirm whether your MySQL is up and running? Also refer to the article below, which discusses the error message you are getting: https://community.cloudera.com/t5/Support-Questions/ambari-2-7-0-not-start/m-p/198737
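A quick way to check (the service name may be mysqld, mysql, or mariadb depending on your OS):
# systemctl status mysqld
# mysqladmin -u root -p status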