Member since: 07-18-2016
Posts: 262
Kudos Received: 12
Solutions: 21
My Accepted Solutions
Title | Views | Posted
---|---|---
| 6675 | 09-21-2018 03:16 AM
| 3196 | 07-25-2018 05:03 AM
| 4141 | 02-13-2018 02:00 AM
| 1930 | 01-21-2018 02:47 AM
| 37948 | 08-08-2017 10:32 AM
01-21-2018 01:01 PM
After Fix Repository
01-21-2018 02:47 AM
Issue resolved. Performing the steps below worked for me; after that I was able to register the target version on Ambari. The stack version must be updated after upgrading ambari-server from 1.6 to 2.x. See https://ambari.apache.org/1.2.3/installing-hadoop-using-ambari/content/ambari-chap9-3.html

[root@server ~]# ambari-server upgradestack HDP-2.4
Using python /usr/bin/python
Upgrading stack of ambari-server
Ambari Server 'upgradestack' completed successfully.
[root@server ~]# ambari-server start
Using python /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start....................
Ambari Server 'start' completed successfully.
[root@server ~]#
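As a quick check that the target stack is now registered, you can query the Ambari REST API (a sketch, assuming the default port 8080 and admin credentials; adjust the host and user for your cluster):

[root@server ~]# curl -u admin:admin http://localhost:8080/api/v1/stacks/HDP/versions/2.4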
01-20-2018 03:08 PM
An internal system exception occurred: Stack HDP-2.4 doesn't have upgrade packages
01-17-2018 05:15 AM
Error: WARN hive.metastore: set_ugi() not successful, Likely cause: new client talking to old server. Continuing without it.
org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset

Reason for failure: This error occurs when the number of client connections exceeds hive.server2.thrift.max.worker.threads; HiveServer2 stops accepting new connections and the request fails. This should be handled more gracefully by the server and the JDBC driver, so that the end user becomes aware of the problem and can take appropriate steps (close existing connections, bump the config value, or use multiple server instances with dynamic service discovery enabled). The behaviour of the background thread pool should also be reviewed so that it is well defined when the pool is exhausted. Ideally, some form of general admission control would be a better solution, so that new work is not accepted unless sufficient resources are available and the server degrades gracefully under overload.

Recommendation for increasing Hive Thrift connections: rather than raising the default of 500 in hive-site.xml, it is recommended to add a new HiveServer2 instance on another machine.
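For reference, a minimal sketch of checking the current limit (the property name and its default of 500 are standard; the config path /etc/hive/conf/hive-site.xml is an assumption and may differ on your cluster):

[root@server ~]# grep -A1 "hive.server2.thrift.max.worker.threads" /etc/hive/conf/hive-site.xml
<name>hive.server2.thrift.max.worker.threads</name>
<value>500</value>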
12-17-2017 08:04 AM
1 Kudo
Thank you, your comments are appreciated. As you mentioned, and in addition, we can change the input split size to suit our requirements using the parameters below (a sketch follows the list).

mapred.max.split.size: sets the maximum input split size; pass it while running the job.
dfs.block.size: the global HDFS block size parameter, applied when data is stored in the cluster.
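A minimal sketch of passing the split-size cap at job submission time (the jar name, driver class, and paths are hypothetical; the value is in bytes, 268435456 = 256 MB, and -D options are honored only if the driver uses ToolRunner):

[root@server ~]# hadoop jar my-job.jar MyDriver -D mapred.max.split.size=268435456 /input/path /output/path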
12-13-2017 12:24 PM
1 Kudo
We have an input split parameter and a block size parameter in Hadoop. Why are these two parameters required, and what is each one used for? Block size (dfs.block.size) is fixed when data is stored; the input split size takes effect while a job is running. Why does a Hadoop cluster need both?
Labels: Apache Hadoop
11-02-2017 04:02 PM
WARN hive.metastore: set_ugi() not successful, Likely cause: new client talking to old server. Continuing without it.
org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset
INFO tool.ConnectorExportTool: com.teradata.connector.common.exception.ConnectorException: org.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:161)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:65)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.send_get_table(ThriftHiveMetastore.java:1212)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table(ThriftHiveMetastore.java:1203)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.tableExists(HiveMetaStoreClient.java:1274)
at com.teradata.connector.hive.processor.HiveInputProcessor.inputPreProcessor(HiveInputProcessor.java:84)
at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:115)
at com.teradata.connector.common.tool.ConnectorExportTool.run(ConnectorExportTool.java:61)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at com.teradata.connector.common.tool.ConnectorExportTool.main(ConnectorExportTool.java:744)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
Caused by: java.net.SocketException: Connection reset
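Since the trace shows the metastore Thrift transport dropping mid-call, a first sanity check is whether the client can reach the metastore at all (a sketch, assuming the default metastore port 9083 and a standard config path; substitute your actual metastore host):

[root@server ~]# grep -A1 "hive.metastore.uris" /etc/hive/conf/hive-site.xml
[root@server ~]# nc -vz metastore-host 9083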
Labels: Apache Hadoop, Apache Hive
09-03-2017 12:06 PM
1 Kudo
As Sandeep said, it is a resource issue, as the error you posted indicates: "NodeManager from ubuntu-VirtualBox doesn't satisfy minimum allocations". For background, read the article below; it will definitely give you some idea. https://mapr.com/blog/best-practices-yarn-resource-management/
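That warning usually means the NodeManager is advertising fewer resources than the scheduler's minimum allocation. A sketch of the two settings to compare in yarn-site.xml (the property names are standard; the config path is an assumption): the NodeManager's offered memory must be at least the scheduler minimum.

[root@server ~]# grep -A1 "yarn.nodemanager.resource.memory-mb" /etc/hadoop/conf/yarn-site.xml
[root@server ~]# grep -A1 "yarn.scheduler.minimum-allocation-mb" /etc/hadoop/conf/yarn-site.xml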
08-18-2017 02:48 AM
I would like to see the time at which a grant was given on a table or role. Is that possible?
08-17-2017 08:19 AM
I am able to verify the owner, the grantor, and which role/user is assigned. Is it possible to list the time/date at which a grant was given?

hive> show grant on table database.table_name;
OK
database table_name user USER DELETE true 1447912318000 user
database table_name user USER INSERT true 1447912318000 user
database table_name user USER SELECT true 1447912318000 user
database table_name user USER UPDATE true 1447912318000 user
database table_name user ROLE DELETE true 1447913961000 root
database table_name user ROLE INSERT true 1447913961000 root
database table_name user ROLE SELECT true 1447913961000 root
database table_name user ROLE UPDATE true 1447913961000 root
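Note that the long numeric column in this output is already the grant time, expressed in epoch milliseconds. A minimal sketch of converting one value to a readable date (drop the last three digits to get seconds; output shown for UTC, your timezone may differ):

[root@server ~]# date -d @1447912318
Thu Nov 19 05:51:58 UTC 2015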
Labels: Apache Hadoop, Apache Hive