Member since: 12-14-2015
Posts: 45
Kudos Received: 20
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 303 | 05-03-2016 02:27 PM
 | 321 | 04-27-2016 02:22 PM
 | 595 | 04-27-2016 08:00 AM
 | 266 | 04-21-2016 02:29 PM
 | 1202 | 02-03-2016 08:24 AM
02-09-2021
03:12 AM
Hi all, we have a problem with the Spark Thrift Server when the Spark Atlas Connector (SAC) is enabled. When we activate SAC we receive this error at startup:
21/01/11 08:25:37 INFO ThriftCLIService: Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V8
21/01/11 08:25:37 INFO SessionState: Created local directory: /tmp/8fb9febe-2e96-47c1-b0a6-416b46c204d7_resources
21/01/11 08:25:37 INFO SessionState: Created HDFS directory: /tmp/spark/spark/8fb9febe-2e96-47c1-b0a6-416b46c204d7
21/01/11 08:25:37 INFO SessionState: Created local directory: /tmp/spark/8fb9febe-2e96-47c1-b0a6-416b46c204d7
21/01/11 08:25:37 INFO SessionState: Created HDFS directory: /tmp/spark/spark/8fb9febe-2e96-47c1-b0a6-416b46c204d7/_tmp_space.db
21/01/11 08:25:37 INFO HiveSessionImpl: Operation log session directory is created: /tmp/spark/operation_logs/8fb9febe-2e96-47c1-b0a6-416b46c204d7
21/01/11 08:25:37 INFO StreamingQueryManager: Registered listener com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker
21/01/11 08:25:38 ERROR TThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:374)
at org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:451)
at org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:433)
at org.apache.thrift.transport.TSaslServerTransport.read(TSaslServerTransport.java:43)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:425)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:321)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:225)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:53)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:310)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
After this error the cycle starts again with the same error, every minute. Everything works correctly (in Atlas we can see the Spark data being updated), but after 24 hours Spark goes down (the Thrift Server every time; the History Server sometimes stays up). The Thrift Server stopped after 975 attempts to register the listener:
[root@coordinator03 spark2]# grep -o 'Registered listener com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker' spark-spark-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-coordinator03.out.2 | wc -l
975
[root@coordinator03 spark2]# grep -o 'Registered listener com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker' spark-spark-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-coordinator03.out.3 | wc -l
975
[root@coordinator03 spark2]# grep -o 'Registered listener com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker' spark-spark-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-coordinator03.out.4 | wc -l
975
[root@coordinator03 spark2]# grep -o 'Registered listener com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker' spark-spark-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-coordinator03.out.5 | wc -l
975
The final error before the service stopped is this:
21/01/12 00:35:37 ERROR ThriftCLIService: Error starting HiveServer2: could not start ThriftBinaryCLIService
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
at org.apache.thrift.server.TThreadPoolServer.execute(TThreadPoolServer.java:192)
at org.apache.thrift.server.TThreadPoolServer.serve(TThreadPoolServer.java:175)
at org.apache.hive.service.cli.thrift.ThriftBinaryCLIService.run(ThriftBinaryCLIService.java:99)
at java.lang.Thread.run(Thread.java:745)
Any idea how to resolve this? For the moment we have disabled SAC. Thanks a lot
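For anyone hitting the same OutOfMemoryError: "unable to create new native thread" usually means the process hit an OS thread limit rather than ran out of heap, so comparing the Thrift Server's live thread count against the user limit can confirm a leak like this one (a minimal sketch; the pid lookup is illustrative):
jps | grep HiveThriftServer2   # find the Thrift Server pid
ps -o nlwp= -p <pid>           # number of live threads in the process
ulimit -u                      # max processes/threads allowed for this user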
09-21-2018
01:56 PM
I have the same problem: a customer uses the WebHCat API to create or drop a DB/table on Hive. In HDP 3.0, which API can be used for this operation? I tried to find some info online but did not find anything about it.
06-01-2016
02:54 PM
1 Kudo
maria_dev is a read-only user. You need to enable the admin user: log into the sandbox via SSH (user: root, pass: hadoop) and run ambari-admin-password-reset from the root home, then enter a password for the Ambari admin user. After the change, log into Ambari as the admin user and you will be able to use the Service Actions button 🙂
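For reference, a minimal sketch of those steps, assuming the sandbox's default SSH port forwarding on 2222:
ssh root@127.0.0.1 -p 2222       # password: hadoop
ambari-admin-password-reset      # prompts for the new Ambari admin password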
06-01-2016
02:46 PM
PS: which user are you using? Are you using the Ambari admin user? I think you are not using an Ambari admin if you are not able to restart a service.
06-01-2016
02:42 PM
Or you can start/stop (not restart) the components under Hosts -> sandbox.hortonworks.com -> the service you want to restart (in the case of HBase: HBase Master and RegionServer).
06-01-2016
02:40 PM
I use sandbox 2.4 with Ambari 2.2.1 and I have the button. Can you attach a screenshot, please?
06-01-2016
02:25 PM
Hi,
- Log in to the Ambari console
- Select the service you want to restart (e.g. HBase)
- Click on Service Actions -> Restart (or Stop and then Start)
ambari.png
06-01-2016
01:52 PM
1 Kudo
Can you attach the full ambari-server and ambari-agent logs? You can try this command on the host where you manually installed the ambari-agent:
ambari-agent reset <ambari_server_hostname>
This command resets the Ambari agent and forces a connection to the ambari-server host. The other way is to uninstall the ambari-agent package and use the automatic agent installation from ambari-server.
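A typical sequence on the agent host looks like this (the server hostname is hypothetical):
ambari-agent stop
ambari-agent reset ambari-server.example.com   # re-point the agent at the Ambari server
ambari-agent start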
05-12-2016
09:42 AM
Is your cluster kerberized?
If you changed the domain, you need to create new keytabs for all services and reset all the related options in Ambari.
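If the keytabs are managed by hand rather than through the Ambari Kerberos wizard, regenerating one looks roughly like this (principal, realm and path are hypothetical):
kadmin -q "addprinc -randkey nn/master01.example.com@NEW.REALM"
kadmin -q "xst -k /etc/security/keytabs/nn.service.keytab nn/master01.example.com@NEW.REALM"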
05-12-2016
09:38 AM
I had the same error a while ago.
First, verify /etc/hosts, then verify that the Ambari node can connect to all nodes and that all nodes can connect to the Ambari node (e.g. ping or an SSH connection).
Then I resolved it by resetting all the agents (which I had stopped beforehand):
ambari-agent reset <Ambari-server-hostname>
At the next restart the agents started transmitting information successfully.
I hope it can help you
05-11-2016
01:47 PM
A simple example for better understanding. Nice, @Ana Gillan 🙂
05-11-2016
12:29 PM
For HDFS, Ranger policies are evaluated first, and native HDFS permissions act as a fallback when no policy matches. A good starting point is to deny on HDFS and grant access through Ranger, as in the example below.
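For example, locking a path down at the HDFS level before granting access through a Ranger policy (the path is illustrative):
hdfs dfs -chmod -R 000 /data/project   # nobody gets in via plain HDFS permissions; access is then granted only by Ranger policies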
05-03-2016
02:39 PM
This script must be launched only BEFORE you install HDP. NOT AFTER, or it will remove all services and users!
This script does not delete the Ambari user (which is stored in the DB you configured for Ambari); it deletes only the OS users. The Ambari user is not the same as the OS user.
05-03-2016
02:30 PM
I responded in the other post:
it deletes all users, packages, directories and configs of all HDP services, for a clean installation 🙂
05-03-2016
02:27 PM
1 Kudo
The problem is the hadoop user's password:
CREATE-USER: Creating user hadoop
CREATE-USER: Setting password for hadoop
CREATE-USER FAILURE: Exception calling "SetInfo" with "0" argument(s): "The password does not meet the password policy requirements. Check the minimum password length, password complexity and password history requirements."
After this error the setup starts the HDP uninstall process:
HDP: Starting rollback
HDP: C:\HadoopInstallFiles\HadoopPackages\hdp-2.4.0.0-winpkg\scripts\uninstall.ps1
Try to respect the password complexity policy set on your Win2012 server (8 characters, symbols, numbers, etc.).
05-03-2016
02:03 PM
Yes, you can use that command to delete the users one by one. But with the Python script you can delete all users, packages, directories and configs of all HDP services in one shot, for a clean installation 🙂 Cheers
05-03-2016
01:25 PM
1 Kudo
Hi, launch this script on all hosts, as the root user:
python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py --silent
This script deletes all users, packages, directories and configs of all HDP services, for a clean installation 🙂
04-29-2016
12:52 PM
Hello, I came across a particular issue. The active NameNode (master01) returned a socket timeout on ZKFC, and soon after it automatically failed over, bringing master02 to active. But master01 remained in a stalemate: in Ambari the NameNode appeared up, but without a state (active or standby); the NameNode process on the server was up and answered calls. In the NameNode log there are no errors; in the ZKFC log there are some SocketTimeouts (I attached the log). To resolve this situation we had to restart the NameNode service on master01, which automatically went to standby as soon as it started. Then I tried many manual failovers, with positive results every time. In the system log there are no errors, the LAN is always up, and there are no errors communicating with the server. As I wrote above, the NameNode service is up and running on both servers. Any idea what might have happened? PS: the HDFS service works correctly. nn-errors.txt
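For reference, the manual failover tests mentioned above can be driven with commands along these lines (nn1/nn2 are the HA service IDs defined in hdfs-site.xml, so they vary per cluster):
hdfs haadmin -getServiceState nn1   # reports active or standby
hdfs haadmin -failover nn1 nn2      # orderly failover from nn1 to nn2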
04-28-2016
03:59 PM
Hi @Ludovic Rouleau, I think the only place where it could do the distcp is in the first action:
action name = "shell_date"
Look at the script it loads; you should find something like:
hadoop distcp hdfs://nn1:8020/xxx hdfs://nn2:8020/xxx
Or you could try to skip the first action and start the workflow directly from the second action, changing:
<start to="shell_date"/>
to:
<start to="maj_t"/>
04-28-2016
03:14 PM
Hi @fayez murad,
ambari-server start    # start Ambari
ambari-server status   # see the status of Ambari
ambari-server stop     # stop Ambari
To log in to Ambari, open a browser and go to:
http://[ip-sandbox]:8080   (sandbox LAN bridged)
http://localhost:8080      (sandbox LAN on NAT)
Login with user: admin, pass: admin
04-28-2016
10:37 AM
OK, the upgrade has now finished and everything works correctly. Thanks @Ignacio Pérez Torres
04-28-2016
08:47 AM
Great! I can install it now; I'll try to complete the process.
04-27-2016
02:25 PM
PS: you can download the HDP Windows version here:
http://hortonworks.com/downloads/#data-platform
04-27-2016
02:22 PM
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0-Win/bk_HDP_Install_Win/content/ref-9bdea823-d29d-47f2-9434-86d5460b9aa9.1.html
Windows 10 is not officially supported (only Windows Server editions), but I followed this guide to install HDP on a single Windows 10 x64 machine and it works.
04-27-2016
10:47 AM
you are welcome
04-27-2016
09:23 AM
1 Kudo
You can use it, but you need to change the heap size of the HDFS/YARN services, or you will get a Java heap size error (the default config with only HDFS/YARN/ZooKeeper uses 4 GB).
By the way, I think it is not a good choice, because 4 GB is the minimum to work correctly, and you can reserve at most 3 GB for the VM (if you use Linux; if you use Windows, 2 GB is reserved for it :D)
04-27-2016
09:10 AM
Can you attach the full log? By the way, I think the problem is --target-dir: use /user/root/test, for two reasons:
1) Do you have permission to write to /user/maria_dev as the root user? By default, no.
2) The target dir must not already exist, or the job fails with this error:
ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://sandbox.hortonworks.com:8020/user/root/test already exists
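If the directory is left over from a previous run, removing it first also works (path taken from the error above):
hdfs dfs -rm -r /user/root/test   # then re-run the sqoop import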
04-27-2016
08:00 AM
Hi, the correct syntax is:
select empid, ename from emp WHERE eid > '200' and $CONDITIONS
(no quotes around $CONDITIONS)
04-21-2016
02:29 PM
3 Kudos
You need to use Sqoop for a direct import into Hive from SQL Server. First download the SQL Server JDBC driver:
https://www.microsoft.com/en-us/download/details.aspx?id=11774
Place the jar on the Sqoop master server under:
/usr/hdp/current/sqoop-server/lib
Use this command for the import:
sqoop import --connect 'jdbc:sqlserver://[SQL_SERVER_NAME]:[SQL_PORT];databaseName=[DB_NAME]' --username "[SQL_USERNAME]" --password "[PASSWORD]" --query '[INSERT QUERY HERE] WHERE $CONDITIONS' -m 1 --hive-import --hive-database [DB_HIVE] --create-hive-table --hive-table [HIVE_TABLE]
If you use --query, the query must contain WHERE $CONDITIONS.
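Filled in with hypothetical values (the Microsoft JDBC driver uses ;databaseName=... in the connection URL), a complete invocation would look something like:
sqoop import --connect 'jdbc:sqlserver://sqlhost.example.com:1433;databaseName=sales' --username sqoop_user --password '********' --query 'select empid, ename from emp WHERE $CONDITIONS' -m 1 --hive-import --hive-database default --create-hive-table --hive-table emp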
04-21-2016
10:39 AM
Hello, which version of the stack and of Ambari are you using?
Can you attach the HiveServer2 log or report the error displayed in Ambari?