Member since: 11-20-2015
Posts: 24
Kudos Received: 1
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1650 | 07-29-2016 07:09 AM
07-07-2019 04:16 AM
Got the same error. Your hint works.

2019-07-07 13:10:17,764 ERROR [main] org.apache.nifi.web.server.JettyServer Unable to load flow due to: org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.SocketException: Address already in use (Listen failed)
org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.SocketException: Address already in use (Listen failed)
at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:323)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1008)
at org.apache.nifi.NiFi.<init>(NiFi.java:158)
at org.apache.nifi.NiFi.<init>(NiFi.java:72)
at org.apache.nifi.NiFi.main(NiFi.java:297)
Caused by: java.net.SocketException: Address already in use (Listen failed)
at java.net.PlainSocketImpl.socketListen(Native Method)
at java.net.AbstractPlainSocketImpl.listen(AbstractPlainSocketImpl.java:399)
at java.net.ServerSocket.bind(ServerSocket.java:376)
at java.net.ServerSocket.<init>(ServerSocket.java:237)
at java.net.ServerSocket.<init>(ServerSocket.java:128)
at org.apache.nifi.io.socket.SocketUtils.createServerSocket(SocketUtils.java:108)
at org.apache.nifi.io.socket.SocketListener.start(SocketListener.java:85)
at org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.start(SocketProtocolListener.java:97)
at org.apache.nifi.cluster.protocol.impl.NodeProtocolSenderListener.start(NodeProtocolSenderListener.java:64)
at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:314)
... 4 common frames omitted
2019-07-07 13:10:17,766 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
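Since the failing listener here is NiFi's cluster protocol socket, a minimal shell sketch for finding what already holds the port, assuming a default install layout (the path /opt/nifi/conf/nifi.properties and the port 9999 below are placeholders; nifi.cluster.node.protocol.port is the real property name):

# Look up the configured cluster protocol port
grep 'nifi.cluster.node.protocol.port' /opt/nifi/conf/nifi.properties
# List the process currently listening on that port (replace 9999 with the value found above)
sudo lsof -iTCP:9999 -sTCP:LISTEN

If another NiFi instance (or a leftover process) shows up, stopping it or changing the port in nifi.properties resolves the bind conflict.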
10-29-2017 01:44 AM
Thanks. It also works for me. I am using HDP 2.6.2 with Ambari 2.5. After installing, the default proxy value is hadoop.proxyuser.root.hosts=ambari1.ec2.internal. After changing it to hadoop.proxyuser.root.hosts=*, the error is resolved.
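For reference, a minimal sketch of verifying the change, assuming a default HDP layout where the effective config lives under /etc/hadoop/conf (in practice the value is edited through the Ambari UI so it survives config pushes):

# Show the proxyuser entries in the effective Hadoop configuration
grep -A1 'hadoop.proxyuser.root' /etc/hadoop/conf/core-site.xml
# After the change, the stanza should read:
#   <property>
#     <name>hadoop.proxyuser.root.hosts</name>
#     <value>*</value>
#   </property>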
10-26-2017 06:47 AM
Thanks for your information. I think

virtualenv venv. ./venv/bin/activate

should be two separate commands:

virtualenv venv
. ./venv/bin/activate
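For completeness, a minimal sketch of the full sequence, assuming virtualenv is already installed and a POSIX-compatible shell:

# Create the virtual environment in ./venv
virtualenv venv
# Activate it; the leading dot is the POSIX "source" command, so the space after it matters
. ./venv/bin/activate
# Confirm the active interpreter now comes from the venv
which python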
10-24-2017 03:46 PM
Thanks, it worked for me!
03-21-2017 02:06 AM
Yes, but I don't know the impact of adding a PK to the NEXT_COMPACTION_QUEUE_ID table, because this table belongs to the Hive metastore. I can add a PK to the table, but I am not sure all the other functions of Hive will work correctly without a full test, which is why I asked this question.
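A minimal sketch of the kind of change being discussed, assuming the metastore database is named metastore; this is a hypothetical workaround, not an official Hive schema change, so altering the metastore schema should be tested thoroughly first:

# Add a primary key on the single NCQ_NEXT column of the compaction-queue ID table
mysql metastore -e "ALTER TABLE NEXT_COMPACTION_QUEUE_ID ADD PRIMARY KEY (NCQ_NEXT);"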
03-16-2017 09:22 AM
Dear team, we are trying to build the Hive Metastore on Percona XtraDB Cluster, which is MySQL compatible: https://www.percona.com/software/mysql-database/percona-xtradb-cluster However, we got an error when running the initialization SQL scripts on Percona XtraDB. Error:

> desc NEXT_COMPACTION_QUEUE_ID;
+----------+------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------+------------+------+-----+---------+-------+
| NCQ_NEXT | bigint(20) | NO | | NULL | |
+----------+------------+------+-----+---------+-------+
> INSERT INTO NEXT_COMPACTION_QUEUE_ID VALUES(1);
ERROR 1105 (HY000): Percona-XtraDB-Cluster prohibits use of DML command on a table (metastore.NEXT_COMPACTION_QUEUE_ID) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER

I think we could resolve this problem by changing pxc_strict_mode to another value such as DISABLED; however, our database platform doesn't allow us to do that. This means that initializing Hive metastore tables that lack a PK will fail on Percona XtraDB Cluster. Has anybody met the same situation, or is there a way to avoid this problem without changing pxc_strict_mode?
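A minimal sketch for confirming the cluster's strict mode before running the metastore init scripts, assuming a local mysql client with sufficient privileges (ENFORCING and MASTER both reject DML on PK-less tables):

# Check the current PXC strict mode
mysql -e "SHOW VARIABLES LIKE 'pxc_strict_mode';"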
Labels:
- Apache Hive
12-15-2016 05:21 AM
@stevel Thanks for your answer. @Dominika Thanks for updating the docs.
12-09-2016 04:10 AM
1 Kudo
I have a question about accessing multiple AWS S3 buckets from different accounts in Hive. I have several S3 buckets which belong to different AWS accounts. I can access one of the buckets in Hive. However, I have to write fs.s3a.access.key and fs.s3a.secret.key into hive-site.xml, which means that one instance of Hive can only access one AWS S3 account. Is that right? I want to use buckets from different AWS S3 accounts in one Hive instance; is that possible?
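One possible approach, assuming a Hadoop build new enough to support S3A per-bucket configuration (Hadoop 2.8+); the bucket name bucket-a and the key values below are placeholders:

# Per-bucket S3A credentials in core-site.xml or hive-site.xml override the global fs.s3a.* keys:
#   <property>
#     <name>fs.s3a.bucket.bucket-a.access.key</name>
#     <value>ACCESS_KEY_FOR_ACCOUNT_A</value>
#   </property>
#   <property>
#     <name>fs.s3a.bucket.bucket-a.secret.key</name>
#     <value>SECRET_KEY_FOR_ACCOUNT_A</value>
#   </property>
# Buckets without a per-bucket entry fall back to the global fs.s3a.access.key / fs.s3a.secret.key.
# Verify access to the second account's bucket:
hadoop fs -ls s3a://bucket-a/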
Labels:
- Apache Hive
11-18-2016 09:01 AM
I have a question about accessing multiple AWS S3 buckets from different accounts in Hive. I have several S3 buckets which belong to different AWS accounts. Following your info, I can access one of the buckets in Hive. However, I have to write fs.s3a.access.key and fs.s3a.secret.key into hive-site.xml, which means that one instance of Hive can only access one AWS S3 account. Is that right? I want to use buckets from different AWS S3 accounts in one Hive instance; is that possible?
08-20-2016 02:28 PM
Thanks. I will try an SSH key.
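A minimal sketch of setting up key-based SSH login, assuming OpenSSH on both ends (user@remote-host is a placeholder):

# Generate a key pair (accept the default path; optionally set a passphrase)
ssh-keygen -t rsa -b 4096
# Install the public key on the remote host
ssh-copy-id user@remote-host
# Log in; this should no longer prompt for the account password
ssh user@remote-host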