Member since
11-20-2015
24
Posts
1
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 1607 | 07-29-2016 07:09 AM |
07-07-2019
04:16 AM
Got the same error; your hint works. Here is the log (the Japanese message "アドレスは既に使用中です" means "Address already in use"):
2019-07-07 13:10:17,764 ERROR [main] org.apache.nifi.web.server.JettyServer Unable to load flow due to: org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.SocketException: Address already in use (Listen failed)
org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.SocketException: Address already in use (Listen failed)
at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:323)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1008)
at org.apache.nifi.NiFi.<init>(NiFi.java:158)
at org.apache.nifi.NiFi.<init>(NiFi.java:72)
at org.apache.nifi.NiFi.main(NiFi.java:297)
Caused by: java.net.SocketException: Address already in use (Listen failed)
at java.net.PlainSocketImpl.socketListen(Native Method)
at java.net.AbstractPlainSocketImpl.listen(AbstractPlainSocketImpl.java:399)
at java.net.ServerSocket.bind(ServerSocket.java:376)
at java.net.ServerSocket.<init>(ServerSocket.java:237)
at java.net.ServerSocket.<init>(ServerSocket.java:128)
at org.apache.nifi.io.socket.SocketUtils.createServerSocket(SocketUtils.java:108)
at org.apache.nifi.io.socket.SocketListener.start(SocketListener.java:85)
at org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.start(SocketProtocolListener.java:97)
at org.apache.nifi.cluster.protocol.impl.NodeProtocolSenderListener.start(NodeProtocolSenderListener.java:64)
at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:314)
... 4 common frames omitted
2019-07-07 13:10:17,766 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
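A quick way to confirm the conflict before restarting is to check whether something else already listens on the port NiFi is trying to bind. A minimal sketch, assuming a Linux host with iproute2's `ss` and a standard conf/nifi.properties (the property name is the stock cluster protocol port setting; adjust if your error points at a different port):

```shell
# Read the cluster protocol port NiFi tries to bind on startup
# (nifi.cluster.node.protocol.port in conf/nifi.properties).
PORT=$(grep '^nifi.cluster.node.protocol.port=' conf/nifi.properties | cut -d= -f2)

# Check whether another process is already listening on that port.
if ss -tln | grep -q ":${PORT} "; then
    echo "Port ${PORT} is already in use -- stop the other process or change the port"
else
    echo "Port ${PORT} is free"
fi
```

If the port is taken, either stop the conflicting process or pick an unused port in nifi.properties and restart.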
02-15-2018
06:06 AM
I had the same issue on RHEL 7, with the following error:
Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_3_0_235-hdfs' returned 1.
Error: Package: hadoop_2_6_3_0_235-hdfs-2.7.3.2.6.3.0-235.x86_64 (HDP-2.6-repo-101)
Requires: libtirpc-devel
You could try using --skip-broken to work around the problem
Solution: make sure the "Red Hat Enterprise Linux Server 7 Optional (RPMs)" repository is enabled on all nodes.
Check whether it is enabled or disabled:
# yum repolist all
!rhui-REGION-rhel-server-optional/7Server/x86_64 Red Hat Enterprise Linux Server 7 Optional (RPMs) disabled
Enable the optional RPMs repository:
# yum-config-manager --enable rhui-REGION-rhel-server-optional
Cross-verify with the first command that the optional RPMs repository is now enabled:
# yum repolist all
!rhui-REGION-rhel-server-optional/7Server/x86_64 Red Hat Enterprise Linux Server 7 Optional (RPMs) enabled: 13,201
08-14-2017
02:46 PM
There are actually several tables missing primary keys, all related to transactions. I've submitted HIVE-17306 to address this, as I currently don't know the impact of adding primary keys to all of these tables. On HDP 2.6 it appears that the following tables don't have primary keys:
completed_txn_components
next_compaction_queue_id
next_lock_id
next_txn_id
txn_components
write_set
On another note, I'm not sure you can add surrogate keys, as it looks like there are some unqualified inserts in the transaction metastore code that don't explicitly list which columns are being inserted. This prevents simply adding an extra column.
10-26-2017
06:47 AM
Thanks for your information. I think the line
virtualenv venv. ./venv/bin/activate
should be two separate commands:
virtualenv venv
. ./venv/bin/activate
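The leading dot matters: it is the POSIX `source` command, so the activate script runs in the current shell rather than a throwaway subshell. A minimal sketch (using the standard-library `python3 -m venv` as a stand-in in case the `virtualenv` package is not installed):

```shell
# Create the virtual environment (equivalently: virtualenv venv).
python3 -m venv venv

# Source the activate script -- note the leading dot. Without it, the
# script runs in a subshell and PATH/VIRTUAL_ENV changes are lost.
. ./venv/bin/activate

# VIRTUAL_ENV is now set and points at ./venv.
echo "$VIRTUAL_ENV"
```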
12-15-2016
05:21 AM
@stevel Thanks for your answer. @Dominika Thanks for updating the docs.
11-22-2017
05:34 AM
Hi, this issue was resolved by the following setting: hadoop.proxyuser.root.hosts=* You can also see the answer in the comment below. https://community.hortonworks.com/comments/144449/view.html
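In core-site.xml that setting looks like the fragment below (a sketch: the `groups` property is not mentioned in the original answer, but proxyuser configurations commonly set it alongside `hosts`):

```
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <!-- assumption: often paired with the hosts property above -->
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
```

Restart the affected Hadoop services after changing proxyuser settings so the new values take effect.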
08-20-2016
02:28 PM
Thanks. I will try an SSH key.
07-29-2016
09:13 AM
Glad that it worked out. Let us know if you believe something is missing from the docs and could be improved.
07-22-2016
01:25 AM
@bbihari Thanks for your comment. It worked.
06-26-2019
12:06 AM
How do we take care of S3 user-level permissions if the user does the following? Can we leverage Ranger HDFS policies to restrict S3 permissions when the user goes through the HDFS client?
hdfs dfs -cat s3a://s3hdptest/S3HDPTEST.csv