Member since: 11-20-2015
Posts: 24
Kudos Received: 1
Solutions: 1
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 821 | 07-29-2016 07:09 AM |
07-07-2019 04:16 AM
Got the same error. Your hint works.
2019-07-07 13:10:17,764 ERROR [main] org.apache.nifi.web.server.JettyServer Unable to load flow due to: org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.SocketException: Address already in use (Listen failed)
org.apache.nifi.lifecycle.LifeCycleStartException: Failed to start Flow Service due to: java.net.SocketException: Address already in use (Listen failed)
at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:323)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1008)
at org.apache.nifi.NiFi.<init>(NiFi.java:158)
at org.apache.nifi.NiFi.<init>(NiFi.java:72)
at org.apache.nifi.NiFi.main(NiFi.java:297)
Caused by: java.net.SocketException: Address already in use (Listen failed)
at java.net.PlainSocketImpl.socketListen(Native Method)
at java.net.AbstractPlainSocketImpl.listen(AbstractPlainSocketImpl.java:399)
at java.net.ServerSocket.bind(ServerSocket.java:376)
at java.net.ServerSocket.<init>(ServerSocket.java:237)
at java.net.ServerSocket.<init>(ServerSocket.java:128)
at org.apache.nifi.io.socket.SocketUtils.createServerSocket(SocketUtils.java:108)
at org.apache.nifi.io.socket.SocketListener.start(SocketListener.java:85)
at org.apache.nifi.cluster.protocol.impl.SocketProtocolListener.start(SocketProtocolListener.java:97)
at org.apache.nifi.cluster.protocol.impl.NodeProtocolSenderListener.start(NodeProtocolSenderListener.java:64)
at org.apache.nifi.controller.StandardFlowService.start(StandardFlowService.java:314)
... 4 common frames omitted
2019-07-07 13:10:17,766 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
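For anyone hitting the same thing: the exception comes from the cluster protocol listener failing to bind, so it helps to check what is configured and what already holds the port before restarting NiFi. A minimal sketch, assuming a default-style install path and using 9088 only as a placeholder port (use whatever your nifi.properties actually sets):

# Show the ports NiFi is configured to listen on (install path is an assumption)
grep -E 'nifi\.web\.https?\.port|nifi\.cluster\.node\.protocol\.port' /opt/nifi/conf/nifi.properties
# See whether another process (e.g. a leftover NiFi instance) already listens on that port
sudo netstat -tlnp | grep 9088   # or: sudo lsof -i :9088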
10-29-2017 01:44 AM
Thanks, it also works for me. I am using HDP 2.6.2 with Ambari 2.5. After installation, the default proxy value is hadoop.proxyuser.root.hosts=ambari1.ec2.internal. After changing it to hadoop.proxyuser.root.hosts=*, the error is resolved.
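For reference, a quick way to confirm the change actually took effect on a node after saving it in Ambari (HDFS configs, custom core-site) and restarting the required services; just a sketch using standard Hadoop tooling:

# Read the effective proxyuser values from the node's core-site.xml
hdfs getconf -confKey hadoop.proxyuser.root.hosts
hdfs getconf -confKey hadoop.proxyuser.root.groups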
10-26-2017 06:47 AM
Thanks for your information. I think "virtualenv venv. ./venv/bin/activate" should be:
virtualenv venv
. ./venv/bin/activate
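In other words, the period before ./venv/bin/activate is the shell "source" command and has to be its own step. A minimal sketch of the full sequence:

# Create an isolated Python environment in ./venv
virtualenv venv
# Source the activation script into the current shell (note the leading dot and space)
. ./venv/bin/activate
# Optional: confirm the venv's interpreter is now first on PATH
which python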
10-24-2017 03:46 PM
Thanks, it worked for me!
03-21-2017 02:06 AM
Yes, but I don't know the impact of adding a PK to the NEXT_COMPACTION_QUEUE table, because this table belongs to the Hive metastore. I can add a PK to it, but I am not sure that all the other functions of Hive will work correctly without a full test, which is why I asked this question.
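For completeness, the change I had in mind is roughly the statement below; it is an untested sketch (the MySQL login is a placeholder, the table name is the one from the error in the original question below), and whether the rest of Hive still behaves correctly afterwards is exactly the open question:

# Untested sketch: give the single-column table an explicit primary key so
# pxc_strict_mode = ENFORCING accepts DML against it
mysql -u hive -p metastore -e "ALTER TABLE NEXT_COMPACTION_QUEUE_ID ADD PRIMARY KEY (NCQ_NEXT);"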
03-16-2017 09:22 AM
Dear team, We are trying to build the Hive Metastore on Percona XtraDB Cluster, which is MySQL compatible: https://www.percona.com/software/mysql-database/percona-xtradb-cluster However, we got an error when running the initialization SQL scripts on Percona XtraDB. Error:
> desc NEXT_COMPACTION_QUEUE_ID;
+----------+------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------+------------+------+-----+---------+-------+
| NCQ_NEXT | bigint(20) | NO | | NULL | |
+----------+------------+------+-----+---------+-------+
> INSERT INTO NEXT_COMPACTION_QUEUE_ID VALUES(1);
ERROR 1105 (HY000): Percona-XtraDB-Cluster prohibits use of DML command on a table (metastore.NEXT_COMPACTION_QUEUE_ID) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER
I think we can resolve this problem by changing pxc_strict_mode to another value such as DISABLED; however, our database platform doesn't allow us to do that. This means that initializing Hive metastore tables that have no PK will fail on Percona XtraDB Cluster. Has anybody met the same situation, or is there any way to avoid this problem without changing pxc_strict_mode?
Labels: Apache Hive
12-15-2016 05:21 AM
@stevel Thanks for your answer. @Dominika Thanks for updating the docs.
12-09-2016 04:10 AM
1 Kudo
I have a question about accessing multiple AWS S3 buckets from different accounts in Hive. I have several S3 buckets which belong to different AWS accounts, and I can access one of the buckets in Hive. However, I have to write fs.s3a.access.key and fs.s3a.secret.key into hive-site.xml, which means that one instance of Hive can only access one AWS S3 account. Is that right? I want to use buckets from different AWS S3 accounts in one Hive instance; is that possible?
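One approach that should avoid a single global key pair is S3A's per-bucket configuration (available in newer Hadoop releases), where credentials are scoped by bucket name; a sketch with placeholder bucket names and keys, and the same properties could also go into hive-site.xml instead of the command line:

# Per-bucket S3A credentials: one access/secret pair per bucket (names and keys are placeholders)
hive --hiveconf fs.s3a.bucket.bucket-from-account-a.access.key=AKIAEXAMPLEA \
     --hiveconf fs.s3a.bucket.bucket-from-account-a.secret.key=secretA \
     --hiveconf fs.s3a.bucket.bucket-from-account-b.access.key=AKIAEXAMPLEB \
     --hiveconf fs.s3a.bucket.bucket-from-account-b.secret.key=secretB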
Labels: Apache Hive
11-18-2016 09:01 AM
I have a question about accessing multiple AWS S3 buckets from different accounts in Hive. I have several S3 buckets which belong to different AWS accounts. Following your info, I can access one of the buckets in Hive. However, I have to write fs.s3a.access.key and fs.s3a.secret.key into hive-site.xml, which means that one instance of Hive can only access one AWS S3 account. Is that right? I want to use buckets from different AWS S3 accounts in one Hive instance; is that possible?
08-20-2016 02:28 PM
Thanks. I will try the SSH key.
08-19-2016 05:39 AM
I downloaded the Cloudbreak image and uploaded it to my OpenStack environment: https://public-repo-1.hortonworks.com/HDP/cloudbreak/cloudbreak-2016-05-26-11-18.img Then I created instances with this image; however, I can't log in to them from the OpenStack console page. I tried to set admin_pass, but it didn't work. What is the default account/password of this image file?
ambari_cbgateway_0:
  type: OS::Nova::Server
  properties:
    image: { get_param: image }
    flavor: { get_param: flavor }
    key_name: { get_param: key }
    admin_user: centos
    admin_pass: ssopassword
    metadata: {"cb_instance_private_id":"0","cb_instance_group_name":"cbgateway"}
    networks:
      - network: { get_param: private_network }
    user_data_format: SOFTWARE_CONFIG
    user_data: { get_resource: core_user_data_config }
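Since the Heat template sets admin_user: centos and passes a nova key pair via key_name, logging in over SSH with that key pair (instead of a console password) appears to be the intended path; a sketch, where the key file and address are placeholders:

# Log in with the private key of the nova key pair referenced by key_name;
# the user name comes from admin_user in the template
ssh -i ~/.ssh/my-openstack-key.pem centos@<instance-floating-ip>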
Labels: Hortonworks Cloudbreak
07-29-2016 07:09 AM
Self response: This problem is resolved. Because I had imported a wrong .crt file, one that was not for our HTTPS server, the "credential create" command failed. After putting the correct .crt file under "certs/trusted", I created a new credential successfully.
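For anyone hitting the same error, one way to make sure the file dropped into certs/trusted is really the certificate the HTTPS endpoint serves is to fetch it directly from the server; a sketch, where keystone.example.com stands in for the masked keystone host:

# Pull the certificate actually presented by the keystone endpoint and save it for certs/trusted
openssl s_client -connect keystone.example.com:5000 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > sso.crt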
07-29-2016 03:04 AM
/cbreak_cloudbreak_1 | Importing certificates to the default Java certificate trust store.
/cbreak_registrator_1 | 2016/07/29 01:44:42 registrator: added: dd24262dd30c af7ab373046f:cbreak_cloudbreak_1:8080
/cbreak_consul_1 | 2016/07/29 01:44:42 [INFO] agent: Synced service 'af7ab373046f:cbreak_cloudbreak_1:8080'
/cbreak_cloudbreak_1 | Certificate was added to keystore
/cbreak_cloudbreak_1 | Certificate added to default Java trust store with alias sso.crt.
/cbreak_cloudbreak_1 | Starting the Cloudbreak application...
/cbreak_cloudbreak_1 | + '[' true == false ']'
/cbreak_cloudbreak_1 | + java -jar /cloudbreak.jar
I checked the log file and found "Certificate added to default Java trust store with alias sso.crt", so I think the .crt file was added correctly. However, I still get an SSL error when accessing the HTTPS endpoint.
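To double-check that the alias from that log line really landed in the JVM trust store, it can be listed from inside the container; a sketch assuming the cacerts path mentioned in the question below and the default "changeit" store password:

# Inside the Cloudbreak container: confirm the sso.crt alias is present in the trust store
sudo docker exec -it cbreak_cloudbreak_1 \
  keytool -list -keystore /etc/ssl/certs/java/cacerts -storepass changeit -alias sso.crt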
07-29-2016 02:06 AM
Dear team (I posted the same content on GitHub: https://github.com/sequenceiq/cloudbreak/issues/1825). I am trying to use Cloudbreak to create a Hadoop cluster in our OpenStack environment, but got some errors when creating credentials.
Environment:
1. OpenStack version: Juno
2. CentOS Linux release 7.2
I found the same problem in #948 and tried the same approach:
1. Copy the .crt file into the Docker container.
2. Use "keytool -import" to import it into /etc/ssl/certs/java/cacerts.
3. Restart the container cbreak_cloudbreak_1.
4. Run "credential create", which failed.
CLI:
credential create --OPENSTACK --name ynwm --description "keystone.(masked)" --userName sso --password (masked) --tenantName query-engine-test --endPoint https://keystone.(masked):5000/v2.0/ --sshKeyString "ssh-rsa AAA.....(masked)..5Q== sso_created" --publicInAccount true
Error:
Command failed java.lang.RuntimeException: Failed to verify the credential: Could not verify credential [credential: 'ynwm'], detailed message: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
I found another document, https://github.com/sequenceiq/cloudbreak-docs/blob/master/docs/openstack/deployer.md, in which it says:
If your OpenStack is secured with a self-signed certificate, you need to import that certificate into Cloudbreak, or else Cloudbreak won't be able to communicate with your OpenStack. To import the certificate, place the certificate file in the generated certs directory /certs/trusted/. The trusted directory does not exist by default, so you need to create it. Cloudbreak will automatically pick up these certificates and import them into its truststore upon start.
So I copied my .crt file into certs/trusted and restarted cbd:
[sso@cloudbreak02 ~/tools/cloudbreak-deployment]$ sudo docker exec -it cbreak_cloudbreak_1 bash
root@dd24262dd30c:/# ls -al /certs/trusted/
total 16
drwxr-xr-x 2 root root 4096 Jul 29 01:41 .
drwxr-xr-x 3 root root 4096 Jul 29 01:39 ..
-rw-r--r-- 1 root root 4753 Jul 27 08:17 sso.crt
root@dd24262dd30c:/#
sudo cbd start
sudo cbd util cloudbreak-shell
credential create --OPENSTACK .... (the same as the command above)
However, I still got the SSL error:
/cbreak_cloudbreak_1 | Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
/cbreak_cloudbreak_1 | at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
/cbreak_cloudbreak_1 | at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
/cbreak_cloudbreak_1 | at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
/cbreak_cloudbreak_1 | at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:382)
/cbreak_cloudbreak_1 | ... 82 common frames omitted
Thanks for any possible help.
Labels: Hortonworks Cloudbreak
07-22-2016 01:25 AM
@bbihari Thanks for your comment. It worked.
07-22-2016 01:24 AM
@bbihari Thanks for your comment. It worked.
07-21-2016 03:19 AM
Dear team, I am trying to provision a new HDP cluster with Cloudbreak, but I have some questions about the input values.
1) I created 2 networks in the Web UI, like this: but these networks can't be selected when creating a new cluster. Why does this happen?
2) [create cluster] => [Configure Cluster] tab => [Region]: I can only select "local"; are there any other possible values? (03_create_cluster_region.png)
3) Templates: I created templates, but they can't be selected when I create clusters. (04_templates_created.png) (05_no_template_to_select_in_blueprint.png) Is there anything wrong with my template settings?
Appreciate any help. Thanks!
Labels: Hortonworks Cloudbreak