Member since: 07-16-2019
Posts: 12
Kudos Received: 2
Solutions: 0
05-07-2020
01:01 PM
We want to build a Hadoop cluster with HDP 2.6.5. Regarding Ethernet speed: is `1000Mb/s` (1 GbE) enough for a Hadoop cluster, or must the cluster have 10 GbE networking? Here is an example of our settings on a Dell machine running RHEL:

```
ethtool ifn-33
Settings for ifn33:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
```

Reference: https://www.arista.com/assets/data/pdf/Whitepapers/Hadoop_WP_final.pdf
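As a back-of-envelope check (my own numbers, not from the Arista paper): 1 GbE tops out at roughly 125 MB/s, so moving 1 TB between nodes (e.g. a re-replication after a DataNode failure) takes a couple of hours, while 10 GbE cuts that to minutes:

```shell
# Rough transfer-time estimate: theoretical ceiling is link_speed / 8 bytes per second.
TB_MB=1000000                      # 1 TB expressed in MB
GBE1=125                           # 1 Gb/s  ~= 125 MB/s theoretical ceiling
GBE10=1250                         # 10 Gb/s ~= 1250 MB/s theoretical ceiling
echo "1 TB over 1 GbE : $(( TB_MB / GBE1 )) seconds"
echo "1 TB over 10 GbE: $(( TB_MB / GBE10 )) seconds"
```

Real throughput will be lower than the ceiling, but the ratio between the two links holds either way.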
04-30-2020
11:24 PM
Dear friends
We want to install the Hortonworks Sandbox on Docker (HDP 2.6.5 or a higher version).
The machine we have runs RHEL 7.6, but it has no network access (we cannot download and install over the network).
We would therefore like to know whether it is possible to download the Docker sandbox as a ZIP/TAR file,
then uncompress the file and install the Sandbox from that archive.
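For what it's worth, Docker itself supports exactly this kind of offline transfer via `docker save` / `docker load`; a sketch of the workflow (the image tag below is an assumption — verify the exact name against the sandbox deployment docs):

```shell
# Assumed image tag -- verify against the sandbox documentation.
IMAGE="hortonworks/sandbox-hdp:2.6.5"
ARCHIVE="sandbox-hdp.tar.gz"

# On a machine WITH internet access:
#   docker pull "$IMAGE"
#   docker save "$IMAGE" | gzip > "$ARCHIVE"
# Copy $ARCHIVE to the offline RHEL host (USB stick, internal scp, ...), then:
#   gunzip -c "$ARCHIVE" | docker load
#   docker images        # the sandbox image should now be listed
echo "offline transfer: docker save $IMAGE > $ARCHIVE, then docker load"
```

`docker load` restores the image with its original tag, so the sandbox start script should find it as if it had been pulled.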
04-30-2020
07:40 AM
1 Kudo
Hi all, we want to build a test Hadoop cluster on one Linux machine, based on Docker containers. Does Hortonworks (Cloudera) support this, for example with HDP version 2.6.5? We need the following services:
- HDFS (including at least 3 DataNodes)
- YARN
- MapReduce2
- Hive
- ZooKeeper
- Ambari Metrics
- Kafka
- Spark2
and all of these services should run on one Linux machine.
04-03-2020
02:11 AM
Sorry, I misunderstood. Do you mean that binaries that are not the latest are free of charge (which versions?), or that only the latest binaries require payment?
04-02-2020
12:06 PM
Dear all,
We intend to install a Cloudera Hadoop cluster based on the info at https://docs.cloudera.com/content/www/en-us/documentation/enterprise/6/release-notes/PDF/cloudera-releases.pdf
The cluster should look like this:
1. Master machines (NameNode management + ResourceManager management, ZK failover, JournalNodes, etc.) - 3 machines in total
2. DataNode machines - 10 machines in total
3. Kafka machines - 5 machines in total
It is not clear from the document whether we need to pay for a license (i.e., do we need to pay Cloudera for installing a Hadoop cluster and using it?),
or whether payment is only for support.
08-17-2019
04:38 PM
Hi all,
After the Hortonworks migration,
I see that my nickname is displayed instead of my name. (The nickname is private and should not be displayed!)
How can I change the nickname in my profile? (At the moment I cannot change it.)
Tags: nickname
07-24-2019
06:07 PM
Dear all, I am new to this site, and I have asked only 3 questions so far, without receiving any answer. Is that normal, or am I missing something here?
07-24-2019
04:33 PM
Any answer to my question?
07-24-2019
11:50 AM
Dear all, nice to be here with all you professionals. We have an Ambari server, and all services are shown in the "Add Service Wizard" (click the Actions button, then click "+Add Service"); there we can find all the available services. Now, how can we get this list of services via the API?
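If I read the question right, the services behind the wizard are defined by the stack, so a GET on the stacks endpoint should list them; a sketch, assuming HDP 2.6 and default admin credentials (the host name is a placeholder):

```shell
AMBARI="http://ambari-host.example.com:8080"   # placeholder -- use your Ambari host
STACK_URL="$AMBARI/api/v1/stacks/HDP/versions/2.6/services"

# Lists every service defined in the HDP 2.6 stack (the same set the
# "Add Service" wizard offers), as a JSON array of service entries:
#   curl -s -u admin:admin -H "X-Requested-By: ambari" "$STACK_URL"
echo "GET $STACK_URL"
```

To see only the services already installed on a cluster (rather than all installable ones), the per-cluster path `api/v1/clusters/<name>/services` is the usual read.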
07-22-2019
09:34 AM
1 Kudo
Dear colleagues and friends, I have just been reviewing the Hortonworks HDP upgrade documentation, but I do not see a section about backing up the folder /var/hadoop/hdfs/namenode/current. Is it necessary to back up the current folder before an HDP upgrade, e.g. with:

```
tar -zcvf current_bck.tar.gz /var/hadoop/hdfs/namenode/current
-rw-r--r-- 1 root root 9062640 Jul 22 09:28 current_bck.tar.gz
```
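Whether or not the docs call for it, a tar backup plus an integrity check costs little; a minimal sketch (the paths here are a scratch directory for illustration, not the real NameNode directory):

```shell
# Illustration only: back up a directory and verify the archive is readable.
SRC=$(mktemp -d)                              # stand-in for /var/hadoop/hdfs/namenode/current
echo "edits_0000001" > "$SRC/edits_0000001"   # dummy metadata file

# -C keeps absolute paths out of the archive (no "Removing leading /" warning).
tar -zcf current_bck.tar.gz -C "$(dirname "$SRC")" "$(basename "$SRC")"

# A backup you have never listed is not a backup: confirm it opens cleanly.
tar -ztf current_bck.tar.gz
```

Listing (`-t`) decompresses and walks the whole archive, so it catches a truncated or corrupt tarball before you depend on it during the upgrade.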
07-21-2019
06:51 PM
It's my pleasure to be here, and I hope to learn from the great Hortonworks site. We are using an Ambari cluster with HDP version 2.6.4, and we now want to configure the repositories in Ambari via the API in order to upgrade the cluster to HDP 2.6.5. So instead of configuring the repo manually, we created the following API call:

```
curl -H "X-Requested-By: ambari" -X PUT -u admin:admin http://testbox.gm.com:8080/api/v1/stacks/HDP/versions/2.6/operating_systems/redhat7/repositories/HDP-2.6 -d @repo.json
```

where the JSON file is:

```
# more repo.json
{
  "Repositories": {
    "repo_name": "HDP-2.6.5.0",
    "base_url": "http://testbox.gm.com/HDP/centos7/2.6.5.0-292",
    "verify_base_url": true
  }
}
```

When we run the curl command, it completes without any output, but when we access Ambari we do not see the repo HDP-2.6.5.0. We also ran the same API call with tracing:
```
* About to connect() to testbox.gm.com port 8080 (#0)
* Connected to testbox.gm.com port 8080 (#0)
* Server auth using Basic with user 'admin'
> PUT /api/v1/stacks/HDP/versions/2.6/operating_systems/redhat7/repositories/HDP-2.6 HTTP/1.1
> Authorization: Basic YWRtaW46YWRtaW4=
> User-Agent: curl/7.29.0
> Host: testbox.gm.com:8080
> Accept: */*
> X-Requested-By: ambari
> Content-Length: 158
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 158 out of 158 bytes
< HTTP/1.1 200 OK
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
< X-Content-Type-Options: nosniff
< Cache-Control: no-store
< Pragma: no-cache
< Set-Cookie: AMBARISESSIONID=cj6u81acs0ij6v0hjuc7mius;Path=/;HttpOnly
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< User: admin
< Content-Type: text/plain
< Content-Length: 0
<
* Connection #0 to host testbox.gm.com left intact
```

I would appreciate help understanding where we went wrong here.
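One quick sanity check before blaming the endpoint: confirm the body really parses as JSON, and read the repository back with a GET after the PUT to see whether the change stuck. A sketch (the payload is copied from above; the GET reads the same endpoint that was written):

```shell
# Recreate the payload and make sure it is well-formed JSON at all.
cat > repo.json <<'EOF'
{
  "Repositories": {
    "repo_name": "HDP-2.6.5.0",
    "base_url": "http://testbox.gm.com/HDP/centos7/2.6.5.0-292",
    "verify_base_url": true
  }
}
EOF
python3 -m json.tool repo.json > /dev/null && echo "repo.json is valid JSON"

# Then read the repository back -- if the PUT took effect, the new base_url
# should appear in the response:
#   curl -s -u admin:admin -H "X-Requested-By: ambari" \
#     http://testbox.gm.com:8080/api/v1/stacks/HDP/versions/2.6/operating_systems/redhat7/repositories/HDP-2.6
```

If the GET does show the new base_url but the Ambari UI does not, the discrepancy is on the UI side rather than in the API call itself.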