Member since: 05-18-2017 | Posts: 6 | Kudos Received: 1 | Solutions: 0
12-31-2018 11:30 PM
:~/HDP# sh docker-deploy-hdp30.sh
+ registry=hortonworks
+ name=sandbox-hdp
+ version=3.0.1
+ proxyName=sandbox-proxy
+ proxyVersion=1.0
+ flavor=hdp
+ echo hdp
+ mkdir -p sandbox/proxy/conf.d
+ mkdir -p sandbox/proxy/conf.stream.d
+ docker pull hortonworks/sandbox-hdp:3.0.1
3.0.1: Pulling from hortonworks/sandbox-hdp
70799bbf2226: Downloading [================================>  ] 47.17MB/72.85MB
40963917cdad: Pulling fs layer
ee3ec4e8cb3d: Downloading [=======>  ] 57.28MB/368.8MB
7ea5917732c0: Download complete
2d951411620c: Download complete
f4c5e354e7fd: Downloading
[further layer download/waiting progress trimmed]
unauthorized: authentication required
+ docker pull hortonworks/sandbox-proxy:1.0
1.0: Pulling from hortonworks/sandbox-proxy
Digest: sha256:42e4cfbcbb76af07e5d8f47a183a0d4105e65a1e7ef39fe37ab746e8b2523e9e
Status: Image is up to date for hortonworks/sandbox-proxy:1.0
+ [ hdp = hdf ]
+ [ hdp = hdp ]
+ hostname=sandbox-hdp.hortonworks.com
+ docker images
+ grep hortonworks/sandbox-hdp
+ awk {print $2}
+ version=
+ docker network create cda
+ docker run --privileged --name sandbox-hdp -h sandbox-hdp.hortonworks.com --network=cda --network-alias=sandbox-hdp.hortonworks.com -d hortonworks/sandbox-hdp:
docker: invalid reference format.
See 'docker run --help'.
+ echo Remove existing postgres run files. Please wait
Remove existing postgres run files. Please wait
+ sleep 2
+ docker exec -t sandbox-hdp sh -c rm -rf /var/run/postgresql/*; systemctl restart postgresql-9.6.service;
Error: No such container: sandbox-hdp
+ sed s/sandbox-hdp-security/sandbox-hdp/g assets/generate-proxy-deploy-script.sh
+ mv -f assets/generate-proxy-deploy-script.sh.new assets/generate-proxy-deploy-script.sh
+ chmod +x assets/generate-proxy-deploy-script.sh
+ assets/generate-proxy-deploy-script.sh
+ uname
+ grep MINGW
+ chmod +x sandbox/proxy/proxy-deploy.sh
+ sandbox/proxy/proxy-deploy.sh
sandbox-proxy
baed363d86a7a73e7712b838df09c2aeda99a6b15f0333e9f8e6f445902a383b
docker: Error response from daemon: driver failed programming external connectivity on endpoint sandbox-proxy (d53fe3e3eba2b296b2f6acaa6d8202732308d93cd987157130dfdddef1b82170): Error starting userland proxy: listen tcp 0.0.0.0:8042: bind: address already in use.

I have gone through the script; the failing step is "docker pull hortonworks/sandbox-hdp:3.0.1". The relevant part of docker-deploy-hdp30.sh (from cat docker-deploy-hdp30.sh) is:

# CAN EDIT THESE VALUES
registry="hortonworks"
name="sandbox-hdp"
version="3.0.1"
proxyName="sandbox-proxy"
proxyVersion="1.0"
flavor="hdp"
# NO EDITS BEYOND THIS LINE

# housekeeping
echo $flavor > sandbox-flavor

# create necessary folders for nginx and copy over our rule generation script there
mkdir -p sandbox/proxy/conf.d
mkdir -p sandbox/proxy/conf.stream.d

# pull and tag the sandbox and the proxy container
docker pull "$registry/$name:$version"

But the pull fails with the error "unauthorized: authentication required", even when I run "docker pull hortonworks/sandbox-hdp:3.0.1" directly, and even though I had logged in to Docker with docker login -u <userid> -p <password>.
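For what it is worth, the later "invalid reference format" error in the trace follows mechanically from the failed pull: the script derives the image tag from docker images, and because no hortonworks/sandbox-hdp image exists locally after the "unauthorized" failure, $version ends up empty. A minimal sketch of that failure mode (the empty string stands in for the grep output after the failed pull):

```shell
# Sketch: how the deploy script ends up with an empty tag after the pull fails.
# The script effectively runs: docker images | grep hortonworks/sandbox-hdp | awk '{print $2}'
# With no local image, grep produces no lines, so awk prints nothing and $version is empty.
images_output=""                                  # stand-in for the (empty) grep result
version=$(printf '%s' "$images_output" | awk '{print $2}')
image_ref="hortonworks/sandbox-hdp:${version}"
echo "$image_ref"                                 # prints "hortonworks/sandbox-hdp:", an invalid reference
```

The final "address already in use" error on port 8042 is a separate issue: some other process on the host is already listening on 8042 (on Hadoop hosts that port is typically the NodeManager web UI), so the proxy container cannot bind it.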
12-31-2018 03:12 PM (1 Kudo)
I am also getting the same error.

cloudgpu-server:~/HDP# yarn jar /usr/hdp/3.1.0.0-78/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar -jar /usr/hdp/3.1.0.0-78/hadoop-yarn/hadoop-yarn-applications-distributedshell.jar -shell_command /usr/bin/nvidia-smi -container_resources memory-mb=3072,vcores=1,yarn.io/gpu=1 -num_containers 2
18/12/31 17:04:34 INFO distributedshell.Client: Initializing Client
18/12/31 17:04:34 INFO distributedshell.Client: Running Client
18/12/31 17:04:34 INFO client.RMProxy: Connecting to ResourceManager at hostname/<ip_address>:8050
18/12/31 17:04:35 INFO client.AHSProxy: Connecting to Application History server at <hostname>/<ip_address>:10200
18/12/31 17:04:35 INFO distributedshell.Client: Got Cluster metric info from ASM, numNodeManagers=4
18/12/31 17:04:35 INFO distributedshell.Client: Got Cluster node info from ASM
18/12/31 17:04:35 INFO distributedshell.Client: Got node report from ASM for, nodeId=cloudgpu-server.com:45454, nodeAddress=cloudgpu-server.com:8042, nodeRackName=/default-rack, nodeNumContainers=0
18/12/31 17:04:35 INFO distributedshell.Client: Got node report from ASM for, nodeId=<hostname>:45454, nodeAddress=<hostname>:8042, nodeRackName=/default-rack, nodeNumContainers=0
18/12/31 17:04:35 INFO distributedshell.Client: Got node report from ASM for, nodeId=<hostname>:45454, nodeAddress=<hostname>:8042, nodeRackName=/default-rack, nodeNumContainers=1
18/12/31 17:04:35 INFO distributedshell.Client: Got node report from ASM for, nodeId=<hostname>:45454, nodeAddress=<hostname>:8042, nodeRackName=/default-rack, nodeNumContainers=0
18/12/31 17:04:35 INFO distributedshell.Client: Queue info, queueName=default, queueCurrentCapacity=0.03125, queueMaxCapacity=1.0, queueApplicationCount=1, queueChildQueueCount=0
18/12/31 17:04:35 INFO distributedshell.Client: User ACL Info for Queue, queueName=root, userAcl=SUBMIT_APPLICATIONS
18/12/31 17:04:35 INFO distributedshell.Client: User ACL Info for Queue, queueName=root, userAcl=ADMINISTER_QUEUE
18/12/31 17:04:35 INFO distributedshell.Client: User ACL Info for Queue, queueName=default, userAcl=SUBMIT_APPLICATIONS
18/12/31 17:04:35 INFO distributedshell.Client: User ACL Info for Queue, queueName=default, userAcl=ADMINISTER_QUEUE
18/12/31 17:04:35 INFO distributedshell.Client: Max mem capability of resources in this cluster 8192
18/12/31 17:04:35 INFO distributedshell.Client: Max virtual cores capability of resources in this cluster 38
18/12/31 17:04:35 WARN distributedshell.Client: AM Memory not specified, use 100 mb as AM memory
18/12/31 17:04:35 WARN distributedshell.Client: AM vcore not specified, use 1 mb as AM vcores
18/12/31 17:04:35 WARN distributedshell.Client: AM Resource capability=<memory:100, vCores:1>
18/12/31 17:04:35 ERROR distributedshell.Client: Error running Client
org.apache.hadoop.yarn.exceptions.ResourceNotFoundException: Unknown resource: yarn.io/gpu
    at org.apache.hadoop.yarn.applications.distributedshell.Client.validateResourceTypes(Client.java:1218)
    at org.apache.hadoop.yarn.applications.distributedshell.Client.setContainerResources(Client.java:1204)
    at org.apache.hadoop.yarn.applications.distributedshell.Client.run(Client.java:735)
    at org.apache.hadoop.yarn.applications.distributedshell.Client.main(Client.java:265)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
root@cloudgpu-server:~/HDP#

I have followed the steps in https://hortonworks.com/blog/gpus-support-in-apache-hadoop-3-1-yarn-hdp-3/#comment-26766. Can anyone advise on this?
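For context, "Unknown resource: yarn.io/gpu" generally means the GPU resource type is not registered with YARN at all. As a hedged sketch (property names are from the Apache Hadoop 3.1 GPU documentation; exact values and file locations may differ on your HDP build), the cluster needs roughly this in resource-types.xml, plus the GPU resource plugin enabled in yarn-site.xml:

```xml
<!-- resource-types.xml: register the GPU resource type with YARN -->
<configuration>
  <property>
    <name>yarn.resource-types</name>
    <value>yarn.io/gpu</value>
  </property>
</configuration>

<!-- yarn-site.xml: enable the GPU resource plugin on the NodeManagers -->
<configuration>
  <property>
    <name>yarn.nodemanager.resource-plugins</name>
    <value>yarn.io/gpu</value>
  </property>
</configuration>
```

On an Ambari-managed HDP 3.x cluster this is normally toggled through the YARN GPU scheduling settings rather than by editing the files by hand, and the capacity scheduler must use the DominantResourceCalculator for multi-resource requests to work.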
11-23-2018 04:31 PM
Hi, I have the same problem too. Please suggest the next steps. Here is the output of select * from hosts on the Ambari database (the cpu_info, discovery_status and os_info columns are empty for all four rows; the psql table was too wide, so each row is shown as a field list):

ambari=> select * from hosts;

host_id: 4 | host_name: master | cpu_count: 4 | ph_cpu_count: 4
host_attributes: {"interfaces":"eno16780,lo,virbr0","os_family":"redhat","kernel":"Linux","timezone":"EST","kernel_release":"3.10.0-862.6.3.el7.x86_64","os_release_version":"7.5.1804","physicalprocessors_count":"4","hardware_isa":"x86_64","kernel_majorversion":"3.10","kernel_version":"3.10.0","mac_address":"00:0C:29:47:9D:24","swap_free":"75.10 GB","swap_size":"90.00 GB","selinux_enabled":"true","hardware_model":"x86_64","processors_count":"4"}
ipv4: <IPAddress166> | ipv6: <IPAddress166> | public_host_name: master
last_registration_time: 1542262557154 | os_arch: x86_64 | os_type: centos7 | rack_info: /default-rack | total_mem: 32781400

host_id: 3 | host_name: hnode3.com | cpu_count: 4 | ph_cpu_count: 4
host_attributes: {"interfaces":"eno16780,lo,virbr0","os_family":"redhat","kernel":"Linux","timezone":"IST","kernel_release":"3.10.0-327.el7.x86_64","os_release_version":"7.2.1511","physicalprocessors_count":"4","hardware_isa":"x86_64","kernel_majorversion":"3.10","kernel_version":"3.10.0","mac_address":"00:50:56:B1:4B:92","swap_free":"29.80 GB","swap_size":"29.80 GB","selinux_enabled":"false","hardware_model":"x86_64","processors_count":"4"}
ipv4: <IPAddress60> | ipv6: <IPAddress60> | public_host_name: hnode3.com
last_registration_time: 1542890750770 | os_arch: x86_64 | os_type: centos7 | rack_info: /default-rack | total_mem: 16268760

host_id: 2 | host_name: hnode2.com | cpu_count: 4 | ph_cpu_count: 4
host_attributes: {"interfaces":"eno16780,lo,virbr0","os_family":"redhat","kernel":"Linux","timezone":"IST","kernel_release":"3.10.0-327.el7.x86_64","os_release_version":"7.2.1511","physicalprocessors_count":"4","hardware_isa":"x86_64","kernel_majorversion":"3.10","kernel_version":"3.10.0","mac_address":"00:50:56:B1:98:CE","swap_free":"29.80 GB","swap_size":"29.80 GB","selinux_enabled":"false","hardware_model":"x86_64","processors_count":"4"}
ipv4: <IPAddress59> | ipv6: <IPAddress59> | public_host_name: hnode2.com
last_registration_time: 1542890767448 | os_arch: x86_64 | os_type: centos7 | rack_info: /default-rack | total_mem: 16268760

host_id: 1 | host_name: hnode1.com | cpu_count: 4 | ph_cpu_count: 4
host_attributes: {"interfaces":"eno16780,lo,virbr0","os_family":"redhat","kernel":"Linux","timezone":"IST","kernel_release":"3.10.0-327.el7.x86_64","os_release_version":"7.2.1511","physicalprocessors_count":"4","hardware_isa":"x86_64","kernel_majorversion":"3.10","kernel_version":"3.10.0","mac_address":"00:50:56:B1:20:32","swap_free":"29.80 GB","swap_size":"29.80 GB","selinux_enabled":"false","hardware_model":"x86_64","processors_count":"4"}
ipv4: <IPAddress81> | ipv6: <IPAddress81> | public_host_name: hnode1.com
last_registration_time: 1542890776524 | os_arch: x86_64 | os_type: centos7 | rack_info: /default-rack | total_mem: 16268760

(4 rows)
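One detail that stands out in the dump above: the master host reports selinux_enabled as "true" in host_attributes while the three worker nodes report "false". A quick way to pull a single field out of such a JSON blob for comparison (a sketch; assumes python3 is available on the host, and the sample blob here is abbreviated):

```shell
# Extract selinux_enabled from a host_attributes JSON blob (abbreviated sample).
attrs='{"os_family":"redhat","selinux_enabled":"true","processors_count":"4"}'
echo "$attrs" | python3 -c 'import json,sys; print(json.load(sys.stdin)["selinux_enabled"])'
# prints: true
```

On the live host itself the current state can be checked directly with getenforce.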
10-31-2018 12:02 PM
I have followed the steps, but after making the change and restarting Zeppelin I am getting the error below. Please help me resolve it:

HTTP ERROR: 503
Problem accessing /. Reason: Service Unavailable
05-18-2017 09:17 AM
Thanks for the reply. I am also having the same problem. As you advised, I removed the file /var/run/cloudera-scm-server.pid and then started the Cloudera service, but I hit the same issue again, and the removed file was automatically regenerated:

[root@cdh1 ~]# service cloudera-scm-server start
Starting cloudera-scm-server: [ OK ]
[root@cdh1 ~]# service cloudera-scm-server status
cloudera-scm-server dead but pid file exists
[root@cdh1 ~]# rm /var/run/cloudera-scm-server.pid
rm: remove regular file `/var/run/cloudera-scm-server.pid'? y
[root@cdh1 ~]#
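For context on the "dead but pid file exists" status: the init script reports this when the pid file names a process that is no longer running, which is why deleting the file alone does not help; the server starts, crashes, and writes a fresh pid file each time. A small sketch of that check (runnable anywhere; the pid file path and the pid are stand-ins, the real file is /var/run/cloudera-scm-server.pid):

```shell
# Demonstrate the "dead but pid file exists" condition: a pid file that
# points at a process which is not running.
pidfile=/tmp/demo-scm-server.pid      # stand-in for /var/run/cloudera-scm-server.pid
echo 999999 > "$pidfile"              # a pid that is almost certainly not alive
if kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "process alive"
else
    echo "dead but pid file exists"
fi
rm -f "$pidfile"
```

The actual reason the server dies will be in its log, typically /var/log/cloudera-scm-server/cloudera-scm-server.log, so checking the last stack trace there is the next step rather than deleting the pid file again.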