Member since: 07-11-2019
Posts: 102
Kudos Received: 4
Solutions: 9
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 18200 | 12-13-2019 12:03 PM |
|  | 4273 | 12-09-2019 02:42 PM |
|  | 3127 | 11-26-2019 01:21 PM |
|  | 1432 | 08-27-2019 03:03 PM |
|  | 2729 | 08-14-2019 07:33 PM |
08-02-2019
08:30 PM
Here is an updated answer from the same post: http://community.hortonworks.com/answers/141427/view.html
07-31-2019
11:28 PM
The issue was that the --target-dir path included some variables at the start of the path, and the path ended up looking like //some/hdfs/path; the "empty" // was confusing Sqoop.
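For anyone hitting the same thing, here is a minimal sketch of one way to catch this before Sqoop runs, assuming the path prefix comes from a shell variable; the variable names ($hdfs_base, $table_dir) are hypothetical and only for illustration:

#!/usr/bin/env bash
# Hypothetical variable names, for illustration only.
hdfs_base="/some/hdfs"
table_dir="path"

# Fail fast if the prefix is empty instead of silently producing //path.
: "${hdfs_base:?hdfs_base is empty -- the --target-dir would start with //}"

# Strip any trailing slash so concatenation cannot produce a double slash.
target_dir="${hdfs_base%/}/${table_dir}"
echo "Resolved --target-dir: ${target_dir}"
# sqoop import ... --target-dir "${target_dir}" ...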
07-31-2019
11:12 PM
Trying to import data from an Oracle DB and getting an error:

....
19/07/31 13:07:10 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-myuser/compile/375d3de163797c05cd7b480fddcfe58c/QueryResult.jar
19/07/31 13:07:10 ERROR tool.ImportTool: Import failed: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "null"
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3281)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3301)
....

My sqoop command looks like:

sqoop import \
-Dmapreduce.map.memory.mb=3144 -Dmapreduce.map.java.opts=-Xmx1048m \
-Dyarn.app.mapreduce.am.log.level=DEBUG \
-Dmapreduce.map.log.level=DEBUG \
-Dmapreduce.reduce.log.level=DEBUG \
-Dmapred.job.name="Ora import table $tablename" \
-Djava.security.egd=file:///dev/urandom \
-Djava.security.egd=file:///dev/urandom \
-Doraoop.timestamp.string=false \
-Dmapreduce.map.max.attempts=10 \
$oracle_cnxn_str \
--as-parquetfile \
--target-dir /some/hdfs/path \
-query "$sqoop_query" \
--split-by $splitby \
--where "1=1" \
--num-mappers 12 \
--delete-target-dir

Not sure what to make of this error message. Any debugging suggestions or fixes?
Labels:
- Apache Sqoop
07-29-2019
11:54 PM
@Jay Kumar SenSharma I understand using SSSD for cluster-wide users with LDAP, but my question has more to do with this part: "Ambari in any case is not responsible for creating user/groups for those ambari UI users in any node. For example you will see "admin" user in ambari but you wont see any such user on ambari server host or on any other node." Given the above, what I was more interested in is: what is the point of these Ambari users/groups? What context are they intended to be used in? I would think they could be used for adding ACL-like permissions to folders in the Ambari Files View or something, but that does not seem to be the case, so I'm not sure what the point of them is. Note: I previously used MapR Hadoop, which did operate in a similar way (users of HDFS needed to exist across all nodes, and the MapR management UI allowed ACL-like permissions on HDFS volumes based on users and groups), so that's my frame of reference.
07-29-2019
09:16 PM
Having a problem with HDFS NFS, addressed on another site, where it is recommended to set hdfs-site.xml like:

<property>
  <name>dfs.namenode.accesstime.precision</name>
  <value>3600000</value>
  <description>The access time for HDFS file is precise upto this value. The default value is 1 hour. Setting a value of 0 disables access times for HDFS.</description>
</property>

I am confused about what exactly "access times for HDFS" means/is. Looking at the Hadoop docs, I was still not able to determine this. Could someone give a better explanation of what this setting is doing? Also, where is the nfs3 daemon log file?
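If it helps clarify what I am asking, here is a minimal sketch of the kind of check I have in mind (assuming this Hadoop version's hdfs dfs -stat supports the %x access-time format specifier; /tmp/accesstime-demo.txt is just a throwaway test file):

# Throwaway test file; %x support in -stat is an assumption about this Hadoop version.
hdfs dfs -touchz /tmp/accesstime-demo.txt
hdfs dfs -stat "access=%x name=%n" /tmp/accesstime-demo.txt
hdfs dfs -cat /tmp/accesstime-demo.txt > /dev/null
hdfs dfs -stat "access=%x name=%n" /tmp/accesstime-demo.txt
# Per the property description above: with dfs.namenode.accesstime.precision=3600000,
# a second read within the same hour should not bump the access time again;
# with 0, access times are never updated at all.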
Labels:
- Apache Hadoop
07-29-2019
08:49 PM
Confused about what Ambari users and groups are. When looking at the docs and the Ambari UI (Admin/Users and Admin/Groups), I get the impression that users/groups created in this interface should appear across all nodes in the cluster, but this does not seem to be the case, e.g.:

[root@HW01 ~]# id <some user created in Ambari UI>
id: <some user created in Ambari UI>: no such user

Same situation for groups created in the Ambari UI admin section. Not sure I understand the use of the Ambari users and groups if they do not somehow link back to users and groups locally on the hosts. Can someone please explain what is going on here?
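If it helps, a minimal sketch of where I would expect to be able to see these accounts, since they seem to live in Ambari itself rather than on the hosts (the host name, port 8080 and admin:admin credentials below are assumptions about a default setup):

# Assumed default Ambari host/port/credentials; adjust for your cluster.
curl -s -u admin:admin http://HW01.ucera.local:8080/api/v1/users
# versus the OS view on the same host, where the Ambari-only user does not exist:
id some_ambari_ui_user   # hypothetical user name created only in the Ambari UI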
Labels:
- Apache Ambari
07-29-2019
08:23 PM
Getting confused when trying to run a YARN process and getting errors. Looking in the Ambari UI YARN section, the resource summary shows about 60 GB available. Yet when trying to run a YARN process, I get errors indicating that fewer resources are available than Ambari is reporting, see below:
➜ h2o-3.26.0.2-hdp3.1 hadoop jar h2odriver.jar -nodes 4 -mapperXmx 5g -output /home/ml1/hdfsOutputDir
Determining driver host interface for mapper->driver callback...
[Possible callback IP address: 192.168.122.1]
[Possible callback IP address: 172.18.4.49]
[Possible callback IP address: 127.0.0.1]
Using mapper->driver callback IP address and port: 172.18.4.49:46721
(You can override these with -driverif and -driverport/-driverportrange and/or specify external IP using -extdriverif.)
Memory Settings:
mapreduce.map.java.opts: -Xms5g -Xmx5g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Dlog4j.defaultInitOverride=true
Extra memory percent: 10
mapreduce.map.memory.mb: 5632
Hive driver not present, not generating token.
19/08/07 12:37:19 INFO client.RMProxy: Connecting to ResourceManager at hw01.ucera.local/172.18.4.46:8050
19/08/07 12:37:19 INFO client.AHSProxy: Connecting to Application History server at hw02.ucera.local/172.18.4.47:10200
19/08/07 12:37:19 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/ml1/.staging/job_1565057088651_0007
19/08/07 12:37:21 INFO mapreduce.JobSubmitter: number of splits:4
19/08/07 12:37:21 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1565057088651_0007
19/08/07 12:37:21 INFO mapreduce.JobSubmitter: Executing with tokens: []
19/08/07 12:37:21 INFO conf.Configuration: found resource resource-types.xml at file:/etc/hadoop/3.1.0.0-78/0/resource-types.xml
19/08/07 12:37:21 INFO impl.YarnClientImpl: Submitted application application_1565057088651_0007
19/08/07 12:37:21 INFO mapreduce.Job: The url to track the job: http://HW01.ucera.local:8088/proxy/application_1565057088651_0007/
Job name 'H2O_80092' submitted
JobTracker job ID is 'job_1565057088651_0007'
For YARN users, logs command is 'yarn logs -applicationId application_1565057088651_0007'
Waiting for H2O cluster to come up...
19/08/07 12:37:38 INFO client.RMProxy: Connecting to ResourceManager at hw01.ucera.local/172.18.4.46:8050
19/08/07 12:37:38 INFO client.AHSProxy: Connecting to Application History server at hw02.ucera.local/172.18.4.47:10200
----- YARN cluster metrics -----
Number of YARN worker nodes: 4
----- Nodes -----
Node: http://HW03.ucera.local:8042 Rack: /default-rack, RUNNING, 1 containers used, 5.0 / 15.0 GB used, 1 / 3 vcores used
Node: http://HW04.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
Node: http://hw05.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
Node: http://HW02.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
----- Queues -----
Queue name: default
Queue state: RUNNING
Current capacity: 0.08
Capacity: 1.00
Maximum capacity: 1.00
Application count: 1
----- Applications in this queue -----
Application ID: application_1565057088651_0007 (H2O_80092)
Started: ml1 (Wed Aug 07 12:37:21 HST 2019)
Application state: FINISHED
Tracking URL: http://HW01.ucera.local:8088/proxy/application_1565057088651_0007/
Queue name: default
Used/Reserved containers: 1 / 0
Needed/Used/Reserved memory: 5.0 GB / 5.0 GB / 0.0 GB
Needed/Used/Reserved vcores: 1 / 1 / 0
Queue 'default' approximate utilization: 5.0 / 60.0 GB used, 1 / 12 vcores used
----------------------------------------------------------------------
ERROR: Unable to start any H2O nodes; please contact your YARN administrator.
A common cause for this is the requested container size (5.5 GB)
exceeds the following YARN settings:
yarn.nodemanager.resource.memory-mb
yarn.scheduler.maximum-allocation-mb
----------------------------------------------------------------------
For YARN users, logs command is 'yarn logs -applicationId application_1565057088651_0007'
Note the error:
ERROR: Unable to start any H2O nodes; please contact your YARN administrator.
A common cause for this is the requested container size (5.5 GB) exceeds the following YARN settings:
yarn.nodemanager.resource.memory-mb
yarn.scheduler.maximum-allocation-mb
Yet, I have YARN configured with
yarn.scheduler.maximum-allocation-vcores=3
yarn.nodemanager.resource.cpu-vcores=3
yarn.nodemanager.resource.memory-mb=15GB
yarn.scheduler.maximum-allocation-mb=15GB
and we can see both container and node resource restrictions are higher than the requested container size.
So there are some things about this that I don't understand:
Queue 'default' approximate utilization: 5.0 / 60.0 GB used, 1 / 12 vcores used
I would like to use the full 60 GB that YARN can ostensibly provide (or at least have the option to, rather than having errors thrown). I would think there should be enough resources, since each of the 4 nodes can provide 15 GB (4 x 15 GB = 60 GB, well above the requested 4 x 5 GB = 20 GB). Am I missing something here? Note that I only have the default root queue set up for YARN.
----- Nodes -----
Node: http://HW03.ucera.local:8042 Rack: /default-rack, RUNNING, 1 containers used, 5.0 / 15.0 GB used, 1 / 3 vcores used
Node: http://HW04.ucera.local:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 15.0 GB used, 0 / 3 vcores used
....
Why is only a single node being used before erroring out?
From these two things, it seems that neither the 15 GB node limit nor the 60 GB cluster limit is being exceeded, so why are these errors being thrown? What about this situation am I misinterpreting? What can be done to fix it (again, I would like to be able to use all of the apparent 60 GB of YARN resources for the job without error)? Any debugging suggestions or fixes?
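In case it is useful for debugging suggestions, here is a sketch of the cross-check I can run to compare the configs with what the scheduler reports (the hostname/port are taken from the log output above; jq being installed is an assumption):

# ResourceManager REST metrics; host/port taken from the log output above.
curl -s http://hw01.ucera.local:8088/ws/v1/cluster/metrics | jq '.clusterMetrics | {totalMB, availableMB, totalVirtualCores, availableVirtualCores}'
# Per-NodeManager view (each node should show roughly 15 GB if the configs took effect):
yarn node -list -all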
07-27-2019
12:33 AM
TLDR: the NFS gateway service was already running (by default, apparently), and the process that I thought was blocking the hadoop nfs3 service from starting (jsvc.exec) was, I'm assuming, part of that already-running service. What made me suspect this was that when shutting down the cluster the process also stopped, plus the fact that it was using the port I needed for NFS. The way I confirmed this was just by following the verification steps in the docs and seeing that my output was similar to what should be expected:

[root@HW02 ~]# rpcinfo -p hw02
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp   4242  mountd
    100005    2   udp   4242  mountd
    100005    3   udp   4242  mountd
    100005    1   tcp   4242  mountd
    100005    2   tcp   4242  mountd
    100005    3   tcp   4242  mountd
    100003    3   tcp   2049  nfs
[root@HW02 ~]# showmount -e hw02
Export list for hw02:
/ *

Another thing that could have told me that the jsvc process was part of an already-running HDFS NFS service would have been checking the process info:

[root@HW02 ~]# ps -feww | grep jsvc
root      61106  59083  0 14:27 pts/2    00:00:00 grep --color=auto jsvc
root     163179      1  0 12:14 ?        00:00:00 jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/hadoop-hdfs-root-nfs3-HW02.ucera.local.out -errfile /var/log/hadoop/root/privileged-root-nfs3-HW02.ucera.local.err -pidfile /var/run/hadoop/root/hadoop-hdfs-root-nfs3.pid -nodetach -user hdfs -cp /usr/hdp/3.1.0.0-78/hadoop/conf:
...
and seeing jsvc.exec -Dproc_nfs3 ... to get the hint that jsvc (which apparently is for running Java apps on Linux) was being used to run the very nfs3 service I was trying to start. And for anyone else with this problem, note that I did not stop all the services that the docs want you to stop (since I'm using CentOS 7):

[root@HW01 /]# service nfs status
Redirecting to /bin/systemctl status nfs.service
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@HW01 /]# service rpcbind status
Redirecting to /bin/systemctl status rpcbind.service
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2019-07-19 15:17:02 HST; 6 days ago
 Main PID: 2155 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─2155 /sbin/rpcbind -w
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
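For anyone following the same path, here is a rough sketch of actually mounting the export once the checks above look right (based on the standard HDFS NFS gateway mount options from the Hadoop docs; /mnt/hdfs is a hypothetical mount point and hw02 is the gateway host from above):

# Hedged sketch: mount the HDFS NFS gateway export from a client.
mkdir -p /mnt/hdfs
mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync hw02:/ /mnt/hdfs
ls /mnt/hdfs   # should list the HDFS root directory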
07-26-2019
09:59 PM
Was running into this exact same problem. Here is how I installed the HBase client via the Ambari UI:
1. In the Ambari UI, go to Hosts, then click the host you want to install the HBase client component on.
2. In the list of components, you will have the option to add more.
3. From here I installed the HBase client.
4. Then I stopped and restarted the cluster via the Ambari UI (I got a notification of stale configs, though I'm not sure if this was my problem all along).
One thing that was weird is that I did not change any configs or install anything new on the host nodes between trying to restart and running into this error, and up until now everything appeared to be working fine. @Akhil S Naik, is there any reason you can think of why this would only be happening now?
07-26-2019
09:10 PM
Think I found the problem. TLDR: firewalld (nodes running on CentOS 7) was still running, when it should be disabled on HDP clusters. From another community post: "For Ambari to communicate during setup with the hosts it deploys to and manages, certain ports must be open and available. The easiest way to do this is to temporarily disable iptables, as follows:"

systemctl disable firewalld
service firewalld stop

So apparently iptables and firewalld need to be disabled across the cluster (supporting docs can be found here; I had only disabled them on the Ambari installation node). After stopping these services across the cluster (I recommend using clush), I was able to run the upload job without incident.
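Since I mentioned clush, here is a rough sketch of what I mean (assuming ClusterShell is installed and configured so that -a targets every cluster node):

# Assumes clush is installed and -a is configured to cover all cluster nodes.
clush -a -b 'systemctl stop firewalld && systemctl disable firewalld'
# Verify it is inactive everywhere:
clush -a -b 'systemctl is-active firewalld || true'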