Member since 05-16-2016
785 Posts
114 Kudos Received
39 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2327 | 06-12-2019 09:27 AM |
| | 3573 | 05-27-2019 08:29 AM |
| | 5723 | 05-27-2018 08:49 AM |
| | 5239 | 05-05-2018 10:47 PM |
| | 3113 | 05-05-2018 07:32 AM |
07-05-2018
07:37 AM
One other thing: it looks like there were some issues with the Ubuntu OS, and after switching over to CentOS 7.5 the CDH 5.15 install ran without many issues.

I have a question, though. In the install screens there is a DataNode configuration value:

DataNode Data Directory (dfs.data.dir, dfs.datanode.data.dir): Comma-delimited list of directories on the local file system where the DataNode stores HDFS block data. Typical values are /data/N/dfs/dn for N = 1, 2, 3.... These directories should be mounted using the noatime option, and the disks should be configured using JBOD. RAID is not recommended.

In JBOD mode, say the server has 20 hard disks, so each of the 20 disks will have its own mount point. I think we need to set this value to the comma-delimited list /data/1/dfs/dn, /data/2/dfs/dn, /data/3/dfs/dn, ..., /data/20/dfs/dn.

Now what happens if some of the data nodes have a different number of JBOD disks, say 20 disks in some and 10 disks in others? Since dfs.data.dir is a global variable, how does it allocate the 20 data directories on those data nodes that have only 10 JBOD hard disks, given that there is no hostname in this variable to indicate a different number of disks on different hosts? Also, if new datanodes with a different number of disks are added in the future, how is this specified while adding the new data nodes? Thanks!
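For reference, here is a minimal shell sketch of the value format described above, assuming the JBOD disks are already mounted at /data/1 through /data/20 with the noatime option (that mount layout is an assumption for illustration):

```sh
# Verify the assumed JBOD mount points exist (/data/1 .. /data/20)
mount | grep '/data/'

# Build the comma-delimited value that would go into dfs.datanode.data.dir
seq 1 20 | sed 's|.*|/data/&/dfs/dn|' | paste -s -d, -
# prints /data/1/dfs/dn,/data/2/dfs/dn, ... ,/data/20/dfs/dn (all 20 entries)
```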
07-04-2018
02:22 AM
The solution provided above has been working for me for about a year now, so I suggest doing it that way. The other issue that kept happening to me from time to time is that the files I tried to store there became corrupted for some reason, so I could only fix them manually with this editor: https://edit-pdf.pdffiller.com/ Any paid editor will actually work for such purposes, but the others are just much more expensive. Back to storing: I don't know whether they have fixed that issue yet or not, but check all the files after the export.
06-17-2018
01:16 PM
@csguna It didn't work. The following are the log files
06-14-2018
08:25 AM
Could you perform a hard stop and start of the cloudera-scm-agent and let me know whether it is reflected or not?
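If it helps, a minimal sketch of what I mean by a hard stop and start, assuming the agent is installed as a service on the affected host:

```sh
# Hard-stop the Cloudera Manager agent (this also stops the supervised processes)
sudo service cloudera-scm-agent hard_stop

# Start it again and check its status
sudo service cloudera-scm-agent start
sudo service cloudera-scm-agent status
```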
06-11-2018
09:57 AM
@csguna, here is the connection information related to ZooKeeper; it looks like our created topics ciovInput_v3 and ciovInput_v1 are there. Thanks.

# bin/kafka-topics.sh --describe --zookeeper localhost:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/libs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:3 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
    Topic: __consumer_offsets Partition: 0 Leader: 256 Replicas: 256,257,255 Isr: 256
    Topic: __consumer_offsets Partition: 1 Leader: 256 Replicas: 257,255,256 Isr: 256
    Topic: __consumer_offsets Partition: 2 Leader: 256 Replicas: 255,256,257 Isr: 256
    Topic: __consumer_offsets Partition: 3 Leader: 256 Replicas: 256,255,257 Isr: 256
    Topic: __consumer_offsets Partition: 4 Leader: 256 Replicas: 257,256,255 Isr: 256
    Topic: __consumer_offsets Partition: 5 Leader: 256 Replicas: 255,257,256 Isr: 256
    Topic: __consumer_offsets Partition: 6 Leader: 256 Replicas: 256,257,255 Isr: 256
    Topic: __consumer_offsets Partition: 7 Leader: 256 Replicas: 257,255,256 Isr: 256
    Topic: __consumer_offsets Partition: 8 Leader: 256 Replicas: 255,256,257 Isr: 256
    Topic: __consumer_offsets Partition: 9 Leader: 256 Replicas: 256,255,257 Isr: 256
    Topic: __consumer_offsets Partition: 10 Leader: 256 Replicas: 257,256,255 Isr: 256
    Topic: __consumer_offsets Partition: 11 Leader: 256 Replicas: 255,257,256 Isr: 256
    Topic: __consumer_offsets Partition: 12 Leader: 256 Replicas: 256,257,255 Isr: 256
    Topic: __consumer_offsets Partition: 13 Leader: 256 Replicas: 257,255,256 Isr: 256
    Topic: __consumer_offsets Partition: 14 Leader: 256 Replicas: 255,256,257 Isr: 256
    Topic: __consumer_offsets Partition: 15 Leader: 256 Replicas: 256,255,257 Isr: 256
    Topic: __consumer_offsets Partition: 16 Leader: 256 Replicas: 257,256,255 Isr: 256
    Topic: __consumer_offsets Partition: 17 Leader: 256 Replicas: 255,257,256 Isr: 256
    Topic: __consumer_offsets Partition: 18 Leader: 256 Replicas: 256,257,255 Isr: 256
    Topic: __consumer_offsets Partition: 19 Leader: 256 Replicas: 257,255,256 Isr: 256
    Topic: __consumer_offsets Partition: 20 Leader: 256 Replicas: 255,256,257 Isr: 256
    Topic: __consumer_offsets Partition: 21 Leader: 256 Replicas: 256,255,257 Isr: 256
    Topic: __consumer_offsets Partition: 22 Leader: 256 Replicas: 257,256,255 Isr: 256
    Topic: __consumer_offsets Partition: 23 Leader: 256 Replicas: 255,257,256 Isr: 256
    Topic: __consumer_offsets Partition: 24 Leader: 256 Replicas: 256,257,255 Isr: 256
    Topic: __consumer_offsets Partition: 25 Leader: 256 Replicas: 257,255,256 Isr: 256
    Topic: __consumer_offsets Partition: 26 Leader: 256 Replicas: 255,256,257 Isr: 256
    Topic: __consumer_offsets Partition: 27 Leader: 256 Replicas: 256,255,257 Isr: 256
    Topic: __consumer_offsets Partition: 28 Leader: 256 Replicas: 257,256,255 Isr: 256
    Topic: __consumer_offsets Partition: 29 Leader: 256 Replicas: 255,257,256 Isr: 256
    Topic: __consumer_offsets Partition: 30 Leader: 256 Replicas: 256,257,255 Isr: 256
    Topic: __consumer_offsets Partition: 31 Leader: 256 Replicas: 257,255,256 Isr: 256
    Topic: __consumer_offsets Partition: 32 Leader: 256 Replicas: 255,256,257 Isr: 256
    Topic: __consumer_offsets Partition: 33 Leader: 256 Replicas: 256,255,257 Isr: 256
    Topic: __consumer_offsets Partition: 34 Leader: 256 Replicas: 257,256,255 Isr: 256
    Topic: __consumer_offsets Partition: 35 Leader: 256 Replicas: 255,257,256 Isr: 256
    Topic: __consumer_offsets Partition: 36 Leader: 256 Replicas: 256,257,255 Isr: 256
    Topic: __consumer_offsets Partition: 37 Leader: 256 Replicas: 257,255,256 Isr: 256
    Topic: __consumer_offsets Partition: 38 Leader: 256 Replicas: 255,256,257 Isr: 256
    Topic: __consumer_offsets Partition: 39 Leader: 256 Replicas: 256,255,257 Isr: 256
    Topic: __consumer_offsets Partition: 40 Leader: 256 Replicas: 257,256,255 Isr: 256
    Topic: __consumer_offsets Partition: 41 Leader: 256 Replicas: 255,257,256 Isr: 256
    Topic: __consumer_offsets Partition: 42 Leader: 256 Replicas: 256,257,255 Isr: 256
    Topic: __consumer_offsets Partition: 43 Leader: 256 Replicas: 257,255,256 Isr: 256
    Topic: __consumer_offsets Partition: 44 Leader: 256 Replicas: 255,256,257 Isr: 256
    Topic: __consumer_offsets Partition: 45 Leader: 256 Replicas: 256,255,257 Isr: 256
    Topic: __consumer_offsets Partition: 46 Leader: 256 Replicas: 257,256,255 Isr: 256
    Topic: __consumer_offsets Partition: 47 Leader: 256 Replicas: 255,257,256 Isr: 256
    Topic: __consumer_offsets Partition: 48 Leader: 256 Replicas: 256,257,255 Isr: 256
    Topic: __consumer_offsets Partition: 49 Leader: 256 Replicas: 257,255,256 Isr: 256
Topic:ciovGSInput PartitionCount:1 ReplicationFactor:3 Configs:
    Topic: ciovGSInput Partition: 0 Leader: 256 Replicas: 257,255,256 Isr: 256
Topic:ciovInput PartitionCount:1 ReplicationFactor:1 Configs:
    Topic: ciovInput Partition: 0 Leader: 256 Replicas: 256 Isr: 256
Topic:ciovInput_v1 PartitionCount:1 ReplicationFactor:1 Configs:
    Topic: ciovInput_v1 Partition: 0 Leader: 256 Replicas: 256 Isr: 256
Topic:ciovInput_v3 PartitionCount:1 ReplicationFactor:1 Configs:
    Topic: ciovInput_v3 Partition: 0 Leader: 255 Replicas: 255 Isr: 255
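As a side note, the same check can be narrowed to a single topic instead of listing everything (a small sketch, assuming the same ZooKeeper address as above):

```sh
# Describe only the topic of interest
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic ciovInput_v3
```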
06-09-2018
09:45 PM
It is a recommendation based on the fact that active and standby are merely states of the NameNode and not different daemons. The NameNode doesn't check that its own hardware matches that of the other NameNodes, if that's what you are worried about.
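A quick way to see that active and standby are just states is to query each NameNode's HA state (a sketch, assuming the NameNode IDs nn1 and nn2 used in the HA documentation; your IDs may differ):

```sh
# Report the current HA state (active or standby) of each configured NameNode ID
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```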
06-08-2018
11:16 PM
Could you run the below command? It will fix the permission error:

sudo -u hdfs hadoop fs -chmod 755 /tmp

Let me know if that helps. Guna
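To double-check afterwards, something like this should show the updated mode on /tmp (a minimal sketch):

```sh
# List the HDFS root and confirm the permissions shown for /tmp
sudo -u hdfs hadoop fs -ls /
```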
06-04-2018
10:54 AM
1 Kudo
Applying the patch, i.e. upgrading to 5.13.3, solved that problem.
05-27-2018
08:49 AM
The client does not know which NameNode is active or standby; it only uses dfs.nameservices. The DFS client determines which NameNode is the current active by using whichever of the Java classes is configured, ConfiguredFailoverProxyProvider or RequestHedgingProxyProvider; it then resolves the NameNode IDs and, subsequently, the hosts from the properties in hdfs-site.xml. Please refer to the link below for the properties: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
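For reference, a small sketch of how those hdfs-site.xml properties can be inspected from the command line; the nameservice name "mycluster" is only a placeholder taken from the linked documentation, so substitute your own:

```sh
# List the NameNode machines the client knows about
hdfs getconf -namenodes

# Show the logical nameservice and the failover proxy provider the client will use
hdfs getconf -confKey dfs.nameservices
hdfs getconf -confKey dfs.client.failover.proxy.provider.mycluster
```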
05-24-2018
11:28 AM
Hi, I am facing the same issue where the Cloudera agent exits because it is unable to find the supervisord.conf file. The conf file is not getting created. Error seen in the cloudera-scm-agent.log file:

[24/May/2018 14:19:31 +0000] 10889 MainThread agent INFO Re-using pre-existing directory: /run/cloudera-scm-agent/supervisor/include
[24/May/2018 14:19:31 +0000] 10889 MainThread agent ERROR Failed to connect to previous supervisor.
Traceback (most recent call last):
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.0-py2.7.egg/cmf/agent.py", line 2110, in find_or_start_supervisor
    self.get_supervisor_process_info()
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.0-py2.7.egg/cmf/agent.py", line 2254, in get_supervisor_process_info
    self.identifier = self.supervisor_client.supervisor.getIdentification()
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/supervisor-3.0-py2.7.egg/supervisor/xmlrpc.py", line 460, in request
    self.connection.request('POST', handler, request_body, self.headers)
  File "/usr/lib64/python2.7/httplib.py", line 1017, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib64/python2.7/httplib.py", line 1051, in _send_request
    self.endheaders(body)
  File "/usr/lib64/python2.7/httplib.py", line 1013, in endheaders
    self._send_output(message_body)
  File "/usr/lib64/python2.7/httplib.py", line 864, in _send_output
    self.send(msg)
  File "/usr/lib64/python2.7/httplib.py", line 826, in send
    self.connect()
  File "/usr/lib64/python2.7/httplib.py", line 807, in connect
    self.timeout, self.source_address)
  File "/usr/lib64/python2.7/socket.py", line 571, in create_connection
    raise err
error: [Errno 111] Connection refused
[24/May/2018 14:19:31 +0000] 10889 Dummy-1 daemonize WARNING Stopping daemon.
[24/May/2018 14:19:31 +0000] 10889 Dummy-1 agent INFO Stopping agent...
[24/May/2018 14:19:31 +0000] 10889 Dummy-1 agent INFO No extant cgroups; unmounting any cgroup roots

The Cloudera Manager and agent version is 5.12.0. We did have a server failure, and after the reboot of the server the agent does not start. Ran the below command:

/usr/lib64/cmf/agent/build/env/bin/python /usr/lib64/cmf/agent/build/env/bin/cmf-agent --package_dir /usr/lib64/cmf/service --agent_dir /var/run/cloudera-scm-agent --lib_dir /var/lib/cloudera-scm-agent --logfile /var/log/cloudera-scm-agent/cloudera-scm-agent.log --comm_name cmf-agent --pidfile /var/run/cloudera-scm-agent/cloudera-scm-agent.pid

Output:

[24/May/2018 14:27:45 +0000] 11274 MainThread agent INFO SCM Agent Version: 5.12.0
[24/May/2018 14:27:45 +0000] 11274 MainThread agent WARNING Expected mode 0751 for /run/cloudera-scm-agent but was 0755
[24/May/2018 14:27:45 +0000] 11274 MainThread agent INFO Re-using pre-existing directory: /run/cloudera-scm-agent
[24/May/2018 14:27:45 +0000] 11274 MainThread agent INFO Not starting a new session.
[24/May/2018 14:27:45 +0000] 11274 MainThread agent INFO Adding env vars that start with CMF_AGENT_
[24/May/2018 14:27:45 +0000] 11274 MainThread agent INFO Logging to /var/log/cloudera-scm-agent/cloudera-scm-agent.log
Traceback (most recent call last):
  File "/usr/lib64/cmf/agent/build/env/bin/cmf-agent", line 12, in <module>
    load_entry_point('cmf==5.12.0', 'console_scripts', 'cmf-agent')()
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.0-py2.7.egg/cmf/agent.py", line 3095, in main
    main_impl()
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.0-py2.7.egg/cmf/agent.py", line 3078, in main_impl
    agent.start()
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.0-py2.7.egg/cmf/agent.py", line 804, in start
    self.find_or_start_supervisor()
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.0-py2.7.egg/cmf/agent.py", line 2151, in find_or_start_supervisor
    if not mount_tmpfs(process_dir, self.args.clear_agent_dir, self.sudo):
  File "/usr/lib64/cmf/agent/build/env/lib/python2.7/site-packages/cmf-5.12.0-py2.7.egg/cmf/tmpfs.py", line 62, in mount_tmpfs
    if os.path.samefile(p.mountpoint, path) and p.fstype == "tmpfs" and "noexec" not in p.opts:
  File "/usr/lib64/cmf/agent/build/env/lib64/python2.7/posixpath.py", line 162, in samefile
    s1 = os.stat(f1)
OSError: [Errno 2] No such file or directory: 'net:[4026532100]'

Could you please help with this? Is the issue fixed in v5.12.2?