Member since: 10-24-2015
Posts: 207
Kudos Received: 18
Solutions: 4
My Accepted Solutions
Title | Views | Posted
---|---|---
| 4436 | 03-04-2018 08:18 PM
| 4330 | 09-19-2017 04:01 PM
| 1809 | 01-28-2017 10:31 PM
| 976 | 12-08-2016 03:04 PM
12-14-2016
02:20 PM
@jss So you mean I should download it from our local Satellite repo once I get the network issues resolved...?
12-14-2016
12:07 PM
Hi, thanks for the answer. I forgot to paste the missing packages; they are listed below. They are being downloaded from another repo server that this host is unable to reach. Can I download these separately too and then follow the Ambari installation? Error Downloading Packages:
fuse-2.8.3-4.el6.x86_64: failure: Packages/fuse-2.8.3-4.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
procmail-3.22-25.1.el6_5.1.x86_64: failure: Packages/procmail-3.22-25.1.el6_5.1.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
hesiod-3.1.0-19.el6.x86_64: failure: Packages/hesiod-3.1.0-19.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
sendmail-8.14.4-9.el6.x86_64: failure: Packages/sendmail-8.14.4-9.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
crontabs-1.10-33.el6.noarch: failure: Packages/crontabs-1.10-33.el6.noarch.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-graphics-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-graphics-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-printing-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-printing-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-compat-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-compat-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
fuse-libs-2.8.3-4.el6.x86_64: failure: Packages/fuse-libs-2.8.3-4.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-core-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-core-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
cronie-anacron-1.4.4-15.el6.x86_64: failure: Packages/cronie-anacron-1.4.4-15.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
cronie-1.4.4-15.el6.x86_64: failure: Packages/cronie-1.4.4-15.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
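If these packages do have to be pulled by hand, a minimal sketch of one way to fetch them on a machine that can reach the Red Hat repos (assuming yum-utils is installed; the destination directory is illustrative):
mkdir -p /tmp/missing-rpms
yumdownloader --resolve --destdir=/tmp/missing-rpms \
  fuse fuse-libs procmail hesiod sendmail crontabs cronie cronie-anacron \
  redhat-lsb redhat-lsb-core redhat-lsb-compat redhat-lsb-graphics redhat-lsb-printing
# copy /tmp/missing-rpms to the host that cannot reach the repo, then:
yum localinstall /tmp/missing-rpms/*.rpm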
12-14-2016
11:58 AM
Hi all,
I asked a similar question earlier when I got this error, and I was told it is a network issue that will take some time to resolve. But my question is: is it OK to download the RPMs individually from the Red Hat RPM sites online and use them? Are the RPMs below generic, or do they have dependencies that force me to download them from the same repository my machine currently cannot reach? Even if I download them, where should I place them? When the install begins through Ambari, how would it find those RPMs? Automatically? Is that the right way to do it, or is it better practice to wait until my network issues are resolved? Thanks for all your suggestions.
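One common way to stage manually downloaded RPMs so that yum (and therefore the Ambari agent, which simply calls yum) can find them is to build a small local repository and point a .repo file at it. A sketch only, with illustrative paths and repo id, assuming the createrepo package is installed:
mkdir -p /var/local-rpms
cp /path/to/downloaded/*.rpm /var/local-rpms/
createrepo /var/local-rpms
cat > /etc/yum.repos.d/local-rpms.repo <<'EOF'
[local-rpms]
name=Locally staged RPMs
baseurl=file:///var/local-rpms
enabled=1
gpgcheck=0
EOF
yum clean all
yum repolist
Once the repo shows up in yum repolist, retrying the failed step from Ambari should pick the packages up like any other repository.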
Labels:
- Apache Ambari
12-07-2016
11:16 PM
I cleaned up a host and installed Ambari successfully. But when I tried to add the host through Ambari (with the repos already placed in yum.repos.d), the installation got stuck at installing the DataNode and gave the following error. Can somebody tell me what the issue could be? Network? This node and the Ambari server each use a different interface, and they talk through a common interface. Not sure if that makes sense; somebody please help me. This is the error from Ambari:
stderr: /var/lib/ambari-agent/data/errors-6076.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 167, in <module>
    DataNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 49, in install
    self.install_packages(env, params.exclude_packages)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 410, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install 'hadoop_2_4_*'' returned 1.
Error Downloading Packages:
fuse-2.8.3-4.el6.x86_64: failure: Packages/fuse-2.8.3-4.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
procmail-3.22-25.1.el6_5.1.x86_64: failure: Packages/procmail-3.22-25.1.el6_5.1.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
hesiod-3.1.0-19.el6.x86_64: failure: Packages/hesiod-3.1.0-19.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
sendmail-8.14.4-9.el6.x86_64: failure: Packages/sendmail-8.14.4-9.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
crontabs-1.10-33.el6.noarch: failure: Packages/crontabs-1.10-33.el6.noarch.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-graphics-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-graphics-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-printing-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-printing-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-compat-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-compat-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
fuse-libs-2.8.3-4.el6.x86_64: failure: Packages/fuse-libs-2.8.3-4.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-core-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-core-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
cronie-anacron-1.4.4-15.el6.x86_64: failure: Packages/cronie-anacron-1.4.4-15.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
cronie-1.4.4-15.el6.x86_64: failure: Packages/cronie-1.4.4-15.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
stdout: /var/lib/ambari-agent/data/output-6076.txt
2016-12-07 17:30:01,657 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-07 17:30:01,658 - Group['spark'] {}
2016-12-07 17:30:01,660 - Group['hadoop'] {}
2016-12-07 17:30:01,660 - Group['users'] {}
2016-12-07 17:30:01,660 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,661 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,661 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,662 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-12-07 17:30:01,662 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,663 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-12-07 17:30:01,664 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,664 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2016-12-07 17:30:01,665 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,665 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,666 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,666 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,667 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,667 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,668 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,669 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2016-12-07 17:30:01,669 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-12-07 17:30:01,671 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-12-07 17:30:01,677 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-12-07 17:30:01,677 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}
2016-12-07 17:30:01,678 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-12-07 17:30:01,679 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-12-07 17:30:01,685 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-12-07 17:30:01,685 - Group['hdfs'] {}
2016-12-07 17:30:01,686 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2016-12-07 17:30:01,686 - FS Type:
2016-12-07 17:30:01,686 - Directory['/etc/hadoop'] {'mode': 0755}
2016-12-07 17:30:01,687 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
2016-12-07 17:30:01,699 - Repository['HDP-2.4'] {'base_url': 'http://10.193.62.6/hdp/HDP/centos6/2.x/updates/2.4.2.0/', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2016-12-07 17:30:01,706 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.4]\nname=HDP-2.4\nbaseurl=http://10.193.62.6/hdp/HDP/centos6/2.x/updates/2.4.2.0/\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-12-07 17:30:01,707 - Repository['HDP-UTILS-1.1.0.20'] {'base_url': 'http://10.193.62.6/hdp/HDP-UTILS-1.1.0.20/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2016-12-07 17:30:01,710 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.20]\nname=HDP-UTILS-1.1.0.20\nbaseurl=http://10.193.62.6/hdp/HDP-UTILS-1.1.0.20/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2016-12-07 17:30:01,710 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-12-07 17:30:01,822 - Skipping installation of existing package unzip
2016-12-07 17:30:01,822 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-12-07 17:30:01,832 - Skipping installation of existing package curl
2016-12-07 17:30:01,833 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-12-07 17:30:01,842 - Skipping installation of existing package hdp-select
2016-12-07 17:30:01,978 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-07 17:30:01,982 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-12-07 17:30:01,988 - Package['rpcbind'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-12-07 17:30:02,097 - Skipping installation of existing package rpcbind
2016-12-07 17:30:02,098 - Package['hadoop_2_4_*'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-12-07 17:30:02,108 - Installing package hadoop_2_4_* ('/usr/bin/yum -d 0 -e 0 -y install 'hadoop_2_4_*''
When I tried to install manually using /usr/bin/yum install hadoop_2_4_*, it gave me the following error: Error Downloading Packages:
fuse-2.8.3-4.el6.x86_64: failure: Packages/fuse-2.8.3-4.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
procmail-3.22-25.1.el6_5.1.x86_64: failure: Packages/procmail-3.22-25.1.el6_5.1.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
hesiod-3.1.0-19.el6.x86_64: failure: Packages/hesiod-3.1.0-19.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
sendmail-8.14.4-9.el6.x86_64: failure: Packages/sendmail-8.14.4-9.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
crontabs-1.10-33.el6.noarch: failure: Packages/crontabs-1.10-33.el6.noarch.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-graphics-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-graphics-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-printing-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-printing-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-compat-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-compat-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
fuse-libs-2.8.3-4.el6.x86_64: failure: Packages/fuse-libs-2.8.3-4.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-core-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-core-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
cronie-anacron-1.4.4-15.el6.x86_64: failure: Packages/cronie-anacron-1.4.4-15.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
cronie-1.4.4-15.el6.x86_64: failure: Packages/cronie-1.4.4-15.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
redhat-lsb-4.0-7.el6.x86_64: failure: Packages/redhat-lsb-4.0-7.el6.x86_64.rpm from rhel-http: [Errno 256] No more mirrors to try.
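Because every failure points at the same rhel-http repository, it can help to test that repository directly from the failing host before retrying in Ambari. A rough sketch (the URL is a placeholder for whatever baseurl the rhel-http repo file actually defines):
# find the .repo file that defines rhel-http and note its baseurl
grep -r -A3 'rhel-http' /etc/yum.repos.d/
# check the host can reach that baseurl (substitute the real URL)
curl -I http://REPO-HOST/PATH/repodata/repomd.xml
# confirm yum itself sees the repo and one of the failing packages
yum clean all
yum --disablerepo='*' --enablerepo=rhel-http repolist
yum --disablerepo='*' --enablerepo=rhel-http list available fuse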
12-06-2016
02:53 PM
Hi, thanks so much for the information. For now I deleted the HDFS /tmp folder contents, which had been lying there for a long time. This freed up about 500 GB of space on HDFS in total, and that particular disk went down from 90% to 82%. How is that possible? The other disk that had the same issue also went down to 82%. My question is: did the disk usage go down just because I deleted the /tmp folder, or does the disk usage also fluctuate because of other running jobs? Also, I thought MapReduce uses the local disk for storing intermediate data, so what is actually stored in the HDFS /tmp directory? I assumed that is where the intermediate data is stored and what was using the HDFS space. Thanks again in advance.
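For checking where HDFS space is actually going (and whether it keeps shifting because of running jobs), something along these lines, run as the hdfs user, gives both the HDFS-level and per-datanode view:
hdfs dfs -du -h /
hdfs dfs -du -h /tmp
hdfs dfsadmin -report
As a general point, MapReduce shuffle/intermediate data typically lives on the NodeManagers' local disks, while HDFS /tmp usually holds job staging files and scratch data from tools such as Hive, so deleting old /tmp contents freeing HDFS space is expected.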
12-05-2016
06:57 PM
Hi aengineer! Thanks so much for the information. Is the fix good for any HDP release? I want to use it on HDP 2.1 and 2.4.2. Also, if I need to rebalance the disks, do I really need to decommission the node, or can I just stop the DataNode and start it again after a while? Thanks in advance.
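As far as I'm aware, these HDP releases don't ship an intra-datanode disk balancer, but new block writes can at least be steered toward the emptier disks by changing the volume choosing policy in hdfs-site.xml (set through Ambari's HDFS configs). A sketch only; the threshold value is an example, not a recommendation:
<property>
  <name>dfs.datanode.fsdataset.volume.choosing.policy</name>
  <value>org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy</value>
</property>
<property>
  <!-- disks whose free space differs by less than ~10 GB are treated as balanced -->
  <name>dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold</name>
  <value>10737418240</value>
</property>
This only affects where new blocks land; it does not move existing blocks between disks.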
12-05-2016
02:58 PM
Hi Qureshi, here is the output of cat /proc/mounts: [root@str14 ~]# cat /proc/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
devtmpfs /dev devtmpfs rw,relatime,size=132217636k,nr_inodes=33054409,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/mapper/VolGroup00-LogVol00 / ext4 rw,noatime,nodiratime,barrier=1,data=ordered 0 0
/proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
/dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0
/dev/sdb1 /data01 ext4 rw,noatime,nodiratime,commit=60,barrier=1,nobh,data=writeback 0 0
/dev/sdc1 /data02 ext4 rw,noatime,nodiratime,commit=60,barrier=1,nobh,data=writeback 0 0
/dev/sdd1 /data03 ext4 rw,noatime,nodiratime,commit=60,barrier=1,nobh,data=writeback 0 0
/dev/sde1 /data04 ext4 rw,noatime,nodiratime,commit=60,barrier=1,nobh,data=writeback 0 0
/dev/sdf1 /data05 ext4 rw,noatime,nodiratime,commit=60,barrier=1,nobh,data=writeback 0 0
/dev/sdg1 /data06 ext4 rw,noatime,nodiratime,commit=60,barrier=1,nobh,data=writeback 0 0
/dev/sdh1 /data07 ext4 rw,noatime,nodiratime,commit=60,barrier=1,nobh,data=writeback 0 0
/dev/sdi1 /data08 ext4 rw,noatime,nodiratime,commit=60,barrier=1,nobh,data=writeback 0 0
/dev/sdj1 /data09 ext4 rw,noatime,nodiratime,commit=60,barrier=1,nobh,data=writeback 0 0
/dev/sdk1 /data10 ext4 rw,noatime,nodiratime,commit=60,barrier=1,nobh,data=writeback 0 0
/dev/sdl1 /data11 ext4 rw,noatime,nodiratime,commit=60,barrier=1,nobh,data=writeback 0 0
/dev/mapper/VolGroup00-LogVol04 /home ext4 rw,noatime,nodiratime,barrier=1,data=ordered 0 0
/dev/mapper/VolGroup00-LogVol03 /opt ext4 rw,noatime,nodiratime,barrier=1,data=ordered 0 0
/dev/mapper/VolGroup00-LogVol05 /tmp ext4 rw,noatime,nodiratime,barrier=1,data=ordered 0 0
/dev/mapper/VolGroup00-LogVol02 /var ext4 rw,noatime,nodiratime,barrier=1,data=ordered 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
nfsd /proc/fs/nfsd nfsd rw,relatime 0 0
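To see how unevenly the HDFS block data is spread across these mounts (as opposed to overall filesystem usage), comparing df with a du of the DataNode data directories is usually enough. The /data*/hadoop/hdfs/data path below is an assumption about where dfs.datanode.data.dir points on this host:
df -h /data01 /data02 /data03 /data04 /data05 /data06 /data07 /data08 /data09 /data10 /data11
du -sh /data*/hadoop/hdfs/data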
12-04-2016
10:34 PM
Hi Binu, thanks so much for your reply. But rebalancing is done among DataNodes; I don't think it will balance the disks within a DataNode. Do you think it will still work, or is there another solution?
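For reference, the standard balancer is invoked roughly like this (run as the hdfs user), and on these releases it only evens data out between DataNodes, not between the disks inside a single DataNode:
hdfs balancer -threshold 10
The threshold is the allowed deviation, in percent, of each DataNode's usage from the cluster average.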
12-04-2016
08:56 PM
The exact same thing happened on another DataNode. Can somebody please help me with the cause and possible solutions?