Member since: 08-14-2016
Posts: 16
Kudos Received: 3
Solutions: 0
05-17-2018
08:09 PM
After logging into Ranger > Audit > Access, I was getting the error below:
Error: Error running solr query, please check solr configs. Could not find a healthy node to handle the request.
When I looked at solr.log, I saw the warning below. I don't understand why I'm getting this error: the path is my data directory and it exists, but Solr was not able to write anything into it.
[main] WARN [ ] org.apache.solr.core.CoreContainer (CoreContainer.java:398) - Couldn't add files from /hadoop/ambari_infra_solr/data/lib to classpath: /hadoop/ambari_infra_solr/data/lib
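A couple of checks I would run on the Solr node, assuming the Ambari Infra Solr service runs as the infra-solr user and that the data directory is the one from the warning (verify both on your host before changing anything):

ls -ld /hadoop/ambari_infra_solr/data /hadoop/ambari_infra_solr/data/lib    # does lib/ exist, and who owns the data directory?
sudo -u infra-solr touch /hadoop/ambari_infra_solr/data/.write_test         # can the service user actually write there?
# if ownership turned out to be wrong, something like this may help (user/group are assumptions):
# chown -R infra-solr:hadoop /hadoop/ambari_infra_solr/data

The classpath warning itself is often benign when lib/ simply does not exist; the "no healthy node" error in Ranger more likely means the ranger_audits collection is not up in Solr, which is worth checking in the Solr admin UI as well.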
01-11-2017
09:39 PM
bin/kafka-topics.sh --list --zookeeper ip:2181,ip:2181,ip:2181
ATLAS_ENTITIES
ambari_kafka_service_check
------------------------------------------------
I don't know what's wrong; a critical topic (ATLAS_HOOK) is missing. What do I have to do now?
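If the topic simply never got created, one option is to create it by hand with the same tool. A minimal sketch, assuming a small cluster where one partition and one replica are enough (adjust the ZooKeeper address, partitions, and replication factor to your environment):

bin/kafka-topics.sh --create --zookeeper ip:2181 --topic ATLAS_HOOK --partitions 1 --replication-factor 1
bin/kafka-topics.sh --list --zookeeper ip:2181    # verify ATLAS_HOOK now shows up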
10-06-2016
01:46 AM
I just created one node and then attached the rest of the nodes one by one. I didn't find any other solution.
09-21-2016
04:53 PM
No, nothing is there. I spun up multiple clusters with this repo and faced the same issue every time: http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.4.0.1/ambari.repo
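A couple of quick checks I would run on an affected host, just plain curl/yum usage (nothing Ambari-specific is assumed here):

curl -I http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.4.0.1/ambari.repo    # is the repo file reachable from the node?
yum clean all && yum repolist enabled                                                          # refresh metadata and confirm the ambari repo is listed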
09-21-2016
04:35 PM
I was getting this issue with Ambari 2.4.0.1. Is this something they have to fix in a later version?
09-19-2016
02:46 PM
2 Kudos
I tried to add services on multiple clusters using the above version and faced the same issue; the screen just sat there.
09-19-2016
02:41 PM
1 Kudo
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install mysql-community-release' returned 1. Error: Nothing to do
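"Error: Nothing to do" usually means yum either cannot find a package with that name in any enabled repo, or the package is already installed. Some checks I would run first, plain rpm/yum usage:

rpm -q mysql-community-release                  # already installed? then yum really has nothing to do
yum list available mysql-community-release      # visible in any enabled repo?
yum repolist enabled                            # is the MySQL community repo configured on this host at all?

If the MySQL community repo is missing, adding it (the mysql-community-release RPM from dev.mysql.com) before retrying is the usual fix; the exact RPM name depends on the OS and MySQL version, so I am not guessing one here.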
09-13-2016
06:47 PM
I'm not sure why I'm getting base URLs for two different versions in two different files.

hdp.repo file:
#VERSION_NUMBER=2.4.2.0-258
[HDP-2.4.2.0]
name=HDP Version - HDP-2.4.2.0
baseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.2.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.2.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[HDP-UTILS-1.1.0.20]
name=HDP Utils Version - HDP-UTILS-1.1.0.20
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos7
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.2.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

-----------------------------------------------------------------------------

HDP.repo file:
[HDP-2.4]
name=HDP-2.4
baseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.4.3.0
path=/
enabled=1
gpgcheck=0
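Since only one HDP version should be active on a node, a sketch of how I would resolve the conflict, assuming 2.4.2.0 is the version Ambari actually registered (confirm under Admin > Stack and Versions before removing anything):

grep -r baseurl /etc/yum.repos.d/ | grep -i hdp    # every HDP baseurl yum currently knows about
mv /etc/yum.repos.d/HDP.repo /root/HDP.repo.bak    # set aside the file pointing at the unwanted 2.4.3.0 baseurl
yum clean all && yum repolist enabled              # refresh metadata so only one HDP version remains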
09-13-2016
06:34 PM
It was a fresh installation, so why do I need to make those links?
09-13-2016
06:33 PM
It's a fresh installation.
09-13-2016
05:33 PM
It doesn't work; I was even getting errors installing hdfs-client, hive-client, and mapreduce2-client:
raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname))
resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] failed, parent directory /usr/hdp/current/hadoop-client/conf doesn't exist
----------------------------------------------------------------
raise Fail("Applying %s failed, looped symbolic links found while resolving %s" % (self.resource, path))
resource_management.core.exceptions.Fail: Applying Directory['/usr/hdp/current/hive-client/conf'] failed, looped symbolic links found while resolving /usr/hdp/current/hive-client/conf
-------------------------------------------------------------------
raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname))
resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] failed, parent directory /usr/hdp/current/hadoop-client/conf doesn't exist
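These errors usually mean the /usr/hdp/current/* component links never got created or point at a broken target. A sketch of how I would check and repair them with hdp-select, assuming the installed stack build is 2.4.2.0-258 (substitute whatever ls /usr/hdp/ actually shows on your node):

hdp-select status | grep -E 'hadoop-client|hive-client|mapreduce'    # which version each component link currently points at
ls /usr/hdp/                                                         # confirm the installed version directory
hdp-select set all 2.4.2.0-258                                       # re-point all component links at that version

For the looped hive-client conf link, I would inspect it with ls -l /usr/hdp/current/hive-client first and remove the bad link before re-running hdp-select set; that last step is an assumption about the cause, not something the error proves.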
09-13-2016
01:59 PM
raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname)) resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/spark-historyserver/conf/spark-defaults.conf'] failed, parent directory /usr/hdp/current/spark-historyserver/conf doesn't exist
09-06-2016
07:43 PM
"tez.am.resource.memory.mb" : "6144" "tez.task.resource.memory.mb" : "6144" "tez.am.launch.cmd-opts" : "4915", "tez.runtime.unordered.output.buffer.size-mb" : "614" "hive.tez.container.size" : "6144", "hive.tez.java.opts" : "-Xmx4915m" I was keep changing memory from 4gb to 10gb. but still failing, can anyone of you able to help me. ############################################################################################ Status: Failed
Vertex failed, vertexName=Reducer 3, vertexId=vertex_1472766011697_0003_1_04, diagnostics=[Task failed, taskId=task_1472766011697_0003_1_04_000001, diagnostics=[TaskAttempt 0 failed, info=[Container container_e02_1472766011697_0003_01_000010 finished with diagnostics set to [Container failed, exitCode=1. Exception from container-launch.
Container id: container_e02_1472766011697_0003_01_000010
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
at org.apache.hadoop.util.Shell.run(Shell.java:487)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:303)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
]], TaskAttempt 1 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:159)
at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:344)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:181)
at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
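Since the failure is "GC overhead limit exceeded" inside a Tez task, the JVM heap (-Xmx) has to track the container sizes above; note that the posted tez.am.launch.cmd-opts value "4915" is missing the -Xmx flag entirely. A sketch of the settings I would try, assuming the usual rule of thumb of roughly 80% of the container for the heap and about 10% for the unordered output buffer (the exact numbers are assumptions, not a prescription):

"tez.am.resource.memory.mb" : "6144"
"tez.am.launch.cmd-opts" : "-Xmx4915m"
"tez.task.resource.memory.mb" : "6144"
"tez.task.launch.cmd-opts" : "-Xmx4915m"
"hive.tez.container.size" : "6144"
"hive.tez.java.opts" : "-Xmx4915m"
"tez.runtime.unordered.output.buffer.size-mb" : "614"

If Reducer 3 still runs out of heap after that, raising hive.tez.container.size (with a matching -Xmx) or increasing reducer parallelism so each reducer handles less data would be the next things I would try.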
09-06-2016
07:33 PM
So, do I need to install the HDFS client on all the nodes?
09-06-2016
06:01 PM
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py", line 39, in <module>
BeforeStartHook().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/hook.py", line 36, in hook
create_topology_script_and_mapping()
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/rack_awareness.py", line 69, in create_topology_script_and_mapping
create_topology_mapping()
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-START/scripts/rack_awareness.py", line 36, in create_topology_mapping
group=params.user_group)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 152, in action_create
sudo.makedirs(path, self.resource.mode or 0755)
File "/usr/lib/python2.6/site-packages/resource_management/core/sudo.py", line 55, in makedirs
os.makedirs(path, mode)
File "/usr/lib64/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 2] No such file or directory: '/usr/hdp/current/hadoop-client/conf'
stdout: /var/lib/ambari-agent/data/output-1398.txt
Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
Group['spark'] {}
Group['hadoop'] {}
Group['users'] {}
User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
Group['hdfs'] {}
User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
Directory['/etc/hadoop'] {'mode': 0755}
Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0777}
Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
Skipping Execute[('setenforce', '0')] due to not_if
Directory['/var/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}
Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}
Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}
Directory['/etc/hadoop/conf'] {'owner': 'hdfs', 'group': 'hadoop', 'recursive': True}
Creating directory Directory['/etc/hadoop/conf'] since it doesn't exist.
Following the link /etc/hadoop/conf to /usr/hdp/current/hadoop-client/conf to create the directory
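The last two lines point at the root cause: /etc/hadoop/conf is a link into /usr/hdp/current/hadoop-client, and that target does not resolve on this host because the Hadoop client bits (or their hdp-select link) are missing. What I would check, assuming an Ambari-managed HDP 2.x node:

rpm -qa | grep -i hadoop           # are any Hadoop client packages installed on this node at all?
hdp-select status hadoop-client    # does the component link exist, and which version does it point at?

If nothing is installed, adding the HDFS Client (and the other missing clients) to this host through the Ambari UI (Hosts > host > +Add) and then re-running the start operation is the usual fix.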
08-14-2016
09:00 PM
Here I just want to get my Resource Manager host IP and its port, for example: http://IP:8088, http://IP:50070, http://IP:19888.
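Two ways I usually find these, assuming an Ambari-managed cluster with default ports (adjust the admin credentials, Ambari host, and cluster name; the property names are standard Hadoop ones):

# ask Ambari which host runs the ResourceManager
curl -s -u admin:admin "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/YARN/components/RESOURCEMANAGER?fields=host_components/HostRoles/host_name"

# or read the web UI addresses straight from the client configs on any cluster node
grep -A1 yarn.resourcemanager.webapp.address /etc/hadoop/conf/yarn-site.xml    # RM UI, typically IP:8088
grep -A1 dfs.namenode.http-address /etc/hadoop/conf/hdfs-site.xml              # NameNode UI, typically IP:50070
grep -A1 mapreduce.jobhistory.webapp.address /etc/hadoop/conf/mapred-site.xml  # JobHistory UI, typically IP:19888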