Member since: 09-19-2016
Posts: 36
Kudos Received: 5
Solutions: 2

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1481 | 07-21-2018 08:06 AM |
| | 744 | 06-08-2017 09:11 AM |
11-12-2018
06:16 AM
Well...the command is not the same. This time I used oozie.coord.application.path instead of oozie.wf.application.path.
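In other words, the working command was the one from my 06-24-2017 reply further down this page:

```bash
# Submitting with oozie.coord.application.path makes Oozie treat coordinator.xml
# as a coordinator; with oozie.wf.application.path it was parsed as a workflow,
# which matches the E0723 "Unsupported action type, node [workflow]" error below.
oozie job --oozie http://node3:11000/oozie \
  --config '/usr/hdp/current/oozie-server/sqoop-sample/test02/job.properties' \
  -D oozie.coord.application.path=hdfs://node2:8020/user/root/test/sqoop/coordinator.xml \
  -run
```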
10-08-2018
12:23 PM
I have the same issue. Were you able to find any solution?
07-21-2018
08:06 AM
I solved this issue; I'm just posting this for anyone who may have the same problem. I strengthened the network link between my database and my big data servers. The link was slow, so the Sqoop transfer rate had dropped very low.
07-08-2018
08:53 AM
Thanks for your quick reply. Is there any other way to accomplish the import with less memory, even if it is slower? My memory resources are limited; about 55 GB is assigned to YARN. Another question: what is the proper memory size for mappers? I googled this a lot and concluded that I need to reduce my mapper memory and increase the number of mappers, as you said, e.g. 100 mappers. Take a look at my reference; does this sound OK to you? P.S. my mappers have 3 GB and my reducers have 2 GB.
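For example, is something along these lines the right direction? The memory figures are just my guesses, not tested values, and the connection details are the placeholders from my original command:

```bash
# Sketch only: smaller map containers plus more splits. The memory numbers are
# illustrative guesses; connection details are the placeholders from the
# original command.
sqoop import \
  -Dmapreduce.map.memory.mb=2048 \
  -Dmapreduce.map.java.opts=-Xmx1638m \
  --connect jdbc:oracle:thin:@//serverIP:Port/xxxx \
  --query "SELECT col1,col2,col3 FROM table WHERE \$CONDITIONS" \
  --split-by col1 \
  --num-mappers 100 \
  --target-dir /user/root/myresult \
  --username xxx --password xxx
```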
07-07-2018
01:17 PM
Hi,
I entered this command to import some data from Oracle. It works, and the result has 1.3 million records:

sqoop-import --connect jdbc:oracle:thin:@//serverIP:Port/xxxx --query "SELECT col1,col2,col3 FROM table where condition AND \$CONDITIONS " --target-dir /user/root/myresult --split-by col1 -m 10 --username xxx --password xxx

But when I delete the condition to import the whole table, which has 12 million records, it fails. The first maps are always logged as succeeded and the last one just hangs, but when I check the MapReduce logs for the succeeded maps, I see that they have actually failed with this message:

container killed by the applicationmaster. container killed on request. exit code is 143 container exited with a non-zero exit code 143.

I googled and found https://stackoverflow.com/questions/42306865/sqoop-job-get-stuck-when-import-data-from-oracle-to-hive which looks like the same issue, but that post hasn't been answered yet. It would be helpful if you could take a look.
Labels:
- Apache Hadoop
- Apache Sqoop
- Apache YARN
07-31-2017
07:57 AM
1 Kudo
Hello, I start my Zeppelin from Ambari, but after a few seconds it stops. I checked /var/log/zeppelin and nothing is logged. Any idea?
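Would something like the following be a sensible way to get more output? The zeppelin-daemon.sh path is my assumption based on an HDP-style layout; adjust it if Zeppelin lives elsewhere:

```bash
# Assumed HDP-style install location; adjust to the actual Zeppelin home.
ZEPPELIN_HOME=/usr/hdp/current/zeppelin-server

# Check the daemon state and start it as the zeppelin user so any startup
# error is printed to the console instead of disappearing.
sudo -u zeppelin "$ZEPPELIN_HOME/bin/zeppelin-daemon.sh" status
sudo -u zeppelin "$ZEPPELIN_HOME/bin/zeppelin-daemon.sh" start

# The daemon also writes .out/.log files under its own logs directory,
# so check there as well as /var/log/zeppelin.
ls -lt /var/log/zeppelin "$ZEPPELIN_HOME/logs" 2>/dev/null | head
```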
Labels:
- Apache Zeppelin
07-04-2017
03:08 PM
Hi, I need to write to a local file using an action, but I similarly get a permission error. Do you have any idea?
07-04-2017
02:33 PM
Hi, I need my job.properties to be edited each time a sqoop action has run through an Oozie coordinator. I have added a shell action after the sqoop action in my workflow. My job.properties is not in HDFS; it is located on node2, where I run the oozie job (I have 5 nodes). But the shell action hits a 'permission denied' error, and sometimes it says 'no such file or directory'. I moved the file to HDFS, but the same thing happened. Has anyone done such a thing? I execute the shell commands locally and they work correctly, but somehow when the job runs distributed I cannot control the permissions.
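For example, would keeping job.properties in HDFS and editing it from within the shell action be the right approach? This is only a sketch of what I have in mind; the HDFS path and the property being edited are placeholders:

```bash
#!/bin/bash
# Sketch only: edit a job.properties kept in HDFS instead of on one node's
# local disk, since the shell action can run on any node as the container user.
set -euo pipefail

PROPS_HDFS=/user/root/test/sqoop/job.properties   # placeholder HDFS location
WORKDIR=$(mktemp -d)

# Fetch the current file into the container's scratch directory.
hdfs dfs -get "$PROPS_HDFS" "$WORKDIR/job.properties"

# Placeholder edit: record the last run date in a property.
sed -i "s/^lastRunDate=.*/lastRunDate=$(date +%Y-%m-%d)/" "$WORKDIR/job.properties"

# Overwrite the old copy in HDFS.
hdfs dfs -put -f "$WORKDIR/job.properties" "$PROPS_HDFS"
rm -rf "$WORKDIR"
```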
Labels:
- Apache Hadoop
- Apache Oozie
07-04-2017
02:25 PM
I solved this issue by adding one line to my job.properties:

oozie.action.sharelib.for.sqoop=sqoop,hive

and two lines to the workflow, after the query in the sqoop action:

<file>/user/root/test/shell/hive-site.xml#hive-site.xml</file>
<file>/user/root/test/shell/hive-site.xml#tez-site.xml</file>
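In case it helps anyone reading this later, the extra line simply goes into the job.properties used at submission time; as a sketch (the path is the one from my own setup, adjust to yours):

```bash
# Append the sharelib override to the job.properties used when submitting the job.
echo 'oozie.action.sharelib.for.sqoop=sqoop,hive' \
  >> /usr/hdp/current/oozie-server/sqoop-sample/test02/job.properties
```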
07-03-2017
07:36 AM
Hi, I have a workflow with two actions: one is a sqoop action and the other is a shell action. I have tested both actions in the shell earlier. Now that I need Oozie to execute them, it fails in the sqoop action with the following error:
java.lang.NoClassDefFoundError: org/apache/hadoop/hive/conf/HiveConf$ConfVars
at org.apache.hive.hcatalog.common.HCatConstants.<clinit>(HCatConstants.java:74)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureHCat(SqoopHCatUtilities.java:299)
at org.apache.sqoop.mapreduce.hcat.SqoopHCatUtilities.configureImportOutputFormat(SqoopHCatUtilities.java:848)
at org.apache.sqoop.mapreduce.ImportJobBase.configureOutputFormat(ImportJobBase.java:102)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:263)
at org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:748)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:509)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:615)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.main(Sqoop.java:243)
at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:202)
at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:182)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:51)
at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:242)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf$ConfVars
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 31 more

The sqoop action is meant to import some data from MySQL into a partitioned Hive table. Does anyone know how to solve it? P.S. I have all my services on a 5-node Ambari cluster; my HDP version is 2.5.
06-24-2017
09:24 AM
I need to import data from MySQL into HDFS at the end of every day. I have scheduled my sqoop action using Oozie. My sqoop command must put the data in a new directory every time, so I need to name that directory in a way that will not repeat after the first run; I cannot provide a constant --target-dir. I tried to use 'current-date' so that every day a new directory is created, but this function is not supported in XML 1.0 and Oozie does not support version 2.0. Do you know any other way to create an incremental, unique id?
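For example, if I submitted the job from a script instead of relying on an EL function inside the XML, I imagine something like the sketch below; targetDir is only a placeholder property name that my workflow's sqoop --target-dir would read, not something I have working. (Inside a coordinator, I understand an EL expression such as coord:formatTime(coord:nominalTime(), 'yyyy-MM-dd') can serve the same purpose.)

```bash
# Sketch: build a date-stamped, non-repeating directory name at submission time
# and pass it to the workflow as a property. "targetDir" is a placeholder that
# the sqoop <command> would reference as ${targetDir}.
RUN_DATE=$(date +%Y-%m-%d)

oozie job --oozie http://node3:11000/oozie \
  --config '/usr/hdp/current/oozie-server/sqoop-sample/test02/job.properties' \
  -D targetDir=/user/root/testdata2/${RUN_DATE} \
  -run
```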
Labels:
- Apache Oozie
- Apache Sqoop
06-24-2017
09:17 AM
I changed my command to: oozie job --oozie http://node3:11000/oozie --config '/usr/hdp/current/oozie-server/sqoop-sample/test02/job.properties' -D oozie.coord.application.path=hdfs://node2:8020/user/root/test/sqoop/coordinator.xml -run and it worked.
06-20-2017
05:31 AM
I tried, but the same error appeared. Have you run a coordinator successfully? If so, can you provide your oozie-site.xml file? I think my problem originates there, but I don't know what is missing or misconfigured.
06-19-2017
08:31 AM
Hi, I have a workflow that I have run successfully before. Now I need a coordinator to schedule it. My command is:

oozie job --oozie http://node3:11000/oozie --config '/usr/hdp/current/oozie-server/sqoop-sample/test02/job.properties' -D oozie.wf.application.path=hdfs://node2:8020/user/root/test/sqoop/coordinator.xml -run

My workflow is:

<?xml version="1.0" encoding="UTF-8"?>
<workflow-app xmlns="uri:oozie:workflow:0.2" name="sqoop-wf">
<start to="sqoop-node"/>
<action name="sqoop-node">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<prepare>
<delete path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data/sqoop"/>
<mkdir path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data"/>
</prepare>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<command>import --driver com.mysql.jdbc.Driver --connect jdbc:mysql://node1/mydb --table topop --target-dir /user/root/testdata2 --username user --password mypassword -m 1</command>
</sqoop>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Sqoop failed, error message</message>
</kill>
<end name="end"/>
</workflow-app>

Here is coordinator.xml:

<?xml version="1.0" encoding="UTF-8"?>
<coordinator-app xmlns="uri:oozie:coordinator:0.2" name="sqoop-wf" frequency="${coord:days(1)}" start="2017-06-18T12:50Z" end="2018-06-18T12:15Z" timezone="United_kingdom/London">
<action>
<workflow>
<app-path>${nameNode}/user/root/test/sqoop</app-path>
</workflow>
</action>
</coordinator-app>

And finally oozie-site.xml:

<configuration>
<property>
<name>oozie.action.retry.interval</name>
<value>30</value>
</property>
<property>
<name>oozie.authentication.simple.anonymous.allowed</name>
<value>true</value>
</property>
<property>
<name>oozie.authentication.type</name>
<value>simple</value>
</property>
<property>
<name>oozie.base.url</name>
<value>http://node3:11000/oozie</value>
</property>
<property>
<name>oozie.credentials.credentialclasses</name>
<value>hcat=org.apache.oozie.action.hadoop.HCatCredentials,hive2=org.apache.oozie.action.hadoop.Hive2Credentials</value>
</property>
<property>
<name>oozie.db.schema.name</name>
<value>oozie</value>
</property>
<property>
<name>oozie.service.ActionService.executor.ext.classes</name>
<value>org.apache.oozie.action.email.EmailActionExecutor,
org.apache.oozie.action.hadoop.HiveActionExecutor,
org.apache.oozie.action.hadoop.ShellActionExecutor,
org.apache.oozie.action.hadoop.SqoopActionExecutor</value>
</property>
<property>
<name>oozie.service.AuthorizationService.security.enabled</name>
<value>true</value>
</property>
<property>
<name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
<value>*=/usr/hdp/current/hadoop-client/conf</value>
</property>
<property>
<name>oozie.service.HadoopAccessorService.kerberos.enabled</name>
<value>false</value>
</property>
<property>
<name>oozie.service.JPAService.jdbc.driver</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>oozie.service.JPAService.jdbc.password</name>
<value>SECRET:oozie-site:12:oozie.service.JPAService.jdbc.password</value>
</property>
<property>
<name>oozie.service.JPAService.jdbc.url</name>
<value>jdbc:mysql://node1/oozie</value>
</property>
<property>
<name>oozie.service.JPAService.jdbc.username</name>
<value>root</value>
</property>
<property>
<name>oozie.service.SchemaService.wf.ext.schemas</name>
<value>shell-action-0.1.xsd,email-action-0.1.xsd,hive-action-0.2.xsd,sqoop-action-0.2.xsd,ssh-action-0.1.xsd,oozie-coordinator-0.2.xsd,oozie-workflow-0.5.xsd</value>
</property>
<property>
<name>oozie.service.SparkConfigurationService.spark.configurations</name>
<value>*=spark-conf</value>
</property>
<property>
<name>oozie.service.URIHandlerService.uri.handlers</name>
<value>org.apache.oozie.dependency.FSURIHandler,org.apache.oozie.dependency.HCatURIHandler</value>
</property>
<property>
<name>oozie.services.ext</name>
<value>org.apache.oozie.service.JMSAccessorService,org.apache.oozie.service.PartitionDependencyManagerService,org.apache.oozie.service.HCatAccessorService,org.apache.oozie.service.ActionService</value>
</property>
</configuration>

And the error message:

Error: E0723 : E0723: Unsupported action type, node [workflow] type [org.apache.oozie.service.ActionService]

Any idea?
Labels:
- Apache Ambari
- Apache Oozie
06-10-2017
05:44 AM
Hi, I want to increase my NameNode Java heap size. I put the NameNode in maintenance mode, but the configs are still disabled. I even turned off all HDFS services, but that didn't work either. Any idea?
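For reference, the value I am trying to change is the NameNode heap that, on a cluster not managed by Ambari, would normally be set in hadoop-env.sh; the 4 GB figure below is just an example, and under Ambari this file is generated from the UI config, so this is only an illustration of the setting involved:

```bash
# Illustration only: the equivalent setting in hadoop-env.sh on a manually
# managed cluster. Under Ambari, edit it in the HDFS config UI instead.
export HADOOP_NAMENODE_OPTS="-Xmx4096m ${HADOOP_NAMENODE_OPTS}"
```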
Labels:
- Apache Ambari
- Apache Hadoop
06-08-2017
09:11 AM
Since the error message contains "invalid user: falcon", I tried to create the falcon user manually:

adduser -g falcon falcon

but there was an error about /etc/gshadow.lock. I figured out that there had been an incomplete attempt to create the falcon user; it was not successful, and gshadow.lock was created but never deleted (normally it is removed after the user is created). So:

rm /etc/gshadow.lock
yum install falcon

And the problem is gone!
06-07-2017
09:15 PM
Hi, I have 5 OpenStack nodes, one as the ambari-server and the other four as my agents. In the deployment step of creating the cluster, all services and slaves install on three of the nodes, but one node fails at "installing oozie". I checked the logs and the failure is due to Falcon. I tried to install it manually with "yum install falcon", but the same error happens. Here is stderr:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_client.py", line 76, in <module>
OozieClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_client.py", line 37, in install
self.install_packages(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 567, in install_packages
retry_count=agent_stack_retry_count)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install falcon_2_5_3_0_37' returned 1. There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).
No Presto metadata available for HDP-2.5
/usr/bin/install: invalid user 'falcon'
/usr/bin/install: invalid user 'falcon'
error: %pre(falcon_2_5_3_0_37-0.10.0.2.5.3.0-37.el6.noarch) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package falcon_2_5_3_0_37-0.10.0.2.5.3.0-37.el6.noarch

and stdout:

2017-06-06 14:40:50,770 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-06-06 14:40:50,771 - Group['livy'] {}
2017-06-06 14:40:50,772 - Group['spark'] {}
2017-06-06 14:40:50,772 - Group['zeppelin'] {}
2017-06-06 14:40:50,773 - Group['hadoop'] {}
2017-06-06 14:40:50,773 - Group['users'] {}
2017-06-06 14:40:50,773 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,774 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,774 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,775 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,776 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-06-06 14:40:50,776 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,777 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-06-06 14:40:50,778 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,778 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,779 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,780 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-06-06 14:40:50,780 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,781 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,781 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,782 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,783 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,784 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,785 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,785 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-06 14:40:50,787 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-06-06 14:40:50,792 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-06-06 14:40:50,793 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-06-06 14:40:50,794 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-06 14:40:50,795 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-06-06 14:40:50,799 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-06-06 14:40:50,800 - Group['hdfs'] {}
2017-06-06 14:40:50,800 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-06-06 14:40:50,801 - FS Type:
2017-06-06 14:40:50,801 - Directory['/etc/hadoop'] {'mode': 0755}
2017-06-06 14:40:50,813 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-06-06 14:40:50,814 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-06-06 14:40:50,835 - Initializing 2 repositories
2017-06-06 14:40:50,836 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-06-06 14:40:50,842 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-06-06 14:40:50,843 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-06-06 14:40:50,845 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-06-06 14:40:50,845 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:50,929 - Skipping installation of existing package unzip
2017-06-06 14:40:50,930 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:50,945 - Skipping installation of existing package curl
2017-06-06 14:40:50,945 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:50,959 - Skipping installation of existing package hdp-select
2017-06-06 14:40:51,735 - Package['zip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:51,811 - Skipping installation of existing package zip
2017-06-06 14:40:51,812 - Package['extjs'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:51,825 - Skipping installation of existing package extjs
2017-06-06 14:40:51,826 - Package['oozie_2_5_3_0_37'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:51,839 - Skipping installation of existing package oozie_2_5_3_0_37
2017-06-06 14:40:51,840 - Package['falcon_2_5_3_0_37'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:51,853 - Installing package falcon_2_5_3_0_37 ('/usr/bin/yum -d 0 -e 0 -y install falcon_2_5_3_0_37')
Command failed after 1 tries

Any idea? I also did:

yum-complete-transaction --cleanup-only
yum erase falcon
yum install falcon

but the same error happened again. Then I downloaded Falcon from git and built it with Maven, but when I type "falcon" on the command line it is not recognized. Now the Ambari retry gives me a timeout:

Python script has been killed due to timeout after waiting 1800 secs
Labels:
- Apache Ambari
- Apache Falcon
- Apache Oozie
05-07-2017
11:52 AM
Any other help? I'm still having this issue...
05-04-2017
01:14 PM
Well, there was no configurations file, but the services file is uploaded. It was too large, so I had to split it, and I also changed the format from .json to .txt. The other files are: hosts.json, component-layout-validation.json, stackadvisor.err (which is empty), and stackadvisor.out, whose content is copied below:

StackAdvisor implementation for stack HDP, version 2.0.6 was loaded
StackAdvisor implementation for stack HDP, version 2.1 was loaded
StackAdvisor implementation for stack HDP, version 2.2 was loaded
StackAdvisor implementation for stack HDP, version 2.3 was loaded
StackAdvisor implementation for stack HDP, version 2.4 was loaded
StackAdvisor implementation for stack HDP, version 2.5 was loaded
Returning HDP25StackAdvisor implementation
ServiceAdvisor implementation for service METRON was loaded

(attachment: 1.txt)
05-04-2017
04:28 AM
well, actually I've checked it before and it is empty.
05-03-2017
11:41 AM
First, thank you for your response. This is part of the log file: ambari-server.txt. I have 5 OpenStack nodes; node1 is the Ambari server and node2-5 are agents. The OS on all of them is CentOS 7. Each has 6 CPU cores, 16 GB of RAM, and a 300 GB HDD. After 3 days there is no timeout; it just keeps loading.
05-02-2017
12:59 PM
I should also mention that I have checked the stack-recommendation directory. It does exist and the permissions are OK. Any other idea?
05-02-2017
11:31 AM
Hello, I'm installing Metron with HDP 2.5 using this link. In the last steps, in the Ambari UI, I specify the slaves and clients, but when I hit the 'Next' button it just keeps loading for hours and nothing happens. Any idea?
Labels:
- Apache Ambari
- Apache Metron
05-01-2017
05:47 AM
Hi, I had the same problem as Prakash. I read the PDF you attached; most of the steps had already been done. The only change I made was disabling transparent huge pages, and after that my Ambari failed to start. I tried stop/start and even removing/reinstalling Ambari, but the error persists. I even enabled THP again. Actually, when I start the service it says it was successful, but the output of 'systemctl status ambari-server' is:

ambari-server.service - LSB: ambari-server daemon
Loaded: loaded (/etc/rc.d/init.d/ambari-server; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2017-05-01 04:52:43 UTC; 40min ago
Docs: man:systemd-sysv-generator(8)
May 01 04:52:43 node1.novalocal systemd[1]: Starting LSB: ambari-server daem....
May 01 04:52:43 node1.novalocal ambari-server[3400]: Need python version > 2.6
May 01 04:52:43 node1.novalocal systemd[1]: ambari-server.service: control p...1
May 01 04:52:43 node1.novalocal systemd[1]: Failed to start LSB: ambari-serv....
May 01 04:52:43 node1.novalocal systemd[1]: Unit ambari-server.service enter....
May 01 04:52:43 node1.novalocal systemd[1]: ambari-server.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
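Based on the 'Need python version > 2.6' line in the journal, my next step is to check which Python interpreter the init script is actually picking up (the interpretation of that message is my assumption):

```bash
# See which python resolves first for root, and its version; the init script's
# complaint suggests an old or missing interpreter is being found.
which python
python --version
readlink -f "$(which python)"
```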
04-24-2017
09:10 AM
Hi, I have 5 OpenStack nodes and I want to install Metron on them using this link. After I run the following command I get an error:

mvn clean install -DskipTests -PHDP-2.5.0.0
[INFO] --- exec-maven-plugin:1.5.0:exec (rpm-build) @ metron-rpm ---
/bin/bash: ./build.sh: Permission denied
[ERROR] Command execution failed.
org.apache.commons.exec.ExecuteException: Process exited with an error: 126 (Exit value: 126)
at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:404)
at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:166)
at org.codehaus.mojo.exec.ExecMojo.executeCommandLine(ExecMojo.java:764)
at org.codehaus.mojo.exec.ExecMojo.executeCommandLine(ExecMojo.java:711)
at org.codehaus.mojo.exec.ExecMojo.execute(ExecMojo.java:289)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
The permissions for build.sh are OK. I tried to run it in a terminal, and I got the following error:

[root@node1 rpm-docker]# ./build.sh
FULL_VERSION:
VERSION:
PRERELEASE:
error: Macro %_version has empty body
error: Macro %_version has empty body
error: Macro %_prerelease has empty body
error: Macro %_prerelease has empty body
(...)
error: Bad exit status from /var/tmp/rpm-tmp.68RZ1K (%install)
RPM build errors:
Macro %_version has empty body
Macro %_version has empty body
Macro %_prerelease has empty body
Macro %_prerelease has empty body
Bad exit status from /var/tmp/rpm-tmp.68RZ1K (%install)
Why are the version-related values null? Since they are null, when the script refers to the files it needs in SOURCES it cannot find them. Any help here? Does anyone know valid values for them, so I can set them manually?
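In case it matters, I am also going to make sure the script is actually executable before retrying, since the exit code 126 from Maven usually means it could not be executed at all (this is my assumption, not something from the build docs):

```bash
# Run from the rpm-docker directory shown in the prompt above.
chmod +x ./build.sh
ls -l ./build.sh

# Then retry the failing profile from the project root:
# mvn clean install -DskipTests -PHDP-2.5.0.0
```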
Labels:
- Apache Metron
- Docker
10-31-2016
07:51 AM
@Simon Elliston Ball @Timothy Spann thank you both for your helpful answers. It actually took me a while to go through your links, but now I know what I needed.
10-02-2016
11:07 AM
3 Kudos
Hello to all, I have reviewed the Metron docs, and it is indicated (many times) that telemetry correlation and anomaly detection are two of Metron's main tasks. Now I need to know which components perform these tasks; I'm interested in seeing the source code that does the correlation and anomaly detection. Does anyone have any idea where I can find it? Thanks in advance.
Tags:
- CyberSecurity
- Metron
Labels:
- Apache Metron
09-20-2016
08:48 PM
Hi, I took a quick look at both links and I'm sure they will both be very helpful. Thank you both.
09-19-2016
10:25 AM
1 Kudo
Hi everyone, I have installed a single-node 0.7rc version of Metron. As you know, everything is installed on a virtual machine using Vagrant. Now I need a multi-node Metron, and I have a VM on a server. Using the default installation procedure, I would end up with VMs nested inside my VM, which would definitely hurt overall performance. So my exact question is: is it possible to install Metron on real nodes, probably using Ansible playbooks, with no Vagrant and no virtual machines? How? Any idea would be great.
Labels:
- Labels:
-
Apache Metron