Member since: 05-22-2018 · 69 Posts · 1 Kudos Received · 2 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 3816 | 06-07-2018 05:33 AM
 | 923 | 05-30-2018 06:30 AM
12-05-2019 04:54 AM
I am using Hortonworks Cloudbreak on Azure. I want to run a Pig job from Oozie, but when the job enters the RUNNING state it throws the error message below and stays stuck in RUNNING:
Can not find the logs for the application: application_xxx_1113 with the appOwner: hdfs
I run the Oozie job as the hdfs user, and the log directory hdfs:///app-logs/hdfs/logs/ has all privileges. When I run the same Pig script with 'pig -x tez script.pig' it runs successfully, but when I run it through the Oozie workflow it throws the above error.
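For reference, when Oozie reports that it cannot find application logs, the same aggregated logs can usually be pulled directly with the YARN CLI by naming the owning user explicitly. A minimal sketch, assuming the application ID and owner from the error message above:

```shell
# Fetch aggregated logs for the application, passing the app owner
# explicitly (needed when the submitting user differs from the current one).
yarn logs -applicationId application_xxx_1113 -appOwner hdfs

# Verify the aggregated-log directory exists and is readable.
hdfs dfs -ls /app-logs/hdfs/logs/
```

These commands assume a running cluster with log aggregation enabled; if the directory listing is empty, the application logs were never aggregated in the first place.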
11-15-2018 02:05 PM
Hi All, I want to install Scala on Linux (the sandbox CentOS). I have already downloaded scala-2.10.5 and also created a soft link for it in the /usr/hdp/current folder. After that, I executed the commands below to set the PATH for SCALA_HOME:
echo "export SCALA_HOME = /usr/hdp/current/scala"
or
echo "export SCALA_HOME = /usr/hdp/2.5.0.0-1245/scala"
echo "export PATH=$PATH:$SCALA_HOME/bin"
After that, I went to the root home directory and ran the commands below to reload the .bashrc and .bash_profile files:
source ~/.bashrc
source ~/.bash_profile
I thought that was all, but I am still not able to get into the Scala shell from anywhere. Can anyone help me with this? Regards,
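For reference, a minimal sketch of a corrected setup: shell assignments must not have spaces around '=', and an echo only prints the text unless it is redirected into a profile file. The /usr/hdp/current/scala path is the one assumed in the post above.

```shell
# Correct form: no spaces around '=' in shell variable assignments.
export SCALA_HOME=/usr/hdp/current/scala
export PATH="$PATH:$SCALA_HOME/bin"

# To make this persistent, redirect the lines into ~/.bashrc, e.g.:
#   echo 'export SCALA_HOME=/usr/hdp/current/scala' >> ~/.bashrc

# Sanity check: the launcher directory is now on PATH.
echo "$PATH" | grep -q "$SCALA_HOME/bin" && echo "PATH updated"
```

After reloading the profile, `scala -version` should resolve if the soft link points at a valid Scala distribution.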
Labels:
- Hortonworks Data Platform (HDP)
07-12-2018 06:22 AM
@Vinicius Higa Murakami Thanks for the reply, but as far as I know that won't work, because I have too many input files in the same folder; how would Sqoop identify which file the user wants to export? Regards, Jay.
07-11-2018 10:59 AM
Hi All, I want to export CSV data into MS SQL Server using Sqoop. I have created a table in MS SQL Server which has one auto-increment column named 'ID'. I have one CSV file in an HDFS directory. I executed the Sqoop export command below;
Sqoop Command:
sqoop export --connect 'jdbc:sqlserver://xxx.xxx.xx.xx:xxxx;databasename=<mssql_database_name>' --username xxxx --password xxxx --export-dir /user/root/input/data.csv --table <mssql_table_name>
I am facing the following error;
Error:
18/07/11 10:30:48 INFO mapreduce.Job: map 0% reduce 0%
18/07/11 10:31:12 INFO mapreduce.Job: map 75% reduce 0%
18/07/11 10:31:13 INFO mapreduce.Job: map 100% reduce 0%
18/07/11 10:31:17 INFO mapreduce.Job: Job job_1531283775339_0005 failed with state FAILED due to: Task failed task_1531283775339_0005_m_000003
Job failed as tasks failed. failedMaps:1 failedReduces:0
18/07/11 10:31:18 INFO mapreduce.Job: Counters: 31
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=163061
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=261
HDFS: Number of bytes written=0
HDFS: Number of read operations=7
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters
Failed map tasks=3
Launched map tasks=4
Data-local map tasks=4
Total time spent by all maps in occupied slots (ms)=87423
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=87423
Total vcore-milliseconds taken by all map tasks=87423
Total megabyte-milliseconds taken by all map tasks=21855750
Map-Reduce Framework
Map input records=0
Map output records=0
Input split bytes=240
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=346
CPU time spent (ms)=470
Physical memory (bytes) snapshot=106921984
Virtual memory (bytes) snapshot=1924980736
Total committed heap usage (bytes)=39321600
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=0
18/07/11 10:31:18 INFO mapreduce.ExportJobBase: Transferred 261 bytes in 71.3537 seconds (3.6578 bytes/sec)
18/07/11 10:31:18 INFO mapreduce.ExportJobBase: Exported 0 records.
18/07/11 10:31:18 ERROR mapreduce.ExportJobBase: Export job failed!
18/07/11 10:31:18 ERROR tool.ExportTool: Error during export: Export job failed!
Sample Data:
abc,1223
abck,1332
abckp,2113
Regards, Jay.
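For reference, a cleaned-up form of the export command with the spacing fixed, and the auto-increment column excluded via --columns, since Sqoop cannot insert into an identity column directly. The column names 'name' and 'value' are placeholders, not from the original post:

```shell
# Sketch: export a CSV file, listing only the non-identity columns so the
# auto-increment 'ID' column is left for SQL Server to populate.
sqoop export \
  --connect 'jdbc:sqlserver://xxx.xxx.xx.xx:xxxx;databasename=<mssql_database_name>' \
  --username xxxx --password xxxx \
  --export-dir /user/root/input/data.csv \
  --table <mssql_table_name> \
  --columns 'name,value' \
  --input-fields-terminated-by ','
```

The per-task failure reason is in the individual map-task logs (via the ResourceManager UI or `yarn logs`); the job-level counters shown above do not include it.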
Labels:
- Apache Hadoop
- Apache Sqoop
07-09-2018 06:55 AM
Hi, I am new to HBase and I am using the Hortonworks sandbox on VirtualBox. I just opened the HBase shell and ran a first simple command to show the status, but it gives me the following error:
hbase(main):010:0> status 'simple'
ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2402)
at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:778)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:57174)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
Here is some help for this command:
Show cluster status. Can be 'summary', 'simple', 'detailed', or 'replication'. The
default is 'summary'. Examples:
hbase> status
hbase> status 'simple'
hbase> status 'summary'
hbase> status 'detailed'
hbase> status 'replication'
hbase> status 'replication', 'source'
hbase> status 'replication', 'sink'
hbase(main):011:0>
I have checked in Ambari that the HBase Master is running fine. I have restarted the sandbox as well, but I am still facing the same error. Regards, Jay.
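While the master is throwing PleaseHoldException, its own log usually says what initialization is blocked on (often WAL splitting or region assignment). A sketch of where to look; the log path is an assumption based on typical HDP sandbox defaults:

```shell
# Tail the HMaster log to see what initialization is waiting on.
tail -n 100 /var/log/hbase/hbase-hbase-master-*.log

# Retry the status command non-interactively once the master settles.
echo "status 'summary'" | hbase shell
```

If the master never finishes initializing, the log lines immediately after startup usually name the blocked step, which narrows the problem down far more than the shell error does.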
Labels:
- Apache HBase
07-06-2018 07:59 AM
@Shu Thanks a ton!!! I would also suggest using a while loop. Regards, Jay.
06-22-2018 01:13 PM
Hi @Geoffrey Shelton Okot, Please have a look at my Sqoop command:
sqoop import --connect 'jdbc:sqlserver://<HOST>:<PORT>;databasename=<mssql_database_name>' --username xxxx --password xxxx --hive-database <hive_database_name> --table <mssql_table1>,<mssql_table2> --hive-import -m 1
Thank you, Jay.
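Sqoop's --table option accepts a single table name, so a comma-separated list is sent to SQL Server as one (invalid) object name, which would explain the "Invalid object name" error. A hedged sketch of two alternatives, with all names kept as the placeholders from the post:

```shell
# Option 1: run one import per table.
for t in '<mssql_table1>' '<mssql_table2>'; do
  sqoop import \
    --connect 'jdbc:sqlserver://<HOST>:<PORT>;databasename=<mssql_database_name>' \
    --username xxxx --password xxxx \
    --table "$t" --hive-import --hive-database <hive_database_name> -m 1
done

# Option 2: import every table in the database in one run.
sqoop import-all-tables \
  --connect 'jdbc:sqlserver://<HOST>:<PORT>;databasename=<mssql_database_name>' \
  --username xxxx --password xxxx \
  --hive-import --hive-database <hive_database_name> -m 1
```

Option 2 imports all tables, so it only fits when the whole database is wanted; otherwise the per-table loop is the safer choice.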
06-22-2018 11:56 AM
Hi @Geoffrey Shelton Okot, I tried the above solution, but it also throws the error below;
18/06/22 11:53:59 ERROR manager.SqlManager: Error executing statement: com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name '<ms_SQL_tablename>,<ms_SQL_tablename>'.
Regards, Jay.
06-18-2018 10:21 AM
@Felix Albani Hi, I have actually tried both, i.e., with the full path to spark-submit and by navigating to the directory and executing it there, but I faced the same error. Anyway, I have updated my question; please have a look. Regards, Jay.
06-16-2018 07:10 AM
Hi All, I have created an HDP cluster on AWS. Now I want to execute a spark-submit command using an Oozie shell action. The spark-submit command is simple: it takes input from HDFS, stores its output in HDFS, and the .jar file is taken from the local filesystem on the Hadoop node. The spark-submit command runs fine on the command line; it reads the data and stores the output in a specific HDFS directory. I could also put it in a script and run that from the command line, and it worked as well. The problem is executing it from the Oozie workflow.
script.sh
#!/bin/bash
/usr/hdp/current/spark2-client/bin/spark-submit --class org.apache.<main> --master local[2] <jar_file_path> <HDFS_input_path> <HDFS_output_path>
job.properties
nameNode=hdfs://<HOST>:8020
jobTracker=<HOST>:8050
queueName=default
oozie.wf.application.path=${nameNode}/user/oozie/shelloozie
workflow.xml
<workflow-app name="ShellAction" xmlns="uri:oozie:workflow:0.3">
<start to='shell-node' />
<action name="shell-node">
<shell xmlns="uri:oozie:shell-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<configuration>
<property>
<name>mapred.job.queue.name</name>
<value>${queueName}</value>
</property>
</configuration>
<exec>script.sh</exec>
<file>${nameNode}/user/oozie/shelloozie/script.sh#script.sh</file>
</shell>
<ok to="end"/>
<error to="fail"/>
</action>
<kill name="fail">
<message>Script failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<end name='end' />
</workflow-app>
Anyway, I have checked my YARN log; it gives me the following, and I don't understand what it is telling me.
LogType:stderr
Log Upload Time:Sat Jun 16 07:00:47 +0000 2018
LogLength:1721
Log Contents:
Jun 16, 2018 7:00:24 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
Jun 16, 2018 7:00:24 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
Jun 16, 2018 7:00:24 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class
Jun 16, 2018 7:00:24 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
Jun 16, 2018 7:00:24 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
Jun 16, 2018 7:00:25 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
Jun 16, 2018 7:00:26 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapreduce.v2.app.MRAppMaster).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
End of LogType:stderr
Kindly help me to solve this. Thank You, Jay.
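The stderr shown above is only Jersey/log4j startup noise from the application master, not the actual failure; the real spark-submit error is normally in the shell action's launcher logs. For reference, a hedged sketch of submitting the workflow and pulling those logs; the host and application ID are placeholders:

```shell
# Submit and run the workflow using the job.properties above.
oozie job -oozie http://<HOST>:11000/oozie -config job.properties -run

# Inspect the launcher application's full logs (stdout included) for the
# real error emitted by script.sh / spark-submit.
yarn logs -applicationId <launcher_application_id>
```

The launcher application ID appears in the Oozie web console or in `oozie job -info <workflow_id>` once the action starts.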
Labels:
- Apache Oozie
- Apache Spark