Member since: 01-25-2016
Posts: 345
Kudos Received: 86
Solutions: 25

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 5011 | 10-20-2017 06:39 PM
 | 3536 | 03-30-2017 06:03 AM
 | 2591 | 02-16-2017 04:55 PM
 | 16105 | 02-01-2017 04:38 PM
 | 1146 | 01-24-2017 08:36 PM
07-20-2016
09:30 PM
1 Kudo
R is another open-source software package; you need to install R and then connect to Hadoop using an ODBC driver. For more info: https://cran.r-project.org/bin/windows/base/ and http://www.rdatamining.com/big-data/r-hadoop-setup-guide (see section 4, "Install R"). For RStudio: https://www.rstudio.com/products/rstudio/download2/
07-19-2016
06:48 PM
There is a typo in the sqoop command: use the MySQL driver instead of the Teradata driver. Here is the modified script:
sqoop import --connect jdbc:mysql://192.168.218.128/sqoopdb --driver com.mysql.jdbc.Driver --username hadoop --password Hadoop@1 --query "select * from emp_add where city='sec-bad' AND \$CONDITIONS" --target-dir /Practice/SqoopToHDFSWhere/ --m 1;
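Once the job finishes, a quick way to verify the import (target directory as in the command above; the part file name assumes a single mapper, as with --m 1) is:
# list the imported files and inspect the single mapper output
hdfs dfs -ls /Practice/SqoopToHDFSWhere/
hdfs dfs -cat /Practice/SqoopToHDFSWhere/part-m-00000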
07-18-2016
06:14 PM
@Suresh Kumar D try this:
sqoop import --connect jdbc:mysql://192.168.218.128/sqoopdb --driver "com.teradata.jdbc.TeraDriver" --username hadoop --password Hadoop@1 --query "select * from emp_add where city='sec-bad' AND \$CONDITIONS" --target-dir /Practice/SqoopToHDFSWhere/ --m 1;
07-13-2016
09:01 AM
I have added the properties below in the advanced log4j properties, and Spark is now creating logs in a local directory:
log4j.rootLogger=INFO, rolling
log4j.appender.rolling=org.apache.log4j.RollingFileAppender
log4j.appender.rolling.encoding=UTF-8
log4j.appender.rolling.layout=org.apache.log4j.PatternLayout
log4j.appender.rolling.layout.conversionPattern=[%d] %p %m (%c)%n
log4j.appender.rolling.maxBackupIndex=5
log4j.appender.rolling.maxFileSize=50MB
log4j.logger.org.apache.spark=WARN
log4j.logger.org.eclipse.jetty=WARN
#log4j.appender.rolling.file=${spark.yarn.app.container.log.dir}/spark.log
log4j.appender.rolling.file=/var/log/spark/spark.log
${spark.yarn.app.container.log.dir}/spark.log doesn't work for me to write logs in HDFS.
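As a quick sanity check (log path as configured above; the number of rotated files depends on maxBackupIndex/maxFileSize), you can confirm the rolling appender is writing:
# list the active log and any rotated backups
ls -lh /var/log/spark/spark.log*
# follow the active log while a job runs
tail -f /var/log/spark/spark.log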
07-13-2016
06:35 AM
I looked at "yarn.nodemanager.log-dirs" in YARN, but it seems YARN clears all of those logs immediately after the job completes.
07-13-2016
06:30 AM
Thanks for this. I tried it earlier, but it's not creating any logs here; I'm seeing only .out files.
07-13-2016
05:42 AM
Hi, we are running Spark jobs and know that YARN creates logs on HDFS at /app-logs/<running user>/logs/application_1463538185607_99971. To see the details of those logs we can run:
yarn logs -applicationId application_1463538185607_99971
But we are working on a Spark automation process and are trying to keep the logs in a custom location. To achieve this we added the "log4j.appender.rolling.file" property in the "Custom spark-log4j-properties" section through Ambari:
log4j.appender.rolling.file=${spark.yarn.app.container.log.dir}/spark.log
I'm not sure where Spark is going to create the logs for successful/failed jobs. Can you suggest where we can check these Spark logs?
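For reference, the aggregated YARN logs mentioned above can also be dumped to an arbitrary path with the yarn CLI (application ID as in the example above; the output path here is only an illustration):
yarn logs -applicationId application_1463538185607_99971 > /tmp/spark_app_1463538185607_99971.log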
Labels:
- Apache Spark
07-11-2016
06:10 AM
@revan wabale It seems not; the HDP documentation doesn't mention Spark 1.6.1 downgrade compatibility with HDP 2.4.2. Reference: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.2/bk_spark-guide/content/ch_introduction-spark.html
07-11-2016
05:14 AM
@Kiran Jilla
I think your dev servers are internet-enabled but prod is not. Check your repo list to see whether it's pointing to HDP 2.4.2.0 or not.
07-10-2016
05:50 AM
1 Kudo
@Mamta Chawla No, Sqoop will import the data to HDFS directories only, either the default directory [/apps/hive/warehouse/] or another specified HDFS location. Then you can move the data to your local directories, or you can access the HDFS data from your Unix local directory using an NFS mount.
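As a rough sketch of those two options (the paths here are only examples; adjust them to your own import target and local directory):
# copy a Sqoop output directory from HDFS to the local filesystem
hdfs dfs -get /user/hadoop/emp_add /tmp/emp_add
# or, if an HDFS NFS gateway mount is configured (mount point is an assumption), browse it directly
ls /hdfs_nfs_mount/user/hadoop/emp_add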