Member since: 03-14-2016
Posts: 4721
Kudos Received: 1111
Solutions: 874

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2438 | 04-27-2020 03:48 AM
 | 4866 | 04-26-2020 06:18 PM
 | 3972 | 04-26-2020 06:05 PM
 | 3209 | 04-13-2020 08:53 PM
 | 4904 | 03-31-2020 02:10 AM
01-31-2017
09:15 AM
@Baruch AMOUSSOU DJANGBAN The following article contains most of the details on how to control logging and the rolling of log files based on size: How to control size of log files for various HDP components? https://community.hortonworks.com/articles/8882/how-to-control-size-of-log-files-for-various-hdp-c.html You should use the following RollingFileAppender properties to control the size and the number of backup copies of old log files: maxFileSize: the size threshold above which the file is rolled; the default value is 10 MB. maxBackupIndex: the number of backup files to keep; the default value is 1.
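For illustration, a minimal sketch of how these two settings typically appear in a component's log4j.properties (log4j 1.x properties style; the appender name "RFA" and the values shown are only examples, so adjust them to your component and needs):

log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=10
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

After changing the configuration, restart the corresponding component so the new appender settings take effect.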
01-31-2017
08:17 AM
@Saurabh Following is a simple example:

import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileStatusChecker {
    public static void main(String[] args) throws Exception {
        try {
            FileSystem fs = FileSystem.get(new Configuration());
            // You need to pass in your own HDFS path here.
            String hdfsFilePath = "hdfs://My-NN-HA/Demos/SparkDemos/inputFile.txt";
            FileStatus[] status = fs.listStatus(new Path(hdfsFilePath));
            for (int i = 0; i < status.length; i++) {
                // getAccessTime() returns the last access time in milliseconds since the epoch;
                // convert it to a human-readable date.
                long lastAccessTimeLong = status[i].getAccessTime();
                Date lastAccessTimeDate = new Date(lastAccessTimeLong);
                DateFormat df = new SimpleDateFormat("EEE, d MMM yyyy HH:mm:ss Z");
                System.out.println("The file '" + hdfsFilePath + "' was accessed last at: " + df.format(lastAccessTimeDate));
            }
        } catch (Exception e) {
            System.out.println("File not found");
            e.printStackTrace();
        }
    }
}
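One way to compile and run it (a sketch; it assumes the hadoop command is on your PATH so the Hadoop client jars can be resolved via "hadoop classpath"):

javac -cp $(hadoop classpath) FileStatusChecker.java
java -cp $(hadoop classpath):. FileStatusChecker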
01-31-2017
07:08 AM
@Saurabh One approach is to look at the "/var/log/hadoop/hdfs/hdfs-audit.log" file and search for the "cmd=open" operation on the file in question. For example, if I want to see when an "open" request was issued for the file "/Demos/SparkDemos/inputFile.txt", then hdfs-audit.log contains an entry like the following with its timestamp: tail -f /var/log/hadoop/hdfs/hdfs-audit.log
2017-01-31 07:04:07,766 INFO FSNamesystem.audit: allowed=true ugi=admin (auth:PROXY) via root (auth:SIMPLE) ip=/172.26.70.151 cmd=open src=/Demos/SparkDemos/inputFile.txt dst=null perm=null proto=webhdfs
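If you would rather filter than tail, a simple sketch (adjust the log path and src path to your cluster; older rotated audit logs such as hdfs-audit.log.* files may need to be searched as well):

grep "cmd=open" /var/log/hadoop/hdfs/hdfs-audit.log | grep "src=/Demos/SparkDemos/inputFile.txt"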
01-30-2017
07:25 PM
@Rafael Menezes Your current issue is completely different from the one asked in the original thread, so I would suggest opening a separate thread for the current Kerberos-related error. That helps keep community threads well organized.
01-30-2017
05:21 PM
@Rafael Menezes If you want to run the code entirely through Maven itself, refer to http://www.java2s.com/Tutorials/Java/Maven_Tutorial/2030__Maven_Run_Java_Main.htm which covers the Maven exec plugin's exec:java goal.
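As a rough sketch (assuming your main class is com.tecnisys.App and using an example exec-maven-plugin version; adjust both to your project), the plugin section in your pom.xml could look like this:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.6.0</version>
  <configuration>
    <mainClass>com.tecnisys.App</mainClass>
  </configuration>
</plugin>

You can then run: mvn compile exec:java (or, without the configuration block, mvn compile exec:java -Dexec.mainClass=com.tecnisys.App).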
01-30-2017
04:49 PM
@Rafael Menezes
Good to see that you are now able to compile and build fine. However, this time the error is a runtime error (not a compile-time one):
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/kafka/clients/producer/KafkaProducer
    at com.tecnisys.App.main(App.java:27)
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.clients.producer.KafkaProducer
Your classpath is not correctly pointing to the kafka-clients jar. Please see a complete demo here: https://github.com/mapr-demos/kafka-sample-programs/blob/master/pom.xml If you are running your code manually, you can do something like the following:
export CLASSPATH=$CLASSPATH:/PATH/TO/kafka-clients.jar:.
java com.tecnisys.App
01-30-2017
03:27 PM
@Rafael Menezes It also looks like the following import statement is missing from your code: import org.apache.kafka.clients.producer.KafkaProducer; Please add the proper dependency for this class to your pom.xml as well, something like the following:
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>${YOUR_DESIRED_VERSION}</version>
</dependency>
01-30-2017
03:25 PM
@Rafael Menezes Good to know that the initial issue is resolved. You can use 1.6 / 1.7 / 1.8 there (but not any version prior to 1.5), because the generics feature was introduced in Java 1.5. Regarding your new compilation error, we need to see exactly what is on line 25 of your App.java: [ERROR] /root/Documents/MavenExamples/abud/src/main/java/com/tecnisys/App.java:[25,2] error: cannot find symbol Can you please paste the code with correct formatting so that we can see what is on line 25 that is causing the compilation error?
01-30-2017
02:19 PM
@Rafael Menezes
The reason for the build failure appears to be the following: [ERROR] /root/Documents/MavenExamples/abud/src/main/java/com/tecnisys/App.java:[20,19] error: generics are not supported in -source 1.3 [ERROR] -> [Help 1] This indicates that your "com.tecnisys.App" (App.java) code uses the Java generics feature, which is not available with "-source 1.3". Please check your pom.xml to see whether it configures the "maven-compiler-plugin". If so, try changing the "source" and "target" versions to 1.6, 1.7, etc.:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>2.3</version>
  <configuration>
    <source>1.6</source>
    <target>1.6</target>
  </configuration>
</plugin>
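For reference, any parameterized type in the source triggers that error at -source 1.3; a hypothetical line like the one below (not taken from your App.java) already requires source level 1.5 or higher:

Map<String, String> config = new HashMap<String, String>(); // generics need -source 1.5 or later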
01-29-2017
01:57 PM
@doron zukerman You can remove "--privileged" if you don't intend to use Kerberos. By default, Docker containers are "unprivileged" and cannot, for example, run a Docker daemon inside a Docker container. This is because, by default, a container is not allowed to access any devices, whereas a "privileged" container is given access to all devices. Please see: https://docs.docker.com/engine/reference/run/#/runtime-privilege-and-linux-capabilities
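For illustration only (the image name and port below are placeholders, not taken from your setup), the difference is simply whether the flag is passed to docker run:

docker run -d --name my-container -p 8080:8080 <your-image>              # unprivileged (default)
docker run -d --privileged --name my-container -p 8080:8080 <your-image> # has access to all host devices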