Most of the code in current big data projects, and most of the code you are going to write, is JVM based (mostly Java and Scala). There is certainly plenty of R, Python, Shell, and other languages out there, but for this tutorial we will focus on JVM tools.
The great thing about that is that Java and Scala static code analysis tools will work for analyzing your code. JUnit tests are great for testing the core logic and for making sure you isolate your functionality from the Hadoop- and Spark-specific interfacing.
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Enclosing test class added so the snippet compiles as-is; the name is assumed.
public class ProfanityRemoverTest {

    /**
     * Test method for
     * {@link com.dataflowdeveloper.deprofaner.ProfanityRemover#fillWithCharacter(int, java.lang.String)}.
     */
    @Test
    public void testFillWithCharacterIntString() {
        assertEquals("XXXXX", Util.fillWithCharacter(5, "X"));
    }
}
As you can see, this is just a plain old JUnit test, but it is one step in the process of making sure your code is tested before it is deployed. Jenkins and other CI tools are also great at running JUnit tests as part of their continuous build and integration process.
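The reason a test like this stays fast and simple is that the method under test is a plain Java helper with no Hadoop or Spark dependencies. The tutorial does not show the implementation, but a minimal sketch of such a helper (class name, signature, and behavior assumed for illustration, inferred from the assertion above) might look like this:

public class Util {

    /**
     * Builds a masking string by repeating the replacement character
     * the requested number of times, e.g. fillWithCharacter(5, "X") returns "XXXXX".
     * Sketch only; the real project code may differ.
     */
    public static String fillWithCharacter(int length, String character) {
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < length; i++) {
            builder.append(character);
        }
        return builder.toString();
    }
}

Because the helper touches nothing but the JDK, the JUnit test above runs in milliseconds on any build machine, with no cluster or sandbox required.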
Once the plain Java logic is covered, a great way to test the full application is against a small Hadoop cluster or a simulated one. Testing against a Sandbox downloaded to your laptop is also a good option.
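For Spark code in particular, one way to get a lightweight simulated cluster inside a JUnit test is to run Spark in local mode. The sketch below is an assumption about how a project might wire this up rather than code from this tutorial, and it requires spark-core on the test classpath:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.junit.Test;

public class LocalModeSparkTest {

    @Test
    public void countsMatchingWordsWithoutACluster() {
        // local[2] runs Spark inside the test JVM with two worker threads, no cluster needed.
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("unit-test");
        JavaSparkContext sc = new JavaSparkContext(conf);
        try {
            JavaRDD<String> words = sc.parallelize(Arrays.asList("bad", "word", "bad"));
            // This filter stands in for whatever transformation your real job performs.
            long badWords = words.filter(word -> word.equals("bad")).count();
            assertEquals(2L, badWords);
        } finally {
            sc.stop();
        }
    }
}

Local-mode tests like this are slower than plain unit tests, so it is common to keep them in a separate integration-test phase that runs after the fast JUnit suite in CI.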