You should be able to use SHOW TABLE EXTENDED ... PARTITION to get metadata for each partition, and then skip any partition whose totalSize is zero bytes rather than trying to open it. Like this:
scala> val sqlCmd = "show table extended from mydb like 'mytable' partition (date_time_date='2017-01-01')"
sqlCmd: String = show table extended from mydb like 'mytable' partition (date_time_date='2017-01-01')
scala> val partitionsList = sqlContext.sql(sqlCmd).collectAsList
partitionsList: java.util.List[org.apache.spark.sql.Row] =
[[mydb,mytable,false,Partition Values: [date_time_date=2017-01-01]
Location: hdfs://mycluster/apps/hive/warehouse/mydb.db/mytable/date_time_date=2017-01-01
Serde Library: org.apache.hadoop.hive.ql.io.orc.OrcSerde
InputFormat: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
Storage Properties: [serialization.format=1]
Partition Parameters: {rawDataSize=441433136, numFiles=1, transient_lastDdlTime=1513597358, totalSize=4897483, COLUMN_STATS_ACCURATE={"BASIC_STATS":"true"}, numRows=37825}
]]
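Building on that, here is a rough, untested sketch of how you might pull totalSize out of the description text and skip empty partitions before reading them. It assumes the output format shown above (the fourth column of each row holds the partition description string); adjust the column index and table/partition names for your setup.

```scala
// Hedged sketch, not a tested implementation: extract totalSize from the
// partition description and only read partitions that actually contain data.
val info = sqlContext.sql(
  "show table extended from mydb like 'mytable' partition (date_time_date='2017-01-01')"
).collect().head.getString(3)  // assumes column 3 is the description text

// Pull "totalSize=<n>" out of the Partition Parameters line; default to 0
// if the key is missing so the partition gets skipped.
val totalSize = "totalSize=(\\d+)".r
  .findFirstMatchIn(info)
  .map(_.group(1).toLong)
  .getOrElse(0L)

if (totalSize > 0) {
  // Partition has data; safe to query it.
  val df = sqlContext.table("mydb.mytable")
    .where("date_time_date = '2017-01-01'")
} else {
  // Zero-byte partition; skip it to avoid the NPE.
}
```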
Let me know if that works for skipping the zero-byte partitions, or if you still get the NullPointerException.
James