12-21-2015 09:19 AM
I have a Pig job that writes into HBase. However, from time to time, even though the job is reported as successful, the logs show:
Input(s):
Successfully read 2588027 records (1523635920 bytes) from: "my_database.my_table"

Output(s):
Successfully stored 2588027 records in: "hdfs://StandbyNameNode/user/agherman/my_write_table"

Counters:
Total records written : 2588027
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
And when the log shows

Total bytes written : 0

my data wasn't actually written.
However, when I rerun the same job a second time, it sometimes works.
Could you please let me know what this means and what I can do to prevent it? Also, how can I identify that the job didn't actually work, besides checking the "Total bytes written" counter?
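For now, as a stop-gap, I was thinking of scripting a check around the counters in the log. A rough sketch (the counter line format is copied from the log above; the parsing logic is just my own guess, not anything official from Pig):

```python
import re

def job_wrote_data(log_text):
    """Parse Pig's summary counters from a job log and flag the
    suspicious case where records were reported as written but
    zero bytes reached the output."""
    records = re.search(r"Total records written\s*:\s*(\d+)", log_text)
    bytes_w = re.search(r"Total bytes written\s*:\s*(\d+)", log_text)
    if not records or not bytes_w:
        # Counters missing entirely: treat as a failed/incomplete job
        return False
    return int(records.group(1)) > 0 and int(bytes_w.group(1)) > 0

log = ("Counters: Total records written : 2588027 "
       "Total bytes written : 0")
print(job_wrote_data(log))  # False: records reported but zero bytes written
```

I guess I could also double-check the output path itself after the job (e.g. with `hdfs dfs -du` on the output directory), but I'd prefer to understand why the counter goes to zero in the first place.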